| id (stringlengths 3–9) | source (stringclasses, 1 value) | version (stringclasses, 1 value) | text (stringlengths 1.54k–298k) | added (stringdate 1993-11-25 05:05:38 – 2024-09-20 15:30:25) | created (stringdate 1-01-01 00:00:00 – 2024-07-31 00:00:00) | metadata (dict) |
|---|---|---|---|---|---|---|
id: 28589883 | source: pes2o/s2orc | version: v3-fos-license
Challenges of matrix support as educational practice: mental health in primary healthcare
This study investigated the educational dimension of matrix support practices for mental health within primary care. Using an interpretative-explanatory qualitative approach, professionals involved in matrix support for mental health in a municipality in the state of São Paulo, Brazil, were interviewed. The data were compared with matrix support frameworks and with two pedagogical trends: directive and constructivist. Content analysis was applied to the interviews, and two themes were identified: "matrix supporter's profile" and "challenges for the construction of the matrix supporter's practice." The subjects' perceptions regarding supporters' competence profiles were coherent with matrix support assumptions, whereas their educational practices related mainly to the directive trend. The challenge of implementing constructivist practice was only partially recognized, since this requires a critical and transformative stance regarding the hegemonic educational practices within healthcare. Keywords: Mental health. Primary healthcare. Educational models. Support for human resources development. Assessment of healthcare needs.
Matrix support practices
The matrix support proposals in mental health originated from the fact that primary care needed to incorporate knowledge and practices of the mental health specialties (Dimenstein et al 1 ).According to Campos et al. 2 , matrix support is, at the same time, an organizational arrangement and a work methodology, and it aims to offer assistance backup and technical-pedagogical support to the reference teams, which are responsible for a generalist approach to the care provided for users.Cunha et al. 3 refer to matrix support as the development of the capacity of dialog between the objectives of each discipline and the therapeutic proposal of intersections between diagnoses and treatments.Based on the construction of communication spaces and clinical and sanitary guidelines, matrix support creates close bonds between specialists and reference teams, which, in a shared way, become responsible for the care provided for a given population's health needs (Martins Junior 4 ).
Fostering the exchange of knowledge between reference teams and specialist professionals, matrix support is grounded in the idea that no specialist in isolation will be able to ensure a comprehensive approach to health.The need to articulate each professional category's specificity in the promotion of comprehensive care is associated with the "intertwining" between network services and social equipment to guarantee continuity of care (Nunes 5 ; Dimenstein et al 1 ; Amorim 6 ; Domitti 7 ).However, matrix support practices involving primary care teams and users have started to gain space only in recent publications, by means of the proposal for the reorganization of the teamwork process (Morais et al. 8 ; Santos et al. 9 ); of the potentiality to enlarge the professionals' view and improve care practices (Schatschineider 10 ); and of the challenges to be faced (Morais et al. 8 ; Vasconcelos et al. 11 ; Minozzo et al. 12 ).
International experiences have endorsed the need to pay attention to the fragmentation of health care.Despite the different terminologies, they have also defended the reorganization of work and of the relationships between specialists and generalists through devices that promote the interaction of distinct practices (Kotter 13 ; Kimberley 14 ; Gask 15 ; NHS 16 ).
In Brazil, the establishment of Núcleos de Apoio à Saúde da Família (NASF -Family Health Support Nuclei) has favored the reorganization of specialists in the Family Health Teams 17 .In a technical publication, the Ministry of Health has determined that the matrix support teams have two types of responsibilities: towards the population and towards the primary care team.In addition, it offers indicators to assess the result of these actions 18 .
Brazilian experiences concerning the matrix support proposal have been developed by means of the reorganization of work between specialties and primary care, aiming at the provision of comprehensive care.Nevertheless, professionals can put this proposal into practice through distinct forms (Franco et al. 19 ; Hubner et al. 20 ; Baduy 21 ; Dimenstein et al 1 ).Ballarin et al. 22 argue that dialog spaces are opportunities to learn and exchange knowledge horizontally.They propose to analyze matrix support by means of four dimensions and stress the pedagogical one.Following this path, the educational dimension of the supporters' practice can be considered any action that involves interaction and knowledge share, and not only classic pedagogical activities.
According to Libâneo 23 , the way in which educators work, either selecting and organizing content or choosing a teaching and evaluation approach, is directly linked to their educational presuppositions.
These reflect the understanding of how people interact and learn.Therefore, it is possible to conclude that knowledge about these presuppositions can subsidize the way in which matrix support is operated in its educational dimension.
This study focuses exactly on the educational dimension of matrix support practices in mental health in the scope of primary care, and aims to amplify the understanding of the elements that make it viable.With the purpose of contributing to enhance the employment of this device, the interaction between specialists and primary care professionals is highlighted by means of knowledge exchange and in the promotion of comprehensive care.
Characterization of the scenario
The study investigates the health services network in a municipality located in the interior of the State of São Paulo, which was chosen due to its pioneering adhesion to the Psychiatric Reform process and to its historical intertwining between primary care and mental health, which has been fostered by the matrix support device (Figueiredo et al. 24 ).
During the research, it was found that the local health care network was structured in five districts, and the municipality was a regional reference center.The districts had psychosocial care centers (CAPs III) and mental health teams in primary care, and the municipality had a NASF that was responsible for three health centers.
The CAPs III and mental health teams in primary care worked together with the health centers, practicing a matrix support model in the construction of knowledge exchange moments among the professionals from the services involved in the provided care.The services that adopted this model have produced singular ways of operationalizing knowledge exchange, promotion of bonds, and joint responsibility.
Methodological path
The research, approved by the Research Ethics Committee of the Federal University of São Carlos (opinion no. 20970/2012), was a study oriented to a specific problem, whose methodological design was defined according to a qualitative and constructivist approach. It is based on the principle that reality is apprehended differently by the subjects, and that this singularity is mediated by the values, emotions and sociocultural repertoires brought by the subjects in their confrontation with reality (Bulmer 25; Guba et al. 26; Minayo 27).
With the selection of the problem to be investigated, the proposed study was of the interpretive-explanatory type, so that it was possible to construct a comprehensive frame of a given phenomenon, based on the perspectives of the subjects involved, and trying to identify how these perspectives influence its production (Navarrete et al. 28).
Semi-structured interviews were used because they enable the emergence of many narratives and interpretations 27 , which were recorded in audio and, subsequently, transcribed for analysis.The selected participants were eighteen health professionals included in the mental health network, in the management and/or care sphere, and connected with matrix support.Six regional mental health supporters and the coordinator of the area were chosen in order to characterize the matrix support practices in the municipality.
The regional supporters are professionals allocated to the health districts, usually psychologists and occupational therapists. Their function is managerial and is targeted at intra- and intersectoral articulation, with the purpose of consolidating mental health care in the municipality. Considering the ten professionals in this function, we requested that those who accepted to participate in the study should cover the diversity of the five health districts.
Based on the testimonies, inclusion criteria were identified for the selection of services that used the matrix support device.Four of them were developed to recognize the districts with more structured matrix support practices in mental health: (i) regular case discussions, jointly with professionals from the health centers; (ii) establishment of shared interventions with the health center teams, based on the case discussions; (iii) identification of the teams' learning needs; and (iv) promotion of reflections on the practice.
The application of these criteria led to the selection of two health districts.Due to the fact that the matrix support work was still incipient in the NASF at the time the research was conducted, this team was excluded from the sample, which was circumscribed to the CAPs and to the mental health teams in primary care.Thus, eleven professionals, five from the CAPs and six from the mental health teams in primary care, were indicated as having matrix support practices aligned with the inclusion criteria.
In the case of the CAPs, each team collectively indicated one professional and, in the case of the mental health teams, the interviewed regional supporters indicated the matrix supporters whose practices were aligned with the stipulated criteria.The list of indicated professionals was randomly organized for the interviews.Finally, a saturation criterion was used to establish the sufficiency of the utilized sample, which resulted in three professionals per district.
The thematic modality of content analysis was applied to the interpretation of the interviews (Bardin 29 ).According to this communication analysis technique, the presence and frequency of discourses and words, as much as their absence, are relevant.After the testimonies were obtained, similar ideas were grouped into meaning nuclei and these, into themes.This set was the basis for the interpretation of the presented ideas through the application of the above-mentioned conceptual references [27][28][29] .
For the analysis of the educational dimension of the matrix supporter's practice, we used an adaptation (Table 1) of the classifications proposed by several authors in the field of education (Libâneo 23; Santos 30; Gauthier et al. 31; Becker 32). The trend called liberal or traditional 23 includes a set of pedagogical approaches - traditional, renewed progressive, renewed non-directive and technicist pedagogies - that correspond to the directive pedagogy 32. Likewise, the trend called progressive 23 - libertarian, liberating and critical-social pedagogy of contents - corresponds to the relational or constructivist pedagogy 32. Matrix support has a conceptual affinity with the progressive or constructivist pedagogy, as it is grounded in the critical analysis of realities and implicitly supports the active and social-political bias of education. This line considers experience as the basis of the educational relationship within pedagogical self-management.
Results and discussion
The analysis of the content of the interviews revealed two basic themes: (i) "the profile of the matrix supporter" and (ii) "challenges to the construction of the matrix supporter's practice".
(i) The profile of the matrix supporter The "profile of the matrix supporter" presented three meaning nuclei: "matrix supporter's attitudes", "matrix supporter's knowledge" and "matrix supporter's skills".These results were related to the capacities that were considered necessary for a competent practice.
According to Hager et al. 33 , the capacities can also be called attributes or resources and are categorized as: attitudinal or affective; cognitive or knowledge-related; psychomotor or skills.The articulation of these attributes in view of a problem of the professional reality supports a certain practice.Competence, however, is expressed only when these capacities are put into action and, depending on the context, generate results of excellence (Lima 34 ).
Meaning nucleus: Matrix supporter's attitudes Availability and openness were highlighted, in the sense of being available and offering, actively, opportunities to share experiences in the follow-up of the cases that were treated jointly with primary care, as it can be seen in the testimony below: "I feel that it's related to availability, to being open for the encounter, to doing together, being available, getting the phone, producing a confidence relationship... an active availability, not that availability of 'you can call me'; that case is ours" (Regional supporter 4).
Flexibility, empathy, sheltering and commitment are also included in the list of attitudinal capacities of this practice, as well as the capacity of receiving criticisms and listening.Through this bias, the supporters argued that these attributes should be supported by the acceptance of the other as legitimate.In fact, and according to Maturana 35 , when we accept the other, we can control pre-concepts and establish a relationship based on respect.
To other interviewees, the supporter should be a specialist who puts his/her knowledge in the service of the enhancement of the team's capacities, and, at the same time, values diversity and stimulates creation.Campos 36 argues that the supporter must have an interactive posture that provides subjects with conditions to reflect critically on their practice.
Thus, in light of the educational dimension of the encounter among subjects, Freire 37 joins education to the opportunity of knowledge construction, in which the educator must be available to what is new and to the other.Respect to each person's autonomy and worldview is considered an ethical imperative that enables a relationship in which everybody participates actively.
From the notion of dialog as interaction among distinct points of view of knowledge, Morin 38 considers the specificities and differences of each element in a recurrent and complementary relation that is instituted, for example, between educator and student; specialist and generalist; mental health and primary care; discipline and interdisciplinarity.Thus, relations that appear to be contradictory can be understood as complementary.
Meaning nucleus: Matrix supporter's knowledge Mastering the knowledge referring to matrix support, to the territory and to the municipality's health care network, as well as to collective and mental health, were also mentioned as capacities of the supporter: "I think that one of the objectives is to get out of the clinic, of the nucleus, [...] he's there as a mental health worker who will attempt to promote a more qualified discussion about what health care is, with the bias of mental health, but thinking about the subject's integral health" (Regional supporter 2).
To Campos 39 , the nucleus dimension in professional work is constituted of knowledge and attributions that are specific to each profession or specialty.The author, based on Pierre Bourdieu's concept of field, attributes to health work a dimension of action field: the field translates a situational notion and indicates the set of knowledge and tasks, outside the professional nucleus, that complements practice.In fact, it represents an enlargement of the specific identity -given by the nucleus -towards interdisciplinarity and interprofessionality, characteristics that are intrinsic to the area of health.
By reorganizing the relation between specialists and primary care, matrix support aims to face work fragmentation which, to some extent, reproduces the tight separation of disciplines in the scope of professional education (Feuerwerker 40 ), in the health services, and in the scope of care (Merhy et al 41 ; Campos et al. 2 ).Putting the specialists' knowledge in the service of primary care teams through the joint discussion of cases and the handling of concrete situations offers clear advantages to practice, as it qualifies the care that is provided, strengthens bonds, and amplifies accountability and users' likelihood to adhere to the proposed treatments (Cunha 42 ).
In this sense, the discourse of matrix and regional supporters was similar, as they defended that professionals must look in a comprehensive way at people who are receiving care, extending their action beyond mental health.Part of them believes that the specialist's knowledge adds value to the discussion of cases and to care projects when it circulates horizontally and enables that all professionals, specialists or not, contribute to health work.
The need to break with the idea of the supporter as someone who prescribes interventions proved to be directly proportional to the need of constructing shared care plans: "When I see the intervention of the community agent, in my next practices I'll include this in my list of tools. I'll have this knowledge, I'll legitimate this and the contrary, too, […] it means getting together and talking, seeing what went right and what went wrong" (Matrix supporter 6).
Meaning nucleus: Matrix supporter's skills
Concerning the supporter's skills, translated as knowledge of some devices, case discussion emerged as the main tool.The professionals highlighted the importance of transferring the constructed knowledge to other situations as a way of supporting teams' autonomy.
"Because in the majority of the cases, it is 'the case in itself', which is interesting, solves a large part of the problems, but does not increase much the teams' power of autonomy" (Regional supporter 4).
The discussions of clinical cases in matrix support also emerge from national and international experiences in the scientific literature 19-20-15-16 .The reading of a concrete reality to be analyzed, and in which one wishes to intervene, allows the establishment of different relations between facts and objects and offers greater possibilities of active transformation of reality 37 .In matrix support, the capacity of observing and critically analyzing the object/practice allows to reconstruct knowledge that can be used in different situations (Mitre et al 41 ; Batista et al 44 ).
The concept of transfer of learning, outlined by Bransford, Brown and Cocking 45 (p.82), considers that "transfer is affected by the degree to which people learn with understanding rather than merely memorize sets of facts or follow a fixed set of procedures".Therefore, the reference team could have more autonomy to use the knowledge and experiences acquired by the group in a given case.
Other matrix supporters argued that the clinical case should value the subject's context and the teams' action.The theory emerges to be approached based on the cases, and it is used to solve the subject's needs, and to amplify the team's understanding of the phenomena in question: "I particularly think that this makes much more sense, because they saw the case, they felt the case under their skin, so they get much more interested than if we say: I have a case of alcohol and drugs and we'll explain what each drug is" (Matrix supporter 5).
Reflection, after the actions were performed, was reported as being part of case discussion, in order to analyze the strategies used and build a line of reasoning.Moments like this would provide the opportunity to learn with the mistakes and to consolidate achievements.In the three meaning nuclei of the profile, we observed coherence between the capacities that the supporters considered necessary to matrix support and the objectives of this practice.Although the set of capacities is closer to the constructivist educational trends, its presence in the knowledge sharing process revealed distinct conceptions about the roles of professionals and educators in the practice.These distinctions were grouped into a second theme and represent challenges to the matrix support practice.
(ii) Challenges to the construction of the matrix supporter's practice In this theme, the meaning nuclei revolved around the "fragilities" and "obstacles" that hinder the transformation of the matrix supporter's intentions into actions.Still regarding practice, some interviewees considered the supporter as someone who should transmit knowledge, select the cases previously and prepare the contents for discussion, independently of the specific context.This is an indication of the close contact between the supporter's educational practice and the presuppositions of the liberal or directive pedagogy, as the supporter is perceived as a transmitter of knowledge without context, in the form of generic themes that are considered relevant.In these situations, the professional becomes a passive receptor who must apprehend the procedures in order to repeat them 30-31-32 .
According to Libâneo 23 , liberal pedagogy, as a typical manifestation of the class society of the capitalist system, defends the predominance of freedom and of individual interests in society.In the last 50 years, Brazilian education has presented clearly liberal tendencies, which have been intensified mainly in the pedagogical level when the roles of educators and professionals are consolidated in knowledge construction.
This pedagogical approach and the many examples of practices reported by the interviewees illustrate the effective mismatch, in the matrix support action, of the objectives that were pointed to the supporter's profile.This gap configures an ambiguity in the practices, which sometimes are close to the desired profile, and sometimes are distant.This tension has been portrayed by some supporters who, in the absence of a critical reflection on their activities, could not realize that this way of organizing matrix support puts obstacles to the autonomy and development of the reference teams.
Meaning nucleus: Fragilities in the matrix supporter's practice Concerning fragilities, the absence of discussions about the educational dimension in the education of specialists and reference team professionals was the most cited one. According to the testimonies, this is the origin of difficulties in conducting the discussions, as feelings of dispute and confusion are awakened in the supporters in relation to their role. "It's not something smooth, neither for mental health nor for a large part of the workers. No one had this in their education process […] There is a certain mystique, as people fear to go there and fail to know what to say. This is very interesting. So, the CAPs says, 'If I go there and the matter is children, I won't know what to say'. So, it's as if he had to go there and have a ready answer" (Ar.1).
Meaning nucleus: Obstacles to the matrix supporter's practice The interviewees who identified a gap between intention and action in the matrix support practices attributed it to the characteristics of their education and to the lack of permanent education processes for health professionals.The organization of the working process was also identified as an obstacle, as the lack of primary care professionals plays a preponderant role in the construction of the specialist's unidirectional action, especially in emergencies.
The frequency of emergencies is a factor that reiterates the overvaluation of the specialist to the detriment of the interaction among other professionals' knowledge: in crises, there is a tendency of viewing the specialist as the predominant professional.Thus, the reference team is reduced to a mere observer of the specialist's action and only reproduces the knowledge of the specialist who deals with the situations."[...] in some of the cases, you go there to listen and you have to give an answer immediately, but I think only supervisors with an extraordinary view can do this and are able to give an answer; many times, a ready answer" (Matrix supporter 1).
At the same time, some interviewees reported that matrix support should also be approached as a specialization: "because being a psychologist or an occupational therapist in primary care in this perspective of matrix support, we can't say we learned this at university.There are things that are much more connected with it: Which strategy do I use now to try to sensitize the team?, which is different from Which strategy do I use to deal with the patient z or y? (Matrix supporter 5).
To face the challenge of improving the education of health professionals, the current Diretrizes Curriculares Nacionais (DCN - National Curriculum Guidelines) for the curricula of undergraduate health programs amplify the concept of health, recommend education in real contexts, and include the dimensions of management and education in the professional profile, beyond health care 46. The Sanitary Reform had already revealed the need to change the professionals' education in order to guide it towards a profile that is generalist, reflective and committed to the principles of Brazil's National Healthcare System (SUS) 47. Although the Ministries of Health and Education have instituted diverse initiatives targeted at the implementation of the DCN and at the teaching-service articulation, this challenge is still far from being fully tackled 40. As a result, Permanent Education, instituted as a national policy in 2004, emerges as a possibility for the development of new capacities, supported by reflection on the daily routine of the work. The conception of Permanent Education brought by the policy aims at: "the transformation of the educational processes, of the pedagogical and health practices, and of the organization of the services, in a combined work between the health system, in its various management spheres, and the educational institutions" 48.
Based on these results, it was evident that the challenges to the construction of the matrix support practice are related to the recognition of the systems of values and meanings that ground the education and the organization of the working process in the area of health, with the purpose of giving an answer to the fragilities and obstacles to a comprehensive practice of care.
Final remarks
In the current context of the health services, matrix support has become an important tool to include specialists in primary care, as this model demands new capacities from health professionals, beyond the clinical work. For this reason, we believe that the educational dimension must be present in the work of matrix supporters through the articulation between assistance backup and technical-pedagogical support, as both axes are permeated by educational relations and involve interactions among professionals with distinct backgrounds.
The analysis, in the testimonies, of how knowledge circulates between specialists and reference teams allowed us to conclude that there is a distance between intention and action in the different forms of conducting matrix support. In the examples of matrix support reported here, it was possible to notice a strong presence of liberal or traditional principles, as knowledge transmission was seen as the main educational resource.
Table 1
Macro pedagogical trends according to the role of knowledge, of the professional and of the educator.
added: 2017-09-15T18:28:50.366Z | created: 2015-09-01T00:00:00.000 | metadata:
{
"year": 2015,
"sha1": "9aa2c1a11543c88fa6ba4dd8a535af4e9a7e4bee",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/icse/a/b6m6Rzhn3mk54nG6WWqxq5n/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9aa2c1a11543c88fa6ba4dd8a535af4e9a7e4bee",
"s2fieldsofstudy": [
"Education",
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 11503509 | source: pes2o/s2orc | version: v3-fos-license
CAPELLA'S GASTROPLASTY: metabolite and early-phase protein changes in midline and bilateral arciform approaches
BACKGROUND
Obesity has adverse health effects. Dietary reeducation does not seem to offer sustained weight loss. For appropriately selected patients, surgery may be beneficial.
AIM
To evaluate early postoperative metabolic response to surgery in patients submitted to Capella's gastroplasty using two different surgical approaches to the abdominal cavity.
PATIENTS/METHOD
Twenty patients (9 males and 11 females, aged 21 to 53 years) were randomized prior to submission to either one of the surgical access incisions (bilateral arciform or supra-umbilical midline incisions). Blood samples were collected at the beginning and end of the operation, 12 (T-12 h) and 24 hours (T-24 h) postoperatively. Dieresis and synthesis time, blood loss, planimetry of operative field, operative time, hospital stay, hemoglobin, hematocrit, lymphocytes, potassium, albumin, erythrocyte sedimentation rate, C-reactive protein, glucose, pyruvate, lactate and ketone bodies were analyzed.
RESULTS
Dieresis time was significantly decreased when the median approach was used. Total operating time, hospital stay, hematocrit, hemoglobin, lymphocyte count, potassium and albumin concentrations were similar in both groups. C-reactive protein (T-12 h), glucose and pyruvate concentrations (T-24 h) increased significantly after completion of the surgical procedure. Ketone bodies concentrations were significantly decreased 24 hours after completion of the surgical procedure.
CONCLUSION
Capella's gastroplasty induces metabolic and inflammatory changes in blood parameters. There is no evidence of technical superiority of arciform over midline incisions in this study.
INTRODUCTION
Obesity is defined as a condition in which body weight is above the normal pattern for height and skeletal frame. It is associated with excessive caloric intake and decreased energy expenditure, leading to significant weight gain with adverse health effects and diminished longevity.
Human obesity is accompanied by an outstanding increase of the number of adipocytes.Weight loss may decrease cellular volume; however, cell number remains elevated.Patients who develop early obesity present an increased number of adipose cells when compared to those with late onset of the illness (13) .
Adipocytes work together as a single unit in the release of many substances such as leptin, adipsin, angiotensinogen, prostaglandins and tumor necrosis factor (TNF-alpha), among others. Leptin has been acknowledged as an adipocyte-derived signal molecule, able to limit food intake and increase energy expenditure by interacting with specific leptin receptors located in the central nervous system and in peripheral tissues. Leptin concentration in humans is directly proportional to the mass of fatty tissue. There is a clear association between weight loss and decrease of leptin (10% weight loss may occur if leptin concentration drops to 50% of original values) (5).
Even mild degrees of obesity have adverse health effects and are associated with decreased longevity.There is already considerable evidence of links between increased production of some adipocyte factors and the metabolic and cardiovascular complications of obesity (11) .Patients with body mass index exceeding 40 have medically significant obesity and a substantial risk of health hazards with concomitant reduction in life expectancy.Dietary reeducation does not seem to offer sustained weight loss.For appropriately selected patients, surgery may be beneficial (9) .
Bariatric surgery seems to be the treatment of choice for well-informed and motivated obese patients with acceptable operative risks, who strongly desire substantial weight loss or who have severe impairments because of their weight (1) .
This study aims to evaluate early postoperative metabolic response to surgery and the usefulness of two different surgical approaches (transverse arciform and midline incisions) to the abdominal cavity in morbid obesity patients submitted to Capella's gastroplasty.
Patients and study design
The study population comprised 20 patients (9 men and 11 women aged 21 to 53 years, mean 35.5 years). Written consent was obtained from all patients before the study. The study was conducted in accordance with the Declaration of Helsinki and
Patient's randomization
Patients were randomized prior to submission to either one of the surgical access incisions (transverse arciform or supra-umbilical midline incisions).The first patient was selected by single card drawing and was submitted to standard midline incision.Next patient was submitted to transverse arciform incision.The remaining odd-even patients were allocated in an alternated fashion.
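A minimal sketch of this allocation scheme (illustrative only, not the authors' procedure; it assumes that only the first assignment is random and that the remaining patients simply alternate):

```python
import random

def allocate_incisions(n_patients, seed=None):
    """Alternating allocation: the first patient's incision is chosen at random
    (the study used a single card drawing) and the remaining patients alternate."""
    rng = random.Random(seed)
    first = rng.choice(["midline", "arciform"])
    other = "arciform" if first == "midline" else "midline"
    # remaining odd-even patients are allocated in an alternated fashion
    return [first if i % 2 == 0 else other for i in range(n_patients)]

print(allocate_incisions(20, seed=1))  # e.g. ['midline', 'arciform', 'midline', ...]
```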
Patient's care
Preoperative care included complete evaluation of general health by a team of clinical specialists and multiple preoperative exams (upper digestive tract endoscopy, spirometry, gasometry, electrocardiography, abdominal ultrasound sonography and radiological examination of the lungs and atlanto-occipital articulation). Laboratory exams included complete blood count, coagulogram, blood glucose, urea, creatinine, total proteins and fractions, transaminases, gamma-glutamyltransferase, viral markers for hepatitis, anti-HIV test, total bilirubin and fractions, alkaline phosphatase, total cholesterol and fractions, triglycerides, calcium, potassium and zinc determinations, and urinalysis. All surgical procedures were performed by the same operating team (head surgeon, two auxiliary surgeons, a scrub nurse and two anesthesiologists). Postoperative care was carried out by a multidisciplinary team composed of intensive care, cardiology, nutrition, physiotherapy and psychology specialists, along with a general care nurse.
METHODS
All patients were submitted to general inhalatory anesthesia.Preanesthetic medication included lorazepam, ranitidine, cephazoline and fraxiparine.Surgical approach was carried out according to the procedure selected during randomization.All patients were submitted to Capella's gastroplasty (combination of vertical banded gastroplasty and Roux-en-Y gastric bypass) (3) .Blood samples were collected at the beginning and at the end of the surgical procedure (T-0 and T-F, respectively) and 12 and 24 hours later.
The following parameters were considered when comparing incisions: time of dieresis (from skin to parietal peritoneum incisions); time of synthesis (from parietal peritoneum to skin sutures); dry and wet sponge weights for blood loss determinations; planimetry of operative field, length of time required for jejunojejunostomy and making of gastric pouch; visualization of gastroesophageal junction.
Biochemical determinations
Heparinized blood samples collected for enzymatic determinations were deproteinized in vials containing perchloric acid (HClO4, 10%) and kept cold until centrifuged. Following neutralization, the supernatant fractions were used as samples for the enzymatic analyses (blood concentrations of glucose, pyruvate, lactate, acetoacetate and 3-hydroxybutyrate). Glucose concentrations were measured according to SLEIN's (10) method. Pyruvate and acetoacetate concentrations were measured according to the methods of HOHORST et al. (6) and WILLIAMSON et al. (14). Lactate concentrations were measured according to HOHORST's (7) method, and β-hydroxybutyrate was measured according to the method of WILLIAMSON et al. (14). Additional heparinized blood samples were used for complete blood count, albumin, potassium, sedimentation rate and C-reactive protein (CRP) determinations.
Statistical analyses
Friedman's and Mann-Whitney statistical tests were used for the statistical analyses. Results were expressed as mean ± SEM. Values of P < 0.05 were accepted as statistically significant.
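A minimal sketch (not the authors' code) of how such analyses are typically run in Python with SciPy, using made-up illustrative numbers rather than the study's data: Mann-Whitney U compares the two incision groups, and Friedman's test compares repeated measures of one parameter across the four sampling times.

```python
import numpy as np
from scipy import stats

# Hypothetical dieresis times (minutes) for the two groups; not data from the study.
midline  = np.array([4.1, 3.8, 4.5, 3.9, 4.2, 4.0, 3.7, 4.3, 4.1, 3.9])
arciform = np.array([6.2, 5.9, 6.8, 6.1, 6.5, 6.0, 5.8, 6.4, 6.3, 6.1])
u_stat, p_between = stats.mannwhitneyu(midline, arciform)  # between-group comparison

# Hypothetical repeated measures of one parameter at T-0, T-F, T-12 h and T-24 h.
rng = np.random.default_rng(0)
t0, tf, t12, t24 = (rng.normal(loc, 1.0, 10) for loc in (14.0, 13.2, 12.5, 12.8))
chi2, p_within = stats.friedmanchisquare(t0, tf, t12, t24)  # within-group, 4 time points

print(f"mean ± SEM (midline): {midline.mean():.2f} ± {stats.sem(midline):.2f}")
print(f"Mann-Whitney P = {p_between:.4f}, Friedman P = {p_within:.4f}")
```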
RESULTS
Mean values for the time spent on abdominal wall dieresis are shown in Table 1. There was a significant decrease (P < 0.05) in dieresis time when the median approach was used. Blood loss was similar in both groups (Table 1). There was no statistically significant difference between groups when comparing surgical exposure area (planimetry), abdominal incision closure time (synthesis), length of time required for jejunojejunostomy or gastric pouch making, and visualization of the gastroesophageal junction. Total operating time and hospital stay were similar in both groups.
Hematocrit (Table 2) and hemoglobin (Table 3) were significantly decreased (P < 0.001) 12 hours later (T-12 h) compared to T-0 when all cases were included in one large group (n = 20), despite the absence of significant differences between groups. The same results were found when comparing lymphocyte counts (Table 4).
Potassium concentrations were alike in patients submitted to median or arciform incisions. However, there was a decrease in kalemia at T-24 h compared to T-0 when all (n = 20) patients were analyzed together (Table 5).
Blood albumin concentration (Table 6) and sedimentation rate (Table 7) were significantly decreased in T-F compared to T-0.CRP concentrations increased significantly 24 hours after completion of surgical procedure (Table 8).
DISCUSSION
The ideal abdominal incision should allow easy access to any area of the abdominal cavity, appropriate operative field for safe surgical maneuvers and minimal trauma to tissues, preserving neurovascular structures.
The explanation for the decreased dieresis time and blood loss of midline incisions compared to arciform incisions rests on the anatomy. Midline incisions are easier to execute, bleed less, and can be enlarged easily if necessary. In addition, access to any organ of the abdominal cavity or retroperitoneal space is facilitated. Arciform incisions, on the other hand, require muscle transection, resulting in greater blood loss and a longer period of time for their execution.
The fall in hematocrit and hemoglobin levels in both groups may be explained by hemodilution secondary to venous hydration and trans-operative oliguria, with an increase in plasma volume. The decrease in lymphocyte count in both groups during the postoperative period is related to surgical trauma, which may lead to immunosuppression. BEZERRA (2), in a comparative study of laparotomic versus laparoscopic procedures in colorectal operations, reported a fall in leukocyte count early in the operation followed by a significant elevation of the leukocyte count at the end of the operation in both groups studied, concluding that the two methods alter the inflammatory response.
Potassium concentrations in both midline and arciform incisions presented no statistical significance despite the fact that the trauma imposed by a transverse abdominal incision is known to be superior to that one observed in midline incisions.Muscle section and possible potassium liberation with rise in serum potassium would be expected, due to tissue lesion.In contrast to this expectation, a decrease in serum potassium was observed 24 h postoperatively as compared to T-0 when all 20 patients were considered.
Albumin is responsible for the maintenance of plasma osmotic pressure and the transportation of small molecules. Protein catabolism (proteolysis) due to trauma supplies amino acids for the synthesis of protein and glucose. Increased glucose requirements secondary to trauma shift the metabolic pathway to gluconeogenesis, leading to accelerated albumin consumption (hypoalbuminemia) postoperatively (13). Albumin is also a negative acute-phase reactant and has its plasma concentration decreased following trauma due to decreased hepatic synthesis, which is diverted to the production of positive acute-phase reactant proteins such as C-reactive protein (8). The fall in albumin concentrations at the end of the surgical procedure (T-F) may be explained by decreased hepatic synthesis during surgery.
Erythrocyte sedimentation rate (ESR) is not a good inflammatory indicator for surgical trauma, as it may be affected by many factors.Elevation of ESR is usually related to increase of CRP or leukocytosis (12) .The significant fall of erythrocyte sedimentation rate at the end of the surgical procedure followed by return to normal levels during the postoperative period might be related to its unspecific response to trauma.
Systemic inflammatory response to trauma causes increased production of acute-phase proteins. CRP is an acute-phase protein and reaches its highest concentration within 24 hours, as occurred in the present study (4).
Glucose concentrations were similar in both groups.Significant postoperative (T-12 h) increase in glucose concentration may be explained by increased liver glycogenolysis.This may be related to glucagon and epinephrine inhibition of insulin secretion and peripheral carbohydrate breakdown by growth hormone.
The significant increase of blood pyruvate concentrations 12 h postoperatively suggests decreased utilization of this metabolite for energy production by peripheral tissues in the early postoperative period. The absence of alterations in lactacemia suggests that the surgical trauma was mild, as severe trauma is accompanied by an increase in lactacemia due to activation of the Cori cycle (8).
Ketone bodies (acetoacetate and 3-hydroxybutyrate) are important alternative sources of energy to glucose. They are formed by a specific hepatic biochemical pathway (ketogenesis), and acetyl-CoA is the main substrate for their synthesis. Fasting alone leads to increased concentrations of ketone bodies (hyperketonemia) in the first 24-48 hours. However, when fasting is accompanied by trauma, the hyperketonemic response to fasting fails to occur. The catabolic phase of trauma promotes, via interleukin-1, elevation of insulin, which in turn leads to a fall in hepatic ketogenesis (4). The significant decrease in ketone bodies concentrations measured 24 hours after the surgical procedure may be explained by failure of the hyperketonemic response to this fasting period in the presence of surgical trauma.
CONCLUSIONS
Capella's gastroplasty induces metabolic and inflammatory changes in blood parameters, with a significant fall of hematocrit, hemoglobin, potassium, albumin, 3-hydroxybutyrate and ketone bodies, along with significant elevation of erythrocyte sedimentation rate, C-reactive protein, glucose and pyruvate concentrations.
FIGURE 1 - Postoperative glucose concentrations according to type of incision and postoperative time
FIGURE 4 - Postoperative ketone bodies concentrations according to type of incision and postoperative time
HEADINGS - Obesity, morbid, surgery. Obesity, morbid, metabolism. Gastroplasty.
TABLE 1 -
Type of incision and time spent on abdominal wall dieresis and blood loss during surgical procedure * P <0.05 compared to (A)
TABLE 2 -
Types of incision and hematocrit values during and after (p.o.) surgical procedure
TABLE 3 -
Types of incision and hemoglobin values during and after (p.o.) surgical procedure
TABLE 4 -
Types of incision and lymphocyte count during and after (p.o.) surgical procedure
TABLE 5 -
Types of incision and potassium concentrations during and after (p.o.) surgical procedure * P <0.05 when compared to T-0
TABLE 6 -
Types of incision and albumin concentrations during and after (p.o.) surgical procedure * P <0.001 when compared to T-0
TABLE 7 -
Types of incision and erythrocyte sedimentation rate during and after (p.o.) surgical procedure
TABLE 8 -
Types of incision and C-reactive protein concentrations during and after (p.o) surgical procedure T-12 h /T-24 h :12 and 24 h p.o. * P <0.001 when compared to T-0 * P <0.001 compared to T-0
added: 2018-04-03T00:25:45.939Z | created: 2004-10-01T00:00:00.000 | metadata:
{
"year": 2004,
"sha1": "19c5b46a07d80f7d16e62b0bc10516fa8be63979",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ag/a/W8Rn66Tn8GFSxYLZ8KCPNKG/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "19c5b46a07d80f7d16e62b0bc10516fa8be63979",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 245060474 | source: pes2o/s2orc | version: v3-fos-license
Priming effect of exogenous ABA on heat stress tolerance in rice seedlings is associated with the upregulation of antioxidative defense capability and heat shock-related genes
Heat stress is a major restrictive factor that suppresses rice production. In this study, we investigated the potential priming effect of exogenous abscisic acid (ABA) on heat tolerance in rice seedlings. Seedlings were pretreated with 10 µM ABA by root-drenched for 24 h and then subjected to heat stress conditions of 40 °C day/35 °C night. ABA pretreatment significantly decreased leaf withering by 2.5–28.5% and chlorophyll loss by 12.8–35.1% induced by heat stress in rice seedlings. ABA pretreatment also mitigated cell injury, as shown by lower malondialdehyde content, relative electrolytic conductivity, and expression of cell death-related genes OsKOD1, OsCP1, and OsNAC4, while expression of OsBI1, a cell death-suppressor gene, was upregulated by ABA pretreatment. Moreover, ABA pretreatment improved antioxidant defense capacity, as shown by an obvious upregulation of ROS-scavenging genes and a decrease in ROS content (O2− and H2O2), and downregulation of the OsRbohs genes. Application of fluridone, an ABA biosynthesis inhibitor, increased membrane injury and the accumulation of ROS under heat stress. Exogenous antioxidants (proanthocyanidins) significantly alleviated leaf withering by decreasing ROS overaccumulation and membrane injury induced by heat stress. In addition, ABA pretreatment significantly superinduced the expression of ABA-responsive genes SalT and OsWsi18, the ABA biosynthesis genes OsNCED3 and OsNCED4, and the heat shock-related genes OsHSP23.7, OsHSP17.7, OsHSF7, and OsHsfA2a. Taken together, these results suggest that exogenous ABA has a potential priming effect for enhancing heat stress tolerance of rice seedlings mainly by improving antioxidant defense capacity and heat shock-related genes.
Introduction
Global warming has become a severe ecological and environmental problem due to the development of industry and population growth. Heat stress is caused by extremely high temperatures or by lasting periods of high-temperature weather, which have resulted from global warming (Quint et al. 2016; Xu et al. 2018). Heat stress poses a serious threat to crop production worldwide, and the yields of wheat, rice, and maize were decreased by 6.0%, 3.2%, and 7.4%, respectively, per 1 °C increase in the mean temperature (Zhao et al. 2017a; Janni et al. 2020). Rice (Oryza sativa L.) is a staple food crop for most of the world's population, while the main growing season of rice occurs during the annual long hot summer season (Shi et al. 2015). Rice production during this time suffers from heat stress, which directly inhibits the survival and transplanting of late rice, as well as the heading, flowering, and grain-filling of early season rice (Yang et al. 2020). Furthermore, heat stress results in a significant decrease in grain yield, total milled rice yield, head rice yield and total milling revenue with increasing average growing temperature (Lyman et al. 2013). Thus, meeting the demand of an expanding population under the current situation of global warming remains a huge challenge for food production.
The effect of heat stress on rice production occurs throughout the entire growth stage. The germination of rice seeds was suppressed when the growth temperature was ≥ 35 °C, as shown by a significant decrease in the germination rate and in bud growth (Yang et al. 2021). The optimal temperature for rice seedlings ranges from 22 to 28 °C, while the growth of rice seedlings is severely inhibited at growth temperatures of ≥ 35 °C (Soda et al. 2018). Heat stress results in withering, browning, abnormalities, and water loss of rice leaves, suppresses the growth of seedlings and roots, and causes complete wilting of rice plants (Liu et al. 2016; Kilasi et al. 2018). Heat stress suppresses photosynthetic efficiency by decreasing chlorophyll content, disturbing the combination of chlorophyll-protein complexes, and damaging the photosystem structure (Fan et al. 2017). In addition, heat stress causes excess accumulation of reactive oxygen species (ROS), which results in severe damage to the membrane and cell death (Zhao et al. 2017b). Rice plants are more sensitive to heat stress at the reproductive and grain-filling stages. Heat stress results in the degradation of flowering florets, abortion of pollen, and loss of pollen viability in rice at the booting stage (Zhang et al. 2018), and suppresses the grain-filling rate in rice at the grain-filling stage (Wang et al. 2019). Thus, it is necessary to explore an effective pathway for the improvement of heat tolerance in rice.
Plants generate and accumulate ROS such as superoxide anions (O2−) and hydrogen peroxide (H2O2) under various environmental stress conditions, such as drought, salinity, alkalinity, and extreme temperature (Choudhury et al. 2017). These ROS molecules play an important role in the regulation of development and adaptation to the environment (Mittler 2017). However, excess accumulation of ROS causes oxidative stress in plants, which results in severe damage to the plant cellular membrane, RNA, DNA, and proteins (Sewelam et al. 2016). We previously reported that overaccumulation of ROS in rice roots is a main limiting factor in cell damage and plant growth inhibition in rice seedlings under alkaline stress conditions. ROS accumulation is an important harmful pathway in the physiological effects of heat stress on plants. Heat stress causes a remarkable upregulation of respiratory burst oxidase homolog (RBOH) genes and an accumulation of ROS in rice seedlings, and excessive ROS levels disturb the balance of ROS production and scavenging, which further results in membrane lipid peroxidation, cell damage, changes in a series of antioxidant enzymes, and even plant death (Liu et al. 2019). Previous studies have shown that excessive accumulation of ROS induced by heat stress causes the decline of germination rate, pollen viability, spikelet fertility, and grain chalkiness in rice (Suriyasak et al. 2017; Zhao et al. 2017b; Yang et al. 2021), indicating that ROS levels are involved in the regulation of yield formation in rice under heat stress conditions, and that enhancing the antioxidative defense capacity promotes anther development and yield formation under heat stress conditions (Dwivedi et al. 2019). Therefore, studies on the improvement of rice tolerance to various stress factors by reducing the oxidative stress induced by overaccumulation of ROS are still required for further insight and will provide a potentially useful pathway for rice field production in the future (Kerchev et al. 2015).
The phytohormone abscisic acid (ABA) plays an important role in the regulation of plant growth and adaptation to various stress factors (Dar et al. 2017; Lang et al. 2021). One important pathway of ABA action in stress tolerance is its priming effect on plants (Vishwakarma et al. 2017; Wei et al. 2017; Liu et al. 2019), which confers an enhanced potential ability to mount defense responses to impending stress factors (Aranega-Bou et al. 2014). Priming is a process in which plants are pretreated with a range of chemical compounds before being exposed to various external stresses, and the state of such chemically treated plants is referred to as "primed" (Beckers and Conrath 2007; Aranega-Bou et al. 2014). The priming effect of ABA has been demonstrated in rice: seeds or seedlings pretreated with ABA showed improved survival, growth, and grain yield under saline-alkaline stress conditions (Gurmani et al. 2011; Wei et al. 2015, 2017). Further analysis of the underlying mechanism of ABA priming showed that ABA priming potentiates the downstream antioxidant defense capacity and the expression of stress tolerance-related genes for an increased adaptive response to alkaline stress (Liu et al. 2019).
ABA also functions in crops' response to heat stress and plays a vital role in crop production under climate warming conditions (Suzuki et al. 2016; Li et al. 2021). Application of exogenous ABA improves plant heat tolerance by maintaining water balance, regulating stomatal conductance, and regulating gene expression (Li et al. 2014). Additionally, ABA functions in the regulation of rice at the reproductive and grain-filling stages (Islam et al. 2018; Li et al. 2021). The ROS signaling pathway plays an important role in the plant response to environmental stress, and excess ROS accumulation induced by various stress factors results in membrane injury, root damage, and even the death of plants (Qiu et al. 2019). ABA contributes to the regulation of ROS levels in plants' responses to environmental stress factors (Liu et al. 2019), as well as under high-temperature conditions (Hu et al. 2010), and ABA-deficient mutants exhibited more sensitivity to heat stress (Larkindale et al. 2005). These studies demonstrate a strong correlation between ABA application and the regulation of ROS levels in plants' responses to stress conditions (Suzuki et al. 2016; Liu et al. 2019). Current studies on the relationship between ABA and heat stress in rice have mainly focused on the ABA-dependent signaling pathway and the spraying of exogenous ABA. Research on the ABA priming effect on stress tolerance in rice has mainly been validated in the response to salt or alkaline stress (Wei et al. 2015, 2017; Liu et al. 2019).
This study aimed to gain insights into the effect and mechanism of ABA priming on heat tolerance in rice by focusing on the effect of ABA priming on ROS-formation or ROS-scavenging pathway. We hypothesized that the priming effect of exogenous ABA on heat stress tolerance in rice seedlings is associated with improvement of antioxidative defense capacity and expression of heat stress-tolerant genes.
Plant material and growth conditions
Rice cultivar Huanghuazhan, an elite cultivar suitable for cultivation in eastern China, was used in this study. It was bred by crossing 'Huangxinzhan' with 'Fenghuazhan' and is resistant to heat stress (China Rice Data Center). Rice seeds were surface-sterilized with 75% (v/v) alcohol for 5 min and rinsed with deionized water five times. After that, seeds were immersed in water for 2 days and then sprinkled onto a petri dish with wet filter paper for pre-germination in an incubator under dark conditions at 28 °C for 24 h. Eighteen uniformly germinated seeds were transplanted onto a multiwell plate floating on a 320-mL cup containing deionized water for 7 days and then grown in half-strength Kimura B nutrient solution (Miyake and Takahashi 1983) for another 7 days. All rice seedlings were grown in a controlled growth chamber under the following conditions: 28 °C day/22 °C night, 12 h photoperiod, and 350 µmol photons m−2 s−1 light intensity.
ABA pretreatment and heat stress treatment
ABA (Sigma, Inc., St. Louis, MO, USA) was dissolved in a small amount of absolute ethanol and then diluted with deionized water to the desired concentrations (Wei et al. 2015; Liu et al. 2019). Rice seedlings at approximately the three-leaf stage were pretreated with 10 µM ABA or deionized water by root-drench for 24 h. After being rinsed with deionized water, these two sets of rice seedlings were transferred to the control or heat stress conditions. Thus, four treatments were set: root-drench with deionized water under unstressed conditions (CK); root-drench with 10 µM ABA under unstressed conditions (ABA); root-drench with deionized water under heat stress (HS); and root-drench with 10 µM ABA under heat stress (ABA + HS). We used a growth temperature of 40 °C to simulate heat stress. The temperature of the unstressed condition was set at 28 °C day/22 °C night, and the temperature of the heat stress condition was set at 40 °C day/35 °C night. These growth temperatures were set according to the description of rice responses to heat stress in South China by Shi et al. (2015) and Huang et al. (2017).
Treatment of rice seedlings with exogenous Fluridone and Proanthocyanidins (PC)
In this study, exogenous fluridone and PC were used to examine the mechanism of the ABA priming effect. Fluridone is an ABA biosynthesis inhibitor that has been shown to inhibit rice seedling growth under alkaline stress (Wei et al. 2015). Proanthocyanidin is an antioxidant that effectively scavenges superoxide anion radicals and hydroxyl radicals and alleviates alkalinity-induced suppression of rice seedling growth by inhibiting ROS overaccumulation (Rue et al. 2017; Zhang et al. 2017). Two-week-old rice seedlings were pretreated with deionized water, 10 µM fluridone, or 1% PC by root-drench for 24 h, and then transferred to the control or heat stress conditions described above. The treatments were set as follows: CK, Fluridone, PC, HS, Fluridone + HS, and PC + HS.
Measurement of seedling growth
Photographs of rice seedlings were taken at the indicated treatment time points. The withered leaf rate was assessed at 48, 72, and 96 h of heat stress. A leaf was scored as 1 if the whole leaf was dry and brown and as 0.5 if half of the leaf was dry and brown (Liu et al. 2020). The withered leaf rate of each treatment was expressed as the proportion of withered leaves relative to the total number of leaves in one cup.
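To make the scoring rule concrete, the following minimal sketch (a hypothetical helper, not code from the study) turns per-leaf scores into a withered leaf rate for one cup:

```python
def withered_leaf_rate(leaf_scores):
    """Withered leaf rate for one cup of seedlings.

    Each leaf is scored 1.0 (whole leaf dry and brown), 0.5 (half of the leaf
    dry and brown), or 0.0 (green), following the scoring rule described above.
    The rate is the summed score divided by the total number of leaves.
    """
    if not leaf_scores:
        raise ValueError("at least one leaf score is required")
    return sum(leaf_scores) / len(leaf_scores)

# Example: a cup with 10 leaves, 3 fully withered and 2 half withered
scores = [1.0, 1.0, 1.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
print(f"withered leaf rate = {withered_leaf_rate(scores):.2f}")  # 0.40
```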
Measurement of chlorophyll content
The chlorophyll content was measured according to the principle described by Wellburn and Lichtenthaler (1984), with some modifications as described by Liu et al. (2019). Leaf samples (0.1 g) were extracted in a 10 mL mixture of ethanol (5 mL) and acetone (5 mL) under dark conditions until the leaves whitened completely. The absorbance of the supernatant was determined at 645 and 663 nm using a spectrophotometer (UV-2700, Shimadzu, Kyoto, Japan). The total chlorophyll content per unit fresh weight was calculated using the following formula: (20.29 × A645 + 8.05 × A663) × V / (1000 × W).
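As an illustration of this calculation, here is a minimal sketch (not code from the study) that applies the formula above; treating V as the extract volume in mL and W as the fresh weight in g follows common usage but is an assumption, since the units are not stated explicitly in the text:

```python
def total_chlorophyll_mg_per_g(a645, a663, volume_ml, fresh_weight_g):
    """Total chlorophyll per unit fresh weight using the coefficients quoted
    in the text: (20.29 * A645 + 8.05 * A663) * V / (1000 * W).
    Assumes V in mL and W in g, giving mg chlorophyll per g fresh weight."""
    return (20.29 * a645 + 8.05 * a663) * volume_ml / (1000.0 * fresh_weight_g)

# Example with hypothetical absorbance readings for a 0.1 g sample in 10 mL of extract
print(total_chlorophyll_mg_per_g(a645=0.35, a663=0.62, volume_ml=10.0, fresh_weight_g=0.1))
```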
Measurement of malondialdehyde (MDA) content and relative electrolytic conductivity (REC)
The MDA content was determined by the thiobarbituric acid reaction as described by Heath and Packer (1968). Leaf samples were ground in liquid nitrogen, homogenized in 1 mL of 50 mM phosphate buffer (pH 7.8), and centrifuged at 12,000×g for 15 min. Subsequently, 400 µL of supernatant was mixed with 1 mL of 0.5% thiobarbituric acid, and the mixture was boiled for 20 min. After cooling and centrifugation, the absorbance of the resulting supernatant was measured at 532, 600, and 450 nm using a spectrophotometer (UV-2700, Shimadzu, Kyoto, Japan). The MDA concentration was calculated using the following formula: 6.45 × (A532 − A600) − 0.56 × A450. Finally, the MDA content in the leaf was calculated according to the fresh weight of the leaf for each treatment.
The relative electrolytic conductivity (REC) is an important index for evaluating membrane injury (Tantau and Dörffling 1991; Wei et al. 2015). REC was calculated from the electrical conductivity of the leaf effusion measured before and after boiling (Wei et al. 2015). Rice leaves (2 g fresh weight) were randomly selected from each treatment group and washed with deionized water to remove surface-adhered electrolytes. Leaf samples were submerged in 15 mL of deionized water in 50 mL conical tubes and kept at room temperature for 1 h. The electrical conductivity of the effusion was then measured with a DDS-12 conductivity meter (Lida Inc., Shanghai, China) and recorded as R1. The tissue samples were killed by heating the tubes in a boiling water bath for 40 min and cooled to room temperature, and the electrical conductivity of the effusion was measured again and recorded as R2. The REC was evaluated using the formula REC (%) = R1/R2 × 100%.
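The MDA and REC formulas above translate directly into code; the sketch below (hypothetical helpers with made-up readings, not code from the study) shows both calculations, with the subsequent normalization of MDA to leaf fresh weight omitted for brevity:

```python
def mda_concentration(a532, a600, a450):
    """MDA concentration from the thiobarbituric acid assay, using the
    formula quoted in the text: 6.45 * (A532 - A600) - 0.56 * A450."""
    return 6.45 * (a532 - a600) - 0.56 * a450

def rec_percent(r1, r2):
    """Relative electrolytic conductivity: conductivity before boiling (R1)
    as a percentage of conductivity after boiling (R2)."""
    return r1 / r2 * 100.0

# Hypothetical readings
print(f"MDA concentration: {mda_concentration(0.42, 0.05, 0.30):.3f}")
print(f"REC: {rec_percent(r1=85.0, r2=240.0):.1f} %")
```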
Measurement of ROS levels
The O2− contents were measured as described by Elstner and Heupel (1976) by monitoring nitrite formation from hydroxylamine in the presence of O2−, with some modifications as described by Jiang and Zhang (2001). For the determination of O2− contents, fresh leaves (0.1 g) were loaded into a 2 mL tube, frozen in liquid nitrogen, homogenized with 1 mL of 50 mM potassium phosphate buffer (pH 7.8), and centrifuged at 10,000×g for 10 min at 4 °C, and the supernatant was collected. The incubation mixture, containing 0.9 mL of 50 mM phosphate buffer (pH 7.8), 0.4 mL of 10 mM hydroxylamine hydrochloride, and 1 mL of the supernatant, was incubated at room temperature for 20 min. Then, 0.3 mL of 17 mM sulfanilamide and 0.3 mL of 7 mM α-naphthylamine were added to the incubation mixture. After reaction at room temperature for 20 min, an equal volume of ethyl ether was added and the mixture was centrifuged at 8000×g for 5 min. The absorbance of the aqueous phase was read at 530 nm to calculate the O2− content from the chemical reaction of O2− with hydroxylamine. The H2O2 contents were measured by monitoring the A415 of the titanium-peroxide complex (Brennan and Frenkel 1977). For the determination of H2O2 contents, fresh leaves (0.1 g) were loaded into a 2 mL tube, frozen in liquid nitrogen, homogenized with 1 mL of acetone, and centrifuged at 8000×g for 10 min at 4 °C, and the supernatant was collected. The incubation mixture, containing 1 mL of the supernatant, 0.1 mL of titanium sulfate, and 0.2 mL of concentrated ammonia solution, was centrifuged at 4000×g for 10 min at room temperature. The precipitate was solubilized in 1 mL of 2 mol/L H2SO4 and allowed to react at room temperature for 5 min. The absorbance of the resulting solution was read at 415 nm to calculate the H2O2 content. The analytical reagents used to measure the H2O2 and O2− contents were obtained from determination kits and used according to the manufacturer's instructions (Comin Biotechnology Co., Ltd., Suzhou, China; Liu et al. 2019).
Quantitative real-time PCR analysis

The housekeeping gene β-actin (GenBank ID: X15865.1) was used as an internal standard. PCR was conducted in a 20 µL reaction mixture containing 1.6 µL of cDNA template (50 ng), 0.4 µL of 10 mM specific forward primer, 0.4 µL of 10 mM specific reverse primer, 10 µL of 2× SYBR® Premix Ex Taq™ (TaKaRa Bio Inc.), and 7.6 µL of double-distilled H2O in a PCR machine (qTOWER2.2, Analytik Jena, Germany). The procedure was performed as follows: 1 cycle of 30 s at 95 °C; 40 cycles of 5 s at 95 °C and 20 s at 60 °C; and 1 cycle of 60 s at 95 °C, 30 s at 55 °C, and 30 s at 95 °C for melting curve analysis. The relative expression level was computed using the 2^−ΔΔCt method (Livak and Schmittgen 2001).
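For reference, the 2^−ΔΔCt calculation can be written out as in the minimal sketch below (the Ct values are illustrative only, not data from the study):

```python
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (Livak and Schmittgen 2001).

    dCt(sample)  = Ct(target, sample)  - Ct(reference, sample)
    dCt(control) = Ct(target, control) - Ct(reference, control)
    ddCt         = dCt(sample) - dCt(control)
    fold change  = 2 ** (-ddCt)
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a target gene under heat stress vs. the unstressed
# control, each normalized against the beta-actin internal standard
print(relative_expression(24.1, 18.0, 26.5, 18.2))  # ~4.6-fold induction
```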
Experimental design and statistical analyses
All of the experiments were conducted in a controlled growth chamber with five biological replicates, each consisting of 3 cups of rice seedlings with eighteen seedlings per cup. Statistical analyses were performed using the statistical software SPSS 21.0 (IBM Corp., Armonk, NY). Based on one-way analysis of variance (ANOVA), Duncan's multiple range test (DMRT) was used to compare differences in the means among treatments. The significance level was P < 0.05.
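For readers who want to reproduce the omnibus test outside SPSS, a minimal sketch is shown below with made-up replicate values; note that SciPy provides the one-way ANOVA but not Duncan's multiple range test, so only the F-test is illustrated here:

```python
import numpy as np
from scipy import stats

# Hypothetical withered-leaf-rate replicates (n = 5) for three of the treatments
ck     = np.array([0.02, 0.01, 0.03, 0.02, 0.02])
hs     = np.array([0.62, 0.58, 0.65, 0.60, 0.63])
aba_hs = np.array([0.45, 0.42, 0.48, 0.44, 0.46])

# One-way ANOVA across treatments; the post-hoc Duncan's multiple range test
# used in the study was run in SPSS and is not part of SciPy
f_stat, p_value = stats.f_oneway(ck, hs, aba_hs)
print(f"F = {f_stat:.2f}, P = {p_value:.4g}")
```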
ABA pretreatment rescued rice seedlings from wilting and death under heat stress
There was no significant difference in leaf withering with or without ABA pretreatment at 24 and 48 h of heat stress (Fig. 1A, B). Pretreatment with exogenous ABA significantly rescued rice seedlings from wilting and death, as shown by the lower withered leaf rates of seedlings pretreated with ABA at 72 and 96 h of heat stress (Fig. 1C-E); the withered leaf rate was decreased by 28.5% and 15.8%, respectively, by ABA pretreatment under heat stress (Fig. 1F). This mitigative effect was sustained up to 108 h of heat stress (Fig. S1). After 120 h of heat stress, almost all leaves were withered and there was no significant difference with or without ABA application (Fig. S1). ABA pretreatment significantly increased chlorophyll content by 11.3%, 25.9%, and 13.8% compared with seedlings without ABA pretreatment under heat stress conditions (Fig. 1G).
ABA pretreatment mitigated membrane injury induced by heat stress
Exogenous ABA pretreatment significantly mitigated cell injury, as shown by lower MDA accumulation and REC (Figs. 2A, B and S2). Compared to the HS treatment, MDA content was decreased by 22.4%, 22.1%, and 10.8%, and REC was decreased by 14.1%, 13.2%, and 7.9%, at 48, 72, and 96 h of heat stress, respectively (Fig. 2A, B). In addition, a cell death suppressor, OsBI1, was significantly downregulated and the cell death-related genes OsKOD1, OsCP1, and OsNAC4 were significantly upregulated by ABA pretreatment under heat stress conditions (Fig. 2C-F). The relative expression level was increased by 37.1%, 47.2%, and 50.2% with ABA pretreatment at 48, 72, and 96 h of heat stress, respectively (Fig. 2C).
ABA pretreatment decreased ROS accumulation and improved ROS-scavenging capacity under heat stress
Pretreatment with exogenous ABA significantly inhibited ROS accumulation, as shown by lower O2− and H2O2 contents in rice leaves under heat stress conditions (Fig. 3). Compared to the HS treatment, the content of O2− was decreased by 5.9%, 19.6%, and 22.2% (Fig. 3A), and the content of H2O2 was decreased by 8.3%, 16.5%, and 16.1%, with ABA pretreatment at 48, 72, and 96 h, respectively (Fig. 3B).
ABA pretreatment also suppressed the transcriptional expression of OsRboh genes. As shown in Fig. 4, the OsRboh family genes were clearly upregulated by heat stress, and the relative expression levels of OsRboh1, OsRboh4, OsRboh5, OsRboh6, and OsRboh7 reached higher levels. Among these OsRboh family genes, OsRboh2, OsRboh3, OsRboh5, and OsRboh7 were significantly suppressed by ABA pretreatment (Fig. 4).
We further analyzed the relative expression levels of 20 ROS-scavenging genes. As shown in Fig. 5, almost all ROS-scavenging genes were upregulated by heat stress. Furthermore, ABA pretreatment significantly further upregulated the expression levels of 16 of these ROS-scavenging genes, the exceptions being R5, R7, R13, and R20 (Fig. 5).
Exogenous ABA biosynthesis inhibitor (Fluridone) suppressed rice seedlings' growth under heat stress
As shown in Fig. 6A, the growth of rice seedlings treated with fluridone, an ABA biosynthesis inhibitor (Fluridone + HS treatment), was similar to that of the HS treatment. The withered leaf rate and chlorophyll content did not differ significantly between the Fluridone + HS and HS treatments (Fig. 6B, C), nor did the accumulation of O2− and H2O2 (Fig. 6F, G). Accumulation of MDA and membrane injury with the application of fluridone were significantly higher than in the HS treatment at 48 and 72 h of heat stress, respectively (Fig. 6D, E).
Application of exogenous antioxidant (Proanthocyanidins, PC) rescued rice seedlings from leaf withering induced by heat stress
In this study, application of exogenous PC significantly rescued rice seedlings from leaf withering, as shown by decreases of 47.8%, 44.7%, and 33.5% in the withered leaf rate at 48, 72, and 96 h compared to the HS treatment (Fig. 6A, B). Chlorophyll content was increased by 13.6%, 31.3%, and 34.8% with the application of PC under heat stress conditions, compared to the HS treatment (Fig. 6C). In addition, membrane injury was significantly mitigated by PC, as shown by the lower MDA content and membrane injury in rice seedlings of the PC + HS treatment (Fig. 6D, E). Consistently, the accumulation of O2− and H2O2 was decreased by 25.3-41.1% and 39.8-45.6% with the application of PC, compared to the HS treatment (Fig. 6F, G).

Fig. 2 Two-week-old rice seedlings were root-drenched with or without 10 µM ABA for 24 h and then subjected to unstressed or heat stress conditions. Malondialdehyde (MDA) content (A) and relative electrolytic conductivity (REC) (B) were measured at the indicated treatment hours (values are means ± SD, n = 5). Expression levels of the cell death-related genes OsBI1 (C), OsKOD1 (D), OsCP1 (E), and OsNAC4 (F) were measured by quantitative real-time PCR using OsACT1 as an internal standard; expression is shown as fold change relative to the unpretreated control (CK) (values are means ± SD, n = 3). Different letters on the columns represent significant differences (P < 0.05) between treatments based on Duncan's test.
ABA pretreatment upregulated stress tolerance-related genes under heat stress
The ABA signaling pathway was indeed activated by heat stress and ABA pretreatment, as shown by the upregulation of two ABA-responsive genes, Salt and OsWsi18 (Fig. 7A, B). The expression levels of Salt and OsWsi18 were significantly superinduced, by 34.2-47.8% and 25.9-26.9%, respectively, with ABA pretreatment compared to the HS treatment (Fig. 7A, B). The expression levels of two ABA biosynthesis genes, OsNCED3 and OsNCED4, were increased by 40.8-71.3% and 32.5-54.0% by ABA pretreatment, compared to the HS treatment (Fig. 7C, D).
To gain further insights into the mechanism of ABA pretreatment under heat stress, two heat shock protein (HSP) genes, OsHSP23.7 and OsHSP17.7, and two heat shock transcription factor (HSF) genes, OsHSF7 and OsHsfA2a, were analyzed in this study. All of these stress tolerance-related genes were significantly upregulated by the ABA, HS, and ABA + HS treatments, and their relative expression levels were significantly further upregulated by ABA pretreatment under heat stress conditions (Fig. 7E-H). The relative expression levels of OsHSP23.7, OsHSP17.7, OsHSF7, and OsHsfA2a were increased by 28.4-36.9%, 32.9-49.7%, 31.0-42.6%, and 33.3-50.5%, respectively, with ABA pretreatment under heat stress conditions (Fig. 7E-H).

Fig. 4 Two-week-old rice seedlings were root-drenched with or without 10 µM ABA for 24 h and then subjected to unstressed or heat stress conditions. Expression levels of the ROS generation-related genes OsRboh1 through OsRboh9 were measured at the indicated treatment hours by quantitative real-time PCR using OsACT1 as an internal standard; expression is shown as fold change relative to the unpretreated control (CK) (values are means ± SD, n = 3). Different letters on the columns represent significant differences (P < 0.05) between treatments based on Duncan's test.
Discussion
Heat stress is characterized by extreme or prolonged high temperatures and has become a major meteorological threat to crop production. Heat stress results in severe inhibition of crop growth and yield formation, as shown by increased leaf withering and death (Wei et al. 2012; Kilasi et al. 2018; Liu et al. 2018), damage to the cell membrane and photosynthetic structures (Essemine et al. 2017; Soda et al. 2018), impaired pollen swelling (Das et al. 2014; Wang et al. 2019), and reduced spikelets (Zhang et al. 2016, 2018) and grain filling (Chen et al. 2017; Suriyasak et al. 2017). Recently, it was shown that the application of exogenous phytohormones alleviates heat-induced damage in plants and enhances plant heat tolerance. ABA plays an important role in crops' responses to environmental stress. Previous studies have reported the priming effect of exogenous ABA on tolerance to alkaline stress in rice seedlings (Wei et al. 2015, 2017; Liu et al. 2019), and we previously showed that rice seeds soaked with exogenous ABA exhibited significantly improved growth under lasting heat stress conditions (Yang et al. 2021). In the present study, pretreatment of rice seedlings with exogenous ABA significantly mitigated heat-induced leaf withering (Fig. 1), membrane injury (Fig. 2), and overaccumulation of ROS (Figs. 3 and 4), and improved ROS-scavenging capability (Fig. 5). In addition, there was evidence that application of the ABA biosynthesis inhibitor fluridone compromised tolerance to heat stress in rice seedlings (Fig. 6), while application of the antioxidant PC improved tolerance of the seedlings to heat stress (Fig. 6). Pretreatment with ABA also upregulated the expression of genes related to ABA signaling, heat shock proteins, and heat shock transcription factors (Fig. 7). These data collectively suggest that pretreatment with exogenous ABA enhances heat tolerance in rice seedlings mainly by improving ROS-scavenging capability and upregulating heat shock-related genes (Fig. 8).

Fig. 5 ABA priming increased ROS-scavenging capacity under heat stress. Two-week-old rice seedlings were root-drenched with or without 10 µM ABA for 24 h and then subjected to unstressed or heat stress conditions. Expression levels of 20 ROS-scavenging-related genes (R1-R20) were measured at the indicated treatment hours by quantitative real-time PCR using OsACT1 as an internal standard; expression is shown as fold change relative to the unpretreated control (CK) (values are means ± SD, n = 3). Different letters on the columns represent significant differences (P < 0.05) between treatments based on Duncan's test.
ABA is an important "stress phytohormone" in plants, as evidenced by its action under various stress conditions such as drought, salt, alkali, cold, and high temperature (Dar et al. 2017; Vishwakarma et al. 2017). Exogenous ABA plays a vital role in the improvement of stress tolerance when applied by multiple methods, such as foliar spray, addition to the nutrient solution, or seed soaking (Gurmani et al. 2011). An important mechanism by which ABA enhances stress tolerance in plants is the priming effect, which helps plants acquire a potential capacity for enhanced defense responses to subsequent stress factors (Aranega-Bou et al. 2014; Wei et al. 2017). This priming effect has recently been validated in the rice response to salt or alkali stress: seed presoaking or root drenching with exogenous ABA significantly improved the survival rate, plant growth, and grain yield of rice (Gurmani et al. 2011; Wei et al. 2015, 2017). Application of exogenous ABA has an active effect in plants' responses to heat stress (Islam et al. 2018). However, few studies have reported on the priming effect of ABA in heat stress responses. We previously reported that ABA primes rice seeds for enhanced heat stress tolerance, with ABA presoaking improving ROS-scavenging capacity, inhibiting ROS overaccumulation, and mitigating membrane injury (Yang et al. 2021). Results of the present study showed that the ABA-responsive genes Salt and OsWsi18 (Fig. 7A, B) and the ABA-biosynthesis genes OsNCED3 and OsNCED4 (Fig. 7C, D) were significantly upregulated by heat stress, and that application of the ABA inhibitor suppressed rice seedling growth (Fig. 6), which indicated that ABA signaling indeed participates in the response to heat stress in rice seedlings. However, rice seedlings withered or eventually died when heat stress lasted for more than 4 days (Fig. 1), indicating that the level of ABA signaling activated by heat stress alone may not be sufficient to effectively cope with the heat stress factor. Nevertheless, ABA pretreatment upregulated the expression of ABA-responsive and ABA-biosynthesis genes to a higher degree (Fig. 7), along with a large increase in ROS-scavenging genes (Fig. 5) and heat shock-related genes (Fig. 7), and a remarkable decrease in ROS accumulation (Fig. 3) and cell death in rice seedlings under heat stress conditions (Fig. 2). These results suggest that exogenous ABA enhances tolerance to heat stress in rice seeds or seedlings through a priming effect that potentiates multiple downstream pathways in the response to heat stress.

Fig. 6 Effect of an exogenous ABA biosynthesis inhibitor (fluridone) and an antioxidant (proanthocyanidins, PC) on rice seedling growth under heat stress conditions. Two-week-old rice seedlings were root-drenched with or without 10 µM fluridone or 1% proanthocyanidins for 24 h and then subjected to unstressed or heat stress conditions. Photographs of seedling growth (A) were taken at 48, 72, and 96 h. Withered leaf rate (B), chlorophyll content (C), REC (D), MDA content (E), O2− content (F), and H2O2 content (G) were measured at 48, 72, and 96 h (values are means ± SD, n = 5). Different letters on the columns represent significant differences (P < 0.05) between treatments based on Duncan's test.

Fig. 7 ABA priming upregulated the transcriptional expression of stress tolerance-related genes under heat stress. Two-week-old rice seedlings were root-drenched with or without 10 µM ABA for 24 h and then subjected to unstressed or heat stress conditions. Expression levels of the ABA-responsive genes Salt (A) and OsWsi18 (B), the ABA biosynthesis genes OsNCED3 (C) and OsNCED4 (D), and the stress tolerance-related genes OsHSP23.7 (E), OsHSP17.7 (F), OsHSF7 (G), and OsHsfA2a (H) were measured at 72 h by quantitative real-time PCR using OsACT1 as an internal standard; expression is shown as fold change relative to the unpretreated control (CK) (values are means ± SD, n = 3). Different letters on the columns represent significant differences (P < 0.05) between treatments based on Duncan's test.
ROS play a vital role in the regulation of plants' responses to various stress factors (Choudhury et al. 2017; Mittler 2017). At low levels, ROS serve as signaling messengers in a series of physiological processes required for growth regulation and stress responses (Sewelam et al. 2016; Mittler 2017). However, environmental stress induces the overaccumulation of ROS in cells, which results in oxidative stress and even cell death in plants (Choudhury et al. 2017; Zhang et al. 2017). In rice, excessive accumulation of ROS has been identified as a key causal factor in the inhibition of seed germination and seedling growth under various stress conditions as a result of oxidative stress, especially severe cellular damage to roots (Guan et al. 2017; Zhang et al. 2017). Heat stress causes multiple physiological effects in rice, including membrane and photosynthesis damage and disturbance of ROS accumulation and carbohydrate metabolism. We previously showed that increased intracellular ROS levels were the primary cause of the inhibition of seed germination and bud growth under lasting heat stress conditions (Yang et al. 2021). In the present study, heat stress caused a remarkable increase of ROS in rice seedlings, as shown by the gradually rising accumulation of O2− and H2O2 in leaves at the indicated times (Fig. 3A, B), as well as the upregulation of a series of OsRboh genes (Fig. 4). Meanwhile, rice seedlings presented significant membrane injury, as shown by the increases in MDA and REC (Fig. 2A, B) and in several cell death-related genes, OsKOD1, OsCP1, and OsNAC4 (Fig. 2D-F), under heat stress conditions. In addition, several ROS-scavenging genes were significantly upregulated by heat stress (Fig. 5). These results indicated that the ROS signaling pathway was activated in the response to heat stress in rice seedlings; however, ROS levels that were too high in turn led to severe injury to the cell membrane and finally resulted in withering and even death of rice seedlings (Fig. 1). Application of the exogenous antioxidant PC significantly rescued rice seedlings from death by decreasing ROS content and membrane injury (Fig. 6), indicating that overaccumulation of ROS is an important mechanism of the heat-induced inhibition of rice seedlings. Nevertheless, rice seedlings pretreated with ABA showed significantly improved antioxidative defense capacity, as shown by the upregulation of a series of ROS-scavenging genes (Fig. 5), and decreased ROS accumulation (Fig. 3A, B) and membrane injury (Fig. 2), achieving an effect similar to that of PC. In contrast, the application of fluridone was ineffective in decreasing ROS accumulation and membrane injury (Fig. 6). These data demonstrate that ABA primes for enhanced heat tolerance in rice seedlings mainly by improving ROS-scavenging capacity (Fig. 8), which is consistent with our previous study on alkaline stress (Liu et al. 2019).
ROS levels in plants are co-determined by ROS formation, which is mainly regulated by the RBOH genes, and by the scavenging pathway, which is constituted by antioxidant enzymes (Choudhury et al. 2017). ROS formation may be induced by various stress factors as well as by ABA, while ROS levels in turn affect ABA biosynthesis and catabolism (Ishibashi et al. 2015; Suriyasak et al. 2017). Thus, the "cross-effect" of ROS and ABA levels plays a vital role in plants' responses to environmental stress conditions (Ye et al. 2011; Liu et al. 2019; Zhao et al. 2021). In this study, almost all of the OsRboh genes were upregulated during the heat stress process, indicating that heat stress resulted in the accumulation of ROS by inducing the transcriptional expression of OsRboh genes in rice seedlings. Among these OsRboh genes, OsRboh1, OsRboh4, OsRboh6, OsRboh8, and OsRboh9 were induced by ABA priming and heat stress, which indicated that ABA induces the expression of OsRboh genes, increasing ROS levels in the regulation of plant growth and the response to stress factors. Nevertheless, OsRboh2, OsRboh3, OsRboh5, and OsRboh7 were suppressed by ABA pretreatment under heat stress (Fig. 4), which may point to another potential mechanism within the "cross-effect" of ROS and ABA in plants' responses to environmental stress: ABA priming may inhibit the expression of OsRboh2, OsRboh3, OsRboh5, and OsRboh7 to decrease ROS formation under heat stress conditions (Fig. 8). In further studies, it would be interesting to gain deeper insight into the correlation between ROS formation and ABA levels by using mutants or transgenic plants affected in the ROS or ABA pathways.

Fig. 8 A schematic mechanism of ABA action in the response to heat stress in rice seedlings. Heat stress-induced overaccumulation of ROS in leaves causes severe membrane injury in rice seedlings and even plant death. Exogenous ABA application super-induces the ABA signal in rice, improving the antioxidant defense capability to inhibit ROS overaccumulation and upregulating the expression of heat shock-related genes for enhanced tolerance to heat stress.
Reprogramming gene expression is an important pathway for plants to cope with multiple environmental stress conditions. Recently, several genes have been identified that enhance heat tolerance in plants (Hoang et al. 2019; Su et al. 2019). Heat shock proteins and heat shock transcription factors are known as vital defense mechanisms for plants and animals to resist heat stress, and numerous studies have demonstrated that overexpression of HSP or HSF genes contributes to improved stress tolerance in rice (Cheng et al. 2015; Liu et al. 2015). ABA has a potential regulatory effect on the genetic network underlying plants' responses to stress (Liu et al. 2019). In this study, the ABA-responsive genes and ABA biosynthesis genes were superinduced by ABA pretreatment under heat stress (Fig. 7A-D), indicating that the ABA signaling pathway was indeed activated by ABA priming. In addition, two HSP genes, OsHSP23.7 and OsHSP17.7, and two HSF genes, OsHSF7 and OsHsfA2a, were significantly upregulated by ABA priming under heat stress (Fig. 7E-H). These data represent another important mechanism of the ABA priming effect, indicating that ABA is involved in the multiple-gene transcriptional regulatory network in plants under heat stress conditions. As shown in Fig. 8, heat stress induced the expression of OsRboh genes in rice seedlings, which caused overaccumulation of ROS in leaves and further resulted in severe membrane injury, as shown by greater cell death and plant death of rice seedlings. Exogenous ABA application super-induced the ABA signal, inhibiting the expression of OsRboh2, OsRboh3, OsRboh5, and OsRboh7 and upregulating the antioxidant defense capability to decrease ROS accumulation in leaves for enhanced tolerance to heat stress, achieving the same effect as the antioxidant (PC). In addition, ABA application upregulated the expression of HSP- and HSF-related genes, thereby activating heat shock protein activity.
In summary, ABA priming super-induced the ABA signal in rice seedlings under heat stress, greatly upregulating ROS-scavenging capability and the expression of heat shock-related genes and thereby increasing the adaptive response to heat stress.
Heart Rate Characteristic Index Monitoring for Early Detection of Infections in Very Low Birth Weight Infants
The aim of this study is 1) to determine the sensitivity and specificity of continuous heart rate characteristics (HRC) monitoring in the detection of infections and 2) to evaluate whether HRC monitoring detects infections prior to the onset of clinical symptoms in very low birth weight (VLBW) infants. A retrospective cohort study was conducted analyzing HRC scores and episodes of infection for VLBW infants in the Neonatal Intensive Care Unit (NICU) at Cincinnati Children's Hospital Medical Center from January 2015 through May 2016. HRC scores were acquired using the HRC monitoring system and entered into the electronic medical record by bedside staff. Culture-positive and culture-negative clinical infections were recorded. Positive HRC scores were defined as an increase of 1 point above the baseline or the first rise above 2. HRC scores within 24 hours and also within the 5-day period before the start of antibiotics for infections were analyzed for sensitivity, specificity, positive predictive value, and negative predictive value for the detection of neonatal infection. An HRC score increase of 1 point above the baseline, or the first rise above 2, in the 24 hours before the start of antibiotics for infectious events had a sensitivity of 68.0% and a specificity of 10.8%. Positive predictive value (PPV) and negative predictive value (NPV) were 34.0% and 33.3%, respectively. The PPV and NPV were modestly higher for elevated HRC scores during the 5-day period before infections, 41.1% and 66.7%, respectively. In our single-center retrospective study, elevated HRC scores had limited ability to detect infection. More than half of the positive monitor events were not related to infection. The potential clinical impact of the monitor to detect infection before the onset of clinical symptoms was limited, and the risk for unnecessary evaluation and treatment was high.
Introduction
Improvement in neonatal care has enabled the survival of increasing numbers of very low birth weight (VLBW) infants [1]. As the number of VLBW infants grows, healthcare professionals face the struggle of not only decreasing mortality, but also improving quality of life. VLBW newborns are particularly susceptible to infections due to their immature immune system [2]. Infections have a high rate of mortality and morbidity in this population [3]. Not only are infections associated with unfavorable outcomes in premature infants, their early diagnosis presents a challenge because symptoms are often subtle and nonspecific. Heart rate variability has been studied as a potential approach to enable detection of sepsis in infants prior to the onset of clinical deterioration. Heart rate variability is a physiological finding regulated by the autonomic nervous system [4]. Its disturbance and its correlation with different pathologies have been established [5][6][7][8]. A heart rate characteristics (HRC) score was developed based on the diminished heart rate variability and transient decelerations observed in septic infants [9]. A commercially available device, the HeRO™ monitoring system (Medical Predictive Science Corporation, Charlottesville, VA), uses electrocardiograph findings to calculate the risk of developing a septic event within the next 24 hours. The use of this technology has been reported to reduce mortality in VLBW infants [10,11]. The objective of this study is to analyze whether use of the HeRO monitor improves early detection of sepsis in VLBW infants in a quaternary level Neonatal Intensive Care Unit. HRC monitoring is routinely used in all VLBW infants admitted to our NICU. The monitor provides non-invasive and continuous information on the probability of developing infection in this population at high risk for infection throughout the NICU course. A rising score alerts the medical team to a possible onset of infection and prompts close clinical observation to determine if the patient should be evaluated for infection and started on antibiotic treatment. We hypothesized that these alerts would help to diagnose and treat infections earlier and before clinical deterioration.
Methods
A retrospective cohort study was designed to retrieve data from all VLBW infants (<1500 grams) who underwent HRC monitoring in the Neonatal Intensive Care Unit (NICU) at Cincinnati Children's Hospital Medical Center, a quaternary level facility. All patients were outborn. The study included patients admitted to the NICU during the period from January 2015 to May 2016. Approval from the Institutional Review Board was obtained. The suggested care of patients undergoing HRC monitoring in our unit is to clinically examine the patient when there is an increase of one point above the prior baseline or when the score rises above 2 for the first time. Based on the clinical evaluation, the provider decides if the patient should be tested for sepsis and started on antibiotics. Medical records were reviewed to identify HRC scores for each patient. The scores are generally recorded hourly in the electronic medical record for each patient. HRC monitoring was considered positive for each 24-hour period in which the score was either one point above baseline or rose above 2 for the first time. All available positive scores were recorded during data collection.
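The positivity rule can be expressed in code. The sketch below is a simplified, hypothetical rendering (it treats the previous period's score as the baseline, which is an assumption rather than the exact bedside definition):

```python
def positive_hrc_days(daily_scores, baseline=None):
    """Flag 24-hour periods with a positive HRC score, following the rule
    described above: a rise of at least one point above the prior baseline,
    or the first time the score exceeds 2."""
    flags = []
    crossed_two = False
    prev = baseline if baseline is not None else daily_scores[0]
    for score in daily_scores:
        first_rise_above_2 = score > 2 and not crossed_two
        if score > 2:
            crossed_two = True
        flags.append(score >= prev + 1 or first_rise_above_2)
        prev = score  # simplified rolling baseline (assumption)
    return flags

print(positive_hrc_days([0.8, 1.1, 2.4, 2.0, 3.2]))  # [False, False, True, False, True]
```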
For infants who received an evaluation for infection, data were collected on the presence of clinical symptoms as well as the laboratory studies obtained for assessment, including CD64 measurement, bacterial cultures, and radiographs. Consecutive days on antibiotics were recorded. Clinical symptoms that were assessed included hyper-/hypothermia, apnea, respiratory distress, need for increased respiratory support, feeding intolerance, abdominal distension, lethargy, and hyperglycemia. The CD64 index cut-off for our laboratory is 2.3 [12]. Clinical infections in the absence of positive cultures were included if they prompted at least 5 days of antibiotics and were associated with evidence of a systemic inflammatory response [13]. Infections were categorized into two groups: culture-proven infection, when blood, urine, cerebrospinal fluid, or peritoneal fluid cultures were positive; or clinical infection, such as necrotizing enterocolitis (NEC), pneumonia, ventilator-associated pneumonia (VAP), and tracheitis. Data on elevated scores without further infection screening were also collected, including a search for non-infectious causes of altered heart rate variability such as perinatal asphyxia, grade 3 or 4 intraventricular hemorrhage (IVH), surgery, and retinopathy of prematurity exam or intervention.
To assess the diagnostic ability of the HeRO monitor for early detection of sepsis in VLBW infants, sensitivity, specificity, positive predictive value, and negative predictive value were calculated, with 95% confidence intervals determined using the Wilson score method [14]. For infants with infection, positive HRC scores were classified at specified timepoints (24-hour and 5-day intervals) before the start of an antibiotic. For infants without infection, however, a predefined timepoint cannot be determined; thus, for these infants, positive HRC scores were classified at any timepoint during data collection. Sensitivity and specificity calculations were subject to bias since the number of screening timepoints differs by group, in light of the continuous screening and of calculations based on infections rather than individuals. To mitigate bias, sensitivity and specificity were analyzed at the level of the infant, using the first infection for those patients with multiple infections. SAS version 9.3 was used for all analyses.
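For illustration, the metrics and their Wilson score intervals can be computed from a 2×2 table as in the sketch below. The counts shown are hypothetical values chosen only to be consistent with the 24-hour estimates reported in the abstract (68.0% sensitivity, 10.8% specificity, 34.0% PPV, 33.3% NPV); the study's actual cell counts are not given in the text:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    if n == 0:
        return (float("nan"), float("nan"))
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV, each with a Wilson 95% CI."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv":         (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv":         (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Hypothetical counts: 25 infants with infection, 37 without
for name, (est, ci) in diagnostic_metrics(tp=17, fp=33, fn=8, tn=4).items():
    print(f"{name}: {est:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```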
Results
Characteristics of the study cohort are presented in Table 1. Indications for transfer to this quaternary care NICU include preterm delivery at a regional delivery hospital requiring neonatology services or the need for pediatric subspecialty care such as cardiology, genetics, neurosurgery or pediatric surgery. All 62 VLBW infants received HRC monitoring as a component of their care during the study period.
A total of 73,808 scores were registered in the electronic medical record over a median of 54 days (range 1 to 123 days). Of these, 2730 scores were linked to patients with missing data and were therefore excluded from analysis. Only 15% of HRC-monitored days had at least one positive score (508 of 3356). Infection was the most frequent event related to an HRC score increase, as 45% of the positive days occurred during treatment for culture-positive or clinical infection. Non-infectious events associated with positive HRC scores included a 5-minute Apgar score under 7 (39%), a same-day medical procedure (15%), and grade 3 or 4 IVH (10%).
Fifty-one infectious events occurred in 25 patients which included 19 culture-proven infections and 32 episodes of clinical infection. Of culture-positive infections 9 cases were bloodstream infection, 9 cases were urinary tract infections, and there was one case of peritonitis caused by Candida albicans isolated from peritoneal fluid obtained during laparotomy. Pathogens detected in culture-proven infections are listed in Table 2. No cases of meningitis were detected. Cases of clinical infection included clinical sepsis (5), NEC (10), peritonitis (1), pneumonia (2), tracheitis (13) and ventilator-associated pneumonia (1).
We compared infants at their first episode of infection (n=25) to those with no episodes of infection (n=37). Among the 25 infants with infections there were 11,293 HRC scores; of these, 503 HRC scores were positive within the 24-hour interval and 2,442 HRC scores were positive within the 5-day interval preceding infection and antibiotic administration (Figure 1; Table 3).
Discussion
This study analyzed the performance of the HRC monitoring index in a quaternary level NICU. The index was developed by comparing patients with sepsis and sepsis-like illness (negative blood culture) versus patients without sepsis [9][10][11][15]. In the hands of these investigators, the use of bedside HRC monitoring as a predictive tool for sepsis in preterm neonates was associated with a decreased risk of mortality. Presumably, recognition of autonomic dysfunction, measured by HRC, may have led to earlier diagnosis of sepsis, prompting earlier initiation of therapies, shortening the duration of organ dysfunction, and reducing mortality. While our study was not powered to examine mortality, the HRC monitor performed poorly as a screening test in our population using a threshold of one point above baseline or the first rise above 2, at both the 24-hour and 5-day intervals before onset of infection. The low positive and negative predictive values at both time intervals reflect the continuous screening for a relatively rare event.
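To illustrate why screening for a rare event depresses the predictive values, the standard relation between PPV, sensitivity, specificity, and prevalence can be evaluated directly; the sketch below uses the 24-hour estimates reported in this study together with a range of illustrative prevalences:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from sensitivity, specificity, and prevalence
    (standard Bayes relation for a binary test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 24-hour estimates from this study: sensitivity 0.68, specificity 0.108
for prevalence in (0.40, 0.10, 0.02):
    print(f"prevalence {prevalence:.0%}: PPV = {ppv(0.68, 0.108, prevalence):.1%}")
```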
Predictive monitoring of HRC is a potential tool to improve outcomes in infected VLBW patients. It can be obtained continuously and non-invasively through electrocardiogram recordings that are already routinely monitored in the NICU setting. Reportedly, variation in HRC is prevalent in pediatric sepsis and correlates with disease severity and trajectory [16]. However, HRC monitoring had poor predictive capabilities for neonates in our quaternary unit. This might be due in part to the high prevalence of morbidities and procedures in our unit that elevate HRC scores, thus limiting their utility. In addition, HRC is known to vary with factors such as age, gender, medications, and mechanical ventilation [17][18][19][20]. Nevertheless, a tighter protocol regarding the response to patients with rising scores might clarify the utility of HRC monitoring in the early identification of infections in our NICU patients. It is likely that a protocol mandating a clinical response to elevated HRC scores would increase laboratory testing and antibiotic usage as a secondary effect.
There were several limitations to our study inherent in its design as a retrospective review of the medical records. First, the precise timing of events and discussions of how HRC scores were incorporated into clinical decision-making were not discernible from review of the medical records. For example, this study was not able to measure the impact of increased awareness towards a patient with a rising score. Sepsis symptoms in newborn infants are subtle and nonspecific during early phases. The presence of only one symptom will rarely trigger a screening for infection, but a combination often will. The HeRO monitor is one more parameter that can help raise awareness of the possibility of infection. Second, there is not a mandatory intervention for a rising score. A positive score might have increased awareness, but not necessarily prompted immediate infection screening or initiation of antibiotic therapy. Third, the study population included a relatively small number of outborn VLBW infants referred to a regional quaternary center. The cohort included infants with complications of prematurity including spontaneous intestinal perforation, necrotizing enterocolitis, post-hemorrhagic hydrocephalus, and chronic lung disease requiring subspecialty care. The incidence of bloodstream infection in the VLBW cohort in this study was 15%, lower than what was previously reported (21%) by the NICHD Neonatal Research Network; however, other sites of infection were identified in our study cohort. The mortality rate was higher than that reported in the literature, 19.4% compared to 10% [3]. This finding reflects the risk of a referral population, referred for surgical issues, and the known risks for community-born VLBW infants [21][22][23][24]. The utility of HRC scoring reported in our study might not be broadly generalizable to the care of VLBW infants in other hospital settings since our center is a referral hospital and not a delivery hospital.
Studies to improve on the predictive capabilities of HRC monitoring are ongoing. Investigators are currently evaluating models combining HR and oxygen saturation variables in the early detection of infection [25]. These investigators report that a combined model (mean SpO2, standard deviation of HR, and cross-correlation of HR-SpO2) provided additive value to an established HR characteristics index for illness prediction. Other measures of autonomic nervous system function such as respiratory rate variability, continuous temperature monitoring, automated pupillometry, and non-invasive baroreflex and chemoreflex sensitivity could potentially have application as effective biomarkers in neonatal sepsis, but further study is necessary [26]. Translating these metrics to real-time bedside displays and testing their impact on outcomes in randomized clinical trials is an essential next step.
Conclusion
In our single-center retrospective study, elevated HRC scores had poor predictive capabilities to detect infection. More than half of the positive monitor events were not related to infection. Although the utility of HRC scoring reported in our study might not be broadly generalizable to the care of VLBW infants in other hospital settings, the potential clinical impact of the monitor to detect infection before the onset of clinical symptoms in our quaternary level NICU was limited, and the risk for unnecessary evaluation and treatment was high.
Bioactive Compounds in Salicornia patula Duval-Jouve: A Mediterranean Edible Euhalophyte
Many halophytes have great nutritional and functional potential, providing chemical compounds with biological properties. Salicornia patula Duval-Jouve is a common euhalophyte from saline Mediterranean territories (Spain, Portugal, France, and Italy). In the present work we quantified for the first time the bioactive compounds in S. patula (total phenolic compounds and fatty acids) from Iberian Peninsula localities: littoral-coastal Tinto River basin areas (southwest Spain, the Huelva province) and mainland continental territories (northwest and central Spain, the Valladolid and Madrid provinces). Five phenolic acids, including caffeic, coumaric, veratric, salicylic, and transcinnamic acids, have been found, with differences between mainland and coastal saltmarshes. S. patula contains four flavonoids: quercetin-3-O-rutinoside, kaempferol/luteolin, apigenin 7-glucoside, and pelargonidin-3-O-rutinoside. These last two glycosylated compounds are described for the first time in this genus of Chenopodiaceae. The fatty acid profile described in S. patula stems contains palmitic, oleic, and linoleic acids in high concentrations, while stearic and long-chain fatty acids were detected in low amounts. These new findings confirm that S. patula is a valuable source of bioactive compounds from the Mediterranean area.
Introduction
Species in the Chenopodiaceae family have been proven to have a high content of minerals, polyphenols, fatty acids, and other compounds. The presence of bioactive compounds in this type of plant represents a challenge due to its possible uses at both the culinary and industrial levels [1]. Salicornia species (Chenopodiaceae, Salicornioideae) respond to saline stress by making osmotic adjustments and anatomical, physiological, and metabolic adaptations, and are considered succulent euhalophytes. Salicornia patula Duval-Jouve is a frequent species in saline ecosystems of the Iberian Peninsula and on the coasts of France, Portugal, and Italy [2].
In terms of their elemental content, these plants accumulate inorganic salts, with notably high Na concentrations reaching values above 20,000 mg/kg DW (dry weight) [3]. Significant levels of K, Ca, and Mg are found in S. patula from Tinto river locations and other Mediterranean areas [4]. One group of bioactive compounds of interest in Salicornia is the polyphenols, which have attracted attention in the field of nutrition in recent decades. Polyphenols have been proven to have therapeutic effects in cardiovascular diseases, neurodegenerative disorders, cancer, and obesity [5].
The main classes of phenolic compounds are phenolic acids, flavonoids, stilbenes, and lignans. Flavonoids are present in plants as pigments that protect the body from damage caused by oxidizing agents [6]. In Salicornia europaea L. and Salicornia herbacea L. the presence of flavonoids such as quercetin, kaempferol, or catechin, among others, has been demonstrated [7].
Fatty acids are other bioactive compounds present in vegetables, and some of them, such as the essential fatty acids, must be acquired through the diet since humans cannot synthesize them [6]. Fatty acids in the Salicornia genus are found both in the stems and in the seeds, and some species such as S. europaea have been studied as a source of linoleic and oleic acids [6]. However, traditionally the consumption of Salicornia has involved a mixture of different halophytes, due to the taxonomic complexity of the Salicornioideae group [8]. This fact reduces functionality at the species and even genus level, where the diversity of bioactive compounds and their specificity in different groups of plants is of the utmost importance.
There is great interest in the study of phenolic compounds in "lesser-known" wild plants, since several studies have revealed the important role they play in human health [9]. Therefore, given the importance of new uses of S. patula, which has a long history of being gathered from the wild as a source of food [10], the objective of this work is to analyze for the first time the phenolic compounds and fatty acids in S. patula, from both the coastline marshes and the inland salt marshes of the Iberian Peninsula.

Plant Material

Table 1 includes the data on the collection and geographical locations of the selected S. patula samples from the Iberian Peninsula (Spain). Fleshy stems were selected until approximately 500 mg per sample was obtained for the analysis of phenolic compounds (phenolic acids and flavonoids) and fatty acids.
Preparation of the Methanol Extract
Five hundred milligrams of plant sample were extracted with a solution of methanol in water (50 mL/50 mL) at ambient temperature (25 °C). The extracts were filtered using a Whatman No. 4 filter. The solid residue was then extracted with 30 mL of methanol in water. The extracts were filtered again and re-dissolved in 10 mL of methanol in water. The extractions were performed in three replicates for each sample.
Total Phenolic Compounds (TPC)
Total phenol content was determined by the Folin-Ciocalteu method [11] using gallic acid (GA) as the recommended standard [12]. An aliquot of 0.5 mL of methanolic extract was taken, 2.5 mL of the Folin-Ciocalteu reagent was added, and the mixture was left to react for 3 min; then 2 mL of Na2CO3 solution was added and mixed in a shaker (Heidolph, Berlin, Germany). The solution was incubated at 40 °C in a dark stove for 1 h. The absorbance was measured at 765 nm using a spectrophotometer and the results were expressed as gallic acid equivalents, using a gallic acid (0.05-0.5 mg/mL) standard curve (Figure 1). To prepare the calibration curve, 0.017 g of gallic acid was added, as the standard, to a 100 mL volumetric flask and diluted to a volume of 100 mL with water. This solution has a known phenol concentration of 1 mM gallic acid, within the effective range of the assay (Table 2).
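A minimal sketch of the underlying calculation is given below; the calibration points are hypothetical placeholders (the actual standard curve is shown in Figure 1), and dilution factors are omitted for simplicity:

```python
import numpy as np

# Hypothetical gallic acid calibration points: concentration (mg/mL) vs. A765
conc = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5])
a765 = np.array([0.08, 0.15, 0.31, 0.46, 0.60, 0.76])

# Linear least-squares fit of the standard curve: A765 = slope * C + intercept
slope, intercept = np.polyfit(conc, a765, 1)

def gallic_acid_equivalents(absorbance, extract_volume_ml, sample_mass_g):
    """Total phenolic content as mg gallic acid equivalents per g of plant
    material, back-calculated from the fitted standard curve."""
    conc_mg_per_ml = (absorbance - intercept) / slope
    return conc_mg_per_ml * extract_volume_ml / sample_mass_g

print(f"{gallic_acid_equivalents(0.42, extract_volume_ml=10.0, sample_mass_g=0.5):.2f} mg GA/g")
```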
Liquid Chromatography-Mass Spectrometry (LC-MS)
Chromatographic separation was performed as follows: the supplied volume was first dried in a rotary evaporator (Rotavapor, Fischer) and later in a Telstar lyophilizer, and the total amount of sample obtained was weighed. Samples were derivatized with Meth-Prep (Fisher, USA). Meth-Prep II is a 0.2 N methanolic solution of m-trifluoromethylphenyl trimethylammonium hydroxide; this one-step reagent simplifies the transesterification of triglycerides to methyl esters. The derivatized samples were then injected into the GC/MS.

The chromatographic separation was performed on an HPLC-MS Agilent 6120 (Santa Clara, CA, USA) using a C20 column maintained at 35 °C. The chromatography-mass spectrometry was carried out at the Servicio Interdepartamental de Investigación of the Universidad Autónoma de Madrid (UAM).
Electrospray Ionization Mass Spectometry (LC-ESI-MS/MS)
Flavonoids were determined using an HPLC-MS Agilent 1200 (Santa Clara, CA, USA) with a C20 column at 35 °C. The solvent system was a gradient of acetonitrile (solvent A) and 2% formic acid (solvent B): 0 min, 4% solvent A; 10 min, 10% solvent A; 20 min, 20% solvent A; 30 min, 40% solvent A; 35 min, 40% solvent A; 40 min, 60% solvent A; 45 min, 60% solvent A; and 55 min, 4% solvent A. The flow rate was 1 mL/min and runs were monitored with a UV-visible photodiode array detector set at 280 nm (phenolic acids) and 360 nm (flavonols), for a total chromatogram time of 50 min. An injection volume of 5 µL was taken from a 1.2 mg/2 mL solution. This technique was used to identify the flavonoids in the extract according to their protonation [M + H]+ (Table 2) and to calculate the relative retention time of each peak in the chromatograms obtained by HPLC.
Statistical Analysis
Statistical analysis was performed with the Statgraphics 18.0 program. Means and standard deviations were calculated. Possible differences between three or more groups were tested using one-way analysis of variance (ANOVA) for total phenolic compounds.
Total Phenolic Compounds (TPC)
Samples with the highest content of phenolic compounds are from the La Rábida Tinto river (4.209 mg GA/g plant DW (dry weight)) (Table 3).
Samples from Aldeamayor de San Martín (Valladolid) and Colmenar de Oreja (Madrid) show differences related to their conservation, with 4.091 mg GA/g plant DW and 4.172 mg GA/g plant DW for dry material. For fresh material these samples contain 2.117 mg GA/g plant FW (fresh weight) and 1.313 mg GA/g plant FW.
The results of the statistical analysis are homogeneous and do not show significant differences between the mean values of the studied samples. Sample 6 is the only one that shows a higher standard deviation.
Phenolic Acids
Transcinnamic acid is present in all S. patula samples. The materials from the Tinto, Odiel, and Piedras rivers (Huelva) show relative transcinnamic acid contents between 21% and 39%, while the samples collected in mainland territories of the Iberian Peninsula contain between 28% and 37% (Figures 2 and 3, and Table 4). In the material from the Tinto river coastal saltmarsh in the Iberian southwest, salicylic acid stands out with more than 60% relative content; samples from the mainland territories present no more than 22% (Figures 2 and 3, and Table 4).
Material from Aldeamayor de San Martín (samples 10 and 11) contains veratric acid with a relative content of 40-60% and transcinnamic acid with a relative content of 28-37%. Sample 10 also shows the presence of salicylic acid, with a relative content of 20%.
The material from Colmenar de Oreja (sample 13) contains veratric acid with a relative content of about 40%, transcinnamic acid with a relative content of more than 30%, and salicylic acid with a relative content of slightly more than 20% (Figures 2 and 3, and Table 4). Samples 4 and 5 from Moguer and El Terrón also present veratric, coumaric, and caffeic acids, with percentages close to 26%, 17%, and 8%, respectively (Figure 2).
Flavonoids
All samples of S. patula (Table 5) present quercetin-3-O-rutinoside (rutin) and apigenin 7-glucoside. Samples 4, 5, 10, 12, and 13 also present pelargonidin-3-O-rutinoside and luteolin/kaempferol (it has not been possible to determine with precision which of these two compounds is present, since there is no standard and both have the same molecular weight, 286.24 g/mol) (Table 6). The chemical structures of these compounds were extracted from Phenol Explorer and are shown in Figure A1.
Table 5. Tentative identification of flavonoids in S. patula.
The materials from Colmenar de Oreja (Madrid), samples 12 and 13, also present arachidic acid and lignoceric acid with a relative content of around 6% for both compounds.
Phenolic Compounds
S. patula shows an increase in TPC values when the material is kept dry, while the frozen or fresh material presents more modest values. The TPC data of dried S. patula range between 2.9 and 4.2 mg GA/g plant DW and are higher than those found in stems of S. europaea collected in Romania [13], with 1.04 mg GA/g plant DW. Other authors describe higher values in S. europaea stem samples from Turkey [14].
Phenolic Acids
The content of phenolic compounds in different halophytes has been related to their ability to survive in high salinity conditions. The wide diversity of different phenolic acids in S. patula may be understood as an adaptation mechanism to saline environments [15]. Among these, salicylic acid is present in the highest percentage in S. patula, with 70% relative content in the material collected in Huelva. It is the main phenolic acid in S. europaea [14,16]. Salicylic acid is involved with plant growth and acts as a genetic regulator against salt stress [17].
Transcinnamic acid is present in all of the studied samples of S. patula and ranges between 20% and 40% in content, standing out in interior provinces. Friedman and Jurgens [18] found that transcinnamic acid in plant samples can change depending on the pH of the soil. The differences in the pH values may explain the variation in the content of this acid in S. patula samples.
Veratric acid stands out in the material from inland provinces (Colmenar de Oreja and Aldeamayor de San Martín) with relative contents of up to 40%. Samples from Piedras and Odiel estuary rivers contain a lower percentage of this same bioactive compound. Veratric acid has antibacterial, anti-inflammatory, and antihypertensive activities [19].
Coumaric acid in S. patula has only been identified in the material collected in the Piedras and Odiel rivers, with a relative content of 19%. Our values for coumaric acid are higher than those published by Zengin et al. in S. europaea [14]. On the other hand, caffeic acid is the minor phenolic acid found in S. patula. In other Salicornia species, higher data have been described, such as in S. europaea from Turkey [14].
Flavonoids
Most flavonoids are present in plants as esters, glycosides, or polymers. The chemical structure of flavonoids determines the absorption range. For this reason, glycosylation guarantees that some flavonoids are absorbed, giving them prebiotic actions [20].
All the samples of S. patula studied have apigenin 7-glucoside and pelargonidin-3-O-rutinoside, both identified for the first time in the Salicornia genus. Likewise, all samples have quercetin-3-O-rutinoside, commonly called rutin, an antioxidant compound that improves tolerance to salinity and has already been identified in other halophytes [14,21,22]. The presence of this flavonol in all Salicornia samples suggests it is an important adaptation to saline environments. Some of the S. patula samples (4, 5, 10, 12, and 13) also present luteolin or kaempferol. Luteolin has been previously identified in S. europaea [8].
Fatty Acids
Our results show that palmitic acid appears in the highest proportion in S. patula and can exceed 70%. In other Salicornia species, such as Salicornia ramosissima Woods, this acid exceeded 20% [23], as in S. europaea [16] and Salicornia bigelovii Torr. [24]. The presence of this acid has also been demonstrated in other halophytes such as Arthrocnemum indicum, with a relative content of more than 18% [25].
Among the monounsaturated fatty acids, palmitoleic acid is only present in the material collected from the southwest Iberian Peninsula. S. patula also contains oleic acid, another monounsaturated fatty acid, that has been described in S. europaea [16] and S. bigelovii [24]. These bioactive compounds prevent the development of cardiovascular disorders, reduce insulin resistance, and strengthen the immune system [8].
Polyunsaturated acids such as linolenic acid have been found in proportions of more than 5% in sample 11, as in other Salicornia species such as S. bigelovii [24]. In S. ramosissima the values for this fatty acid are lower than those found here [26]. Other authors report a content of 29% for this acid in Salicornia brachiata Roxb. [26].
Linoleic acid showed a content of 56% in the samples collected in Aldeamayor de San Martín, Valladolid province. The total polyunsaturated acid content in this population is 61%, a value similar to that described by Patel et al. [27] for S. brachiata. Polyunsaturated fatty acids are bioactive compounds with antifungal activity, and additionally they inhibit carcinogenesis and the progression of atherosclerosis [28].
Long-chain fatty acids, such as arachidic acid, behenic acid, and lignoceric acid, do not exceed a content of 6% in S. patula. El-Araby et al. [24] provide similar values in S. bigelovii. In other Salicornia species, such as S. ramosissima, long-chain fatty acids have been described to appear at lower values [22].
For all these reasons, evaluating the antioxidant activity of S. patula extracts through appropriate assays, such as 2,2-diphenyl-1-picrylhydrazyl (DPPH), 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS), and ferric reducing antioxidant power (FRAP), is necessary to complete the information on bioactive compounds and the differences between species of various origins. We will address the study of antioxidant activity in the Salicornia genus in the near future.
Conclusions
For the first time, the bioactive compounds (phenolic compounds and fatty acids) present in S. patula are described. The phenolic acids identified in all samples are salicylic, transcinnamic, and veratric acids. Samples from the coastal salt marshes and the Odiel and Piedras rivers also presented caffeic and coumaric acid. The composition of phenolic acids and flavonoids, like rutin, may be related to adaptation mechanisms in saline environments. Other flavonoids detected for the first time in this species are apigenin 7-glucoside and pelargonidin-3-O-rutinoside, with glycosylated structures that confer their prebiotic properties.
The lipid profile shows the presence of palmitic, stearic, oleic, and linoleic as the main fatty acids. Lauric, myristic, and palmitoleic fatty acids were only detected in the material from the coastal salt marshes in lower proportions.
Finally, we show that S. patula is a source of bioactive compounds with important positive biological effects. Its consumption, either in a traditional way or as an additional ingredient, makes this Mediterranean euhalophyte a functional food.
|
2021-03-03T05:20:25.837Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "8502392ada58fae54e45148f7cd5ffb4cb356fe7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/10/2/410/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8502392ada58fae54e45148f7cd5ffb4cb356fe7",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
265040635
|
pes2o/s2orc
|
v3-fos-license
|
Minimal data requirement for realistic endoscopic image generation with Stable Diffusion
Purpose Computer-assisted surgical systems provide support information to the surgeon, which can improve the execution and overall outcome of the procedure. These systems are based on deep learning models that are trained on complex and challenging-to-annotate data. Generating synthetic data can overcome these limitations, but it is necessary to reduce the domain gap between real and synthetic data. Methods We propose a method for image-to-image translation based on a Stable Diffusion model, which generates realistic images starting from synthetic data. Compared to previous works, the proposed method is better suited for clinical application as it requires a much smaller amount of input data and allows finer control over the generation of details by introducing different variants of supporting control networks. Results The proposed method is applied in the context of laparoscopic cholecystectomy, using synthetic and real data from public datasets. It achieves a mean Intersection over Union of 69.76%, significantly improving the baseline results (69.76 vs. 42.21%). Conclusions The proposed method for translating synthetic images into images with realistic characteristics will enable the training of deep learning methods that can generalize optimally to real-world contexts, thereby improving computer-assisted intervention guidance systems.
Introduction
Computer-assisted intervention (CAI) is a research field focused on enhancing the safety, efficiency, and cost-effectiveness of medical procedures by minimizing errors and complications [15]. Within CAI, laparoscopic cholecystectomy (LC) has gained significant attention as a widely performed minimally invasive procedure for gallbladder removal [19]. However, LC presents technical challenges due to […]. Generative adversarial networks (GANs) have been proposed to mitigate this limitation [2]. However, these techniques still require a substantial amount of annotated data.
Recently, latent diffusion models (LDMs) have shown promise in generating highly detailed images while preserving semantic structure [9]. LDMs employ an iterative process involving noise addition and reverse learning to recover original data. In the medical field, LDMs have been extensively utilized for tasks such as image translation, generation, preprocessing, segmentation, and classification [9]. Compared to other DL techniques like GANs, LDMs can be fine-tuned effectively with smaller datasets and combined with support methods for controlled generation. The widely used Stable Diffusion (SD) LDM model offers efficient conditioning of the generation process through text prompts [24].
In general, no existing work uses LDMs instead of GANs for the translation of synthetic images into realistic images. Therefore, our contributions are as follows: (1) We introduce a novel application of the Stable Diffusion model to generate synthetic surgical data in an unsupervised manner, addressing the issue of limited data availability in clinical environments, see Fig. 1. To the best of our knowledge, this approach has not been previously published. (2) We evaluate our approach using public datasets to demonstrate its effectiveness in generating realistic synthetic data. The results show that our approach outperforms the baseline method in preserving tissue integrity, achieving a mean Intersection over Union (mIoU) of 69.76% compared to 42.21% for the Cholec80-style baseline. Additionally, our method successfully captures the characteristic feature distribution of real surgical data, either comparable to or enhanced compared to the baseline dataset. (3) We provide public access to the code and our realistic rendering of the publicly available IRCAD dataset, which includes simulation frames, depth maps, segmentation maps, edges, and normals at https://github.com/SanoScience/sim2real_with_Stable_Diffusion.
Related work
Several approaches have been proposed for generating synthetic data with realistic characteristics for specific surgical procedures, e.g., [10]. In one study [4], Unity3D was used to create a 3D liver and laparoscope environment, enabling the generation of images for DL segmentation training. Another study [16] employed a GAN approach to directly generate images from segmentation maps, focusing on maximizing differences between instruments and anatomical environments. The combination of synthetic images and real segmentation maps has been extensively used to train GANs for surgical tool segmentation, employing techniques like consistency losses and student-teacher learning [26,27]. GAN-based approaches have also been applied in other surgical domains such as cardiac intervention, colonoscopy examination, and sinus surgery [13,20,29]. While numerous other GAN-related works exist, they are beyond the scope of this discussion [2].
Another relevant work [22] introduced an image-to-image translation method for simplified 3D rendering of LC anatomy based on real endoscopic images from the Cholec80 dataset. This approach utilized GANs trained in an unpaired manner and generated a dataset of 100,000 images with various annotations. Although extended to video translation [23], this work uses only simplified liver views. It lacks surgical tools and the specific anatomy of interest (gallbladder), making it not representative of LC procedures.
While GAN-based approaches have shown potential, they have limitations, such as early convergence of discriminators and instability of the adversarial loss function, which can lead to mode collapse and reduced diversity in generated data. Diffusion models (DMs) have emerged as a promising alternative, surpassing GANs in computer vision tasks [3]. In the medical domain, DMs have been widely utilized for various applications, including generating MRI sequences, synthesizing histological images, and generating thoracic X-ray images based on text prompts [9,17,21]. The latter employs the SD model, which uses text prompts as conditioning and has been applied successfully in similar medical tasks. Notably, this is the first work to utilize DMs for generating intra-operative endoscopic images, combining text prompts with virtual simulator images for conditioning the process.
Methods
Our method involves adding a concept to the SD model and using it to generate realistic images from synthetic ones. We begin by fine-tuning SD based on DreamBooth (DB) [25]. Then, the fine-tuned Laparoscopic Cholecystectomy Stable Diffusion (LC-SD) model is employed to generate realistic images. This is achieved by leveraging two versions of the ControlNet support architecture, namely Tile and SoftEdge control, to ensure consistency between label and generated images. An overview of the proposed method is depicted in Fig. 2.
Fine-tuning with DreamBooth
SD [24] is an LDM that implements a denoising technique in a lower-dimensional latent space. One of the key features of SD is its flexibility in conditioning the denoising step on various modalities, such as text or images, achieved through a cross-attention mechanism. Significant progress has been made in the field of SD model few-shot fine-tuning and personalized concept introduction through notable works, including Textual Inversion [5], Low-Rank Adaptation [8], Custom Diffusion [11], and DB [25]. Among these, DB has been selected as it utilizes a fine-tuning approach with a small set of concept-specific images (3 to 5 for an object and 50 to 200 for a style) and allows for the introduction of highly unconventional concepts. During training, the model is paired with text prompts containing class names and unique text identifiers. The model learns to associate the text identifier with the new concept. By incorporating a class-specific prior preservation loss, DB encourages the generation of diverse instances within the subject's class, resulting in the synthesis of photorealistic images.
DB has a tendency to overfit, and the appropriate number of training steps depends on several factors, including the characteristics of the training data, learning rate, and prior preservation. This number needs to be determined experimentally. In our specific case, we did not use prior preservation during fine-tuning because SD lacks a proper prior for human tissue or surgical images. Additionally, unlike the original work where the text encoder was frozen, we fine-tuned both the text encoder and the U-Net.
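As a rough sketch of the training step implied above (fine-tuning both the U-Net and the text encoder, without prior preservation), using the diffusers and transformers libraries; the checkpoint name, prompt with the unique identifier, and data handling are assumptions, and the paper's actual training loop may differ in detail.

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # assumed SD v1.5 checkpoint
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").eval()
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Both the U-Net and the text encoder receive gradients, as described in the text.
params = list(unet.parameters()) + list(text_encoder.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-6)

def training_step(pixel_values: torch.Tensor, prompt: str) -> torch.Tensor:
    """One DreamBooth-style denoising step; `prompt` contains the unique identifier."""
    with torch.no_grad():
        latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=latents.device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    tokens = tokenizer(prompt, padding="max_length", truncation=True,
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    encoder_hidden_states = text_encoder(tokens.input_ids)[0]

    noise_pred = unet(noisy_latents, timesteps,
                      encoder_hidden_states=encoder_hidden_states).sample
    loss = F.mse_loss(noise_pred, noise)  # standard epsilon-prediction objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss
```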
Inference with ControlNet
To generate realistic tissues based on simulation scenes, we employ text-guided image-to-image inference. During the inference stage, we utilize a unique text identifier that was bound with the CholecT45 style during DB training.
ControlNet is an architecture designed to control pretrained large DMs by incorporating additional conditions, such as sketches, key points, edge, and segmentation maps [36]. It maintains two sets of U-Net weights copied from the pretrained DM: a trainable copy and a locked copy. The locked copy preserves the original weights from the pretrained DM during training. The trainable copy is fine-tuned using task-specific datasets to adapt to the additional conditions and introduces control during inference. Neural network blocks of the pretrained DM and the ControlNet model are connected with the use of trainable "zero convolution" layers whose parameters are optimized during ControlNet training. The function of the "zero convolution" layers, the block connections, and the application of ControlNet to the original SD are described in detail in [36].
Since our method operates in a limited real data setting, training a custom ControlNet is not feasible as it would require at least a few thousand diversified images along with conditioning inputs. However, it is possible to directly apply a ControlNet trained on the original SD to LC-SD and even to combine multiple ControlNet models (each with a desired strength) to impose diversified control. To combine a single LC-SD block with the corresponding blocks of N ControlNets, we present the extended formula from [36] as:

$$y_c = F(x; \theta_{LC\text{-}SD}) + \sum_{i=1}^{N} w_i \, Z\!\left(F\!\left(x + Z(c_i; \theta_{Z1,i}); \theta_{C,i}\right); \theta_{Z2,i}\right) \qquad (1)$$

where x is an input feature map to the LC-SD block, c_i is a conditioning input feature map to the corresponding block of the ith trained ControlNet, and y_c is a conditioned output feature map from the LC-SD block. We denote the weights of the LC-SD block as θ_LC-SD and the trainable weights for the block of the ith ControlNet as θ_C,i. The function denoted as F(·; ·) transforms the input feature map into the output feature map given a set of parameters. We denote the "zero convolution" operation as Z(·; ·). Within the block of the ith ControlNet, two "zero convolution" operations are performed with optimized parameters {θ_Z1,i, θ_Z2,i}, respectively. w_i is the strength with which the ith ControlNet is applied. The first term on the right-hand side of Eq. (1) represents the result of applying LC-SD, while the second term relates to the contribution of the different ControlNets.
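A minimal sketch of Eq. (1), assuming each trained ControlNet block is represented by a module and the zero convolutions by 1×1 convolutions initialized to zero (their trained weights would be loaded in practice); module names and shapes are illustrative, not the actual LC-SD architecture.

```python
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    """1x1 'zero convolution': weights and bias start at zero, trained later."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class CombinedControlBlock(nn.Module):
    """y_c = F(x; theta_LC-SD) + sum_i w_i * Z(F_i(x + Z(c_i)); theta_Z2,i), as in Eq. (1)."""
    def __init__(self, lc_sd_block: nn.Module, control_blocks: list,
                 channels: int, weights: list):
        super().__init__()
        self.lc_sd_block = lc_sd_block                       # locked LC-SD weights
        self.control_blocks = nn.ModuleList(control_blocks)  # trainable ControlNet copies
        self.zero_in = nn.ModuleList(zero_conv(channels) for _ in control_blocks)
        self.zero_out = nn.ModuleList(zero_conv(channels) for _ in control_blocks)
        self.weights = weights                                # ControlNet strengths w_i

    def forward(self, x: torch.Tensor, conditions: list) -> torch.Tensor:
        y = self.lc_sd_block(x)                               # first term of Eq. (1)
        for w, block, z_in, z_out, c in zip(self.weights, self.control_blocks,
                                            self.zero_in, self.zero_out, conditions):
            y = y + w * z_out(block(x + z_in(c)))             # weighted ControlNet term
        return y

# Example with two ControlNets (e.g., SoftEdge and Tile) on 64-channel feature maps.
if __name__ == "__main__":
    ch = 64
    lc_sd = nn.Conv2d(ch, ch, 3, padding=1)
    controls = [nn.Conv2d(ch, ch, 3, padding=1) for _ in range(2)]
    block = CombinedControlBlock(lc_sd, controls, ch, weights=[0.8, 0.3])
    x = torch.randn(1, ch, 32, 32)
    conds = [torch.randn(1, ch, 32, 32) for _ in range(2)]
    print(block(x, conds).shape)  # torch.Size([1, 64, 32, 32])
```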
Given the variety of available pretrained ControlNets [36], we explored additional outputs from the simulator as potential control inputs. However, the Segmentation ControlNet was not applicable since it requires a segmentation map compliant with ADE20K's segmentation format, which does not include any surgically relevant class/label. We also conducted preliminary tests on depth and normal ControlNets using our inference IRCAD dataset, which is described in detail in Sect. "Dataset and implementation details". While both control inputs helped maintain proper anatomical boundaries, the depth details were not captured, and the overall visual performance was unsatisfactory.
Instead, we focused on control methods that could be robustly applied to completely new styles: SoftEdge v1.1 and Tile v1.1. Both ControlNets contribute to generating consistent shapes and boundaries, but they impose additional constraints on different aspects of the output images. The SoftEdge control utilizes edges generated with Pidinet [30] or HED [34] models. It primarily preserves original edges and tissue folds. On the other hand, Tile ControlNet exhibits conceptual similarities with tile-based super-resolution models but offers broader applications. It operates in two modes: generating new details while ignoring existing ones, and ignoring global prompts when local tile semantics and prompts do not align, guiding the diffusion process with local context. In the context of endoscopic image generation, Tile ControlNet effectively adds tissue details and helps preserve accurate tissue colors.
Dataset and implementation details
To use the minimum amount of data while ensuring a sufficient variability of visual properties and the presence of all regions and instruments of interest, we trained three separate models, each based on two distinct videos from the CholecT45 dataset [19]. We carefully select pairs of videos that exhibit comparable visual characteristics and ensure that all classes are represented within each training set. We train each model with DB using a manually selected set of 85, 91, and 95 images, respectively. Despite the limited number of images, it is crucial to choose representative and consistent samples that cover various procedure stages and tissues present in the synthetic dataset. Furthermore, to prevent the models from introducing tool artifacts in each frame, it was highly important to include images both with surgical tools and with minimal or no presence of them. All models are based on Stable Diffusion v1.5, and we train them with DB using a learning rate of 1 × 10−6 and a batch size of 4 for 2,000 steps.
In the inference stage, we utilize fully labeled synthetic data from the IRCAD 3D CT liver dataset, as previously employed in [22]. The dataset contains 20,000 synthetic images rendered from 3D scenes obtained from the CT data of 10 different patients, including models of the liver, gallbladder (only for 6 patients), insufflated abdominal wall, fat, and connective tissue. In addition, tools, light sources, and endoscopic cameras have been added in random positions in the scene. In the image-to-image approach, the prior information significantly influences the resulting image. However, the IRCAD dataset presents simplified anatomy, and as a result, plain structures and distorted colors can lead to unrealistic results. To address this issue, we enhance the raw simulation images by incorporating texture information from example samples. We extract small texture samples for each tissue from the corresponding training set and blend them with the raw simulation scenes, guided by segmentation maps, as shown in Fig. 3.
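A simple sketch of the texture-enrichment step described above, assuming per-class texture patches and an integer segmentation map; file contents, class ids, and the blending factor below are placeholders rather than the values used in the paper.

```python
import numpy as np

def enrich_with_textures(raw_render: np.ndarray, seg_map: np.ndarray,
                         textures: dict, alpha: float = 0.5) -> np.ndarray:
    """Blend a tiled texture sample into each tissue region of a raw simulation frame.

    raw_render: H x W x 3 uint8 synthetic image
    seg_map:    H x W integer label map (one id per tissue class)
    textures:   class id -> small RGB texture patch extracted from the training set
    alpha:      blending factor between raw render and texture
    """
    out = raw_render.astype(np.float32)
    h, w = seg_map.shape
    for class_id, patch in textures.items():
        # Tile the small texture patch so it covers the full frame.
        reps = (h // patch.shape[0] + 1, w // patch.shape[1] + 1, 1)
        tiled = np.tile(patch, reps)[:h, :w].astype(np.float32)
        mask = seg_map == class_id
        out[mask] = (1 - alpha) * out[mask] + alpha * tiled[mask]
    return np.clip(out, 0, 255).astype(np.uint8)

# Placeholder usage: class id 1 standing in for the liver label is an assumption.
if __name__ == "__main__":
    render = np.zeros((256, 256, 3), dtype=np.uint8)
    seg = np.zeros((256, 256), dtype=np.int32)
    seg[:, 128:] = 1
    liver_patch = np.random.randint(60, 120, (32, 32, 3), dtype=np.uint8)
    blended = enrich_with_textures(render, seg, {1: liver_patch})
    print(blended.shape, blended.dtype)
```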
For inference, we adjust the model checkpoint, denoising strength, classifier-free guidance scale (CFG), noise scheduler, and ControlNet strengths for each LC-SD model separately. Although all LC-SD models are trained with the same parameters, variations in the complexity and diversity of the training sets resulted in differences in denoising capabilities across the models. We carefully balance the ControlNet strengths for each model separately. In addition to tissue placement, we also consider overall image realism and details, such as tissue folds. The lack of tissue folds would not necessarily degrade mIoU. To achieve the desired balance, we use a stronger SoftEdge control in combination with a weaker Tile control. Using only SoftEdge with high control strength could compromise image quality by erasing valuable details. To prevent Tile control from introducing excessive detail based on the input sample, we use a smaller strength.
To generate data at a large scale while maintaining reasonable inference time and acceptable image quality, we limit the denoising steps to 20.The selected parameter values are shown in Table 1.
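A hedged sketch of the inference stage using the diffusers library with two ControlNets applied jointly; the Hugging Face model identifiers, the prompt with the unique identifier, the checkpoint path, and the numeric parameter values are assumptions standing in for the fine-tuned LC-SD checkpoints and the values listed in Table 1.

```python
import torch
from PIL import Image
from diffusers import (ControlNetModel,
                       StableDiffusionControlNetImg2ImgPipeline,
                       DPMSolverMultistepScheduler)

# SoftEdge v1.1 and Tile v1.1 ControlNets trained for SD v1.5 (assumed identifiers).
softedge = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_softedge",
                                           torch_dtype=torch.float16)
tile = ControlNetModel.from_pretrained("lllyasviel/control_v11f1e_sd15_tile",
                                       torch_dtype=torch.float16)

# "path/to/lc-sd" stands for one of the DreamBooth fine-tuned LC-SD checkpoints.
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "path/to/lc-sd", controlnet=[softedge, tile], torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True)  # DPM++ 2M Karras

sim_image = Image.open("ircad_frame.png").convert("RGB")        # texture-enriched render
edge_image = Image.open("ircad_frame_edges.png").convert("RGB")  # simulator edge map

result = pipe(
    prompt="laparoscopic cholecystectomy in sks style",  # 'sks' = assumed identifier
    image=sim_image,                        # image-to-image input
    control_image=[edge_image, sim_image],  # SoftEdge and Tile conditioning inputs
    strength=0.6,                           # denoising strength (per-model, see Table 1)
    guidance_scale=7.0,                     # CFG scale (per-model, see Table 1)
    num_inference_steps=20,
    controlnet_conditioning_scale=[0.8, 0.3],  # stronger SoftEdge, weaker Tile
).images[0]
result.save("realistic_frame.png")
```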
Fig. 4 Visual comparison between the data processed using our fine-tuned Stable Diffusion model (first row) and the raw image from the simulator and the data generated in [22] for random and Cholec80 styles (second row)
The data generation process is carried out on a single NVIDIA A100 GPU. The generation time takes up to 3 s per image, depending on the number of ControlNets utilized.
Evaluation metrics
To evaluate the realism of the generated data, we employ established evaluation metrics [9,23]: Frechet Inception Distance (FID) [6] and Kernel Inception Distance (KID) [1]. These metrics assess the similarity between two sets of images based on their feature representations extracted from a pretrained Inception network. Following [12,33,38], we employ Learned Perceptual Image Patch Similarity (LPIPS) [37] to assess the diversity of generated samples. LPIPS is an image quality assessment method; it computes the average distances between samples in AlexNet, VGG, or SqueezeNet feature space.
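As a sketch of how these metrics can be computed in practice (the original evaluation pipeline is not specified in the text), the torchmetrics package provides FID and KID and the lpips package provides LPIPS; batch handling and normalization are simplified, and the tensors below are placeholders for real and generated frames.

```python
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
kid = KernelInceptionDistance(subset_size=50)
lpips_net = lpips.LPIPS(net="alex")  # AlexNet feature space

# Placeholder batches of uint8 images in NCHW format standing in for real and generated data.
real = torch.randint(0, 256, (100, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (100, 3, 299, 299), dtype=torch.uint8)

fid.update(real, real=True); fid.update(fake, real=False)
kid.update(real, real=True); kid.update(fake, real=False)
kid_mean, kid_std = kid.compute()
print(f"FID = {fid.compute():.2f}, KID = {kid_mean:.4f} +/- {kid_std:.4f}")

# LPIPS expects float tensors scaled to [-1, 1]; average pairwise distance measures diversity.
a = fake[:10].float() / 127.5 - 1.0
b = fake[10:20].float() / 127.5 - 1.0
print(f"LPIPS = {lpips_net(a, b).mean().item():.4f}")
```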
Moreover, to evaluate the LC-SD models' ability to preserve labels, we fine-tuned a variant of U-Net with a pretrained ResNet50 backbone using the CholecSeg8k dataset [7] for five classes: abdominal wall, liver, fat, gallbladder, and tool. These classes are present in both the CholecSeg8k and IRCAD datasets. From the training data, we exclude videos used to generate synthetic data. We use mean Intersection over Union (mIoU) to calculate the average overlap between predicted and ground truth segmentation masks across multiple classes, which allows us to evaluate the models' ability to preserve labels. The mIoU for the real test data was 89.77%, and we assessed the mIoU for 10,000 generated images.
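A minimal per-class IoU / mIoU computation consistent with the evaluation described above; the five class ids are placeholders for abdominal wall, liver, fat, gallbladder, and tool, and the inputs are integer label maps rather than the actual segmentation model outputs.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, class_ids=(0, 1, 2, 3, 4)) -> float:
    """Mean Intersection over Union across classes for integer label maps."""
    ious = []
    for c in class_ids:
        pred_c, target_c = pred == c, target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:          # class absent in both masks: skip it
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 5, (256, 256))
    pred = gt.copy()
    pred[:64] = rng.integers(0, 5, (64, 256))  # corrupt a band to simulate prediction errors
    print(f"mIoU = {mean_iou(pred, gt) * 100:.2f}%")
```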
Results
A visual comparison revealed that our generated data achieve similar perceptual realism to the baseline dataset [22], as visible in the samples shown in Fig. 4. Table 2 presents the quantitative results obtained from various methods and styles, with the best scores highlighted in bold. The first row represents raw simulation images with a mIoU of 24.73%, FID of 305.00, KID of 0.3739 ± 0.0041, and LPIPS of 0.5820. The second and third rows, attributed to the method presented in [28], demonstrate improvements in performance. For the random style, the mIoU and LPIPS increase to 45.28% and 0.5834, respectively, while the FID decreases to 110.92 and the KID to 0.1243 ± 0.0035. When using the Cholec80 style, the mIoU remains high at 42.21%, while the FID and KID values drop to 67.13 and 0.0623 ± 0.0017, respectively. The LPIPS obtains the highest result of 0.6407. Subsequently, our method, denoted as "ours," showcases further enhancements. With the CholecT45 vid52 & vid56 style, we achieve an impressive mIoU of 66.85%, accompanied by an FID of 68.35, a KID of 0.0658 ± 0.0015, and LPIPS of 0.6245. Notably, the best performance is attained when employing the CholecT45 vid25 & vid66 style, achieving a mIoU of 69.76% […].
Figure 5 and Table 3 provide an overview of the mIoU values for different control types and their improvements compared to no-control inference. The table shows that the combined control models yielded the best results overall, with the highest improvement observed for the style vid01 & vid49 when inferred with the highest denoising value. For the CholecT45 vid52 & vid56 style, the no-control inference resulted in a mIoU of 61.52%. When applying only SoftEdge control, the mIoU increased to 65.26% (+6.1%), while using only Tile control led to a mIoU of 64.20% (+4.4%). The most significant improvement of 8.7% was achieved when combining SoftEdge and Tile controls, resulting in a mIoU of 66.85%. Similarly, for the CholecT45 vid25 & vid66 style, the no-control inference yielded a mIoU of 63.35%, which increased to 67.16% (+6.0%) when using only SoftEdge control and to 68.01% (+7.4%) with only Tile control. However, the best performance was obtained when both SoftEdge and Tile controls were combined, resulting in a mIoU of 69.76%, representing a substantial improvement of 10.1%. Furthermore, for the CholecT45 vid01 & vid49 style, the no-control inference achieved a mIoU of 54.29%. By employing only SoftEdge control, the mIoU increased to 63.26% (+16.5%), while using only Tile control resulted in a mIoU of 62.08% (+14.3%). Notably, the highest improvement of 23.8% was attained when combining SoftEdge and Tile controls, leading to a mIoU of 67.20%.
Discussion and conclusions
In this work, we have proposed an SD-based approach to generate realistic surgical images from virtual simulator images and text prompts. The SD model was initially fine-tuned using DB and then used for inference, supported by Tile and SoftEdge ControlNets. The model can be trained using less than 100 real images without manual annotations and manages to generate realistic images that outperform the baseline in all considered evaluation metrics.
We consider this work to be a significant addition to the current foundation, offering researchers a valuable dataset to facilitate the development of machine learning solutions in image-guided and robotic surgery. This approach can produce fully labeled training data for supervised machine learning algorithms. Additionally, strict alignment of the created data with its ground truth annotations extends its potential for evaluation in various unsupervised and semi-supervised applications.
Despite that, our method has several limitations. Firstly, the use of a very limited training dataset makes image selection critical, requiring careful consideration to ensure representativeness and consistency. Secondly, our method heavily relies on the input image features. We leave addressing these limitations for future work. Temporal consistency is another major limitation of our approach, which we could not investigate in depth due to the lack of temporal coherence in the simulated data. To address this, a more detailed synthetic dataset with textures and tool-tissue interactions, along with extended annotations, would be necessary to support different tasks, such as surgical temporal modeling.
Overall, our proposed method represents a promising direction for generating realistic surgical images and has the potential to contribute to advancements in the field of image-guided and robotic surgery.
Fig. 1 Examples of synthetic data translated with our fine-tuned model to the CholecT45 style are shown. Three random frames from the simulator and their realistic translations are shown in the top and bottom rows, respectively
Fig. 3 Visual comparison between the raw image from the simulator (first from left) and images with enriched textures for two styles (second and third from left)
Fig. 5 Visual comparison of an example image generated with different control types applied. Without any type of control, the overall image consistency is degraded. With Tile control, details are clearly rendered […]
Table 1 Selected inference parameter values for each model: denoising strength, CFG, noise scheduler, SoftEdge, and Tile control strength. All the models use the DPM++ 2M Karras noise scheduler
|
2023-11-08T06:16:56.315Z
|
2023-11-07T00:00:00.000
|
{
"year": 2023,
"sha1": "8c8731721d4711fa452b374832234a3b84fdfe88",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11548-023-03030-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c98d91c2fd49c46586c7ded3abb813b668257321",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
241153408
|
pes2o/s2orc
|
v3-fos-license
|
Carotid Artery Stenting: A Single-Center Experience of a Tertiary Care Hospital
Background: Carotid Artery Stenting (CASt) is a method of carotid revascularization, which has developed rapidly over the last 3 decades. CASt is now used as an alternative to endarterectomy. Although excellent results from centers with high-volume experience seem to demonstrate that CASt is technically feasible and safe, there is an ongoing debate about the complications in terms of early outcomes in patients. Methods: This study was a retrospective study on patients diagnosed with carotid artery stenosis (CASs). The data were collected from Jan 2011 to Dec 2019. The patient data were screened for inclusion in the study based on findings from contrast angiography. Primary complications to be assessed were major or minor embolic stroke, cardiac arrest, and death within 30 days of the procedure. Minor stroke, myocardial infarction, acute hypotension and bradycardia, noncerebral bleeding, and access-site bleeding were considered secondary outcomes. Results: A total of 77 patients were included in the study, with a mean age of 74.4±11.3 years. The technical procedure of CASt was 100% successful for all the patients. The overall post-procedural stroke rate at 30 days was 7.7% (six out of 77). One (1.3%) patient died due to cardiac arrest. There were two cases (2.6%) of acute hypotension and bradycardia and one case (1.3%) of access-site bleeding within 30 days of stent implantation. Comorbid conditions were not significantly (p>0.05) associated with the post-procedural complications of CASt. Conclusion: In this study, we found that CASt is the most reliable technique for CASs and appears feasible and comparatively safe, with minimal post-procedural complications. However, advanced techniques are required to further reduce the death/stroke rate within 30 days of stent implantation.
Introduction
Cerebrovascular Diseases (CVDs) are the primary cause of neurological and physical impairment and death in adults. The USA alone reports that cerebrovascular diseases are the fifth most common cause of death, with a stroke rate of ≈ 795,000/year. CVDs are also the third leading cause of death in Korea; 48.2 persons per 100,000 die from CVDs every year [1]. Ischemic strokes are the most common strokes, often arising from atherosclerotic CASs.
The prevalence of CASs is about 0.5% in people aged 60-79 years and about 10% in those aged 80 years and over [2]. Among CVDs, the morbidity and mortality of stroke remain high worldwide. Stroke is the fourth leading cause of death and the number one cause of long-term disability. Ischemic strokes are mostly caused by carotid stenosis, which accounts for 20 to 25% of cases [3].
Effective treatment strategies targeting stenosis of the carotid artery are important for preventing the progression of cognitive dysfunction in patients with ischemic cerebrovascular disease [4]. During the past decade, the rapid improvement in interventional technology and materials has transformed a technique initially developed as a palliative treatment for inoperable patients into an alternative therapeutic option to surgery [5]. Nowadays, CASt has become a less invasive treatment option. Furthermore, CASt has been regarded as a reliable approach with lower risks of myocardial infarction, cranial nerve palsy, and access-site hematoma [6].
Despite significant improvement in the equipment and techniques used during CASt, as well as improved operator experience, complications still occur. A significant proportion of these events are peri-procedural complications that result in catastrophic events such as stroke or death [7]. Rapid evaluation of these complications is crucial for good patient outcomes. Thus, in our study, we investigated the potential risk factors and complications following stenting treatment.
Patient selection
This study was a retrospective study on patients diagnosed with CASs at Prince Sultan Military Medical City, a tertiary care hospital in Riyadh, Saudi Arabia. The data were collected from Jan 2011 to Dec 2019. The study was reviewed and approved by the IRB/REC of Prince Sultan Military Medical City. The patient data were screened for inclusion in the study based on findings from contrast angiography. The cases to be included in this study were of CASs, which can be defined as >50% occlusion of the carotid artery detected at any stage during screening or follow-up. Therefore, stenosis of more than 50% on digital subtraction angiography and more than 70% on computed tomographic angiography was considered as the inclusion criteria. In contrast, the presence of comorbid conditions like myocardial infarction within 30 days, chronic atrial fibrillation, paroxysmal atrial fibrillation within six months, and unstable angina within six months were considered as the exclusion criteria for the study.
Procedure for carotid artery stenting
The experts examined the neck and brain vessels for all the patients through computed tomographic angiography, and magnetic resonance angiography imaging before conducting the CASt. The magnitude of vessel stenosis was assessed with its length as well as location. The stenting process was carried out by an expert neuroradiology interventionalist with an ideal catheter sheath, which was inserted through Seldinger puncture of the right femoral artery. Afterward, the contrast medium was repeatedly injected via the catheter to evaluate carotid artery stenosis. Additionally, a guiding catheter was advanced towards the common carotid artery. The stent implantation was facilitated by the formation of a protective umbrella and later released following a pre-expanded balloon positioned at the desired site. Hereafter, the carotid stent was implanted and released at the carotid artery's stenosed segment. Immediate angiography was conducted following the removal of safety devices to ensure no extravasation of the contrast agent and no remaining stenosis of the artery.
Follow-up events and complications
Primary complications to be assessed were major embolic stroke, cardiac arrest, and death within 30 days of the procedure. Minor stroke, myocardial infarction, acute hypotension and bradycardia, noncerebral bleeding, access-site bleeding were considered secondary outcomes. Here, stroke can be defined as an acute neurologic event lasting for more than one day with a diagnosis of focal cerebral ischemia. Myocardial infarction can be defined as an increase of a troponin or creatinine kinase level in addition to symptoms that are consistent with electrocardiographic evidence of ischemia. Mainly computed tomographic angiography was analyzed for postoperative complications.
Statistical analysis
All data were analyzed using SPSS Statistics software (IBM Corp., NY, USA). All missing variables were coded and omitted from the final analysis. Baseline variables were summarized with the use of descriptive statistics. Categorical variables were summarized as counts and percentages and assessed with Chi-square and Fisher's exact analysis.
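As an illustration of the categorical comparisons described (the original analysis was run in SPSS), a 2×2 contingency table of a risk factor against 30-day complications can be tested with SciPy; the counts below are invented for demonstration, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = diabetes yes/no, columns = 30-day complication yes/no.
table = np.array([[4, 42],
                  [2, 29]])

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"Chi-square: chi2 = {chi2:.3f}, df = {dof}, p = {p_chi2:.3f}")
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
# Expected counts below 5 in any cell favour Fisher's exact test over chi-square.
print("expected counts:\n", np.round(expected, 2))
```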
Results
Based on the inclusion and exclusion criteria of the study, the data of 77 patients with internal CASs were selected for further analysis. The CASt for these patients was performed between Jan 2011 and Dec 2019 using an embolic protection device. The presence of CASs in all the patients was confirmed by CTA or magnetic resonance angiography. All the patients underwent clinical pre-procedural tests, conducted by a neurologist, and they had given written informed consent for the procedure and routine follow-up examinations. The study population consisted of 58 men (75.3%) and 19 women (24.7%) with a mean age of 74.37±11.3 years and a range of 47-101 years. To determine the prevalence of CASt in different age groups, we divided the patients into five subgroups (Table 1). The maximum number of CASt procedures was performed in patients in the 70-80 years age group, i.e., 27 (35.1%), followed by the group of more than 80 years of age with slightly fewer cases, 25 (32.5%) of the total. The lowest numbers of cases were observed in the 40-50 and 50-60 years categories, with four cases in each group, together accounting for 10.2% of the total cases. The group with 60-70 years of age accounted for 22.1% of cases, with an overall 17 cases of CASt. Most of the study subjects had risk factors like diabetes, hypertension, and Ischemic Heart Disease (IHD). Twenty-six (33.7%) patients reported suffering from diabetes and hypertension, whereas 20 subjects (25.9%) had all three risk factors, including diabetes, hypertension, and ischemic heart disease. Amongst all the study subjects, 25 (32.5%) were >80 years of age and were at high surgical risk; 12 (15.6%) suffered from diabetes and IHD; seven (9.1%) had hypertension; and six (7.8%) patients had hypertension together with IHD. The remaining five (6.5%) subjects were without any comorbid medical condition.
In our center, we achieved successful CASt implantation in each patient. The observed mean duration of the stenting procedure was 30.2±8.4 minutes. The mean duration from the most recent event to treatment was 14.3±5.2 days, whereas the average in-hospital stay was 6.2 days. Lesion length and stenosis were assessed intraoperatively utilizing angiography. Patients were followed for 30 days from the date of the procedure for any kind of clinical manifestation, and these are summarized in Table 2. Over the follow-up period of 30 days, six patients (7.7%) demonstrated major stroke after the procedure, which was successfully managed by the expert team. One patient died after the procedure due to cardiac arrest, so the overall mortality rate was 1.3%. There were two cases of acute hypotension and bradycardia, but they recovered completely. None of the patients experienced minor stroke, myocardial infarction, or noncerebral bleeding, whereas one patient (1.3%) experienced access-site bleeding, which was readily managed. Upon statistical calculation, no significant association was found between the risk factors and the 30-day follow-up outcomes, with p>0.05 for age, gender, and preexisting risk factors.
Discussion
Carotid stenosis is primarily caused by carotid atherosclerosis, and in recent reports, several risk factors have been identified for carotid atherosclerosis [8]. CASt has gained popularity as an alternative to other invasive techniques and provides long-term relief and patient satisfaction. Scrutiny of the literature demonstrates that the risk of stroke or death following CASt is amplified in patients with symptomatic stenosis [4]. In our study, the technical success rate was found to be 100%, which is in agreement with previously published reports [9], even though the patients included in the study had many comorbidities. There was only one death reported during the follow-up period of 30 days, due to cardiac arrest. None of the subjects in our study were excluded due to comorbid conditions, although 32.5% of patients were >80 years of age, which is considered high risk for surgery. Out of 77 procedures, the overall intra- and post-procedural complication rate for major stroke and death was only 9%. Although 49.4% of the patient population involved in the study had IHD, we did not observe any case of myocardial infarction and, positively, we had a lower percentage of stroke/death. Our results are in agreement with the study done by Meng, et al. [4], where they studied CASt in patients with high-risk factors like old age and stroke history. During the post-procedural 30-day follow-up period, they reported a low procedure-related complication rate and a stroke rate of 6.1% in the study population, which was a little lower in comparison to our study. The reason we observed a higher stroke rate might be that around 50% of the patients were in the high-risk category for surgery. However, they reported no death due to cardiac arrest but myocardial infarction at a rate of 3%. In another study by Kessler, et al. [10], they reported the data of CASt of 55 patients with symptomatic CASs for death or stroke within 30 days of the procedure. They demonstrated a periprocedural stroke/death rate of 5.4% at 30 days. Our results are more comparable with the most discussed trial in this regard, the EVA-3S randomized trial, which showed a CASt stroke/death rate of 9.6% at 30 days [2]. Furthermore, our results showed a slightly higher percentage of death/stroke in comparison to the SAPPHIRE trial [11], which was performed on 156 patients with CASt using cerebral protection devices; patients treated with CASt demonstrated a death/stroke rate of 4.5%. The lower prevalence of the death/stroke rate in the SAPPHIRE trial may be explained by the fact that these studies excluded patients with contralateral carotid occlusions, age >80 years, or the presence of cardiovascular comorbidities. In our study, we found slightly higher adverse/follow-up event rates (13%) due to a number of reasons, including 32.5% of patients with an age of >80 years and 49.4% with IHD/CVS. It is evident from the literature that age >70 years, cardiovascular diseases, plaque ulceration, and type-C lesions are independent predictors of follow-up complications after CASt [9]. Patients with contralateral involvement are considered high-risk patients, and therefore utmost precaution should be taken when dealing with these patients.
In addition to this, the operator's experience also plays a role in procedural success and follow-up complications resulting from CASt. Published reports demonstrate that centers with a lower number of interventions report higher stroke/death rates as follow-up complications, whereas centers with a higher number of interventions report lower stroke/death rates [12]. In our study, we observed a mean stenting procedure duration of 30.2±8.4 minutes, which is in agreement with other published studies. The duration of the stenting procedure is significantly related to the operator's experience, as a randomized trial showed a significant reduction in procedure time and contrast volume with an increase in the operator's experience.
Antiplatelet therapy is a periprocedural requirement for CASt; even so, we observed only one case (1.3%) of access-site bleeding, which is significantly lower than in other published reports [4]. Patients with diabetes are considered high-risk patients for any kind of surgery, and in the case of CASt, diabetic neuropathy is a very common periprocedural complication [13]. However, in this study, although more than 50% of the patients had diabetes, we observed an acceptable incidence of diabetic neuropathy. Previous studies suggested that post-procedural complications are generally associated with patients' risk factors/comorbid conditions like male sex, smoking, and hyperlipidemia [4], whereas hypertension, diabetes mellitus, and peripheral vascular disease do not have a significant association with post-procedural complications. We also observed the same finding: preexisting risk factors like diabetes, hypertension, and IHD were not significantly (p>0.05) associated with the 30-day post-procedural outcomes. However, we cannot draw a definitive conclusion from this study as it has the limitations of a small sample size and a single-center population, and thus has low statistical power. More extensive and prospective studies are warranted to draw firm conclusions.
Conclusion
In this study, CASt was performed in 77 patients, and our center achieved good technical success and relatively acceptable post-procedural complication rates. Despite performing CASt in high-risk groups, the overall stroke rate was 7.7%, while the death rate was 1.3%. There was no significant association between risk factors and post-procedural complications, which may be due to the small sample size and single-center experience. The data from this study may help medical professionals to plan and carry out CASt in target patients.
|
2020-12-03T09:05:30.515Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "6de5c68194b8c07a691cb38a0dd36493f8f43aad",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.29011/2577-2228.100091",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ad74933b64d07b6be491b5f46f8368b6aae0e17c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
13210308
|
pes2o/s2orc
|
v3-fos-license
|
STAT3 Potentiates SIAH-1 Mediated Proteasomal Degradation of β-Catenin in Human Embryonic Kidney Cells
The β-catenin functions as an adhesion molecule and a component of the Wnt signaling pathway. In the absence of the Wnt ligand, β-catenin is constantly phosphorylated, which designates it for degradation by the APC complex. This process is one of the key regulatory mechanisms of β-catenin. The level of β-catenin is also controlled by the E3 ubiquitin protein ligase SIAH-1 via a phosphorylation-independent degradation pathway. Similar to β-catenin, STAT3 is responsible for various cellular processes, such as survival, proliferation, and differentiation. However, little is known about how these molecules work together to regulate diverse cellular processes. In this study, we investigated the regulatory relationship between STAT3 and β-catenin in HEK293T cells. To our knowledge, this is the first study to report that β-catenin-TCF-4 transcriptional activity was suppressed by phosphorylated STAT3; furthermore, STAT3 inactivation abolished this effect and elevated activated β-catenin levels. STAT3 also showed a strong interaction with SIAH-1, a regulator of active β-catenin via degradation, which stabilized SIAH-1 and increased its interaction with β-catenin. These results suggest that activated STAT3 regulates active β-catenin protein levels via stabilization of SIAH-1 and the subsequent ubiquitin-dependent proteasomal degradation of β-catenin in HEK293T cells.
INTRODUCTION
The β-catenin protein functions as an adhesion molecule that associates with E-cadherin and actin filaments at the cell membrane and as a component of the canonical Wnt pathway (Orsulic et al., 1999). In the absence of the Wnt ligand, cytoplasmic β-catenin is constantly degraded by a "destruction complex" consisting of the scaffolding protein axin, the tumor suppressor adenomatous polyposis coli gene product (APC), casein kinase 1 (CK1), and glycogen synthase kinase 3 (GSK3). CK1 and GSK3 sequentially phosphorylate β-catenin, which results in recognition by the E3 ubiquitin protein ligase β-TrCP and subsequent ubiquitin-dependent proteasomal degradation (He et al., 2004). This continual elimination of β-catenin leads to repression of Wnt target genes, such as c-Myc, c-jun, and cyclin D1, by preventing the translocation of β-catenin to the nucleus and the formation of complexes with the TCF/LEF family of proteins (Liu et al., 2002; Polakis, 2000). Wnt/β-catenin signaling regulates diverse cellular processes, including organ development, cellular proliferation, morphology, motility, stemness maintenance, and fate determination (Cadigan and Nusse, 1997; DALE, 1998).
Control of β-catenin levels via constant degradation is a major regulatory mechanism, and reduced levels of β-catenin prevent its nuclear translocation and the activation of target genes. As mentioned above, phosphorylated β-catenin is recognized by β-TrCP and designated for ubiquitin-dependent proteasomal degradation. In response to the induction of p53, β-catenin also undergoes a phosphorylation-independent degradation pathway via interaction with the E3 ubiquitin protein ligase SIAH-1, which encourages the binding of ubiquitin-conjugating enzymes (E2s) and proteasomal degradation (Liu et al., 2001; Matsuzawa and Reed, 2001). Humans have two SIAH genes encoding Sina-like proteins, SIAH-1 and SIAH-2 (Hu et al., 1997). Sina was initially discovered in Drosophila as a requirement for R7 photoreceptor cell differentiation (Carthew and Rubin, 1990). The SIAH-1 protein plays a key role in many biological processes, such as the cell cycle, programmed cell death, and oncogenesis (Nemani et al., 1996).
STAT3, a member of the STAT family, is a latent transcription factor that mediates cytokine- and growth factor-directed transcription (Levy and Darnell, 2002). In response to the binding of extracellular ligands, receptor and non-receptor protein tyrosine kinases phosphorylate STAT3 at tyrosine residue 705 within the transactivation domain near the carboxy-terminus (Improta et al., 1994). Phosphorylated STAT3, the active form of STAT3, dimerizes with other activated STAT proteins in the cytoplasm to form homo- or hetero-dimers and then translocates to the nucleus to bind to DNA and stimulate the production of target genes (Darnell, 1997). STAT3 participates in a wide variety of cellular processes, including proliferation, postnatal survival, differentiation in the context of growth and development, invasion, angiogenesis, and metastasis in the context of cancer progression (Bromberg et al., 1999; Levy and Lee, 2002).
Despite their common cellular functions, such as proliferation and fate determination, research on the relationship between STAT3 and β-catenin is limited. A previous study reported that STAT3 cooperates with β-catenin to exert oncogenic effects in breast cancer cells (Armanious et al., 2010). In contrast, another study showed that treatment with siSTAT3 in HCC increased β-catenin levels (Wang et al., 2010). To study the precise regulatory relationship between STAT3 and β-catenin, we created an artificial system by transfecting STAT3 and β-catenin in HEK293T cells. Here we report that STAT3 activation regulates the protein levels of β-catenin. Specifically, STAT3 stabilizes SIAH-1, which enhances the interaction between SIAH-1 and active β-catenin and results in the ubiquitin-dependent proteasomal degradation of β-catenin in HEK293T cells.
Cell lines
The human embryonic kidney cell line HEK293T was maintained in DMEM (Life Technologies, Inc., USA) containing 10% FBS (Life Technologies, Inc.) and 1% antibiotics (Life Technologies, Inc.). The cells were maintained in a humidified incubator at 37°C in the presence of 5% CO2.
Plasmids
To generate an expression vector for STAT3 cDNA (GenBankTM accession number NM_213662, purchased from Origene Technologies Inc.), the corresponding STAT3 cDNA was cloned in-frame into the pLL3.7 vector (Enzynomics Inc.). Point mutant plasmids of the tyrosine 705 residue, the serine 727 residue, and the tyrosine 705 and serine 727 residues of STAT3 were made using pLL3.7-STAT3 (Enzynomics Inc.). The plasmid encoding constitutively activated STAT3 used in the present study, namely pCMV-caSTAT3, was purchased from Addgene (USA). HA-β-catenin was kindly provided by Dr. Jong-Wan Park.
Transfection, siRNA and MG132 treatment
Transient plasmid DNA transfection was done using Lipofectamine 2000 (Invitrogen, UK) according to the manufacturer's instructions. siRNAs against STAT3 and a negative control were used at 100 nM on HEK293T cells transfected using Qiagen HiPerfect according to the manufacturer's instructions (Catalog No. 301705). MG132 (10 μM) (PeproTech) was used to treat HEK293T cells for 10 h before harvest.
Western blot analysis
Cells were washed with cold PBS and lysed in cold RadioImmunoPrecipitation Assay (RIPA) buffer containing protease inhibitors (2 mM phenylmethylsulphonyl fluoride (PMSF), 10 μg/ml leupeptin, and 2 mM ethylenediaminetetraacetic acid (EDTA)). The lysates were collected and centrifuged for 20 min at 13,000 rpm at 4°C, and the supernatants were collected. Equal amounts of proteins from the supernatants were separated by SDS-PAGE and transferred onto a nitrocellulose (NC) membrane. The membranes were blocked in TBS-T containing 5% non-fat dried milk for at least one hour and subsequently incubated with specific primary antibodies overnight at 4°C. After washing with TBS-T for 30 min at room temperature, the membranes were further incubated with an HRP-conjugated secondary antibody for 1 h. After washing with TBS-T, the signals were detected using SuperSignal West Femto (Thermo Scientific, USA). The following antibodies were used: phospho-tyrosine STAT3, total STAT3, HA-tag, and ubiquitin (Cell Signaling Technology, USA); active β-catenin and Myc-tag (Millipore); total β-catenin (Santa Cruz Biotechnologies, USA); and α-tubulin (Thermo Scientific).
Immunoprecipitation
Cell extracts were precleared with anti-Myc antibody or anti-HA antibody overnight at 4°C and incubated with protein G sepharose beads for 1 h at 4°C. The beads were then washed five times with ice-cold lysis buffer and suspended in SDS sample loading buffer. Western blotting was then performed.
Luciferase assay
Cells were cotransfected with 0.25 to 2 µg each of the reporter construct and a CMV-β-gal plasmid. Luciferase activity was measured using a Lumat LB960 luminometer (Berthold) and divided by β-gal activity to normalize for transfection efficiency.
Statistical analysis
Data are presented as mean ± standard deviation (SD), and statistical significance was determined by an unpaired two-tailed Student's t-test implemented in Microsoft Excel (**p < 0.001, *p < 0.01).
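For readers who prefer a scripted workflow, the normalization and test above can be reproduced along the following lines; this is an illustrative sketch rather than the authors' Excel procedure, and all numerical values below are hypothetical.

```python
# Illustrative sketch (not the authors' Excel workflow): normalize TOP-flash
# luciferase readings by beta-gal activity, then compare two groups with an
# unpaired two-tailed Student's t-test. All numbers are hypothetical.
import numpy as np
from scipy import stats

luc_control = np.array([1520.0, 1480.0, 1610.0])   # raw luciferase counts
bgal_control = np.array([0.95, 1.02, 1.08])         # beta-gal activity
luc_treated = np.array([690.0, 720.0, 655.0])
bgal_treated = np.array([1.01, 0.97, 1.05])

norm_control = luc_control / bgal_control            # transfection-normalized
norm_treated = luc_treated / bgal_treated

# Unpaired, two-tailed Student's t-test (equal variances by default)
t_stat, p_value = stats.ttest_ind(norm_control, norm_treated)
print(f"control: {norm_control.mean():.1f} ± {norm_control.std(ddof=1):.1f}")
print(f"treated: {norm_treated.mean():.1f} ± {norm_treated.std(ddof=1):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```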
Activated STAT3 suppresses β-catenin-TCF-4 transcriptional activity
To determine if constitutively activated STAT3 (caSTAT3) influences β-catenin transcriptional activity, we analyzed TOP-flash reporter activity in human embryonic kidney (HEK293T) cells overexpressing both caSTAT3 and β-catenin. The protein levels of STAT3 phosphorylated at tyrosine residue 705 and of total STAT3 showed a concentration-dependent elevation based on the amount of transfected DNA (Fig. 1A). The β-catenin/TCF-4-dependent transcriptional activity showed a concentration-dependent decrease in response to the amount of transfected caSTAT3. To confirm that STAT3 activation is associated with decreased β-catenin transcriptional activity, we completed cotransfections of several STAT3 mutants with β-catenin (Fig. 1B). The decreased TOP-flash activity induced by STAT3 was restored to control levels in cells transfected with STAT3 mutants. These results contradicted previous reports that STAT3 aids in β-catenin activation in cancer cells (Armanious et al., 2010). Thus, we confirmed that activated STAT3 supports the transcriptional activity of β-catenin in breast cancer cells (Supplementary Figs. 1A and 1B) but not in human embryonic kidney cells. These findings indicate that phosphorylated STAT3 suppresses the transcriptional activity of β-catenin in non-tumorous cells.
Inhibition of STAT3 increases active β-catenin levels
To examine the role of STAT3 on the protein levels of β-catenin, HEK293T cells were transfected with β-catenin and then treated with 10 µM MG132 for 10 h before harvesting. Cells were incubated with 40 µM AG490 or a 10 µM STAT3 inhibitor overnight before harvesting. Cells were harvested, and the proteins were extracted in RIPA buffer for Western blotting. Anti-α-tubulin was used as the loading control.
We performed a siRNA-mediated STAT3 knockdown with or without a GSK inhibitor treatment in HEK293T cells. Western blot analysis showed that the GSK inhibitor increased the protein levels of endogenous β-catenin. STAT3 knockdown combined with a GSK inhibitor treatment further enhanced the protein levels of active β-catenin (Fig. 2A). To confirm if the inhibition of STAT3 activity also increased non-phosphorylatable mutant β-catenin (active β-catenin), we incubated cells with AG490 (40 µM) or a STAT3 inhibitor (10 µM) overnight. HA-β-catenin is a mutant form of β-catenin, also termed active β-catenin, because the serine 33 phosphorylation site of β-catenin is mutated so that it cannot be degraded by the β-TrCP-dependent pathway. As shown in Fig. 2B, treatment with either AG490 or a STAT3 inhibitor increased the protein levels of active β-catenin. These results suggest that STAT3 is an important regulator of active β-catenin levels.
STAT3 stabilizes SIAH-1 protein via a direct interaction
STAT3 knockdown increased the levels of non-phosphorylatable mutant β-catenin, which suggests that SIAH-1 mediates the degradation of active β-catenin. To determine if STAT3 regulates SIAH-1, we examined the effect of STAT3 on the RNA and protein levels of SIAH-1 in HEK293T cells using RT-PCR and Western blotting, respectively. As shown in Fig. 3A, the RNA expression of SIAH-1 was not affected by STAT3 transfection; however, STAT3 activation increased the protein levels of SIAH-1. This result indicates that SIAH-1 stability is regulated by STAT3 activation. Next, we confirmed the STAT3 dependence of SIAH-1 protein levels using STAT3 siRNA and a STAT3 expression plasmid (Fig. 3B). We speculated that STAT3 interacts with SIAH-1 to improve protein stability; thus, we assessed the protein-protein interaction between STAT3 and SIAH-1 using immunoprecipitation. Either myc-tagged wild-type STAT3 (wtSTAT3-myc) or myc-tagged STAT3 mutated at tyrosine residue 705 (STAT3-Y705F-myc) was transfected into HEK293T cells, and immunoprecipitation was performed with an anti-myc antibody (Fig. 3C). Ectopic STAT3 co-precipitated with endogenous SIAH-1, and the interaction between STAT3 and SIAH-1 was reduced when STAT3 activation was decreased via the STAT3 mutation at tyrosine residue 705. These results indicate that STAT3 activation is essential for its interaction with SIAH-1. To evaluate if β-catenin affects the interaction between STAT3 and SIAH-1, wtSTAT3-myc and HA-β-catenin were co-transfected into HEK293T cells. Next, we performed Western blotting and co-immunoprecipitation with an anti-myc antibody (Fig. 3D). The data show that STAT3 interacted with SIAH-1 despite the overexpression of HA-β-catenin. These results suggest that STAT3 interacts with SIAH-1 and that active β-catenin does not influence the interaction between STAT3 and SIAH-1.

Fig. 3. STAT3 stabilizes SIAH-1 via a direct interaction. (A) HEK293T cells were transfected with wtSTAT3 and β-catenin. After 24 h, cells were harvested and the mRNA and proteins were extracted for RT-PCR and Western blotting, respectively. (B) HEK293T cells were transfected with siSTAT3 (100 nM) or scrambled siRNA (100 nM, nonspecific siRNA used as a negative control) and with wtSTAT3, β-catenin, or a control pCMV vector. After culturing for 24 h, cells were subjected to Western blotting. (C) HEK293T cells were transfected with wtSTAT3-myc or STAT3-Y705F-myc. After 24 h, an immunoprecipitation assay was performed with an anti-myc antibody, and Western blotting was completed with the indicated antibodies. (D) HEK293T cells were transfected with wtSTAT3-myc and β-catenin. After 24 h, an immunoprecipitation assay was performed with an anti-myc antibody and Western blotting was completed with the indicated antibodies.
STAT3 facilitates the proteasomal degradation of β-catenin
Activated STAT3 increases the protein levels of SIAH-1 via a direct interaction; thus, it is possible that STAT3 promotes the interaction between SIAH-1 and active β-catenin, leading to the proteasomal degradation of active β-catenin. The results showed that co-expression of STAT3 and HA-β-catenin increased the binding of endogenous SIAH-1 and HA-β-catenin (Fig. 4A). These findings indicate that HA-β-catenin readily undergoes proteasomal degradation through SIAH-1. To examine if STAT3 enhances polyubiquitination of HA-β-catenin, STAT3 and HA-β-catenin were co-expressed in HEK293T cells in the presence or absence of the proteasome inhibitor MG132 (10 µM) to check for the ubiquitination of HA-β-catenin. As shown in Fig. 4B, the polyubiquitination of HA-β-catenin was increased in the presence of both MG132 and a STAT3 inhibitor. Taken together, our results suggest a distinct role of phosphorylated STAT3 in β-catenin degradation in non-tumorous cells. STAT3 enhances the protein expression and transcriptional activity of β-catenin in many types of tumors. In contrast, STAT3 activation leads to proteasomal degradation of active β-catenin by stabilization of SIAH-1 in the developmental process (Fig. 4C).

Fig. 4. (A) After 24 h, cells were harvested. We performed an immunoprecipitation assay with an anti-HA antibody, and Western blotting was completed with the indicated antibodies. (B) HEK293T cells were transfected with wtSTAT3-myc and β-catenin-HA and then treated with 10 µM MG132 for 10 h before harvest. After 24 h of transfection, cells were harvested. An immunoprecipitation assay was performed with an anti-HA antibody, and Western blotting was completed with the indicated antibodies. (C) Graphical summarization. Two distinct mechanisms of β-catenin degradation by activated STAT3 are described.
DISCUSSION
STAT3 is responsible for various cellular processes, such as survival, proliferation, and differentiation. β-catenin, as a co-activator of canonical Wnt signaling, is also involved in diverse cellular processes. Nevertheless, little is known about whether these molecules work with or regulate each other. In this report, we found that STAT3 controls active β-catenin protein levels through regulation of SIAH-1, leading to the ubiquitin-dependent proteasomal degradation of active β-catenin in non-tumorous HEK293T cells.
In cancer, a synergistic role between phosphorylated STAT3 and active β-catenin in potentiating tumorigenic progression has been reported (Armanious et al., 2010; Bromberg et al., 1999; Polakis, 2000; Wang et al., 2010). Thus, targeting STAT3 phosphorylation and active β-catenin provides therapeutic benefits in malignant tumors. On the other hand, STAT3 and β-catenin also play critical roles, such as in embryonic stem cell pluripotency, in non-tumorous embryonic cells (Hao et al., 2006; Kielman et al., 2002). During developmental processes, both STAT3 and β-catenin play important roles in cell survival, maintenance of pluripotency, and specific cellular differentiation. The Wnt/β-catenin pathway acts to prevent ES cell differentiation through convergence on the LIF/JAK-STAT3 pathway (Hao et al., 2006). However, genetic and molecular evidence shows that the ability and sensitivity of embryonic stem cells to differentiate into the three germ layers is inhibited by increased doses of β-catenin caused by specific Apc mutations (Kielman et al., 2002). Sokol and colleagues suggest that the Wnt/β-catenin signaling pathway is a potential mechanism to reinforce embryonic cell fate decisions (Sokol, 2011). For this reason, knowledge of the mechanisms that regulate the balance of β-catenin protein levels is important during the embryonic developmental stage.
In this study, we found that phosphorylated STAT3 inhibits the transcriptional activity of β-catenin in non-tumorous HEK293T cells (Fig. 1). Interestingly, phosphorylated STAT3 repressed only non-phosphorylatable β-catenin (active β-catenin), and these data suggest that phosphorylated STAT3 regulates the non-canonical β-catenin degradation pathway rather than the β-TrCP-dependent degradation pathway (Fig. 2). As expected, SIAH-1, the major component of the non-canonical β-catenin degradation pathway, was stabilized by a protein-protein interaction with activated STAT3 (Fig. 3). Through the stabilization of the SIAH-1 protein, STAT3 facilitates the proteasomal degradation of β-catenin (Fig. 4). From these results, we suggest that STAT3 activation is involved in the degradation of active β-catenin via SIAH-1 in non-tumorous cells.
Our findings provide not only a comprehensive understanding of the interaction between STAT3 and SIAH-1 but also a novel mechanism of β-catenin degradation by STAT3 phosphorylation, which may be one strategy for orchestrating the developmental processes of embryonic cells.
Note: Supplementary information is available on the Molecules and Cells website (www.molcells.org).
Resonant Stratification in Titan’s Global Ocean
Titan’s ice shell floats on top of a global ocean, as revealed by the large tidal Love number k 2 = 0.616 ± 0.067 registered by Cassini. The Cassini observation exceeds the predicted k 2 by one order of magnitude in the absence of an ocean, and is 3σ away from the predicted k 2 if the ocean is pure water resting on top of a rigid ocean floor. Previous studies demonstrate that an ocean heavily enriched in salts (salinity S ≳ 200 g kg−1) can explain the 3σ signal in k 2. Here we revisit previous interpretations of Titan’s large k 2 using simple physical arguments and propose a new interpretation based on the dynamic tidal response of a stably stratified ocean in resonance with eccentricity tides raised by Saturn. Our models include inertial effects from a full consideration of the Coriolis force and the radial stratification of the ocean, typically neglected or approximated elsewhere. The stratification of the ocean emerges from a salinity profile where the salt concentration linearly increases with depth. We find multiple salinity profiles that lead to the k 2 required by Cassini. In contrast with previous interpretations that neglect stratification, resonant stratification reduces the bulk salinity required by observations by an order of magnitude, reaching a salinity for Titan’s ocean that is compatible with that of Earth’s oceans and close to Enceladus’ plumes. Consequently, no special process is required to enrich Titan’s ocean to a high salinity as previously suggested.
INTRODUCTION
Recent decades of space exploration have revealed a solar system populated with internally heated icy worlds where large reservoirs of liquid water accumulate in subsurface global oceans (Nimmo & Pappalardo 2016).These worlds signal a possibility for life beyond Earth in a location that is accessible to future in-situ space exploration.A global ocean plays a fundamental role in determining the potential habitability of these icy worlds because water is required for life as we know it.Beyond detection, however, these global oceans remain poorly understood.Ocean thickness is typically known within broad limits ranging in the tens of percent of the icy world radius (Sohl et al. 2003;Grindrod et al. 2008), preventing an accurate assessment of the satellite's liquid water inventory and thermal history.On Earth, ocean dynamics modulate the distribution of nutrients and energy sources required by life (Uchida et al. 2020, e.g.), but on icy worlds the type of convection or lack thereof remains poorly known (Jansen et al. 2023).Here we argue in favor of the stratification of Titan's ocean based on a new interpretation of Cassini gravity measurements where internal gravity waves in Titan's ocean become resonantly excited by tides raised by Saturn (see also Luan (2019)).We hereafter refer to this proposed scenario as resonant stratification.
Titan is the second largest solar system icy satellite and the best characterized from the perspective of gravity measurements (i.e., moment of inertia, J 2 , C 22 , and k 2 (Durante et al. 2019)), offering us a unique opportunity to reveal the hidden interior structure and dynamics of the global ocean within.The Cassini spacecraft unambiguously signaled the existence of a global ocean from the observed tidal response registered in the Love number k 2 = 0.616 ± 0.067 (Durante et al. 2019).The observation is an order of magnitude larger than the predicted k 2 ≈ 0.03 when the ocean is absent (Rappaport et al. 2008).Ignoring the ice shell, a global ocean of pure liquid water produces a k 2 ≈ 0.468 independent of ocean thickness if the high-pressure ice and silicates beneath the ocean behave approximately rigidly and the total mass of the satellite is conserved (Section 2.1.2).The presence of an overlying elastic ice shell further restricts the motion of the ocean beneath.Estimates of Titan's ice shell thickness yield d ∼ 100 km (Sohl et al. 2003;Nimmo & Bills 2010;Luan 2019); an ice shell this thick reduces tides down to k 2 ≈ 0.42 (Section 2.1.3),which is roughly 3σ away from the Cassini observation.A thinner ice shell provides a k 2 closer to the observation, but then the heat conducted across the ice shell exceeds the interior heat production expected from radiogenic and tidal heating (Sohl et al. 2003;Luan 2019), leading to thickening of the ice shell by freezing over time.A thinner shell is also more difficult to reconcile with the observed topography (Nimmo & Bills 2010).The resonant stratification presented here can self-consistently explain the large k 2 observed by Cassini by introducing a positive dynamical fractional correction to the non-resonant hydrostatic k 2 ≈ 0.42.Resonant stratification enhances k 2 by dynamic amplification of the vertical displacement of Titan's surface (Fig. 1).Ocean waves can produce significant dynamic surface displacement when resonantly excited by Saturn's eccentricity tides, namely when Titan's orbital frequency (ω s = 4.561 × 10 −6 s −1 ) is close to a match with a normal mode emerging from an aggregation of Titan's ocean waves.Amplification results from the constructive addition of ocean waves over many cycles of resonant excitation, until balanced by dissipation.The effect produces dynamic gravity that can be registered by the tracking system of a nearby passing spacecraft (i.e., Cassini).A vertical gradient in ocean salinity promotes ocean stratification and the emergence of internal gravity waves.These waves are restored by buoyancy forces and organize in a spectrum of low-frequency normal modes that asymptotically approach zero frequency from an upper bound frequency ω 2 ≲ N 2 , where N 2 is the Brunt-Vaisala frequency that typically describes the strength of ocean stratification (Section 2.2 and Appendix A).A typical value N 2 ≳ 10 −8 s −2 ≳ ω 2 s suggests a spectrum of internal gravity waves capable of resonance with eccentricity tides, and roughly corresponds to a modest vertical salinity gradient where salt concentration increases by ≳ + 1 g/kg every ∼100 km of ocean thickness (the salinity of Earth's oceans is roughly 35 g/kg).Crucially, our mechanism works for a range of mean ocean salinity values, including Earth's and that inferred for Enceladus (Postberg et al. 2009).
When the ocean is fully convective (i.e., unstratified) and homogeneously mixed, ocean waves require an unphysically thin ocean (H < 1 km) to resonate with eccentricity tides (Matsuyama et al. 2018).An ocean this thin is not compatible with the thermal history of Titan as expected from radiogenic and tidal heating (Tobie et al. 2006;Grindrod et al. 2008).Only the high-frequency tides from moon-to-moon interactions can resonantly excite a fully mixed ocean of realistic thickness (H ∼ 100 km) (Hay et al. 2020), but those signals are relatively small and thus beyond Cassini's detection threshold.
Alternative scenarios can enhance k 2 to the value required by Cassini (Fig. 1), but require either high salinity of the ocean or low rigidity of the solid Titan.Previous calculations (Rappaport et al. 2008;Waite et al. 2017) show that a heavy ocean produces the required extra gravity when its density is increased by a high concentration of dissolved salts (salinity S ∼ 200 g/kg).The required concentration of salts is an order of magnitude higher than that in Earth's oceans, Enceladus' plumes, and that predicted from water-rock interactions on Titan's ocean floor (Postberg et al. 2009;Leitner & Lunine 2019).On the other hand, a low rigidity or low viscosity ocean floor can allow large vertical displacements of the ocean floor that produce an extra gravity signal that constructively adds to the gravity from tides at the surface (Durante et al. 2019).In practice, reasonable estimates of the rigidity and viscosity for silicates and high-pressure ice at tidal timescales produce a negligible ocean floor displacement.For example, the contribution to k 2 from the elastic icy ocean floor displacement can be estimated to be negligible following k 2,ice /k 2 × (ρ hp-ice − ρ)/ρ ∼ 2%, where k 2,ice /k 2 is the ratio between the Love number when the ocean is excluded/included, respectively, ρ hp-ice is the density of high-pressure ice, and ρ the ocean density.The previous estimate assumes that the tidal response of the ocean floor can be crudely approximated by the tidal response of an oceanless Titan model after introducing a correction for the relatively higher surface density of high-pressure ice.A viscous icy ocean floor can introduce a non-negligible component to k 2 if the viscosity of high-pressure ice is lower than ∼10 15 Pa s (Rappaport et al. 2008).The viscosity of high-pressure ice is typically comparable to or greater than this value when the temperature of the ice is ∼10% lower than the melting temperature (Durham et al. 1998), a reasonable assumption at the ocean floor.
Since resonantly excited internal gravity waves can produce the required dynamic gravity, the main challenge for resonant stratification is how to establish a long-lived tidal resonance in an icy satellite. The possibility of resonant tidal excitation is suggested by the tendency of Titan's ocean to freeze due to secular cooling, which imposes a continuous evolution on the frequency of ocean waves by a change in the stratification profile of the ocean. At some point in Titan's freezing history, the normal frequency of ocean waves crosses the orbital frequency. After first being encountered, a tidal resonance can be sustained over long geological timescales provided that a stable fixed point is reached between orbital and interior evolution, analogously to ideas previously applied to tidal resonant locking in Jupiter and Saturn (Fuller et al. 2016; Luan et al. 2018; Lainey et al. 2020; Idini & Stevenson 2022). The onset of resonant ocean waves enhances Titan's tidal heating in the ocean (Tyler 2011, 2014; Rovira-Navarro et al. 2019), with the additional heating slowing or halting the freezing of the ocean.
BASIC EQUATIONS
The tidal Love number k_ℓm represents a normalization of the gravitational tidal response by the gravitational excitation, k_ℓm = ϕ′_ℓm / ϕ^e_ℓm, where the eccentric gravitational excitation ϕ^e_ℓm is derived in Appendix B and the tidal response ϕ′_ℓm due to eccentricity tides is calculated numerically and analytically in this section in the context of various models. We concentrate on the ℓ = m = 2 Love number k2, which matches the spherical harmonic of the Cassini observation. We do not discuss obliquity tides in this manuscript because they excite m = 1 spherical harmonics that do not contribute to the Cassini k2 observation.
Equilibrium tide
Here we reproduce previously known results of the tidal response of icy satellites with oceans using simple principles.Our objective is to illuminate the effects contributing to the amplitude of the equilibrium tide and to produce an estimate for Titan within a few percent of accuracy.We start by deriving the hydrostatic k 2 in a homogeneous fluid body.This derivation shows how all the tidal gravity in k 2 emerges from a thin layer of displaced fluid near the surface.However, Titan strongly departs from this estimate because its density profile is not homogeneous and its tidal response is not fully hydrostatic.Our second derivation introduces the effects of a nonhomogeneous density profile and an inner core that responds rigidly to diurnal tides (i.e., tidal frequency ω = ω s ) rather than hydrostatically.Finally, our last derivation shows how an elastic ice shell covering the ocean reduces the tidal response and provides a rough estimate that agrees with more sophisticated modeling.After considering all these effects acting jointly, the result is what we consider the diurnal equilibrium tide of Titan k 2 ≈ 0.42; a tide restored purely by static forces where inertial forces are neglected (i.e., no dynamics).
Hydrostatic tides in a homogeneous fluid body
We first revisit the classical problem of tides in a homogeneous incompressible body that satisfies hydrostatic equilibrium using basic principles. The linearized equation for the conservation of momentum is

0 = −(1/ρ)∇p′ + ∇φ̃′.   (1)

The tidal response of the icy satellite produces adiabatic perturbations represented in the gravitational potential ϕ′ and pressure p′. In this expression, the potential of the gravitational pull ϕ^e and the tidal gravitational potential ϕ′ are combined into φ̃′ = ϕ^e + ϕ′ for analytical simplicity, so that equation (1) integrates to

p′ = ρ φ̃′.   (2)

In an incompressible body, the density ρ remains constant regardless of forcing and the only change in the gravity field comes about from the radial displacement ξ_r of the surface. This displacement is typically small compared to the radius R. Thus, we can calculate the gravity field from the integration over the volume of a thin shell of fluid with thickness ξ_r,

ϕ′_ℓm = [3/(2ℓ+1)] g ξ_{r,ℓm},   (3)

where g is the surface gravity acceleration calculated in hydrostatic equilibrium, g = 4πGρR/3, and G the gravitational constant.

The boundary condition on the surface indicates that the fluid is free from external pressure (i.e., δp = 0). This statement translates into turning the tidal displacement into the sole source for the pressure perturbation, formally expressed as

p′(R) = ρ g ξ_r.   (4)

Next, we can go back to equation (2) and obtain an expression for the tidal forcing,

ϕ^e = g ξ_r − ϕ′.   (5)

We evaluate the tidal response and forcing at the surface (r = R) to obtain the tidal Love number from the ratio between them,

k_2 = ϕ′/ϕ^e = (3/5)/(1 − 3/5) = 3/2,   (6)

which agrees with the classical result k2 = 1.5 of the fluid Love number of a uniform-density body in hydrostatic equilibrium (Munk et al. 1977, e.g.).
A global ocean with a rigid ocean floor
We now consider a body with a rigid core of density ρ_c and radius R_c overlaid by an ocean of density ρ, introducing vertical differentiation in the density profile. The main change compared to the homogeneous case is a change in the expression for the surface gravity acceleration, g = 4πGρ̄R/3, where ρ̄ is the body's mean density. The tidal gravity field still emerges solely from the radial displacement ξ_{r,ℓm} of a thin region near the surface,

ϕ′_ℓm = [3/(2ℓ+1)] (ρ/ρ̄) g ξ_{r,ℓm}.   (7)

Following the same steps as before, the resulting tidal Love number is independent of core properties,

k_2 = (3ρ/5ρ̄) / (1 − 3ρ/5ρ̄).   (8)

This equation yields k2 = 0.468 when using Titan's mean density ρ̄ ≈ 1.88 g cm⁻³ and ρ = 1 g cm⁻³. Equation (8) reproduces numerical results (within 5%) obtained previously for Europa when the icy shell is ignored (Moore & Schubert 2000), namely k2 ≈ 0.249 when using ρ̄ ≈ 3.01 g cm⁻³. Analogous versions of equation (8) can be found in the literature when following an alternative derivation (Dermott 1979; Murray & Dermott 1999; Beuthe 2015, e.g.).
Ice shell thickness
An elastic ice shell constrains the radial displacement of the ocean that produces the tidal response registered in k 2 .The weight of the ocean trying to reach an equipotential surface is balanced by the resistance of the ice shell to deformation (Kamata et al. 2015, e.g.).Here we provide a simple argument to evaluate the role that the ice shell thickness plays in reducing the amplitude of the tidal response.
Consider a global ocean of density ρ that is trying to reach the equipotential surface defined by the radial displacement ξ_req. The pressure that the weight of the ocean exerts on the base of the ice shell can be expressed by

p ∼ ρ g (ξ_req − ξ_r),   (9)

where g is the gravity acceleration and ξ_r is the radial tidal displacement. The shear stress τ on the ice shell can be expressed as

τ ∼ µ ξ_r / R,   (10)

where µ is the shear modulus of the ice shell.
Next, we consider the equilibrium of forces over a meridional section dividing the satellite into two hemispheres. The ocean pressure integrated over the projected area of the ice shell base must be balanced by the shear stress integrated over the section of the ice shell, namely

ρ g (ξ_req − ξ_r) πR² ∼ µ (ξ_r/R) 2πR d,   (11)

where d is the ice shell thickness and R is the icy satellite radius. We can rearrange the previous expression to discover the fractional change in k2 due to a rigid ice shell,

Δk2/k2 ∼ −2µd/(ρgR²),   (12)

which represents a competition between elastic and gravitational energy (see also equation (11) in Goldreich & Mitchell (2010)). An alternative derivation of equation (12), including all the correct numerical factors, can be found in Beuthe (2015). We may use Titan's radius R ≈ 2575 km, surface gravity acceleration g ≈ 135 cm s⁻², ocean density ρ ≈ 1 g cm⁻³, and shear modulus of ice I and methane clathrates µ ∼ 4 GPa, and obtain

Δk2/k2 ∼ −0.1 (d / 100 km).   (13)

An ice shell thickness d ∼ 100 km balances radiogenic heating with heat conduction (Appendix D) and produces k2 ≈ 0.42 (the hydrostatic response without an ice shell is k2 = 0.468). We use this k2 value as the reference hydrostatic tidal response when computing the fractional change Δk2 due to various effects, which is compatible with previous estimates for Titan models without salinity (Rappaport et al. 2008). The result is valid for a shell made of ice or methane clathrates given that the elastic modulus is similar in both cases. Instead of being purely elastic, the ice shell covering the ocean can be viscous near the melting temperature. Viscous ice can in principle flow at a certain timescale and reduce the resistance that the ice shell imposes on the tidally excited ocean motion. In practice, this effect is negligible at the tidal timescale of ∼16 days unless large portions of the d ∼ 100 km ice shell thickness are in convection and thus have relatively low viscosity (∼10¹⁴ Pa s). The compensated long-wavelength topography of Titan suggests that the ice shell is unlikely to be convecting (Nimmo & Bills 2010).
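A minimal numerical sketch of the two hydrostatic estimates above is given below; it assumes equation (8) takes the form written above and that the shell correction follows the order-of-magnitude scaling of equation (12), so the exact prefactors should be treated as illustrative.

```python
# Hedged sketch of the hydrostatic k2 estimates. Assumes the reconstructed
# forms k2 = (3*rho/(5*rhobar)) / (1 - 3*rho/(5*rhobar)) and
# dk2/k2 ~ -2*mu*d/(rho*g*R^2); prefactors are order-of-magnitude only.
def k2_ocean_rigid_core(rho_ocean, rho_bulk):
    """Hydrostatic k2 for a surface ocean over an effectively rigid interior."""
    x = 3.0 * rho_ocean / (5.0 * rho_bulk)
    return x / (1.0 - x)

def shell_reduction(mu, d, rho_ocean, g, R):
    """Fractional reduction of k2 by an elastic shell of thickness d."""
    return -2.0 * mu * d / (rho_ocean * g * R**2)

k2_titan = k2_ocean_rigid_core(1.0e3, 1.88e3)     # ~0.468 (no shell)
k2_europa = k2_ocean_rigid_core(1.0e3, 3.01e3)    # ~0.249 (Europa check)
frac = shell_reduction(4.0e9, 100e3, 1.0e3, 1.35, 2575e3)   # ~ -10%
print(k2_titan, k2_europa, k2_titan * (1.0 + frac))          # last value ~0.42
```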
Dynamical tides
We now solve the problem of a rotating ocean world with a stratified ocean subjected to the full action of the Coriolis effect. The equations of conservation of momentum and continuity are

ρ (∂_t v + 2Ω × v + γ v) = −∇p − ρ∇ϕ,   (14)
∇ · v = 0,   (15)

where v is the 3D incompressible tidal flow, Ω is the icy satellite spin vector, ρ is the ocean density, p is pressure, and γ is the linear damping coefficient that represents dissipation in the ocean. In this paper, we consider at all times that Titan is in synchronous rotation (i.e., Ω = ω_s).
We derive the linearized form of the conservation of momentum equation by introducing linear perturbations of the form ϕ ≈ ϕ_0 + ϕ′, where quantities with the zero subindex indicate the background state and primed quantities indicate perturbations due to tides. In the following, we drop the zero subindex for convenience, thus all nonprimed quantities indicate the background state. The resulting linearized equation of conservation of momentum is

ρ (∂_t v + 2Ω × v + γ v) = −∇p′ − ρ′∇ϕ − ρ∇φ̃′.   (16)

The timescale of tidal motion of ∼15 days is orders of magnitude shorter than the timescale required to transport heat by diffusion or convection. Tides are consequently adiabatic and the associated Lagrangian tidal perturbations in density and pressure satisfy

δp/p = Γ δρ/ρ,   (17)

where Γ is the adiabatic index. We can relate the Lagrangian perturbations to the Eulerian perturbations in equation (17) using the definitions

δρ = ρ′ + ξ · ∇ρ,   δp = p′ + ξ · ∇p,

where ξ is the tidal displacement. Moving forward, we put together the latter equations to arrive to

ρ′ = (ρ/Γp) p′ + (ρN²/g) ξ_r,   (21)

where N² is the Brunt-Vaisala (BV) frequency defined in general by

N² = g [ (1/Γp) ∂_r p − (1/ρ) ∂_r ρ ].   (22)

The BV frequency represents the salinity stratification of the ocean. Here the only relevant component in N² is the radial direction, given that we consider the background state to be spherically symmetric.
When rN² > 0, the ocean is vertically stratified. A stratified ocean parcel will develop static stability once displaced out of its equilibrium position. According to equation (21), an Eulerian change in ocean density may emerge from either the adiabatic response of the ocean fluid (first term in the right-hand side) or the buoyancy of the stratified ocean parcel (second term in the right-hand side). In the incompressible approximation used here, Eulerian perturbations to density uniquely emerge from the buoyancy of the stratified ocean parcel. This results from the adiabatic index Γ ≡ (∂ log p/∂ log ρ)_S tending to infinity: the expected changes in pressure keep density unperturbed. As a consequence, the BV frequency reduces to

N² = −(g/ρ) ∂_r ρ,   (23)

where the radial variation in density comes from the addition of a small amount of salts organized in a vertical salinity gradient. As we can see from equation (23), stratification permits local density perturbations in the ocean regardless of its incompressibility. Equation (23) shows a direct relationship between stratification in N² and the salinity gradient in ∂_r ρ. When we consider constant stratification throughout the ocean, we can write |∂_r ρ| = Δρ/H, where Δρ is the change in density between top and bottom of the ocean. Our models have constant density, thus the Δρ represents a virtual change in density due to the addition of salts. The concentration of added salts S is described in g of salts per kg of water (g/kg).
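A short sketch of equation (23) for a linear salinity profile follows; the conversion factor between salinity and density is a typical seawater value assumed here only for illustration.

```python
# Hedged sketch of equation (23): N^2 = -(g/rho) d(rho)/dr for a linear
# salinity profile. The ~0.76 kg/m^3 density increase per 1 g/kg of dissolved
# salt is a typical seawater value, assumed for illustration.
g = 1.35            # m s^-2, Titan surface gravity
rho = 1.0e3         # kg m^-3, ocean reference density
H = 300e3           # m, stratified ocean thickness
dS = 4.0            # g/kg salinity increase from top to bottom (illustrative)
beta_s = 0.76       # kg m^-3 per g/kg of salt (assumed)

drho_dr = -beta_s * dS / H        # density decreases outward (salt at depth)
N2 = -(g / rho) * drho_dr         # Brunt-Vaisala frequency squared
print(N2)                          # ~1.4e-8 s^-2, comparable to omega_s^2
```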
Next, we rewrite the linearized momentum and continuity equations as

−ρ ω̃ ω ξ − 2iωρ (Ω × ξ) = −∇p′ − ρ′∇ϕ − ρ∇φ̃′,   (24)
∇ · ξ = 0,   (25)

where ω̃ = ω + iγ is a complex frequency that accounts for tidal dissipation, ω = ω_s is the tidal frequency of eccentricity tides, and the tidal displacement ξ is periodic with a time dependency ∝ e^{−iωt}. We have used v = ∂_t ξ = −iωξ. This set of equations is traditionally known as the Boussinesq approximation.
A typical method of solution of equation (24) involves applying the curl to eliminate the pressure gradient and reach the vorticity equation (Rieutord 1987). A rigid core implies that waves produce no radial displacement near the bottom of the ocean. We set the rigid boundary condition at the ocean bottom to

ξ · r̂ = 0   at the ocean bottom.

The free boundary at the ocean top prescribes a zero Lagrangian perturbation of pressure (e.g., Goodman & Lackner (2009)), which leads to

δp = p′ − ρ g (ξ · r̂) = 0   at the ocean top.

The tidal displacement field ξ is the only quantity to be determined, which is forced by the ocean-top boundary condition on the radial component of the displacement. Previous work (Rovira-Navarro et al. 2019, e.g.) typically sets ξ · r̂ at the surface to be equal to the equilibrium tide radial displacement (i.e., no-slip boundary conditions) and/or neglects the self-gravitation of the tide (e.g., Cowling approximation). Given that we are interested in precise tidal gravity estimates, we have retained the self-gravitation of the equilibrium tide and allowed the ocean top to displace dynamically beyond the equilibrium tide, relaxing the no-slip assumption. No-slip boundary conditions explicitly set the dynamical part of ξ · r̂ to zero at the surface, removing by definition the dynamical enhancement of k2 that we calculate here.
The resulting set of equations is an infinite system of equations coupled in degree by the Coriolis effect (Rieutord 1987;Rieutord & Valdettaro 1997;Lockitch & Friedman 1999;Rieutord et al. 2001;Ogilvie & Lin 2004;Ogilvie 2009;Rovira-Navarro et al. 2019;Idini & Stevenson 2021, e.g.).We solve this system by the traditional method of projection onto vectorial spherical harmonics (Appendix C), followed by a pseudo-spectral discretization on the radial functions based on the analytically-tractable Chebyshev polynomials and Gauss-Lobatto collocation points (Boyd 2001, e.g.).We truncate the infinite set of equations at degree ℓ max = 100 and use N max = 100 Chebyshev polynomials to represent each radial function at each degree ℓ ≤ ℓ max .The Love number can then be obtained from the tidal displacement evaluated at the surface (equation ( 3)), which is the only source of tidal gravity in a homogeneous ocean.
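As a generic illustration of the pseudo-spectral ingredient described above, a Chebyshev differentiation matrix on Gauss-Lobatto collocation points can be built as follows; this is standard numerical machinery (Trefethen-style), not the authors' actual tidal solver.

```python
# Minimal sketch of a Chebyshev-Gauss-Lobatto pseudo-spectral discretization.
import numpy as np

def cheb(N):
    """Return the differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # enforce exact row sums (diagonal fix)
    return D, x

D, x = cheb(16)
# Map [-1, 1] onto an ocean shell [R_c, R] (radii are illustrative).
R_c, R = 2175e3, 2475e3
r = 0.5 * (R - R_c) * (x + 1.0) + R_c
dr_dx = 0.5 * (R - R_c)
f = np.sin(2 * np.pi * (r - R_c) / (R - R_c))   # test radial function
df_dr = (D @ f) / dr_dx                          # spectral derivative in r
```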
Tidal heating rate in the ocean
The heating rate in the ocean can be fully determined by volume integration of the work done by the dissipative force in the equation of motion (equation (14)). This work per unit volume follows

dw = ρ γ v · dl,

where dl is an infinitesimal line element along a fluid parcel motion. We can use dl = v dt and integrate ẇ = ργ v · v across the ocean volume to obtain the time-averaged ocean heating rate (Chen et al. 2014; Rovira-Navarro et al. 2023, e.g.)

Ė = (ρ γ ω² / 2) ∫_V |ξ|² dV.   (32)

The linear damping coefficient γ is unconstrained in icy worlds with global oceans. The range of possible estimates spans from γ ∼ 10⁻¹¹ s⁻¹ on Enceladus to the better-constrained γ ∼ 10⁻⁵ s⁻¹ on Earth (Matsuyama et al. 2018). The projection of equation (32) onto vectorial spherical harmonics is shown in Appendix C.
NUMERICAL RESULTS
3.1.The predicted tidal response k 2 as a function of ocean structure We use perturbation theory and numerical methods to calculate the dynamic gravity produced by resonant stratification in a stratified and rotating ocean (Section 2).We concentrate on fractional changes ∆k 2 to the non-resonant hydrostatic k 2 ≈ 0.42 obtained in a pure water ocean with a d ∼ 100 km ice shell resting on top.Our method of solution considers the Coriolis force in full and avoids the thin shell approximation typically used in tidal studies of icy satellites (Beuthe 2016;Matsuyama et al. 2018;Rovira-Navarro et al. 2023).The additional effort of relaxing the thin-shell approximation allows us to study the dynamic gravity of internal gravity waves that result from the mixing of rotational and stratification effects.We simplify the structure of stratification by assuming a constant N 2 throughout the ocean (equation ( 23)), which translates into a linear increase in salt concentration with depth starting from zero at the ocean surface.More complicated salinity distributions are in principle possible and lead to additional uncertainty in the inferred ocean salinity structure.Our models show that resonant stratification can explain the k 2 enhancement observed by Cassini.We observe an enhancement ∆k 2 beyond +15% when overtones composed of internal gravity waves are resonantly excited by eccentricity tides (Fig. 2).This brings k 2 from 3σ to 2σ away from the mean value of the observation (Fig. 2).In this case, we have used the conservative linear damping γ = 10 −9 s −1 , but our models predict a resonant ∆k 2 beyond +45% when using a still realistic γ = 3.3 × 10 −10 s −1 (Fig. 2), an enhancement that puts k 2 at the mean value of the observation at the saturation point of the resonance.Resonances occur at various H and N 2 , preventing us from identifying a unique ocean thickness and stratification profile based solely on k 2 .In resonant stratification, the salt concentration near the ocean floor can be as low as < 5 g/kg in the simplified model we use here (Fig. 2; equation ( 23)).The mean salinity can then be less than that for the oceans of Earth or Enceladus (Postberg et al. 2009), while still producing a resonant response.As a general rule, internal gravity waves become resonant when the nodes in the radial displacement of the ocean wave perfectly fit the thickness of the stratified ocean cavity (Fig. 3).We can achieve this resonant fit by adjusting N 2 and changing the radial wavelength of internal gravity waves, or by adjusting the thickness of the stratified ocean H.The number of radial nodes is directly proportional to overtone order and inversely proportional to mode frequency, with lower frequency internal gravity waves having more radial nodes (Unno et al. 1979).Increasing the strength of stratification N 2 shifts the spectrum of g-modes toward high frequency and allows higher order gravity modes to become resonant with the fixed orbital frequency (Fig. 4).
Surface radial displacement and interior nonlinear wave breaking
The dynamical enhancement Δk2 emerges from a tens-of-meters enhancement of the radial displacement of the ocean surface. In our models, this fractional enhancement to the ocean surface radial displacement, Δh2, equals Δk2 for any given combination of ocean parameters. We derive this result from combining the relations above for the tidal gravity and the surface displacement, resulting in the relationship

h_ℓm = [(2ℓ+1)/3] (ρ̄/ρ) k_ℓm.   (34)

A direct inspection of equation (34) leads to Δh2 = Δk2 after substitution of the h_ℓm and k_ℓm that include a base value and a fractional correction. The Love number h_ℓm describes the radial displacement of the ocean surface as a function of the equilibrium tide. A h_ℓm = 1 implies that the surface follows the shape of the equilibrium tide when the self gravity of the tide is ignored. In a homogeneous fluid body, the average density is ρ̄ = ρ and equation (34) recovers the classical result h2 = 5/2. In the case of simple Europa interior models, equation (34) reproduces previous numerical results for h2 when the core/mantle are approximately rigid (Moore & Schubert 2000). In general, equation (34) is valid for simple models where the tidal gravity originates entirely from the radial displacement of the surface, as is approximately the case for icy satellites with global oceans overlying roughly rigid rocky interiors. When we consider that Titan's equilibrium tide produces a surface displacement |ξ_r| ≈ 26 m, the dynamical enhancement from resonant stratification shown in Fig. 3 leads to tides with surface displacement below |ξ_r| ⪅ 40 m in the most dramatic case. We can calculate Titan's equilibrium tide Love number h_ℓm using equations (8) and (33): Titan responds to Saturn's gravity with an equilibrium tide h2 ≈ 1.47, which is reduced by −10% when a d ∼ 100 km ice shell is included (Section 2.1.3). We can obtain the radial displacement of Titan's equilibrium tide from equations (33), (B13), and (B24), in terms of Saturn's mass M and Titan's mass m_s. Below the ocean surface, resonant internal gravity waves produce negligible gravity but attain a radial displacement |ξ_r| ∼ 1 km in the case of γ = 10⁻⁹ s⁻¹ (Fig. 3). A lower dissipation γ produces even larger resonant amplitudes of tidal motion interior to the ocean (Fig. 5), with γ = 10⁻¹⁰ s⁻¹ reaching |ξ_r| ∼ 10 km. This large radial displacement is accompanied by a horizontal displacement that is typically ξ_⊥/ξ_r ∼ nR/H ∼ 50 (equation (C31)) and allows the flow to preserve continuity in an incompressible fluid. Despite the large tidal displacement, our models of resonant tidal excitation remain far from experiencing nonlinear wave breaking when γ ≳ 10⁻¹⁰ s⁻¹, as stipulated by |ξk| ≲ 1, the typical criterion to avoid nonlinear wave breaking, where k is the wavenumber. For the radial displacement, this criterion translates to |ξ_r| ≲ H/n. In the case of the horizontal displacement, the criterion stipulates ξ_⊥ ≲ πR/m, where m is the azimuthal order. All g-modes shown in Figs. 2 and 3 avoid nonlinear wave breaking by at least one order of magnitude.
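A quick numerical check of the displacement-gravity relation, assuming equation (34) has the form written above, can be sketched as:

```python
# Hedged check of the h-k relation, assuming equation (34) reads
# h_2 = (5/3) * (rho_bar / rho) * k_2 for models whose tidal gravity
# comes entirely from the surface displacement.
def h2_from_k2(k2, rho_bulk, rho_ocean):
    return (5.0 / 3.0) * (rho_bulk / rho_ocean) * k2

print(h2_from_k2(1.5, 1.0, 1.0))      # homogeneous fluid body: 5/2
print(h2_from_k2(0.468, 1.88, 1.0))   # Titan without an ice shell: ~1.47
```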
Heating rate at saturation point
Our models also show that resonant stratification produces enough heat to compete with the radiogenic heating generated by decaying isotopes inside solid Titan (Fig. 4).This result is key to allow Titan to reach a stable fixed point that may prolong the crossing of a resonance as the ocean freezes.When in thermal steady state, the heat transported across the ice shell must balance all interior heat sources.If heat is transported by conduction, we can write d ∼ (3 × 10 13 )/ Ėint km (Appendix D), where Ėint is the interior heating rate in W and d is the ice shell thickness.When the ocean freezes until balance with radiogenic heating, Ėint ∼ 3 × 10 11 W (Appendix D) and the ice shell thickness grows to d ∼ 100 km.In this scenario, the ocean plays a negligible role in heating the interior.Resonant stratification changes this picture by introducing an additional heating source when the ocean is still rapidly freezing from secular cooling.In resonant stratification with γ = 10 −9 s −1 , for example, we get a total Ėint ∼ 6 × 10 11 W at saturation point (Fig. 4) and the ice shell thickness becomes d ∼ 50 km while in steady state with internal heat from radiogenic heating and ocean tidal heating combined.
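The conductive steady-state relation quoted above can be evaluated directly, as in the short sketch below.

```python
# Sketch of the conductive steady-state estimate quoted in the text:
# d [km] ~ 3e13 / Edot_int [W], with Edot_int the interior heating rate.
def shell_thickness_km(Edot_int_watts):
    return 3.0e13 / Edot_int_watts

print(shell_thickness_km(3.0e11))   # radiogenic heating only      -> ~100 km
print(shell_thickness_km(6.0e11))   # + resonant ocean tidal heat  -> ~50 km
```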
The model above is a simple conductive model, but it provides a plausible argument in favor of catching resonant stratification via ocean freezing. Thermal steady state cannot be attained when the ice shell is very thin near the onset of ice shell formation (d ≲ 50 km for γ = 10⁻⁹ s⁻¹); the resonance saturates at peak heating rate (Fig. 4) before producing enough heat to balance conduction across the thin ice shell. Ice shell growth by secular cooling stops at d ∼ 100 km, hence resonant stratification via ocean freezing must catch a resonance before then. This last requirement is not significantly changed when a high abundance of ammonia in the ocean is considered. Ammonia can lower the freezing temperature of the ocean to the eutectic at T ∼ 180 K if present in a concentration of ∼15 wt.% (Lunine & Stevenson 1987; Grasset & Sotin 1996), reducing the d required for thermal steady state by half, independently of whether resonant stratification is in place or not (see equation (D51)).
A lower γ = 10⁻¹⁰ s⁻¹ increases the heating rate at saturation point to an order of magnitude above Titan's radiogenic heating rate, and allows k2 to grow higher (Fig. 5). A larger γ leads to the damping of resonances, reducing the amplitude of the resonant Δk2 and the resonant Ė (Fig. 5). The hydrostatic flow is not much affected by γ, thus the heating rate is typically increased by a larger γ when far from resonances. Our results are relevant in the range γ = 10⁻⁹–10⁻¹⁰ s⁻¹; a γ larger than this range damps Δk2 below the signal registered by Cassini, whereas a γ lower than the range results in nonlinear breaking of internal gravity waves, an effect not included in our models. In addition to thermal steady state, ocean freezing requires an approach toward the resonance from the branch that produces a positive Δk2 (Fig. 2). Approaching the resonance from the negative Δk2 branch would depart from the gravity enhancement required by Cassini. A simple freezing model of a fully stratified ocean fails to satisfy this criterion. In this freezing model, the ocean freezes leaving salts in the liquid phase and redistributing them into a linear salinity profile that conserves the total mass of salts M_s. Following equation (23), the ocean stratification in this case is set by the conserved salt mass M_s and the shrinking ocean thickness. Despite its attractive simplicity, a fully stratified ocean freezes following a trajectory that never converges toward a g-mode resonance, independently of the M_s assumed (Fig. 6).
In an alternative freezing model, a diffusive layer of thickness δ is sandwiched between two layers of constant density, where the top layer has roughly no salinity and the bottom layer is high salinity. Internal gravity waves are excited in the stratified diffusive layer instead of the entire ocean. In this alternative case, the ocean freezes without changing the diffusive layer thickness δ, but increasing the density contrast δρ and consequently increasing N² (equation (23)), according to equation (38), in which h_b is the thickness of the bottom layer with high salinity. The freezing of an ocean with a diffusive layer (i.e., reducing h_b in equation (38) at constant δ) suggests the possibility of crossing the Δk2 resonance from the positive branch (Fig. 6). Further theoretical development is required to determine the effects of a diffusive layer in the Δk2 results reported here.
A stable fixed point is then established once resonant stratification succeeds at reaching thermal steady state by halting ocean freezing with the additional heating rate provided by the resonance (Tyler 2011, 2014). The secular expansion of Titan's orbit pushes the orbital frequency away from resonance, reducing the heating rate and promoting further ice thickening until ocean waves tune again with the new orbital frequency. This can happen because ice shell thickening by secular cooling can keep up with the orbital evolution. For instance, the thermal adjustment timescale for a 100 km thick shell is d²/π²κ ≈ 30 Myr; over this timescale the orbital frequency will have changed by 0.5% (Section 4.2). The alternative to a stable fixed point is that Titan has by chance encountered a resonance that will last until further orbital evolution pushes the orbital frequency out of resonance.
The two ocean freezing scenarios described above assume a radially isotropic distribution of salts that linearly increases with depth over certain radial distance, either the ocean thickness H or the diffusive layer thickness δ.More complicated distributions of salts are in principle possible.The distribution of salts can show lateral variations in the presence of nonuniform ice shell thickness due to alternating regions of melting and freezing below the ice shell (Ashkenazy et al. 2018;Lobo et al. 2021;Kang et al. 2022;Kang 2023).Future investigations are required to better understand the impact of various distributions of salts in the dynamics of tidally excited internal gravity waves and the ocean freezing path that leads to a stable resonance.
Resonant stratification via orbital evolution
In the absence of a stable fixed point, resonant stratification can still be established by orbital evolution.Titan orbits Saturn at a slower rate than Saturn's rotation, leading to outward migration from tidal torques that arise after tidal dissipation occurs inside Saturn (Lainey et al. 2020).This outward migration imposes a slow drift in the excitation frequency of eccentricity tides.At some point during orbital migration, the frequency of eccentricity tides can match the frequency of a g-mode overtone trapped in the stratified ocean, setting resonant stratification (Fig. 7).This mechanism operates without requiring any specific freezing history for Titan's ocean.The resulting resonance, however, is short lived.Continuous orbital migration will further break the resonance after pushing Titan through the resonance width δa ∼ 0.02R s , where R s is Saturn's radius (Fig. 7) and we have assumed the reasonable γ = 10 −9 s −1 (i.e., the resonance width depends on dissipation, as shown in Fig. 5).At Titan's current migration rate ȧ/a ∼ 10 −10 yr −1 (Lainey et al. 2020), the time it would take Titan to cover the g 6 -mode resonance width would be δa/ ȧ ∼ 10 Myr.
The probability that Titan is simply passing over an ocean resonance is slim as a result of this fast orbital migration. The astrometric observation of Titan's migration (Lainey et al. 2020) indicates that Titan's orbital period has increased ∼3 times by orbital expansion over the lifetime of the solar system λ ∼ 4.5 × 10⁹ yr; here P_s is Titan's current orbital period, and ΔP_s and Δa represent orbital parameter changes on the timescale λ. The period spacing of g-modes (Appendix A) allows us to estimate the number of g-modes that Titan crosses over the timescale λ, #_g = ΔP_s/ΔP_g, where ΔP_g is the g-mode spacing. Cassini registered Titan's enhanced k2 at no particular time within Titan's interior evolution. The probability that Cassini observed a g-mode resonance motivated uniquely by orbital evolution (i.e., no stable fixed point) is therefore small: we obtain #_g ∼ 5 and P(g-mode) ∼ 1% when using the reasonable ocean parameters in Fig. 7. Equation (41) is only valid for #_g ≳ 2. The low P(g-mode) indicates that resonant stratification is unlikely to be a result of pure orbital evolution (i.e., no ocean freezing involved), yet not impossible.

Fig. 7. Same as Fig. 2, but as a function of orbital semi-major axis a. The shaded vertical line represents Titan's current semi-major axis a = 21 R_s, where R_s is Saturn's radius. The ocean is fixed to H = 300 km and N² = 1.4454 × 10⁻⁸ s⁻² (see also Fig. 3). The tidal frequency is equal to the orbital frequency and ω_0 is Titan's current orbital frequency ω_s. The eccentricity is fixed to Titan's current e and we assume synchronous rotation at all times (i.e., Ω = ω_s).
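A back-of-the-envelope version of this probability argument is sketched below; the combination P ∼ #_g × (resonance crossing time) / lifetime is an assumed reading of the expression that is not fully legible in this copy, chosen because it reproduces the quoted ∼10 Myr crossing time, ∼5 resonances, and ∼1% probability.

```python
# Hedged sketch of the resonance-crossing probability estimate.
R_s = 60268e3            # m, Saturn radius
a = 21.0 * R_s           # Titan's semi-major axis
da_res = 0.02 * R_s      # resonance width in semi-major axis (gamma ~ 1e-9 1/s)
adot_over_a = 1.0e-10    # 1/yr, current migration rate (Lainey et al. 2020)
lifetime = 4.5e9         # yr, age of the solar system
n_gmodes = 5             # g-mode resonances crossed over the lifetime

t_cross = (da_res / a) / adot_over_a     # yr spent crossing one resonance
p_gmode = n_gmodes * t_cross / lifetime
print(t_cross / 1e6, p_gmode)            # ~10 Myr, ~0.01 (about 1%)
```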
A mildly salty ocean versus a heavy ocean
When compared to previous studies, resonant stratification only requires a mild concentration of solute dissolved in the ocean to explain Titan's k 2 (Fig. 8).With γ = 10 −9 s −1 , resonant stratification yields a k 2 value 1σ closer to the measured central value starting from the hydrostatic k 2 = 0.42.When γ = 3.3 × 10 −10 s −1 , the enhancement over the hydrostatic k 2 is 3σ, crossing the mean value of the Cassini k 2 observation at low salinity (S < 10 g/kg).In the absence of resonant stratification, a heavy convective ocean requires a salt concentration that is on average S ∼ 100 − 200 g/kg higher to produce a similar effect on k 2 , depending on the exact γ (Fig. 8).Previous studies have argued in favor of a heavy convective ocean that holds S ∼ 200 g/kg in salts to obtain a 1σ agreement with k 2 observations.However, water-rock interactions at the bottom of Titan's ocean are expected to produce a limited S ≲ 10 g/kg of salts (Leitner & Lunine 2019).
Instead of water-rock interactions, the heavy convective ocean relies on a special event to attain its relatively high salt concentration.Ammonium sulfate ((NH 4 ) 2 SO 4 ) could form in the ocean from reactions between water-ammonia and brine leaching upwards from a core experiencing hydration (Fortes et al. 2007), contributing to S ∼ 200 g/kg of dissolved solute.Unfortunately, this scenario leads to predicted surficial expressions on Titan that failed to be observed during the Cassini mission (Leitner & Lunine 2019).Alternatively, magnesium sulfates can be incorporated into a heavy ocean that is thermodynamically consistent (Vance & Brown 2013) via a late-delivery of salt-rich carbonaceous chondrites (Hogenboom et al. 1995).However, it is not clear whether this delivery mechanism can provide the required salt concentration.
Water-rock interactions on Titan's ocean floor (Leitner & Lunine 2019) produce enough salts (S ∼ 10 g/kg) to enable resonant stratification.At S ∼ 10 g/kg salinity, the predicted k 2 is in a 2σ agreement with the Cassini observation when γ = 10 −9 s −1 and within 1σ agreement when γ = 3.3 × 10 −10 s −1 (Fig. 8).Both γ values are realistic and prevent nonlinear wave breaking.This bulk concentration of salts (S ∼ 10 g/kg) is compatible with the average salinity of Earth's oceans and the salinity inferred for Enceladus' ocean from direct sampling of E-ring particles provided by Enceladus' plumes (Postberg et al. 2009).This aspect favors resonant stratification over a heavy ocean because water-rock interactions are better understood than the special event required to explain a high concentration of salts, namely early hydration during the interior differentiation of ices and silicates (Fortes et al. 2007) or a late delivery of carbonaceous chondrites with a special composition (Hogenboom et al. 1995).
Ocean stability to overturning convection
The heat flux at the ocean floor provided by interior heating threatens the stability of a weakly stratified ocean. A thermal gradient can counter the stratification produced by a chemical gradient and lead to unstable overturning convection. Convection introduces mixing that can further erase the chemical gradient over time when no salinity forcing is introduced. From a simple balance of the density profile including thermal effects and a salinity gradient (i.e., the Ledoux instability criterion), we require

|dT/dr| ≲ N²/(α g)   (42)

across an ocean thickness H ∼ 300 km to preserve the stability of the mild stratification N² ∼ 1 × 10⁻⁸ s⁻² discussed before, where α ∼ 2 × 10⁻⁴ K⁻¹ is the thermal expansivity and g ≈ 135 cm s⁻² is the surface gravity. This temperature gradient is equivalent to a heat transfer ≲2 × 10⁹ W across the ocean by thermal conduction, two orders of magnitude lower than the heating rate expected from radiogenic heating (Appendix D).
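The numbers in this stability estimate can be checked with a short sketch; the Ledoux-type bound is the reconstruction of equation (42) written above, and the conductivity of water and the ocean-top radius are assumed values for illustration.

```python
# Hedged sketch of the Ledoux stability bound and the conductive heat flow it allows.
import numpy as np

alpha = 2.0e-4      # 1/K, thermal expansivity
g = 1.35            # m/s^2, surface gravity
N2 = 1.0e-8         # 1/s^2, mild stratification
H = 300e3           # m, ocean thickness
k_water = 0.6       # W/m/K, thermal conductivity of water (assumed)
R = 2475e3          # m, approximate ocean-top radius (assumed)

dTdr_max = N2 / (alpha * g)               # ~4e-5 K/m, Ledoux-type bound
dT_max = dTdr_max * H                     # ~11 K across the ocean
Q_cond = k_water * dTdr_max * 4.0 * np.pi * R**2
print(dT_max, Q_cond)                     # ~11 K, ~2e9 W << radiogenic ~3e11 W
```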
One might then presume that the heat flux from radiogenic heating should destroy any mild stable stratification.However, this need not necessarily be the case.For example, plumes from hydrothermal vents or volcanoes on the surface of the silicate core may pierce through the stratified ocean in a Rayleigh-Taylor instability (Collins & Goodman 2007) and allow the radiogenic heat from the solid interior to escape outward without triggering overturning convection at the scale of the entire ocean.In this hypothetical scenario, heat passes across the ocean in small lengthscale plumes that do not disturb the large lengthscale structure of stratification required by resonant stratification.
Double-diffusive convection (Radko 2013) constitutes another hypothetical scenario that may permit maintenance of the radiogenic heat flux without erasing the compositional gradient. This regime is typically observed when the temperature profile is steeper than the convective temperature profile and less steep than the Ledoux instability criterion (Leconte & Chabrier 2012, e.g.). The convective temperature profile prescribes a ΔT ∼ αHgT/c_p ∼ 5.2 K (Turcotte & Schubert 2002) over an ocean with the same properties used in equation (42), where c_p ∼ 4 J g⁻¹ K⁻¹ is the heat capacity. Assuming thermal equilibrium between conduction through the ocean thickness and the radiogenic heat flux (Appendix D), we require a minimum thermal diffusivity κ of the ocean (equation (43)) to maintain Ledoux stability (equation (42)) of the ocean salinity gradient. In regimes typical of gas giant planets, numerical simulations suggest that double-diffusive convection can increase the efficiency of heat transport by up to a factor of ∼50 compared to heat conduction (Rosenblum et al. 2011; Mirouh et al. 2012). This enhancement can be thought of as being roughly equivalent to an increase in κ that allows a salinity gradient with N² ∼ 10⁻⁸ s⁻² to remain Ledoux stable (equation (43)). Numerical simulations confirm a strong enhancement of heat transport by double-diffusive convection in the regime of icy satellites (Wong et al. 2022), where, contrary to the gas giant planets, the kinematic viscosity ν is typically larger than the thermal diffusivity κ. Double-diffusive convection is typically accompanied by the evolution of the compositional gradient (Radko 2013; Wong et al. 2022), thus further studies are required to better understand the timescales imposed on the evolution of salinity profiles in icy satellites.
Predictions and future tests
Water-rock interactions can occur at the ocean bottom of other icy satellites, thus an enhancement of k 2 by resonant stratification is also possible on Ganymede and Europa. NASA's Europa Clipper mission (Phillips & Pappalardo 2014; Howell & Pappalardo 2020) will measure Europa's k 2 with an expected accuracy of 2% (Mazarico et al. 2023), providing us with a new opportunity to study the interior of an icy satellite. If resonant stratification is a common mode of operation for icy satellites, we should also observe an important enhancement in Europa's k 2 given that the ice shell elasticity only provides small resistance to vertical tidal displacements. ESA's JUICE mission (Grasset et al. 2013) will measure Ganymede's k 2 with even greater precision due to the orbital design of the mission and the improved K-band antenna onboard the spacecraft. Ganymede's tidal response will be measured at the excitation frequency of the various moon-to-moon tides present in the Galilean moon system (De Marchi et al. 2022), in addition to the conventional eccentricity tide raised by Jupiter. This tidal response spectrum will provide a unique opportunity to measure the potential stratification of an ocean regardless of whether resonant stratification is in place or not, in addition to providing further constraints on ocean thickness from sampling high-frequency moon-to-moon tides. One quantity to look for in the tidal response spectrum is the g-mode spacing, which is sensitive to both the degree of stratification and the thickness of the stratified cavity (Appendix A).
CONCLUSIONS
We calculated Titan's tidal response to eccentricity tides using a new theoretical framework that includes the dynamical effects of tidally excited waves trapped in the ocean. Our results present a new interpretation of Cassini's observation of Titan's Love number k 2 = 0.616 ± 0.067, which is 3σ away from the k 2 predicted for an ocean of pure water resting on top of a rigid ocean floor. If Titan's ocean is stably stratified, its measured tidal response can be fully reproduced using plausible dissipation factors (γ) without requiring a salinity greater than those of Earth's or Enceladus's oceans (Fig. 8). This enhanced response requires the ocean to be set in resonance with the period of the current tidal excitation, namely Titan's orbital period. In one possible scenario, this resonance is encountered as the ocean progressively freezes and develops a deep salty layer (Fig. 6); this situation yields a long-term stable thermal equilibrium with conduction across the ice shell.
Studies on the extent to which stably stratified oceans can be maintained against convective mixing would form a valuable theoretical addition to the current work. Processes similar to those hypothesized to be operating at Titan could be in play at Ganymede or Europa, and may be tested with the future spacecraft missions Europa Clipper and JUICE. The seismometer expected on the Dragonfly mission (Barnes et al. 2021), or a future Titan orbiter (e.g., Sotin et al. 2017), might similarly be able to look for evidence of a resonantly excited ocean.
The g-mode frequency spacing constitutes a diagnostic quantity typically used to characterize stratified cavities inside planets and stars (e.g., Aerts et al. 2010; Mankovich & Fuller 2021). Following our simple model of constant N across a stratified cavity of thickness H, we obtain the corresponding mode spacing. This expression can be used to characterize the stratification of oceans in icy satellites when a multi-frequency k 2 is available to observation, as currently expected from JUICE measurements of moon-to-moon tides on Ganymede (De Marchi et al. 2022). Future icy satellite seismology could provide an alternative observation of mode spacing from the recording of free oscillations on the satellite's surface (Marusiak et al. 2021), assuming that the normal modes are excited beyond the detection threshold.
B. THE TIDAL FORCING OF ECCENTRICITY TIDES
The gravitational potential ϕ T experienced by an observer at r from a concentrated mass M located at r ′ is inversely proportional to the distance between them,
ϕ T = −GM/|r − r ′ | = −(GM/r ′ ) Σ ℓ (r/r ′ ) ℓ P ℓ (cos α),
where α is the angle between the two position vectors, ℓ is the degree, and P ℓ are the m = 0 case of the associated Legendre polynomials, where m is the azimuthal order. When the concentrated mass M is that of a planet orbiting at a semi-major axis a, the gravitational excitation assumes the form
ϕ T = −(GM/a) Σ ℓ≥2 (r/a) ℓ P ℓ (cos α).
The two lowest degree harmonics, ℓ = 0 and ℓ = 1, are discarded as they do not disturb the shape of the icy satellite.
The addition theorem allows us to express the gravitational excitation in spherical coordinates, following
cos α = cos θ cos θ p + sin θ sin θ p cos(φ − φ p ), (B7)
where rθφ are spherical polar coordinates in a corotating frame fixed to the icy satellite and the subscript p denotes the position of the planet. The gravitational excitation can then be expanded over spherical harmonics, using the conventional definition of the spherical harmonics Y m ℓ and fundamental identities among them, so that a tidal forcing potential is obtained for each pair ℓm. In the standard case of a coplanar circular orbit, we have cos θ p = φ p = 0, which defines the normalization constant U ℓm of the static tide. The effect of eccentricity imposes a change in the semi-major axis and a libration in the position of the planet as seen from the corotating frame on the icy satellite, which modify the resulting tidal excitation potential in a planar orbit. The circular tide becomes static (i.e., no time dependence) in a synchronous corotating frame where the spin of the icy world matches the orbital frequency of the planet. Eccentricity tides propagate at the diurnal frequency in both west and east directions for a given m. We observe a perfect superposition between the east m > 0 tide and the west m < 0 tides, and as a result the two contributions are typically added. In the corresponding continuity conditions, the upper index on y(r) denotes degree instead of an exponent; these conditions set a requirement on the radial and spheroidal displacement fields, whereas the toroidal displacement field is automatically continuous given that it constitutes a rotor on the displacement rather than a relocation of fluid.
The radiogenic heating rate can be estimated from terrestrial rock samples if we assume a homogeneous distribution of radiogenic elements along heliocentric distance. The abundances of Th-U-K isotopes and their radiogenic decays prescribe a heating rate H ≈ 7.4 × 10 −12 W/kg in Earth's mantle (Turcotte & Schubert 2002). The resulting radiogenic heat production is Ėint ∼ 0.5Hm s ∼ 5 × 10 11 W. Titan's core composition most likely departs from the composition of Earth's mantle due to the presence of undifferentiated iron, in which case the previous estimate is an upper bound. An alternative to the previous estimate comes from using the radiogenic heat production of CV chondrites, H ≈ 4.5 × 10 −12 W/kg (Grasset & Sotin 1996; Spohn & Schubert 2003), which results in the smaller Ėint ∼ 3 × 10 11 W.
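The two heating rates above follow directly from the quoted specific heating rates once a value for m s is adopted; the snippet below assumes m s is Titan's total mass (≈1.345 × 10 23 kg) with roughly half of it in rock, which reproduces the quoted numbers.

```python
# Reproduces the two radiogenic heating estimates, E_int ~ 0.5 * H * m_s.
m_s     = 1.345e23   # Titan's mass [kg] (assumed interpretation of m_s)
H_earth = 7.4e-12    # specific radiogenic heating, Earth-mantle rock [W/kg]
H_CV    = 4.5e-12    # specific radiogenic heating, CV chondrites [W/kg]

for label, Hrate in [("Earth-mantle rock", H_earth), ("CV chondrite", H_CV)]:
    E_int = 0.5 * Hrate * m_s
    print(f"{label}: E_int ~ {E_int:.1e} W")   # ~5e11 W and ~3e11 W
```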
The balance between internal radiogenic heating Ėint and conduction across the ice shell determines the ice shell thermal equilibrium thickness. Titan's icy surface is at T s ∼ 90 K. In a pure water ocean, we can use k ∼ 2 W m −1 K −1 , Titan radius R ≈ 2575 km, and ∆T ∼ 180 K to obtain (Luan 2019)
d ∼ 4πR 2 k∆T / Ėint ∼ (3 × 10 13 / Ėint) km. (D51)
Titan most likely contains a considerable amount of ammonia dissolved in its global ocean (Lunine & Stevenson 1987; Stevenson 1992; Grasset & Sotin 1996). When dissolved in water, ammonia behaves like an anti-freeze, reducing the temperature of the eutectic to T ∼ 180 K in an ocean with 15 wt.% ammonia (Grasset & Sotin 1996). In this scenario, the temperature gradient across the ice shell diminishes to ∆T ∼ 90 K and the equilibrium ice shell thickness can be reduced by half compared to the pure water ocean (equation (D51)).
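For reference, the equilibrium shell thickness in equation (D51) can be evaluated directly with the values quoted in the text.

```python
# Equilibrium ice-shell thickness d ~ 4*pi*R^2*k*dT/E_int for the two quoted cases.
import math

R     = 2575e3   # Titan radius [m]
k_ice = 2.0      # thermal conductivity of ice [W m^-1 K^-1]
E_int = 3e11     # radiogenic heating [W]

for label, dT in [("pure water (dT = 180 K)", 180.0), ("15 wt.% ammonia (dT = 90 K)", 90.0)]:
    d = 4 * math.pi * R**2 * k_ice * dT / E_int
    print(f"{label}: d ~ {d/1e3:.0f} km")   # ~100 km and ~50 km
```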
Figure 1. Proposed explanations for Titan's large k 2 as observed by Cassini. The schematic of Titan's interior model has been extracted from de Kleer et al. (2019).
Figure 2. Titan's k 2 enhancement (fractional correction ∆k 2 ) from dynamic gravity as a function of (a) ocean thickness H and (b) the Brunt-Vaisala frequency N 2 . Peaks represent resonances with ocean normal modes. The ocean top salinity is zero and increases linearly with depth. The ocean bottom salinity ranges from 0.8 to 1.5 g/kg (H = 100 km) and from 2.3 to 4.4 g/kg (H = 300 km) for the range of N 2 shown in (b). Frictional dissipation in (b) is γ = 10 −9 s −1 . The reference hydrostatic Love number is k 2 = 0.42, corresponding to a pure water ocean with an elastic ice shell of thickness d ∼ 100 km. The tidal frequency is equal to the rotational frequency and the orbital frequency, ω = Ω = ω s = 2π/T orb , where T orb = 15.945 days is Titan's orbital period.
Figure 3. Meridional cross section of the radial displacement in Titan's ocean due to tides, for the selected internal gravity wave normal modes of increasing radial order shown in Fig. 2. Frictional dissipation is γ = 10 −9 s −1 .
Figure 4. Heating rate produced by tidally excited internal gravity waves in the stratified ocean, as a function of (a) ocean thickness H and (b) Brunt-Vaisala frequency N 2 . The dashed line indicates Titan's radiogenic heating Ėint ∼ 3 × 10 11 W (see Appendix D). The frictional damping is γ = 10 −9 s −1 . Overtones of g-modes are labeled with a subindex representing the mode radial order n. The smaller peaks without a label represent dissipation from resonant inertial wave modes/attractors not discussed here (see, e.g., Rovira-Navarro et al. 2019).
Figure 5. Resonant dynamical gravity signal and heat production as a function of dissipation for g-mode g 6 (ℓ = m = 2 and n = 6). Ocean thickness is H = 300 km. The dashed lines are the same as in Figs. 2 and 4 for (a) and (b), respectively.
Approach to a long-lived resonance via ocean freezing
Figure 6. Model parameters that produce tidally excited resonances with internal gravity waves (solid lines; see equation (A2)). The positive branch of the ∆k 2 resonance extends toward the bottom-left of each solid line. The red crosses represent individual resonances identified in our numerical simulations and shown in Fig. 3. The dashed lines represent simplified freezing trajectories for different models of ocean stratification. The fully stratified ocean model assumes M s ≈ 8.3 × 10 21 g, which is equivalent to a δρ ∼ 0.001 g cm −3 over an ocean thickness H ∼ 200 km.
Figure 7. Same as Fig. 2, but as a function of orbital semi-major axis a. The shaded vertical line represents Titan's current semi-major axis a = 21R s , where R s is Saturn's radius. The ocean is fixed to H = 300 km and N 2 = 1.4454 × 10 −8 s −2 (see also Fig. 3). The tidal frequency is equal to the orbital frequency and ω 0 is Titan's current orbital frequency ω s . The eccentricity is fixed to Titan's current e and we assume synchronous rotation at all times (i.e., Ω = ω s ).
To first order in eccentricity, the orbital distance and the libration of the planet as seen from the corotating frame are
a = a 0 (1 − e cos ωt), (B14)
φ p = 2e sin ωt. (B15)
Our strategy is now to expand equation (B11) to first order in e. The semi-major axis dependency expands as a −(ℓ+1) ≈ a 0 −(ℓ+1) (1 + (ℓ + 1)e cos ωt), while the libration of the planet expands as
e −imφ p ≈ 1 − 2ime sin ωt ≈ 1 − em(e iωt − e −iωt ). (B21)
We make use of the fact that the direction of tides can be flipped by either changing the sign of φ or of the tidal frequency ω s , reducing the eccentricity tidal potential to
ϕ e ℓm ≈ e (ℓ + 1 + 2m) U ℓm (r/R) ℓ Y m ℓ (θ, φ) e −iωt . (B24)
In this paper, all quantities mimic the time dependence of the tidal forcing. East tides correspond to m > 0 and west tides to m < 0. East tides propagate in the direction of rotation, whereas west tides are counter-rotating. Notice that the amplitude of eccentricity tides is 7e fold compared to the amplitude of the static tide in the case ℓ = m = 2.

C. PROJECTION OF DYNAMICAL TIDES ONTO VECTORIAL SPHERICAL HARMONICS

We project our equations onto vectorial spherical harmonics (VSH) following the standard decomposition
ξ = y 1 Y + y 2 Ψ + y 3 Φ, (C25)
where Y, Ψ, Φ constitute an orthonormal basis for the projection of vectorial fields in spherical polar coordinates. VSH relate to scalar spherical harmonics (equation (B9)) as
Y = Y r̂, (C26)
Ψ = r∇Y, (C27)
Φ = r∇ × Y, (C28)
where the spherical harmonics satisfy r 2 ∇ 2 Y = −ℓ(ℓ + 1)Y. We further project the gravity response and pressure-gravity potentials onto the same basis.
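As a consistency check on the expansion above, the following SymPy sketch (assuming the dominant ℓ = m = 2 tide) expands the time-dependent factor a −(ℓ+1) e −imφ p to first order in e and extracts the coefficient of the e −iωt harmonic; the result, (ℓ + 1)/2 + m = 7/2, is consistent with the 7e amplitude of equation (B24) once the east and west (±m) contributions are added.

```python
import sympy as sp

e, w, t, a0 = sp.symbols('e omega t a_0', positive=True)
ell, m = 2, 2   # dominant eccentricity tide (l = m = 2)

# Orbital distance and libration to first order in e (equations B14-B15)
a     = a0 * (1 - e * sp.cos(w * t))
phi_p = 2 * e * sp.sin(w * t)

# Time-dependent factor of the forcing potential: a^-(l+1) * exp(-i*m*phi_p)
forcing     = a**(-(ell + 1)) * sp.exp(-sp.I * m * phi_p)
first_order = sp.series(forcing, e, 0, 2).removeO()

# Coefficient of the e^(-i*omega*t) harmonic, normalised by the static value
normalised = sp.expand((first_order * a0**(ell + 1)).rewrite(sp.exp))
print(sp.simplify(normalised.coeff(sp.exp(-sp.I * w * t))))   # -> 7*e/2
```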
Error Bounds for Discontinuous Finite Volume Discretisations of Brinkman Optimal Control Problems
We introduce a discontinuous finite volume method for the approximation of distributed optimal control problems governed by the Brinkman equations, where a force field is sought such that it produces a desired velocity profile. The discretisation of state and co-state variables follows a lowest-order scheme, whereas three different approaches are used for the control representation: a variational discretisation, and approximation through piecewise constant and piecewise linear elements. We employ the optimise-then-discretise approach, resulting in a non-symmetric discrete formulation. A priori error estimates for velocity, pressure, and control in natural norms are derived, and a set of numerical examples is presented to illustrate the performance of the method and to confirm the predicted accuracy of the generated approximations under various scenarios.
Introduction
Fluid control problems are highly important in diverse fields of science and engineering. For example, they are encountered in the minimisation of drag, in the design of devices serving to increase mixing properties, in the reduction of turbulent kinetic energy, and in several other applications. Some of the earliest references to theoretical aspects of these control problems can be found in the classical works [1,35]. The literature that relates to their numerical approximation is quite abundant, especially if associated to finite element methods (see e.g. [9,26,45,47,48] and the references therein). Works focusing on the approximation of control problems subject to Stokes and Navier-Stokes flows typically employ conforming discretisations for state, co-state and control variables. It has been found that the convergence rate of the control approximation is of O(h) and O(h 3/2 ) for piecewise constant and piecewise linear discretisations, respectively. On the other hand, using the so-called variational discretisation approach (cf. [29], in which the control set is not discretised explicitly but recovered by a projection), it is possible to improve this convergence rate to O(h 2 ). Alternatively, a similar behaviour is observed if one uses graded meshes instead of uniform partitions [44], or if using piecewise constant control discretisations when state and adjoint variables are approximated with Lagrangian finite elements [42].
The present paper focuses on finite volume element (FVE) approximations (methods where one introduces a dual mesh and reformulates a pure finite volume scheme in the form of a Petrov-Galerkin scheme). A priori error estimates for FVE schemes applied to linear elliptic and parabolic optimal control problems have been established in [38,39]. These methods are based on the optimise-then-discretise approach, which we will adopt herein. In this context, we recall that the order in which the optimisation and discretisation steps are performed, results in different discrete adjoint equations and the solutions may not coincide (see the review [9] and the references therein). We will concentrate the analysis on a particular class of FVE schemes: a hybrid strategy called discontinuous finite volume (DFV) method, where discontinuous piecewise linear functions conform the trial space, and piecewise constant test functions are used in a FV fashion. The application of these schemes in the approximation of Stokes and related fluid problems can be found in e.g. [10][11][12]24,32,33,54].
Discontinuous approximations will be generally preferable to guarantee preservation of physically relevant properties. They would also be appropriate when the model exhibits rough coefficients and where sharp solutions are expected. Permeability fields possess this behaviour in many applicative scenarios, and DFV schemes would be of special interest. Other advantages of DFV formulations include flexibility for choosing accurate numerical fluxes, smaller dual control volumes, and suitability for error analysis in the L 2 -norm. In the formulation advanced herein the momentum equation is tested against vector functions spanned by a basis associated to a dual grid, and the mass conservation equation is tested against piecewise constants defined on the primal mesh. Integration by parts on each dual element yields a classical finite volume scheme defined in terms of fluxes across the boundaries of dual elements. Then, some particular features of a given lumping map connecting discrete functions associated with the primal and dual meshes allow us to rewrite the formulation completely in terms of volume integrals involving primal elements, except for the zeroth-order term, the right-hand sides of the state and costate equations, and all jump terms that appear in the off-diagonal bilinear forms.
For non-viscous flow in porous media written in primal (pressure) formulation, the permeability tensor manifests itself as an anisotropic diffusion, and some methods are available for their successful discretisation. These include cell and vertex-centred schemes [23,55], discrete duality finite volumes [16], high-order gradient reconstruction finite volume methods [40], mimetic schemes [18], anisotropic FVE methods [36], virtual finite volumes [22], or discontinuous immersed FVE schemes [37]. Even if the treatment is simpler in our case, as the inverse permeability is assumed isotropic and it appears only in the drag term, our intention is not to perform a thorough comparison against these techniques, but rather to regard our contribution as a natural extension of the optimal control problems solved using specifically DFV methods (having so far been constructed for systems governed by linear and semilinear elliptic, semilinear parabolic, and hyperbolic equations [49][50][51][52]) to the case of velocity control for the Brinkman equations. We emphasise once more that the discontinuous character of permeability fields represents a clear motivation for employing DFV methods. On the other hand, for the approximation of the control variable, we will discuss three alternatives: a variational discretisation approach, element-wise constant and element-wise linear discretisation.
The paper is structured in the following manner. The remainder of this section includes some standard notations, statement of the governing problem along with its weak formulation, and the corresponding optimality condition in continuous form. Next, in Sect. 2 we formulate the DFV scheme of the considered optimal control problem. Section 3 focuses on the development of a priori error estimates for different types of control discretisations. Finally, in Sect. 4 we summarise the solution algorithm and illustrate our theoretical error bounds and performance of the method by a set of numerical experiments.
Let Ω ⊂ R d , d = 2, 3, be a bounded convex polygonal domain with boundary ∂Ω. The outward unit normal vector to Ω is denoted by n. Standard terminology will be employed for Sobolev spaces, and the corresponding norms will be denoted by ||·|| 1,Ω . We also consider the space of integrable functions with zero mean: L 2 0 (Ω) = {q ∈ L 2 (Ω) : ∫ Ω q dx = 0}, and will write L 2 (Ω) = L 2 (Ω) d . By div we will denote the usual divergence operator, applied row-wise when acting on a tensor; I will denote the d × d identity matrix, and 0 will be used as a generic null vector.

The Optimal Control Problem

Let us consider the following distributed optimal control problem
min u∈U ad J( y, u) := (1/2)|| y − y d ||^2_{0,Ω} + (λ/2)||u||^2_{0,Ω}, (1.1)
governed by the Brinkman equations
K −1 y − div(νε( y) − pI) = f + u in Ω, (1.2)
div y = 0 in Ω, (1.3)
y = 0 on ∂Ω, (1.4)
where U ad is the set of feasible controls, defined for −∞ ≤ a j < b j ≤ ∞, j = 1, . . . , d, by
U ad = {u ∈ L 2 (Ω) : a j ≤ u j (x) ≤ b j a.e. in Ω, j = 1, . . . , d}.
This model describes the motion of an incompressible viscous fluid within an array of porous particles, and according to the flow regime characterised by the ratio between permeability and viscosity, it can represent both the Darcy and Stokes limits. Here y denotes the fluid velocity, p is the pressure field, u is the control variable, and λ > 0 is a given Tikhonov regularisation (or control cost) parameter. The quantity νε( y) − pI is the Cauchy (true stress) tensor, where ε( y) = 1/2 (∇ y + ∇ y T ) is the infinitesimal rate of strain, ν(x) is the dynamic viscosity of the fluid, and K(x) stands for the permeability tensor of the medium divided by the viscosity. The forthcoming analysis requires that this matrix is isotropic. Here the desired velocity y d and the applied body force f are known data with assumed regularity L 2 (Ω) or H 1 (Ω), depending on the specific case. One seeks to identify an additional source u giving rise to a velocity y in order to match a target velocity y d . We stress that by proceeding analogously to the proof of [53, Theorem 2.37] it can be shown that the optimal value of the control u is in H 1 (Ω) under the assumption that y d ∈ L 2 (Ω). A similar observation has been made in [14], and used in the derivation of error estimates.
We assume that K is symmetric, uniformly bounded and positive definite, i.e., there exist two positive constants k 1 and k 2 such that
k 1 |ζ|^2 ≤ ζ · K(x)ζ ≤ k 2 |ζ|^2 for all ζ ∈ R d and almost every x ∈ Ω.
We also assume that the variable viscosity satisfies
0 < ν 1 ≤ ν(x) ≤ ν 2 for almost every x ∈ Ω. (1.5)
The weak formulation associated to the state equations (1.2)-(1.4) is given by: find ( y, p) ∈ H 1 0 (Ω) × L 2 0 (Ω) such that
a( y, v) + c( y, v) + b(v, p) = ( f + u, v) 0,Ω , b( y, q) = 0, (1.6)
for all (v, q) ∈ H 1 0 (Ω) × L 2 0 (Ω), where the bilinear forms are
a( y, v) = ∫ Ω νε( y) : ε(v) dx, b(v, q) = − ∫ Ω q div v dx, c( y, v) = ∫ Ω K −1 y · v dx,
for all y, v ∈ H 1 0 (Ω) and q ∈ L 2 0 (Ω). Above (·, ·) 0,Ω stands for the scalar product in L 2 (Ω) and ||·|| 0,Ω denotes the associated norm. The bilinear form b(·, ·) relating the functional spaces for velocity and pressure satisfies the following Babuška-Brezzi condition (see [46], for example): there exists ξ > 0 such that
sup_{v ∈ H 1 0 (Ω), v ≠ 0} b(v, q)/||v|| 1,Ω ≥ ξ ||q|| 0,Ω for all q ∈ L 2 0 (Ω),
and, together with the ellipticity of a(·, ·) + c(·, ·), it implies the unique solvability of problem (1.6). The optimal control problem (1.1)-(1.4) under consideration is strictly convex. Hence, it admits a unique optimal solution, and the first order necessary conditions are also sufficient for optimality (for details on well-posedness and first order optimality we refer to [35]). The optimality condition can be formulated as J (u)(ũ − u) ≥ 0 ∀ũ ∈ U ad , and can also be rewritten as
(λu + w, ũ − u) 0,Ω ≥ 0 ∀ũ ∈ U ad , (1.7)
where w is the velocity associated with the adjoint equation (1.10). In turn, the variational inequality (1.7) can be equivalently recast in component-wise manner as
u j (x) = P [a j ,b j ] (−λ −1 w j (x)), a.e. in Ω, j = 1, . . . , d,
where the operator P denotes a projection defined for a generic scalar function g as P [a,b] (g(x)) = max(a, min(b, g(x))), a.e. in Ω.

Fig. 1 Left: sketch of a single primal element T in T h , and sub-elements T * i belonging to the dual partition T * h . Right: its three-dimensional counterpart, showing a tetrahedron T decomposed into four sub-tetrahedra.
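Because the optimal control is recovered pointwise through this projection, its numerical realisation is a simple componentwise clipping. The following minimal NumPy sketch (not taken from the paper's implementation; the sampled values and bounds are placeholders) illustrates the component-wise optimality condition u j = P [a j ,b j ] (−λ −1 w j ):

```python
import numpy as np

def project_admissible(g, a, b):
    """Componentwise projection P_[a,b](g) = max(a, min(b, g)).

    g    : array of shape (d, n) with sampled values of a vector field,
    a, b : sequences of length d with the box constraints a_j <= u_j <= b_j.
    """
    return np.clip(g, np.asarray(a)[:, None], np.asarray(b)[:, None])

# Example: recover the control from sampled adjoint velocities w,
# u_j = P_[a_j, b_j](-w_j / lam), following the component-wise optimality condition.
lam = 0.5
w   = np.array([[0.4, -1.2, 0.1], [0.9, 0.0, -0.3]])   # d = 2, three sample points
u   = project_admissible(-w / lam, a=[-0.15, -0.15], b=[0.15, 0.15])
print(u)
```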
It is not difficult to see that this projection satisfies the following regularity property (see also [43]) In addition to T h (from now on, referred to as primal mesh), we introduce a dual partition in the following way. Each element T ∈ T h is split into three sub triangles (or four subtetrahedra if d = 3) T * i , i = 1, . . . , d + 1, by connecting the barycentre of the element to its corner nodes (see a schematic for d = 2 and d = 3 in Fig. 1). The set of all these elements generated by barycentric subdivison will be denoted by T * h and will be called the dual partition of T h . Let e be an interior face shared by two elements T 1 and T 2 in T h . By n 1 and n 2 we will denote unit normal vectors on e pointing outwards T 1 and T 2 , respectively. The average {{·}} e and jump [[·]] e operators defined on e for a generic scalar or vectorial field v are: Note that jump and averages are defined so that they preserve the dimension of the argument.
We denote by P m (T ) the space of polynomials of degree less or equal than m, defined on T ∈ T h , and P m (T ) will denote its vectorial counterpart. A finite dimensional trial space (used for the state and co-state velocity approximation) associated with the primal partition T h is while the finite dimensional test space for velocity (and corresponding to the dual mesh T * h ) is Moreover, the discrete space for state and co-state pressure approximation is defined as and we define a space with higher regularity These spaces, associated with the two different meshes, are connected through the transfer operator γ : V(h) → V * h , characterised in the following manner: Some useful properties of this map are as follows.
Lemma 1 Let γ be a transfer operator defined as in (2.1). Then i) γ is self-adjoint with respect to the L 2 -inner product, i.e.
ii) The operator γ is stable with respect to the norm · 0,Ω , that is ,Ω , then |||·||| 0,h and · 0,Ω are equivalent, with equivalence constants being independent of h. iii) There exist constants C 0 and C 1 independent of h such that Proof The proof of (i) and (ii) are given in [7] for the scalar version of the inter-mesh operator (2.1), and the same arguments can be used to extend their validity to the vector case. Property (iii) follows analogously to the proof of [20, Lemmas 2.3 and 2.6]. For a proof of (iv) we refer to [31].
Let us stress that throughout the paper, the symbol C will represent a generic positive constant independent of meshsize h, that can take different values at different instances.
Discrete Formulation for the State and Adjoint Equations
Let v h ∈ V h . We proceed to test (1.2) and (1.3) against γ v h ∈ V * h and φ h ∈ Q h , respectively, and after integrating by parts the momentum equation on each dual element, and the mass conservation equation on each primal element, we end up with the following scheme (cf. [11]): find ( y h , p h ) ∈ V h × Q h satisfying the discrete equations (2.4)-(2.5), where A d+2 = A 1 . With the help of Fig. 1 we see that, in 2D, for a fixed j the integrals A j+1 B A j are considered over a path of two segments, whereas in 3D they are taken over a triangular facet. In any case, they contribute to construct the normal fluxes on the interior faces of each dual sub-element T * j , and so the symbol n also denotes the outer normal on that sub-triangle or sub-tetrahedron. In addition, the constants α d and δ = (d − 1) −1 are parameters independent of h commonly used in interior penalty methods.
Proceeding analogously, we can write down a DFV formulation for the adjoint equation (1.8)-(1.10) as follows: For the sake of our forthcoming analysis, we introduce the following discrete norms in V(h), which are naturally associated with the bilinear form c h (·, ·): and we note that they are equivalent in V h . Moreover, we also have the following discrete Poincaré-Friedrichs inequality (see [11, pp. 457 and we can use Cauchy-Schwarz inequality and the definition of γ to readily obtain Proceeding analogously to [56, Lemma 6], we can establish the coercivity of the bilinear form A h (·, ·), stated in the following result.
Lemma 2 Let us assume that
Proof Let B = K −1 and consider its average tensor, defined on each primal element by Then, in view of Cauchy-Schwarz inequality together with properties (2.9) and (2.2), we can assert that and therefore, relations (2.10) and (2.3) lead to
Lemma 3
The bilinear forms defined in (2.6) possess the following properties: and it satisfies the bound
ii) For the non-symmetric bilinear form c h (·, ·) it holds that
where for (2.12), α d > 0 is assumed sufficiently large. iii) The choice of approximation spaces V h and Q h yields the condition Proof For i) it suffices to apply the definition of A h (·, ·), together with relation (1.5), and the norm equivalence between |||·||| 0,h and · 0,Ω . Results in ii) have been established in [33] and [7], whereas proofs for iii)-iv) can be found in [54].
Discretisation of the Control Variable
Let U h ⊆ L 2 (Ω) denote the discrete control space, and let us introduce the discrete admissible space for the control field as U h,ad = U h ∩U ad . Three approaches are outlined in what follows.
Variational Discretisation
In the so-called variational approach (cf. [29]), control variables are not discretised explicitly, that is, one simply takes U h = L 2 (Ω) and in this case the discrete and continuous admissible spaces U h,ad and U ad coincide. Consequently, the control variable does not necessarily lie in a finite element space associated to T h , and typically one requires a nonstandard implementation and more involved stopping criteria for the algorithms of control computation. Discretisation errors using this method will be addressed in Sect. 3.2.
Piecewise Linear Control Discretisation
Here we approximate the control variable with the similar elements as those employed for state and co-state velocity. That is, It is worthy to note that the state velocity space V h coincides with the control space in the case of homogeneous Neumann boundary conditions, whereas for Dirichlet boundary data,
Piecewise Constant Discretisation
In this case, the discrete control space is defined as The convergence properties associated with the above two approaches will be derived in Sect. 3.3, but already at this point we can apply Lemma 3 along with the Babuška-Brezzi theory for saddle point problems to ensure the unique solvability of (2.4)-(2.5), for a fixed control u h .
Convergence Analysis
In this section we provide a priori error estimates for DFV approximations of the state and adjoint equations, and for the three control discretisation approaches outlined in Sect. 2.3.
Preliminaries
For a given control u and f , let the pair ( y h (u), p h (u)) be the solution of the following problem Similarly, for a given state velocity y, let (w h ( y), r h ( y)) be the solution of We then proceed to decompose total errors in the following manner:
Lemma 4 There exists a positive constant C independent of h such that the following estimates hold
Adding (3.9) and (3.10) after In turn, using the coercivity of A h (·, ·) and c h (·, ·) in combination with (2.2) and (2.7), we obtain which readily yields the bound On the other hand, applying the inf-sup condition (2.14), using (3.9), the boundedness of A h (·, ·) and c h (·, ·), along with (3.11), we realise that (3.14) Proof We proceed analogously to the proof of [24, Theorem 3.1] and directly apply Lemma 3 to readily derive the following estimates: Next, the derivation of L 2 -estimates for y − y h (u) and w − w h ( y) follows an Aubin-Nitsche duality argument. Let us consider the dual problem: which is uniquely solvable, and moreover the following H 2 (Ω) × H 1 (Ω)-regularity is satisfied: where the auxiliary bilinear forms adopt the following expressions Since z I ∈ V h is a continuous interpolant of z, we note that the pair y− y h (u), p− p h (u) will be a solution of the following problem Using the definition of c h (·, ·) and C h (·, ·) we can assert that where the inner product over the primal mesh is understood as the sum of the inner products over each element in T h . On subtracting Eq. (3.19) from the sum of Eqs. (3.18) and (3.20), and using (3.21), it follows that Notice that the estimation of R 1 results as a combination of the boundedness of K −1 , assumption (1.5), the bounds (3.17), the self-adjointness and approximation properties of γ stated in (3.16), and Cauchy-Schwarz inequality. This gives where the last inequality follows from (3.13). For the second term we employ the definition of c h (·, ·), and relations (3.17), (3.16) to verify that Bounds for the remaining terms can be obtained following the proof of [33,Theorem 3.4] and [24, Theorem 3.2], as follows Combining the five estimates above with (3.22), we straightforwardly obtain and very much in the same way, one arrives at w − w h ( y) 0,Ω ≤ Ch 2 w 2,Ω + r 1,Ω + y 1,Ω + y d 1,Ω .
Now, for a given control u, let (w h (u), r h (u)) be the solution of and notice that similar arguments as those appearing in the proof of Lemma 5 and in the derivation of the estimate y − y h (u) 0,Ω ≤ Ch 2 , will readily lead to The following result plays a vital role in deriving error estimates of the control, state and co-state variables. Its proof is similar to that in [38,Theorem 4.1].
24)
where C > 0 is independent of h.
Then, using the approximation property of γ together with Lemmas 4 and 5 implies Adding (3.27) and (3.28) and using that ( y h − y h (u), γ ( y h − y h (u))) 0,Ω ≥ 0, we arrive at where we have used relations (2.2), (2.7), (2.11) and (2.13). An application of Lemmas 4 and 5 in the above inequality leads to the following bound Remark 1 (Right-hand side regularity) According to the contributions [8,19,28,30] (see also the references therein), for linear finite volume element methods applied to second order elliptic problems, the optimal error estimates (establishing second order accuracy in the L 2 − norm) can be achieved under the assumption that the source term is in H 1 (Ω). However, assuming that the right-hand side is in H 1 (Ω) does not imply that the exact solution is in H 3 (Ω), as discussed in e.g. [19]. Some counterexamples are actually given in [28,30] to confirm that optimal L 2 − error estimates cannot be derived if one only assumes that the forcing term is in L 2 (Ω). Proceeding analogously to the analysis of standard finite volume methods, optimal error estimates in the L 2 − norm have been derived by taking the source term in H 1 (Ω) (see for instance [19,24] and their references, for the specific case of DFV methods applied to elliptic and Stokes problems). Following the analysis of [31], one can derive the error estimates given in Lemmas 5 and 6 under the less restrictive assumption that f and y d are in H 1 (T ), that is, locally-H 1 .
Error Estimates Under Variational Discretisation
Theorem 1 Let ( y h , w h ) be DFV approximations of ( y, w) and let u h denote a variational discretisation of u. Then there exists a positive constant C independent of h, but depending on λ, such that the following estimates hold: Proof We recall the continuous variational inequality and the discrete variational inequality under variational discretisation and rearranging terms, we get
L 2 -Error Estimates for Fully Discretised Controls
A discrete admissible controlũ h = (ũ h, j ) d j=1 ∈ U h,ad is defined component-wise and locally asũ whereĨ h u j is the Lagrange interpolant of u j . To avoid ambiguity, we choose h sufficiently small so that min x∈T u j (x) = a j and max x∈T u j (x) = b j do not occur simultaneously within the same element T ∈ T h . Next, we proceed to group the elements in the primal mesh into three categories: h,n = ∅ for m = n according to the value of u j (x) on T . These sets are defined as On the other hand, the following assumption will be instrumental in the subsequent analysis. There exists a positive constant C independent of h such that (3.38) A similar assumption has been employed in [41][42][43]48]. We will first focus on error bounds for the control field under piecewise linear discretisation. Before proceeding we state an auxiliary result, whose proof can be found in [41]. (3.38) and that w ∈ W 1,∞ (Ω). Then, there exists C > 0 independent of h such that
Lemma 7 Assume
The main result in this section is stated as follows. Proof Testing the continuous and discrete variational inequalities against u h ∈ U h,ad ⊂ U ad andũ h ∈ U h,ad , respectively, and adding them, leads to Addition and subtraction ofũ h in the first term above yields and after rearranging terms we obtain In view of estimating the term u −ũ h 0,Ω , we proceed to rewrite it as whereas for T 2 , we employ the projection property (1.11) together with (3.38) Inserting the bounds of T 1 and T 2 in (3.40) we arrive at Finally, applying Cauchy-Schwarz and Young's inequalities, the estimates (3.24), (3.41), and Lemmas 5 and 7 into (3.39), we readily obtain the required result.
We now turn to the L 2 −error analysis for the control field under element-wise constant discretisation. The main idea follows from [14], using an L 2 −projection Π 0 : L 2 (Ω) −→ U h,0 that has the following property: there exists a positive constant C independent of h such that u − Π 0 u 0,Ω ≤ Ch u 1,Ω . Proof Since Π 0 U ad ⊂ U h,ad , the continuous and discrete optimality conditions readily imply that Adding and subtracting u, and rearranging terms, we then obtain and since Π 0 is an orthogonal projection and u h ∈ U h,ad , then the term λ(u h , Π 0 u − u) 0,Ω vanishes to give For the first term, we use (3.24) to get whereas a bound for I 2 follows from the orthogonality of Π 0 : It is left to show that w h is uniformly bounded, which can be readily derived using the coercivity of A h (·, ·) and c h (·, ·) and the uniform boundedness of U h,ad : Substituting the bounds for I 1 and I 2 in (3.43), and using (3.42) the desired result follows.
L 2 -Error Estimates for Velocity Under Full Discretisation of Control
The main result in this section is given as follows (see similar ideas, based on duality arguments also applied in [43,50]). ( y, w) be the state and co-state velocities, solutions of (1.1)-(1.4), and let ( y h , w h ) be their DFV approximations under piecewise linear (or piecewise constant) discretisation of control. Then
Theorem 4 Let
Proof We start by splitting the total error and applying triangle inequality as: where Π h represents the L 2 −projection operator onto the discrete control space U h . Next, let (w h ,r h ) ∈ V h × Q h be the unique solution of the auxiliary discrete dual Brinkman problem We then choosez h =w h andψ h =r h in (3.45) and (3.46), respectively, next we add the result, and we use the coercivity properties (2.8) and (2.12), to derive that (Π h u), respectively, and adding the result, we obtain (3.48) In addition, employing the discrete state equation for y h (u) and y h (Π h u), we obtain We then proceed to subtract (3.49) from (3.48) and to rearrange terms, to arrive at Using the definition of the norm |||·||| 0,h and its equivalence with the norm · 0,Ω we find that By virtue of the properties of Π h applied in the above inequality, we can assert that Approximation properties of γ and the L 2 −projection readily yield appropriate bounds for S 1 and S 2 , respectively: Then, a direct application of (3.47) yields We next use relations (2.11), (2.13) and (3.47) to obtain Consequently, substituting the estimates for the terms S 1 , S 2 , S 3 and S 4 back into (3.50), one straightforwardly arrives at The third term in (3.44) is bounded using (2.7) and proceeding as in the proof of Lemma 4: Using the discrete variational inequality along with the projection property of Π h and (3.37), we have the following relation Plugging the bounds for J 1 , J 2 , J 3 and J 4 in (3.53), putting (3.51) and (3.52) into (3.44), and using interpolation estimate u − Π h u 0,Ω ≤ Ch u 1,Ω along with Lemma 5; we can assert that y − y h 0,Ω ≤ Ch 2 y 2,Ω + p 1,Ω + u 1,Ω + f 1,Ω . (3.54) Finally, splitting the co-state velocity error as w − w h = w − w h ( y) + w h (y) − w h , using triangle inequality and Lemmas 4,5, and relation (3.54), we get the second desired estimate
Error Bounds in the Energy Norm
Theorem 5 Let ( y, w, p, r ) be the state and co-state velocities, and pressures, solutions of (1.1)-(1.4), and let ( y h , w h , p h , r h ) be their DFV approximations. Then Proof Using (3.5) and (3.6), applying triangle inequality and Lemma 4, we obtain The proof follows after combining Lemma 5 with the bounds for u − u h 0,Ω and y − y h 0,Ω .
Numerical Experiments
In this section we present a set of numerical examples to illustrate the theoretical results previously described. For sake of completeness, before jumping into the tests we provide some details about the implementation and algorithms for the efficient numerical realisation of the DFV method applied to the optimal control of Brinkman equations. Implementation Aspects We will use the well-known active set strategy (proposed in [6]) involving primal and dual variables (see also [25,44] for its application in Stokes flow). The principle is to approximate the constrained optimal control problem by a sequence of unconstrained problems, using active sets as summarised in Algorithm 1. By u n h , w n h we will denote the optimal control and adjoint velocity, solutions to the discrete problem (2.16)- (2.20) at the current iteration. Also, the control constraints are a = (a 1 , ..a d ) T and b = (b 1 , ..b d ) T . From u n h and w n h , construct the new finite sets A a n+1 , A b n+1 and I n+1 using (4.2) 5: if n ≥ 1, and A a n+1 = A a n , and where χ T * i is the characteristic function assuming the value 1 on T * i ∈ T * h and zero elsewhere. We next proceed to define the discrete active and inactive sets, based on the degrees of freedom of U h , as follows where, in general, s n,k j,h stands for the discrete value associated with the degree of freedom at position k, related to the spatial component j of the vector field s, at the step n of Algorithm 1. By the definition of the optimal control problem, we have that and if we further introduce the following characteristic sets (4.3) Finally, we define the matrix blocks along with the vectors so that after testing (4.3) against {ψ i } M i=1 we end up with the following matrix form of the discrete optimal control problem (2.16)- (2.20): where Y, P, W, R and U are the coefficients in the expansion of y n+1 h , p n+1 h , w n+1 h , r n+1 h and u n+1 h , respectively, and the hats indicate quantities associated with the previous iteration.
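A schematic rendering of this loop is sketched below in Python. It is not the authors' implementation: the assembly and solution of the reduced saddle-point system are abstracted into a placeholder function, and the active sets are predicted here directly from the projection formula u = P [a,b] (−w/λ), which may differ in detail from the definition (4.2) used in the paper.

```python
import numpy as np

def active_set_loop(solve_reduced_system, a, b, lam, u0, w0, max_it=50):
    """Schematic primal-dual active set iteration for the box-constrained control.

    solve_reduced_system(lower, upper) is a placeholder: it is assumed to assemble
    and solve the coupled state/adjoint/control system with the control clamped to
    a on `lower`, to b on `upper`, and left free elsewhere, returning (u, w).
    """
    u, w = u0, w0
    lower = upper = None
    for n in range(max_it):
        g = -w / lam                           # unconstrained optimality candidate
        new_lower, new_upper = g <= a, g >= b  # predicted active sets
        if n >= 1 and np.array_equal(new_lower, lower) and np.array_equal(new_upper, upper):
            break                              # active sets unchanged: stop
        lower, upper = new_lower, new_upper
        u, w = solve_reduced_system(lower, upper)
    return u, w
```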
Example 1 We start by assessing the experimental convergence of the proposed scheme applied to the optimal control problem (1.1)-(1.4) defined on the unit square Ω = (0, 1) 2 . Viscosity, permeability and the weight for the control cost assume the following constant (see e.g. [48]) which satisfy the homogeneous Dirichlet boundary conditions under which the analysis was performed. Source term and desired velocity field of the problem are constructed according to these exact solutions, that is, respectively A family of nested primal and dual triangulations of Ω is generated, on which we compute errors in the L 2 −and mesh-dependent norm |||·||| 1,h for the state and co-state velocity, in the L 2 −norm for pressures, and in the L 2 −norm for the control approximation. Table 1 displays the error history for this first test, where we observe optimal convergence rates for velocity and pressure (only those of the state equation are shown) in their natural norms, along with an O(h) convergence for the control when approximated by piecewise constant elements, which improves to roughly a O(h 3/2 ) rate under piecewise linear approximations. We can also confirm that a maximum of three iterations are needed to reach the stopping criterion that the active sets are equal to those in the previous optimisation step. This indicates a mesh independence of the method in the sense that the number of iterations needed to achieve the stopping criterion is independent of the resolution. In addition we portray in Fig. 2 the obtained approximate solutions at the finest resolution level, where we highlight the active sets with a contour plot on top of the control and state velocities. In all examples herein we employ a BiCGSTAB method with AMG preconditioning to solve the linear systems involved at each step of Algorithm 1. Moreover, the zero-mean pressure condition is applied for both pressure and adjoint pressure using a real Lagrange multiplier approach (which accounts for adding one column and one row to the relevant matrix system).
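Observed convergence rates such as those reported in Table 1 are typically computed from the errors on two consecutive refinement levels as r = log(e i /e i+1 )/log(h i /h i+1 ). A small helper of this kind is sketched below; the error values are placeholders, not the data of Table 1.

```python
import math

def observed_rates(h, errors):
    """Convergence rates between consecutive mesh levels."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(h) - 1)]

# Placeholder data illustrating first-order decay of a control error under
# piecewise constant discretisation (rates should approach 1).
h      = [1/4, 1/8, 1/16, 1/32]
errors = [2.1e-1, 1.1e-1, 5.4e-2, 2.7e-2]
print(observed_rates(h, errors))   # ~[0.93, 1.03, 1.0]
```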
We also present a basic comparison with other classical methods in terms of accuracy. For instance, we have performed the same test as above but employing model coefficients with jumps, in order to highlight the need for discontinuous approximations. Both fluid viscosity and medium permeability have now a discontinuity of five orders of magnitude at x 1 = 0.5. The tested methods are: a conforming stable P 2 −P 0 and MINI-element pairs for velocity and pressure approximation, a classical interior penalty DG method using the same stabilisation parameters as in (2.4)-(2.5), and the proposed DFV formulation. In all cases we consider a piecewise linear approximation of the control variable. The results are collected in Fig. 3, where convergence histories (errors for velocity and pressure vs. the number of degrees of freedom DoF= 2(N + L) + M) associated to the studied discretisations are shown. For all fields, the DFV approximation exhibits a slightly better accuracy than its pure-DG counterpart. This may be explained by the smaller elements used in the dual mesh (but being associated with the same number of DoF). On the other hand, for coarse meshes the conforming approximation P 2 − P 0 outperforms all other methods, but for finer meshes the discontinuous coefficients of the problem imply a badly conditioned system matrix requiring more iterations of the linear solver and eventually the conforming methods lose their optimal convergence. For a fixed number of DoF, the proposed DFV scheme produces smaller errors for the pressure approximation than the other methods. We stress that some recent theoretical comparison results are available for forward Stokes problems (see e.g. [13]), but only in the case of smooth solutions and constant coefficients. If the comparisons are carried out for the case of smooth solutions, then the error estimates in e.g. Theorem 5 are indeed of the same order as their finite element counterpart. However the constants in the estimates are not necessarily the same. As mentioned above, since the dual elements are in principle smaller than the primal ones, the approximate solutions and the corresponding errors generated with discontinuous finite volume schemes are still slightly more localised, implying that the errors themselves are smaller than those produced with methods based on the primal mesh. Complexity, implementation, and CPU times for assembly and solution of the linear systems are, on the other hand, comparable to those associated with classical finite elements.
Example 2 Our second test focuses on the optimal control problem applied to the well-known lid driven cavity problem. The objective function still corresponds to (1.1), but no analytic exact solution is available. Again the domain consists of the unit square, and the data of the problem are given by a traction boundary condition on the top of the lid, the applied body force, and an observed velocity field y = (1, 0) T on the top and zero elsewhere, and f = y d = 0 in Ω. The adjoint problem is subject to homogeneous Dirichlet data. The viscosity is set to ν = 0.1, the control weight is now λ = 0.2, the admissible control space is characterised by a 1 = a 2 = −0.15, b 1 = b 2 = 0.15, and the permeability exhibits a discontinuity on the line x 2 = 0.4: K(x) = κ ν I, with κ(x) = {10,000 if x 2 ≥ 0.4; 10 elsewhere in Ω} (see also [2,3] for the simulation of Brinkman flows with sharp interfaces). The domain is discretised into 20,000 primal triangular elements, and Fig. 4 portrays all fields obtained with our DFV scheme, where the stabilisation parameter is α d = 10. From Fig. 4 we observe that the controlled velocity approaches the desired velocity, that is, it goes to zero and the movement of the fluid concentrates in the upper section of the cavity. In addition, we study the influence of the Tikhonov regularisation in the iteration count of the active set algorithm applied to a coarse solve of this test. As in [25], we immediately observe that a larger number of iterations are required for smaller values of λ (see Table 2).
Example 3 Next we turn to the numerical solution of a three-dimensional optimal control problem (see also [34]). The domain is a cylinder with height 4 and radius 1, aligned with the x 2 axis. The permeability field is anisotropic K = diag(0.1, 10 −5 χ B + 0.1χ B c , 0.1), where B is a ball of radius 1/4 located at the centre of the domain. A Poiseuille inflow profile is imposed as state velocity at x 2 = 0: y = (0, 10(1−x 1 2 −(x 3 −1/2) 2 ), 0) T , a zero-pressure is considered on x 2 = 4, whereas homogeneous Dirichlet data are enforced on the remainder of ∂Ω. The viscosity is ν = 0.005, the Tikhonov regularisation is λ = 1/2, the desired velocity is zero y d = 0, the bounds for the control are a j = a = −0.1 and b j = b = 0.2, and a smooth body force is set as in [4]: f = K −1 (exp(−x 2 x 3 ) + x 1 exp (−x 2 2 ), cos(π x 1 ) cos(π x 3 ) − x 2 exp (−x 2 2 ), −x 1 x 2 x 3 − x 3 exp(−x 2 3 )) T . The primal meshes has 78,631 internal tetrahedral elements and 13,593 vertices. We observe that five iterations are required to reach the stopping criterion (4.1). Snapshots of the resulting approximate fields are collected in Fig. 5.
Chromatin Conformation in Development and Disease
Chromatin domains and loops are important elements of chromatin structure and dynamics, but much remains to be learned about their exact biological role and nature. Topological associated domains and functional loops are key to gene expression and hold the answer to many questions regarding developmental decisions and diseases. Here, we discuss new findings, which have linked chromatin conformation with development, differentiation and diseases and hypothesized on various models while integrating all recent findings on how chromatin architecture affects gene expression during development, evolution and disease.
INTRODUCTION
All eukaryotic species share the ability to reproduce and transmit their genetic information to their offspring. Mammals originate from single cells, with all the hereditary information stored in the DNA. The 2 meters of chromatin, consisting of DNA plus associated proteins must be compacted to fit in a nucleus with a diameter that varies between 2 and 10 µm.
The chromatin fiber is a highly dynamic polymer undergoing cycles of de-compaction and re-compaction during the cell cycle and proliferation/differentiation of the cells (Woodcock and Ghosh, 2010). Compaction impacts on chromatin accessibility to transcription factors (TF) and RNA polymerases (RNAPs) and is one of the parameters that fine-tunes the regulation of gene transcription. Thus, different cell fates require a different three-dimensional genome architecture that is closely related to gene expression and cellular function (Dixon et al., 2015). The nuclear genome appears to be organized non-randomly, through a variety of chromatin loops and rosettes and suggests that transcription is also architecturally organized (Lanctôt et al., 2007). Recent data suggest that alterations in chromatin architecture could be causal in diseases and cancer . Here, we describe recent findings about the relation between chromatin conformation and gene regulation in development and diseases and propose a model for chromatin architecture and the formation of loops during development.
Chromosomal Territories
Although the sequences of many genomes have been elucidated, the study of their 3D organization is subject to increasing endeavors using a variety of techniques, most prominent of which are 3C-related technologies and high-resolution microscopy. Chromatin is divided into a dark and a light electron-dense region, representing heterochromatin and euchromatin, respectively, with gene-rich regions largely corresponding to euchromatin.

Nature, Topology and Role of Topologically Associated Domains

The second sub-megabase level of topological organization comprises compartments which are organized in self-associating domains and are divided by linker regions. These compartments are called "topologically associated domains" (TADs) (Figure 1E; Dixon et al., 2012). This organization facilitates physical contacts between genes and their regulatory elements (Nora et al., 2012; Sexton et al., 2012), and TADs range between 0.2 and 1 Mbp (Dixon et al., 2012; Nora et al., 2012; Sexton et al., 2012). Contacts between regulatory elements are more frequent inside a particular TAD than between two different TADs (Figure 1F; Nora et al., 2012).
TADs are highly conserved upon stem cell differentiation, reprogramming, stimulation and in different cell types (Bonev and Cavalli, 2016;Andrey and Mundlos, 2017;Flyamer et al., 2017;Sauerwald and Kingsford, 2018;Zheng and Xie, 2019). Many differentiated cell types contain hundreds of TADs similar to those of human ESCs (Dixon et al., 2015;Schmitt et al., 2016). Thus, TADs are regarded as the basic unit of the folded genome and are considered as structural elements of chromosomal organization (Cremer and Cremer, 2001;Dekker and Heard, 2015;Sexton and Cavalli, 2015). TADs may also not appear as stable structures in single cells, but rather as a mix of chromatin conformations present in a population of cells (Nora et al., 2012;Flyamer et al., 2017;Zheng and Xie, 2019). A multiplexed, super-resolution imaging method identified TAD-like structures in single cells, although these were not stable (Bintu et al., 2018). Similar observations were made even between individual alleles (Finn et al., 2019). Interestingly, a number of studies have indicated that TADs could also be conserved between species (Rudan et al., 2015;Harmston et al., 2017;Krefting et al., 2018), while others come to the conclusion that TADs certainly have some functional conservation but that specific TAD structures and their location may not be conserved (Eres and Gilad, 2021). This difference in conclusions suggests that the various degrees of conservation observed could be the result of study design and/or different analytical choices.
As discussed in recent reviews (Sexton and Cavalli, 2015; van Steensel and Furlong, 2019), TADs could affect gene expression in various ways. TADs play an important role in regulation of gene expression by either acting as barriers or by facilitating or preventing loop interactions, because two points (regulatory elements) tethered on a string interact more frequently (Figure 1F; Dillon et al., 1997; Lieberman-Aiden et al., 2009; Symmons et al., 2014; Bonev and Cavalli, 2016; Bompadre and Andrey, 2019; Robson et al., 2019; Schoenfelder and Fraser, 2019; Sun et al., 2019).

FIGURE 1 | The 3D organization of chromatin. (A) Schematic representation of the arrangement of chromosomes in the nucleus; all chromosomes are in contact with the nuclear envelope, i.e., the nuclear lamina. Each chromosome resides in its territory (multicolor areas), but there are areas of overlap. (B) Schematic illustration of Hi-C maps at the genomic scale of chromosomes. When compared to inter-chromosomal connections, intra-chromosomal interactions are found to be more prevalent. (C) Chromatin is organized in "A" (yellow) and "B" (green) compartments, with "B" compartments being at the nuclear lamina. (D) Schematic illustration of Hi-C maps at the compartmental scale, where distal chromatin contacts generate a distinctive plaid pattern with A and B compartments. (E) TADs are formed via loop extrusion, and architectural proteins are found near the TAD boundaries. Within each TAD, cohesin-mediated loops contribute to chromatin folding. (F) Schematic illustration of Hi-C maps at the sub-megabase scale, where TADs appear as interaction-rich triangles separated by TAD borders. Through loops, enhancers are brought closer to the promoters that they control.

Importantly, TADs appear to be lost
during mitosis and cell division and to be re-established only after the formation of cis regulatory interactions, which suggests they are not driving but rather maintaining genome structure (Giorgetti et al., 2013; Naumova et al., 2013; Espinola et al., 2021). Disruption of TAD boundaries can nevertheless alter promoter-enhancer interactions by allowing new or preventing normal interactions (Lupiáñez et al., 2015; Flavahan et al., 2016). While TAD boundaries are generally conserved across cell types, a small fraction exhibit cell-type specificity with changes observed within boundaries during differentiation (Dixon et al., 2012, 2015; Zheng and Xie, 2019). It is worth mentioning here that the location of boundaries in single cells varies from cell to cell but is always close to CTCF and cohesin binding sites. Stable TAD boundaries could only be observed in population averaging studies (Bintu et al., 2018). Changing the enhancer-promoter distance within a TAD has little effect on the gene's expression level (Symmons et al., 2016), unless multiple genes compete for interactions with the enhancer (Dillon et al., 1997). However, inversions that disrupt the TAD structure alter expression levels (Lupiáñez et al., 2016; Symmons et al., 2016; Robson et al., 2019). TAD boundaries could potentially act as barriers to prevent the spread of heterochromatin to active regions (and vice versa) and/or the spread of proteins tracking on the chromatin (Austenaa et al., 2015; Narendra et al., 2015). One of the main roles of TADs is to provide insulation for enhancer-promoter interactions and contain them within the TAD (Dixon et al., 2012; Rao et al., 2014; Zhan et al., 2017; Gong et al., 2018), although there are cases where enhancer-promoter interactions cross over the TAD boundaries, such as in human hematopoietic cells (Javierre et al., 2016) and between Polycomb-bound regions in mouse ESCs (Schoenfelder et al., 2015b; Bonev et al., 2017; Schoenfelder and Fraser, 2019).
The position of TADs in the nucleus relative to each other, or to the nuclear periphery or substructures, is under intense investigation. Localization has been proposed to influence gene expression; for example, TADs containing genes repressed at a particular developmental stage are localized at the nuclear lamina (Guelen et al., 2008). Some heterochromatic TADs correspond to lamina-associated domains (LADs) or parts of the genome with repressive histone marks (Nora et al., 2012). This agrees with studies suggesting that LADs are gene-poor and that their transcription is suppressed (Lanctôt et al., 2007; Guelen et al., 2008). LAD and heterochromatic TAD regions overlap, albeit incompletely (van Steensel and Belmont, 2017). Euchromatic TADs are transcriptionally active and correspond to regions with active histone marks (Dixon et al., 2012; Nora et al., 2012; Sexton et al., 2012). Erasing the histone modifications did not affect TAD conformation, possibly because these histone marks are formed on pre-existing TADs (Nora et al., 2012; Dekker and Heard, 2015). LADs and euchromatic TADs are clearly separated by defined borders of CTCF or active promoters (Guelen et al., 2008). Interestingly, in D. melanogaster, most of the TAD borders correspond to regions of active promoters rather than CTCF-binding sites (Ramírez et al., 2018). Similar observations were also made in mESCs (Bonev et al., 2017).
The Important Regulators of Genome Organization
Several key proteins are involved in the establishment of chromatin loops and domains, with CTCF and cohesin being among the most studied (Dixon et al., 2012; Rao et al., 2014; Fudenberg et al., 2016; Kim et al., 2019). Proper chromatin interactions require convergent pairs of CTCF-bound regions, marking the boundary sites of a TAD (Phillips-Cremins et al., 2013; Zuin et al., 2014; de Wit et al., 2015; Guo et al., 2015; Jia et al., 2020). Inverting or deleting the CTCF sites could affect chromatin conformation, leading to an increase of inter-domain contacts and a decrease of intra-domain contacts (Dixon et al., 2012; de Wit et al., 2015; Hanssen et al., 2017). CTCF is enriched at TAD boundaries (Dixon et al., 2012; Nora et al., 2012), although its presence is not limited to boundary sites. It is also important to note that while CTCF loops define a subset of TADs (Dixon et al., 2012; Nora et al., 2012; Sexton et al., 2012), not all TADs are surrounded by CTCF sites (Rao et al., 2014). Importantly, CTCF disruption changes TAD structure (de Wit et al., 2015; Guo et al., 2015; Narendra et al., 2015; Sanborn et al., 2015; Nora et al., 2017), while TADs largely disappear after depletion of cohesin, and compartmentalization is increased (Haarhuis et al., 2017; Rao et al., 2017; Schwarzer et al., 2017; Wutz et al., 2017). Interestingly, these results were corroborated by polymer simulations (Nuebler et al., 2018). Moreover, CTCF interacts with the cohesin complex, which was proposed to organize the genome based on loop extrusion (Fudenberg et al., 2016). It should be noted, though, that it has not yet been shown that cohesin loops are formed through extrusion in vivo. The extrusion mechanism of cohesin is an asymmetric process, which would have certain implications for gene expression. Interestingly, recent data indicate that cis-regulatory loops are already formed after mitosis, before TADs are formed (Espinola et al., 2021).
An example of the topological organization of a locus that can be explained by the loop extrusion model is the α-globin locus (Brown et al., 2018). The self-interacting domain is not present in mES cells, but is formed in differentiating erythroblasts with no apparent change in the binding of CTCF (Brown et al., 2018). Upon perturbations that abolish the expression of α-globin, the domain conformation was unaffected, although interactions within the domain were significantly altered. The convergent pair of CTCF-bound regions does not appear as a unique contact; rather, a broader area of tissue-specific contacts is observed around the CTCF borders (Brown et al., 2018). Other mechanisms, such as transcription, could also lead to loop extrusion. Cohesin complexes containing different subunits (SA1 or SA2) seem to mediate different aspects of DNA conformation: SA1-containing complexes promote TAD formation/stabilization, while SA2-containing complexes mediate intra-TAD enhancer-promoter contacts (Kojic et al., 2018), suggesting that transcription and transcription factors are important in the formation of those domains. Loop extrusion is also supported by computational modeling (Fudenberg et al., 2016) and by perturbation assays of important factors of 3D chromatin conformation, such as CTCF and cohesin (Sofueva et al., 2013; Haarhuis et al., 2017; Nora et al., 2017; Rao et al., 2017; Schwarzer et al., 2017; Wutz et al., 2017; Schoenfelder and Fraser, 2019; Thiecke et al., 2020).
DNA is thought to asymmetrically slide through the cohesin ring until it reaches a CTCF site, where cohesin is stalled to stabilize the loop (Nuebler et al., 2018). It has been proposed that loop extrusion initiates where cohesin is loaded on DNA through the NIPBL protein. Experiments in vitro have shown that human cohesin-NIPBL complexes extrude loops in an ATP-dependent manner (Kim et al., 2019; Golfier et al., 2020).
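To make the mechanism described above more concrete, the following toy one-dimensional sketch mimics the core of the loop extrusion idea: a cohesin ring is loaded at some position, the two loop anchors grow outwards, and each side stalls when it meets an appropriately oriented CTCF site, so that convergent CTCF pairs end up as loop anchors. All names, parameters and the symmetric stepping are illustrative simplifications (real extrusion may be asymmetric and cohesin residence is limited in time); this is not an implementation from any of the cited studies.

```python
import random

def extrude_loop(n_bins, ctcf_forward, ctcf_reverse, load_pos, max_steps=10_000):
    """Toy 1D loop extrusion: a cohesin ring loaded at `load_pos` grows a loop by
    moving its two anchors outwards, one bin per step, until each side is stalled
    by a CTCF site oriented towards the loop interior (convergent orientation).

    ctcf_forward : bins with CTCF motifs that stall the leftward-moving anchor
    ctcf_reverse : bins with CTCF motifs that stall the rightward-moving anchor
    Returns the final (left, right) anchor positions, i.e., the loop base.
    """
    left = right = load_pos
    left_stalled = right_stalled = False
    for _ in range(max_steps):
        if not left_stalled and left > 0:
            left -= 1
            if left in ctcf_forward:
                left_stalled = True           # stalled at a convergent CTCF site
        if not right_stalled and right < n_bins - 1:
            right += 1
            if right in ctcf_reverse:
                right_stalled = True
        if left_stalled and right_stalled:
            break
    return left, right

# Two convergent CTCF sites at bins 100 and 400 delimit a TAD-like domain:
# cohesin loaded anywhere in between ends up anchored at these two sites.
anchors = [extrude_loop(500, {100}, {400}, random.randint(150, 350)) for _ in range(5)]
print(anchors)   # typically [(100, 400), (100, 400), ...]
```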
The removal of NIPBL highlighted two different mechanisms of genome organization. One is independent of cohesin and organizes the genome into fine-scale compartments (compartmentalization), while the other is dependent on cohesin and contributes to the formation of TADs (Schwarzer et al., 2017; Thiecke et al., 2020). In fact, depletion of CTCF had little effect on A/B compartments, while depletion of cohesin even strengthened them (Nora et al., 2017; Rao et al., 2017; Schwarzer et al., 2017; Wutz et al., 2017; Cremer et al., 2020). This is further supported by experiments in which RAD21, a subunit of the cohesin complex, was degraded, disrupting all CTCF loops and indicating that CTCF alone cannot stabilize the loops. After restoring RAD21, the majority of CTCF loops reappeared within 40 minutes (Fudenberg et al., 2016; Hansen et al., 2017). These findings contradict the hierarchical organization model, which holds that TADs are the compartmental building blocks, and suggest that loop extrusion may change compartmental features (Nuebler et al., 2018). The unloading of cohesin is ensured by other proteins such as WAPL and PDS5 (Wutz et al., 2017). Lack of WAPL contributes to loop collision, with an increase of interactions between distal CTCF sites due to an incremental aggregation of loop domain anchors, thus creating a "cohesin traffic jam" (Allahyar et al., 2018). Whether cohesin is "fixed" at CTCF sites remains elusive. It was shown that CTCF and WAPL bind to the same cohesin pocket, with CTCF stabilizing cohesin at TAD boundaries and thus blocking WAPL action. The binding signals at CTCF binding sites are higher than at other positions in the genome (Sanborn et al., 2015), but the low general background signal could indicate that cohesin is loaded and extruding continuously and only has a longer dwell time at CTCF sites (Fudenberg et al., 2016). CTCF-mediated RNA interactions are essential for proper genome organization (Saldaña-Meyer et al., 2019). Furthermore, many long non-coding RNAs (lncRNAs) have been found to interact with chromatin (Chu et al., 2011; Simon et al., 2011; Engreitz et al., 2013; Li et al., 2017), suggesting that lncRNAs, such as Xist and Firre, are involved in the structural organization of the genome. During X chromosome inactivation, the lncRNA Xist controls the conformation of the inactive X chromosome (Engreitz et al., 2013; Chen et al., 2016), while Firre facilitates the colocalization of genomic regions from different chromosomes (Yang et al., 2015). Moreover, T-cell fate is determined by the lncRNA ThymoD and its role in promoting promoter-enhancer interactions (Isoda et al., 2017). Nonetheless, further research is needed to determine whether lncRNAs play a general role in the structural organization of the genome.

FIGURE 2 | An example of genome architecture: TADs and sub-TADs. T2C interaction frequencies are displayed as a two-dimensional heatmap, where intra-TAD contacts (in fact proximities) are more frequent than inter-TAD contacts. TADs confine cis-regulatory elements and target gene promoters in space, like two elements tethered on a string. This facilitates regulatory interactions within the TAD and prevents unwanted regulatory activity across TAD regions. Sub-TADs and TADs are depicted with yellow and green lines, respectively.
Higher/Lower Levels of Genome Organization
Topologically associated domains are further divided into smaller units, the sub-TADs, which have a median size of ∼185 Kbp and are characterized by higher interaction frequencies (Figure 2; Rao et al., 2014; Rowley et al., 2017). Sub-TADs should not be confused with compartmental domains, which are not formed by CTCF loops but by the segregation of A/B compartments (Rowley et al., 2017; Rowley and Corces, 2018). Compartmental domains have been proposed as a model for the organization of chromatin, with architectural proteins and TAD boundaries contributing to the fine-tuning of the transcriptome or regulating a subset of the genes (Stadhouders et al., 2019). On the other hand, a sub-TAD could contain one (or more) gene(s) with its/their regulatory elements, leading to their transcriptional activation or repression (Phillips-Cremins et al., 2013; Rao et al., 2014; Symmons et al., 2014; Bonev et al., 2017). TADs may also contact each other on a higher scale, forming meta-TADs in which inter-TAD interactions are favored (Fraser et al., 2015). Sub-TADs and/or meta-TADs exhibit more tissue-specific interaction patterns than the largely tissue-invariant TADs (Dixon et al., 2016; Andrey and Mundlos, 2017).
Other levels of chromatin organization are loop domains and insulated neighborhoods (Rao et al., 2014; Hnisz et al., 2016a; Andrey and Mundlos, 2017). Loop domains are regions with enriched interactions marked by a loop at their border (Rao et al., 2014). A loop domain can represent a whole TAD, but also only a part of it. The current mainstream hierarchical model of chromatin organization proposes that compartments contain several TADs, which in turn contain several sub-TADs, suggesting that if TADs are the building blocks of the genome, sub-TADs would be the cement holding them together (Bonev and Cavalli, 2016). Insulated neighborhoods are genomic domains encompassing at least one gene and forming chromatin loops, which are sealed by a CTCF homodimer co-bound with cohesin (Hnisz et al., 2016a).
Limitations of Methods Unveiling TADs
At present, genome-wide identification of both TADs and sub-TADs relies on the resolution of 3C-related technologies and on at least 22 different computational methods, which contributes to the argument that TADs may not be a "discrete" level of organization of the genome (Fudenberg and Mirny, 2012; Rao et al., 2014; Xu et al., 2020). Nevertheless, genes within the same TAD show similar expression patterns across multiple types of cells and tissues, a trait that is substantially weaker at other levels of organization. This observation favors the role of TADs as a functional level of organization where gene regulation takes place. It is, however, worrying that different experimental methods result in different estimates of TAD size and numbers (Zufferey et al., 2018), possibly due to the low coverage of the 3C-related technologies (Xu et al., 2020) and the different models that each algorithm employs. Adding to this, in single-cell Hi-C experiments, TADs are not reproducibly detected at individual loci, but may be "reassembled" when the individual maps are combined to approximate a whole-population (bulk) experiment. An inherent problem here is that each fragment has only two ends and thus can be ligated to only four other fragments. Moreover, contacts are dynamic, created and lost all the time, with TAD borders seeing each other more frequently, which strengthens the notion that a TAD is only visible when many cells are analyzed. Thus, improved chromatin conformation capture techniques with increased resolution and coverage, as well as algorithms that identify TADs consistently, are of prime importance.
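As a concrete illustration of why TAD calls depend on the algorithm, the sketch below implements one common strategy, an insulation-score-style boundary caller: it slides a square window along the diagonal of a binned contact matrix and flags positions crossed by few contacts as candidate borders. The window size and threshold are illustrative assumptions; changing them (or switching to a different approach, e.g., a directionality-index-based caller) typically yields a different set of boundaries.

```python
import numpy as np

def insulation_score(contact_matrix, window=10):
    """Mean contact frequency in a square window crossing each bin's diagonal.
    Low values mark positions that few contacts span, i.e., candidate TAD borders."""
    n = contact_matrix.shape[0]
    score = np.full(n, np.nan)
    for i in range(window, n - window):
        score[i] = contact_matrix[i - window:i, i + 1:i + window + 1].mean()
    return score

def call_boundaries(score, min_drop=0.2):
    """Flag local minima of the insulation profile as candidate boundaries.
    `min_drop` is an illustrative threshold on how deep the dip must be
    relative to the mean insulation over all scored bins."""
    valid = ~np.isnan(score)
    mean_score = score[valid].mean()
    boundaries = []
    for i in range(1, len(score) - 1):
        if not valid[i]:
            continue
        is_local_min = score[i] < score[i - 1] and score[i] < score[i + 1]
        if is_local_min and score[i] < (1.0 - min_drop) * mean_score:
            boundaries.append(i)
    return boundaries

# Different `window` and `min_drop` choices give different TAD counts and sizes,
# mirroring the disagreement between published TAD-calling methods.
```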
Looping (de novo Contacts)
Gene transcription is tightly regulated by regulatory elements (enhancers, insulators, silencers), which can be located at various distances from their cognate gene(s) on the linear DNA strand (Figure 3A; Kolovos et al., 2012; Schoenfelder et al., 2015a; Sun et al., 2019). In order to carry out their function, regulatory elements have to be in close proximity to their target gene(s) (Stadhouders et al., 2019). "Loops" between enhancers and promoters usually result in local interactions, as opposed to CTCF-mediated long-range chromatin loops (TADs), which could facilitate enhancer-promoter interactions either by bringing them closer or by segregating the genome according to its chromatin state (Figure 3B; Zheng and Xie, 2019).
Recently, it was shown that TFs (e.g., YY1 and LDB1), ncRNAs, the Mediator complex, p300 acetyltransferase and the cohesin complex proteins play key roles in the stabilization of chromatin looping or transcription factories (Kagey et al., 2010; Stadhouders et al., 2012; Phillips-Cremins et al., 2013; Fang et al., 2014; Zuin et al., 2014; Schoenfelder et al., 2015a; Boija et al., 2018; Cho et al., 2018; Spielmann et al., 2018; Peñalosa-Ruiz et al., 2019). The function of cohesin varies among different promoter-enhancer interactions, and some promoter-enhancer interactions can be established by transcription factors alone, without the involvement of cohesin (Rubin et al., 2017). Four models have been proposed to explain how promoters and enhancers may regulate gene expression, with the looping and the transcription factory models being the most prominent (Kolovos et al., 2012; Papantonis and Cook, 2013). Notably, the general notion of the looping model is that an enhancer is in close proximity to its target promoter(s), leading to gene activation, while the gene is silenced when the enhancer and promoter are not in close proximity.
Gene regulation from distal regulatory elements through local looping is now a commonly accepted concept (Lupiáñez et al., 2015; Flavahan et al., 2016; Hnisz et al., 2016a; Bonev et al., 2017; Stadhouders et al., 2018). Before the development of chromosome conformation capture technologies, which are essentially biochemical techniques, there was already strong evidence from biochemical and genetic experiments that loop formation mediates transcription in both prokaryotic and eukaryotic systems. This was demonstrated in vitro with the lac repressor system (Hochschild and Ptashne, 1986). In eukaryotic systems, in vitro assays using a plasmid suggested that an enhancer and a gene could be separated by a protein bridge, invoking looping (Müeller-Storm et al., 1989). Strong evidence in eukaryotes, with genes in their normal genomic environment, was obtained at the β-globin locus after the discovery of the Locus Control Region (LCR, now called a super-enhancer), which is located 70 kb upstream of the β-globin gene(s). The effects of changing the distance or order of the β-globin genes and the LCR could only be explained by looping (Grosveld et al., 1987; Hanscombe et al., 1991; Dillon et al., 1997). A few years later, the effects of natural mutations in defective enhancers located at very long distances, as in the case of polydactyly, were very difficult, if not impossible, to explain by mechanisms other than looping (Lettice et al., 2003).
The regulation of the β-globin-like genes by their LCR was, and still is, the best-studied example of the looping model (Figure 3C; Grosveld et al., 1987). In adults, the LCR and the β-globin promoter are located in close proximity, contributing to the formation of new chromatin loops through the recruitment of specific TFs such as LDB1, TAL1, GATA1 and KLF1 to the LCR (Noordermeer and de Laat, 2008; Palstra et al., 2008a). The different enhancer elements and the gene appear to form a regulatory hub in which all the different elements interact with each other (Allahyar et al., 2018). Interestingly, even though the individual enhancers appear to interact, the overall activity of the LCR usually appears to be the result of an additive rather than a synergistic effect of the individual enhancer elements, with the individual enhancers exhibiting different properties (Fraser et al., 1993; Bender et al., 2012). Absence of crucial TFs at the LCR results in the disruption of chromatin conformation and in gene mis-expression.
Recent allele-specific interaction studies indicate that the LCR interacts with more than one of the (mouse) β-globin genes simultaneously (Allahyar et al., 2018), whereas previous studies showed that only one β-globin gene can be active at any given moment in time when two genes are in contact with the LCR at the same time (the γ- and β-globin genes in human and the βmaj- and βminor-globin genes in mouse) (Wijgerde et al., 1995; Trimborn et al., 1999). These observations lead to the conclusion that transcription is a discontinuous process and that the frequency and stability of the promoter-enhancer interactions are very important parameters in determining the level of transcription. The observation that the mouse LCR would interact with two β-globin genes simultaneously, but that only one would be expressed, raises the interesting question of whether this is perhaps particularly prevalent among genes "competing" for the same enhancers.
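The idea that transcriptional output tracks both the frequency and the stability of enhancer-promoter contacts can be made concrete with a toy two-state ("telegraph"-style) simulation, sketched below. The probabilities, and the assumption that the gene can fire only while the enhancer is in contact with the promoter, are illustrative and are not derived from the studies cited above.

```python
import random

def simulate_gene(p_contact, p_release, p_fire=0.5, steps=100_000, seed=0):
    """Toy telegraph-style model: the gene can fire only while its enhancer
    contacts the promoter. `p_contact` sets how often contacts form (frequency),
    `p_release` how quickly they dissolve (the inverse of stability).
    Returns the average mRNA output per time step."""
    rng = random.Random(seed)
    in_contact = False
    mrna = 0
    for _ in range(steps):
        if in_contact:
            if rng.random() < p_fire:
                mrna += 1                  # transcription happens only during contact
            if rng.random() < p_release:
                in_contact = False         # the enhancer-promoter loop dissolves
        elif rng.random() < p_contact:
            in_contact = True              # a new enhancer-promoter contact forms
    return mrna / steps

# Both more frequent and more stable contacts raise the average output, while
# transcription itself remains discontinuous ("bursty") over time.
print(simulate_gene(p_contact=0.01, p_release=0.10))   # rare, unstable contacts
print(simulate_gene(p_contact=0.05, p_release=0.02))   # frequent, stable contacts
```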
Looping interactions are not limited to enhancers and promoters. Subsequent studies suggest that enhancers also make contacts with gene bodies, following the elongating RNAPII. In parallel, Polycomb proteins (PRC1, PRC2) shape the regulatory topology by repressing genes through chromatin interactions, keeping them under tight control (Schoenfelder et al., 2015b; Cruz-Molina et al., 2017; Cai et al., 2021). Moreover, some promoters (E-promoters) can act as bona fide enhancers and come into close proximity with other promoters to activate gene expression (Dao et al., 2017).
An interesting debate is whether gene activation precedes locus conformation or vice versa (van Steensel and Furlong, 2019). In one study, promoter-enhancer interactions appeared along with changes in gene expression during neuronal differentiation (Bonev et al., 2017). However, during erythropoiesis, chromatin structure precedes expression and does not require the presence of TFs, although TFs are essential for the advancement to, or maintenance of, a fully functioning active chromatin hub (Drissen et al., 2004). Moreover, chromatin loops are not altered in the β-globin locus upon transcriptional inhibition, suggesting that structure precedes function (Palstra et al., 2008b). Interestingly, the recruitment of LDB1 to the β-globin promoter depends on GATA1, in contrast to its recruitment to the LCR. In GATA1-null cells that do not express β-globin, its expression can be rescued by tethering LDB1 to the promoter via a zinc finger domain, mediating its interaction with the LCR and thus supporting the hypothesis that conformation comes first. In another study, LDB1 was directed to the silenced promoter of the embryonic β-like globin (βh1) gene in adult mouse erythroblasts (Deng et al., 2014). In parallel, during zygotic genome activation, the formation of TADs coincides with the onset of gene expression (Hug et al., 2017).
Pre-looping (Pre-determined Contacts)
Recent studies propose an additional way in which chromatin conformation controls gene transcription. Genes are often in close proximity to their cognate enhancers without being actively transcribed. Although the cognate enhancer is often bound by various TFs, it lacks the binding of a crucial TF required for gene activation (Kolovos et al., 2016). At the same time, RNAPII is stalled at the promoter (Ghavi-Helm et al., 2014). In that case, when a developmental or differentiation signal triggers the additional recruitment of the crucial TF(s) to the enhancer, looping is maintained and transcription is induced. This model is termed pre-looping (Figure 3D; Ghavi-Helm et al., 2014; Kolovos et al., 2016; Rubin et al., 2017). During mouse development, pre-existing chromatin contacts of the Hox genes could help in the recruitment of the necessary transcription factors, in order for tissue-specific promoter-enhancer interactions to occur (Lonfat et al., 2014). Moreover, loops mediated by the PRC1 and PRC2 complexes in pluripotent cells not only repress the genes inside such loops, but also maintain them in close proximity to their regulatory elements, permitting a fast response (activation) to specific differentiation signals (Figure 3E; Schoenfelder et al., 2015b; Cruz-Molina et al., 2017). Similarly, CTCF and the cohesin complex bring the Shh promoter and the ZRS enhancer into close proximity in both posterior and anterior limbs. Although they are in close proximity, the Shh gene is differentially expressed in these tissues (Williamson et al., 2016). An even closer proximity is observed when Shh is activated in the posterior limbs (Williamson et al., 2016). As is clear from the previous examples, specific topological features are not a sufficient criterion to initiate transcription (Ghavi-Helm et al., 2014; Hug et al., 2017).
Most of the interactions of the pre-looping model are not mediated or predicted by CTCF, but by TFs and RNAPII. For example, in HUVEC cells, SAMD4A is not expressed even though its promoter is in close proximity to its enhancers. Upon activation by TNFα signaling, the TF NFκB is released from the cytoplasm, enters the nucleus and binds to the enhancer, maintaining the loop and activating SAMD4A expression (Kolovos et al., 2016). Other examples of pre-looping were later reported in macrophages, upon adipogenesis, during differentiation of the epidermis, during differentiation of mouse embryonic stem cells (ESCs) to neural progenitors, in the mouse HoxB locus and in hypoxia, but also as a mechanism of action for specific transcription factors like PAX5 (Barbieri et al., 2017; Cruz-Molina et al., 2017; Rubin et al., 2017; Siersbaek et al., 2017).
Thus, there is an interesting conundrum: how can transcription be controlled by two different chromatin conformation mechanisms, looping and pre-looping? According to the pre-looping model, loops formed by CTCF, cohesin, PRC1 or PRC2 could hold poised enhancers and promoters in close proximity, activating them only upon subsequent tighter contacts, e.g., after post-translational modifications of TFs essential for activation take place (Figures 3D,E; Drissen et al., 2004; Robson et al., 2019). According to the looping model, loops appear and disappear dynamically during development, in parallel with transcriptional activation, and could flexibly fine-tune transcription (Javierre et al., 2016; Bonev et al., 2017). Another explanation could be that most of the looping paradigms are studied in steady-state systems or by comparing only two stages of differentiation or development (Grosveld et al., 1987; Palstra et al., 2008a; Deng et al., 2012; Kolovos et al., 2014). Perhaps some genes have been evolutionarily selected to use one of the two modes of chromatin conformation. However, studying more than two stages of differentiation, development or embryogenesis could unveil which of the two mechanisms is used predominantly (Stadhouders et al., 2018; Di Stefano et al., 2020). Although the dynamics of nuclear organization have so far been studied during mitosis (Naumova et al., 2013), meiosis (Patel et al., 2019), hormone treatment, differentiation (Bonev et al., 2017) and cell reprogramming (Stadhouders et al., 2018), there is an immediate need for methods precisely tailored to the study of time-dependent conformational changes (4D) (Di Stefano et al., 2020).
TRANSCRIPTION FACTORIES AS THE DRIVING FORCE OF TRANSCRIPTION
The established transcription model holds that the polymerase moves along the DNA sequence to produce the transcript. Nowadays, it is believed that transcription takes place in nucleoplasmic hot spots (called "transcription factories"; Papantonis and Cook, 2013; see above), mediated by a high local concentration of all the necessary transcription factors. This notion suggests that the polymerase is primarily located in, but not fixed to, "transcription factories" (Ghamari et al., 2013; Papantonis and Cook, 2013). In the traditional model of transcription, RNAPII leaves the promoter and moves along the DNA template. In "transcription factories", RNAPII is present in these nucleoplasmic hotspots, while genes and their respective promoters diffuse to them, so that transcription takes place through the movement of the DNA template through the factory (Jackson et al., 1981; Iborra et al., 1996; Papantonis et al., 2010; Cho et al., 2018). Notably, a similar type of mechanism/principle has been proposed for "loop extrusion", the mechanism by which loops are formed and where the DNA moves through the cohesin complex (see above). Time course experiments indicated that the enhancer and the promoter of the Cd47 and Kit genes are in close proximity during transcription. "Transcription factories" are most likely a collection of several "active chromatin hubs" that merge in a phase-transition-type process, containing several polymerase complexes, each transcribing a different template (de Laat and Grosveld, 2003; Larson et al., 2017).
Current interest is focused on liquid-liquid phase separation (LLPS) as the driving force that concentrates the necessary elements (e.g., enhancers, transcription factors, RNAPII, etc.) at active chromatin hubs or transcription factories (Guo et al., 2019; Nair et al., 2019). In the concept of phase transition, LLPS is a mechanism to generate "structures" without membranes (Hildebrand and Dekker, 2020). Molecular seeds are thought to start the process of phase transition, leading to a local enrichment of protein-protein complexes. Intrinsically disordered protein domains are thought to play a major role through their ability to form multivalent interactions (multi-modular features). It has been shown that artificial condensates are able to physically pull together specific loci, and thus LLPS can exert mechanical force on chromatin (Shin et al., 2018). Such compartmentalized hydrogel-like states would have a reduced fluidity and movement of proteins, which would, for example, fit with the concept that the DNA moves through the polymerase in a transcription factory rather than the polymerase moving along the DNA. Subsequent research has revealed that the Mediator complex, along with other transcription factors, coactivators, and RNAPII, forms condensates during transcription (Cho et al., 2018; Chong et al., 2018; Sabari et al., 2018; Guo et al., 2019). Phase-separated HP1α and RNAPII showed the ability to create phase-separated heterochromatin and euchromatin droplets, respectively (Larson et al., 2017; Lu et al., 2018). Condensation of bound TFs and coactivators is induced by multivalent enhancer sequences via LLPS (Shrinivas et al., 2019). Although this idea has not been thoroughly tested, it has been observed that LLPS causes enhancers that would typically dwell in distant TADs to migrate closer (Nair et al., 2019). The local concentration of RNA can impact condensate formation and dispersion, acting as a transcriptional feedback mechanism (Henninger et al., 2021).
It has also been proposed that the outer edge of phase-separation droplets acts as a barrier that proteins cannot pass through (Strom et al., 2017), despite the quick recovery of the CDK9-mCherry signal after photo-bleaching, which suggests that CDK9-mCherry is constantly recruited to stably positioned transcription factories (Ghamari et al., 2013). Chromatin compartmentalization might be the reason that activating transcription factors are not present in B compartments (Laghmach and Potoyan, 2021). Phase separation could explain several confusing observations, such as how transcriptional activation occurs without direct physical contact between enhancers and promoters through eRNAs (Cai et al., 2020), multi-enhancer and multi-promoter contacts (Li G. et al., 2012; Jin et al., 2013), or the simultaneous regulation of more than one gene by a single enhancer (Fukaya et al., 2016). In parallel, recent data suggest that forces other than those derived from LLPS could also stabilize transcription factories (Ulianov et al., 2021).
The β-globin active chromatin hub, containing Hbb-b1, its LCR (60 Kbp upstream of Hbb-b1) and Eraf (encoding an α-globin stabilizing protein, located ∼25 Mbp away from Hbb-b1), is the best example of different genes sharing the same transcription factory. Various assays, such as 3C-like methods and RNA and DNA FISH coupled to immuno-labeling, confirmed that Hbb-b1, its LCR and Eraf are found together in sites rich in RNAPII (Bender et al., 2012; Mitchell et al., 2012). As mentioned above, another property of transcription factories is that they encompass groups of genes (located in cis or in trans) that are co-regulated by specific signaling pathways or activators, leading to the idea that co-regulated genes are expressed in "specialized" transcription factories (Schoenfelder et al., 2010). This is corroborated by ChIA-PET of active RNAPII, which uncovered spatial associations between co-regulated and co-transcribed genes in response to various stimuli (Li et al., 2015). Moreover, RNAPII-transcribed genes are located in factories separate from those of RNAPIII-transcribed genes, and TNFα-responsive genes and erythropoietic genes are also located in distinct factories (Pombo et al., 1999; Papantonis et al., 2010; Schoenfelder et al., 2010; Baù et al., 2011; Monahan et al., 2019). Therefore, it is tempting to conclude that there are "specialized" transcription factories.
THE INTERPLAY BETWEEN STRUCTURE AND FUNCTION THROUGH DEVELOPMENT, DIFFERENTIATION AND EVOLUTION
Loops within the genome can be separated into two categories according to their role: structural and functional. Structural loops form the building blocks of the 3D conformation of the genome. They can take place between DNA segments (neither of which is a promoter or an enhancer) through CTCF or cohesin binding, forming large TAD domains whose bases define the domain boundaries (Dixon et al., 2012; Rao et al., 2014; Zuin et al., 2014). Various chromatin conformation capture results suggest that these structural loops are the same between different cell types (Dixon et al., 2015; Schmitt et al., 2016). Therefore, structural loops could contribute indirectly to the regulation of gene expression, via the formation of TADs that confine genes and their respective regulatory elements in a dedicated 3D nuclear space. Functional loops, which often appear within structural loops, are the ones bestowing a function (activation/repression/poising) on a gene and often correspond to sub-TADs (Grosveld et al., 1987; Splinter et al., 2006; Palstra et al., 2008a; Wendt et al., 2008; Kagey et al., 2010; Schoenfelder et al., 2010; Kolovos et al., 2012; Phillips-Cremins et al., 2013; Sofueva et al., 2013; Fang et al., 2014; Ghavi-Helm et al., 2014; Rao et al., 2014; Zuin et al., 2014; Ji et al., 2016; Kolovos et al., 2016; Phanstiel et al., 2017; Rubin et al., 2017). These interactions can be direct or indirect. A direct interaction is between two DNA segments, one containing a regulatory element (an enhancer or a silencer) and the other the promoter of the target gene (de Laat and Grosveld, 2003; Palstra et al., 2008a; Stadhouders et al., 2012; Kolovos et al., 2014; Kolovos et al., 2016). An indirect interaction is between an enhancer/silencer and a DNA segment that is not the promoter of the target gene but subsequently interacts with the promoter, creating an active regulatory hub (Stadhouders et al., 2012; Schuijers et al., 2018; Quinodoz et al., 2018). An example is the Myc locus, where its super-enhancer interacts with its promoter through a CTCF site located 2 Kbp upstream of the Myc promoter (Schuijers et al., 2018), similar to the way Myb is regulated in mouse erythroid cells (Stadhouders et al., 2012).
In this part, we describe how functional and structural loops are formed, as well as the shape of the 3D chromatin organization at different stages of development and differentiation (Figure 4). As already mentioned, loops are critical for proper gene expression, and the integrity of these loops is indispensable for the development of various tissues and the differentiation of cells, while their disruption contributes to disease and cancer. Hence, it is important to understand how, and even when, they are formed in order to decipher how the local chromatin architecture contributes to different phenotypes.
The chromatin architecture changes significantly during gametogenesis and early embryonic development (Li et al., 2019; Zheng and Xie, 2019). In short, during spermatogenesis A/B compartments and TADs vanish in pachytene spermatocytes and then reappear at the round spermatid and mature sperm stages (Wang et al., 2019). The transcriptionally inactive mouse sperm displays chromatin conformation features, with CTCF and cohesin occupying positions similar to those in mESCs, implying an important role for these factors in shaping chromatin conformation even in the absence of transcription (Carone et al., 2014; Du et al., 2017; Jung et al., 2017). Those features, albeit weaker, were also detectable in oocytes. During oogenesis, the oocyte shows the typical higher-order structures until the germinal vesicle (GV) stage. The strength of those features declines dramatically from immature to mature oocytes, and from this point forward oocytes lack the typical interphase chromatin structures (Du et al., 2017; Ke et al., 2017). Chromatin structure at this point resembles the chromatin structure during mitosis (Naumova et al., 2013).
After fertilization, chromatin conformation undergoes dramatic reprogramming (Zheng and Xie, 2019). TADs and A/B compartments are very weak in early-stage mouse embryos, and some studies have shown that chromatin adopts a more relaxed state (Du et al., 2017; Ke et al., 2017). However, loops and TADs have also been observed in mouse zygotes. Indeed, TADs are maintained during the oocyte-to-zygote transition in mice and gradually become more prominent (Du et al., 2017; Ke et al., 2017). Genes are initially silenced, but after the zygotic genome activation (ZGA) they are activated (Clift and Schuh, 2013). ZGA occurs in the 2-cell embryo in the mouse (Du et al., 2017; Ke et al., 2017). Inhibition of ZGA did not prevent the formation of TADs (Ke et al., 2017), suggesting that TAD formation precedes their main function of transcriptional control (Hug et al., 2017; Ing-Simmons et al., 2021). Thus, TADs act first as building blocks of architecture and then as transcriptional controllers. In Drosophila, TADs are established during ZGA. Compartmentalization of the chromosomes at the zygote stage seems to be driven by a different mechanism than that of TAD formation (Ke et al., 2017). Specifically, the paternally derived chromosomes maintain all the genome structures, whereas the maternal chromosomes lose the A/B compartments. During the two-to-eight-cell stages, conformation is slowly re-established and becomes progressively stronger in both maternal and paternal chromosomes (Du et al., 2017; Flyamer et al., 2017; Gassler et al., 2017; Ke et al., 2017).
Common TADs and A/B compartments that correspond to transcriptionally active regions are present in both pluripotent and differentiated cells, but the chromatin of pluripotent cells is less compacted than in other cell types (Melcer and Meshorer, 2010; Gaspar-Maia et al., 2011). In pluripotent cells, pluripotency TFs are found in the same areas of the nucleus, establishing long-range chromatin interactions with each other (Bouwman and de Laat, 2015). The observation that gene loci controlled by pluripotency factors are located in close proximity inside the nucleus suggests a regulatory mechanism similar to phase separation (De Wit et al., 2013). For example, it was shown that many KLF4-bound regions are in close proximity to each other in pluripotent cells and are released upon differentiation or KLF4 depletion (Wei et al., 2013).
Early in differentiation, pluripotency genes are repressed and differentiation genes are subsequently activated (Phillips-Cremins et al., 2013). Early differentiation genes exhibit a permissive architecture and are in close proximity to their associated poised enhancers. Upon differentiation, their enhancers become active and activate their target gene(s) (Cruz-Molina et al., 2017). This suggests that conformation structures mediated by Polycomb proteins create a permissive regulatory environment, where poised regulatory elements stand ready to be activated (Cruz-Molina et al., 2017). Similar observations have also been made in other differentiation pathways, such as adipogenesis (Siersbaek et al., 2017).
An intriguing question is how regulatory elements are generated during evolution, because it is clear that a gene can use different regulatory elements in different cell types or during differentiation to more mature cell types. Interestingly, neocortical enhancers start out as basic proto-enhancers and evolve in complexity and size over time (Emera et al., 2016). Moreover, the rapid evolution of enhancers in liver across 20 mammalian species (18 placental species from primates, rodents, ungulates and carnivores, and 2 marsupial species) is a general feature of mammalian genomes, as observed by profiling the genomic enrichment of H3K27ac and H3K4me3 at liver enhancer regions (Villar et al., 2015). Interestingly, the majority of recently evolved enhancers are derived from exaptation of ancestral DNA and are significantly over-represented in the vicinity of positively selected genes in a species-specific manner (Villar et al., 2015). Thus, it is tempting to speculate that less "evolved" species developed "simpler" regulatory elements to control their gene expression. In these more primitive species, gene expression profiles were less complicated and more specific to each of a much smaller number of different cell types. During evolution and the appearance of more complex organisms that require an increased diversity of cell composition, the control of gene expression became more complex and new regulatory elements appeared (Ong and Corces, 2011).

FIGURE 4 | (partial caption) At an early developmental/differentiation stage most genes are silenced. Thus, inside the structural loop, the genes will either not form any loops with their cognate regulatory element (looping model; blue and orange genes) or form functional silencing-loops within structural-loops (loops-within-loops) with their cognate regulatory element lacking a crucial TF (pre-looping model; red and green genes). (D) At later developmental stages, new functional loops are formed within the pre-existing functional loops (orange gene) and/or the structural loop (blue gene), forming "loops within loops" in order to activate the orange and blue genes, respectively. At the same time, the previously pre-looped genes (red and green) are activated as a result of recruitment of the necessary TF to their cognate enhancer or due to conversion of their cognate poised enhancer to an active one.
Thus, during the early stage(s) of development, differentiation or evolution, a DNA segment with various genes and regulatory elements (Figure 4A) will mostly form structural loops to shape the chromatin (Figure 4B), since chromatin conformation during ZGA is independent of the activation of gene expression (Hug et al., 2017; Ing-Simmons et al., 2021). At an early developmental/differentiation stage, or during the oocyte-to-zygote transition, genes are often silenced. Based on the pre-looping model, some genes will already be in close proximity to their enhancer, which lacks one or more necessary TFs and is in a poised state (Figure 4C, red and yellow genes and their respective regulatory elements), forming functional "loops-within-loops" that prime their activation or silencing. In parallel, based on the looping model, other genes will be far apart from their cognate enhancer in 3D space (Figure 4C, green and orange genes and their respective regulatory elements). At later developmental/differentiation stages, genes which do not have a poised functional loop will have to form new functional loops within the pre-existing structural or silencing functional loops ("loops-within-loops") in order to become transcriptionally active (Figure 4D, green and orange genes and their respective regulatory elements). At the same time, the previously silenced genes in a poised loop (Figure 4D, red and yellow genes) are activated as a result of recruitment of the necessary TF to their cognate enhancer or due to conversion of their cognate poised enhancer to an active one.
In this context, at initial stages of development, differentiation or evolution, we speculate that the genome must have an initial regulatory element located at a distance from its target gene (Figure 5A, green regulatory element), which interacts with its target gene via a specific loop (Figure 5B). At subsequent developmental, differentiation or evolutionary stages, we hypothesize that new (cell/tissue-type-specific) regulatory elements develop between the gene and its original "early" regulatory element, and these can interact with their target gene (Figure 5C, orange and red regulatory elements). Thus, we could observe an initial big loop, which can be either functional or structural, containing other loops at later stages. The latter will form new "loops-within-loops" to accommodate new expression patterns. This type of regulation is observed when comparing the activity of different regulatory elements across multiple stages of differentiation/development/evolution (de Laat and Grosveld, 2003; Palstra et al., 2008a; Mylona et al., 2013; Pimkin et al., 2014; Villar et al., 2015; Goode et al., 2016). However, we cannot exclude the possibility that, in rare cases during development, differentiation or evolution, a regulatory element outside the original "early" loop would develop, which could also interact with its target gene at a subsequent stage (Figure 5D, yellow regulatory element). Finally, all these aforementioned interactions could satisfy either the pre-looping or the looping model (Figure 3). In an evolutionary sense, developing novel enhancers is an almost inevitable feature of multicellular organisms with different cell types and functions.
Other mechanisms are very difficult to envision for the enormous diversity in gene expression patterns, which is ultimately due to the fact that DNA is a linear molecule.

FIGURE 5 | (partial caption) At an initial stage in development, differentiation or evolution, the genome has a silenced gene and a regulatory element located at a distance from it (regulatory element 1, depicted with a green circle). (B) The original "early" regulatory element interacts with its target gene in order to activate it. (C) At subsequent developmental, differentiation or evolutionary stages, the genome could develop new regulatory elements (regulatory elements 2 and 3, depicted with orange and red circles, respectively) between the gene and its original "early" regulatory element, which interact with their target gene. (D) In some cases, at later developmental, differentiation or evolutionary stages, we could observe a new regulatory element (regulatory element 4, depicted with a yellow circle) outside the original "early" loop, which could also interact with its target gene via the formation of a new loop.
CHROMATIN CONFORMATION FROM EARLY DEVELOPMENT TO DIFFERENTIATION
The internal structure of TADs becomes more organized during development and differentiation, as TADs enable more enhancer-promoter contacts (Bonev and Cavalli, 2016). This is important during development, where specific genes need to be activated or repressed to promote specific cell programs and lineage commitment. For example, during limb development, the HoxD cluster is located at the border between two flanking regulatory elements, which are contained in two separate TADs (Lonfat and Duboule, 2015). In the beginning, the 3′ TAD is active and regulates the proximal patterning. Subsequently, this TAD is switched off, while the 5′ TAD becomes active and controls the distal structure. Activation of Hox13 switches off the 3′ TAD through a global repressive mechanism and involves interactions with enhancers in the 5′ TAD that sustain its activity (Beccari et al., 2016). Thus, the HoxD cluster contains a dynamic TAD boundary, regulating the switching between the flanking TADs and enabling proper limb development (Rodríguez-Carballo et al., 2017).
Early studies, before the discovery of TADs, showed that the lack of CTCF or the disruption of one of its binding sites in the mouse β-globin locus resulted in an altered interactome (Splinter et al., 2006). New insights into the significance of TADs during development came from a study of the HOXA locus, which is important for the development of many tissues, such as the limb. The HOXA locus is organized into two different TADs, with CTCF and cohesin binding sites at their boundary. The disruption of CTCF or cohesin recruitment at the boundary sites of these two TADs allows the spreading of euchromatin into heterochromatin domains and the subsequent ectopic activation of HOX genes during cell differentiation due to new chromatin contacts (Lonfat and Duboule, 2015). Another example is the Tfap2c and Bmp7 locus, which is split into two functional and structural domains, with each gene present in a separate TAD with its cognate enhancers. Inversions at the TAD boundary change the position of Bmp7's cognate enhancer, placing it in the TAD containing Tfap2c and thus leading to upregulation of the latter gene and downregulation of Bmp7 (Tsujimura et al., 2015). This illustrates the extent to which proper topology influences the regulation of expression of developmentally essential genes. A fine example of regulatory specificity of enhancers controlled by chromatin architecture is that of Pitx1, a regulator of hindlimb development (Kragesteen et al., 2018). In hindlimbs, Pitx1 is in close proximity to its enhancer (active), allowing normal leg morphogenesis. In forelimbs, Pitx1 is physically separated from the enhancer (inactive), allowing normal arm development. The disturbance of this specificity (e.g., due to structural variants) can cause gene mis-expression and disease in vivo (Kragesteen et al., 2018). Transcription after activation of the glucocorticoid receptor occurs without significant changes of the pre-looped chromatin interactions, enabling a rapid response (Hakim et al., 2011). Changes in chromatin topology and conformation have already been associated with and described in muscle progenitor specification and myogenic differentiation, sensory experience during post-natal brain development (Tan et al., 2021), dendritic cell development and differentiation (Chauvistré and Seré, 2020), and neural development (Kishi and Gotoh, 2018).
An interesting question is whether conformation accompanies cell lineage decisions and what the role of TFs is. During reprogramming, TFs reorganize genome structural features before changes in gene expression occur (Stadhouders et al., 2018). Somatic cell reprogramming is a useful model for investigating how genome topology affects cell fate decisions. A recent study investigating chromatin interactions in ESCs, iPSCs and NPCs revealed that reprogramming does not completely restore a number of pluripotency-related interactions (Beagan et al., 2016). CTCF was abundant in these regions in ESCs, but poor in differentiated NPCs. CTCF binding was not restored in iPSCs, causing an incomplete recovery of the pluripotent genome topology. The embryonic and trophoblast lineages differ significantly in their epigenetic landscapes and their 3D conformation (Schoenfelder et al., 2018). ESCs show an enrichment for repressive interactions between gene promoters, also involving poised/silenced enhancers (marked with H3K27me3), whereas trophoblasts show an enrichment for active enhancer-gene interactions (Schoenfelder et al., 2018). Similarly, during neuronal differentiation of ESCs, Polycomb repressive complexes 1 and 2 (PRC1 and PRC2) are known to have important functions in mediating repressive interactions. PRC1-mediated interactions are disrupted and gene-enhancer interactions become prominent (Bonev et al., 2017). Interestingly, poised enhancers in ESCs are already in close proximity to their target genes in a PRC2-dependent manner (Ngan et al., 2020). Deletion of PRC2 core components leads to activation of their target genes and embryonic lethality (Boyer et al., 2006; Bracken et al., 2006). When these enhancers are activated during differentiation of ESCs to neural progenitors, the interaction with their cognate genes is preserved, leading to their activation (Cruz-Molina et al., 2017). This is similar to the aforementioned pre-looping phenomenon. All in all, these results demonstrate that chromatin architecture changes may not cause instant transcriptional changes. Instead, structure seems to set the stage for future transcriptional changes by sculpting the chromatin environment.
X chromosome inactivation (XCI) is another well-studied example of how 3D chromatin organization impacts development, as well as of the differences between the two homologs (Lee and Bartolomei, 2013). One of the two X chromosomes in female cells is randomly inactivated to equalize the expression levels of the X-linked genes between female and male cells, early during embryonic development or upon differentiation of female ESCs (Gribnau and Grootegoed, 2012). Several regulatory elements and genes directing the XCI process are located in a small region, the X inactivation center (Xic) (Barakat et al., 2014). This region harbors the best-studied mammalian lncRNA, Xist, and its negative regulator Tsix (Lee et al., 1999). While Xist silences one X chromosome in cis, Tsix represses Xist, also in cis, and thus these two lncRNAs form a regulatory switch locus (Nora et al., 2012). The Xist locus has been proposed to be organized into two big TADs, and XCI is initiated by the upregulation of Xist on one of the two X chromosomes (Chaumeil et al., 2006; Engreitz et al., 2013). In another study, the two X chromosomes were shown to have distinct chromatin organizations. The active X presented distinct compartmentalization of active and inactive regions, while the inactive X compartments were more uniform (Tan et al., 2018). TADs were present on the active X chromosome, but not on the inactivated X chromosome (Nora et al., 2012; Giorgetti et al., 2016). In comparison, two mega-structures appeared on the inactivated X chromosome, separated by a microsatellite repeat containing several CTCF-binding sites (Horakova et al., 2012; Rao et al., 2014; Wang et al., 2016). Interestingly, a study using mathematical prediction and experimental validation suggested that three internal elements (CTCF-binding sites within the Linx, Chic1 and Xite/Tsix loci) might work in partnership with boundary elements for the formation and stabilization of the two TADs (Bartman and Blobel, 2015). The deletion of these internal elements is sufficient to disrupt the TADs and subsequently triggers ectopic expression of genes in the neighboring TAD, hence disturbing the XCI process (Nora et al., 2012; Dixon et al., 2016).
Overall, TAD formation and maintenance, as well as the specificity of enhancer-promoter interactions, play key roles during development and differentiation to ensure the finely tuned regulation of gene expression and lineage decisions.
CHROMATIN CONFORMATION IN DISEASE AND CANCER
Human diseases are often caused by structural variations (SVs) in the genome, through disruption of genes or changes in gene dosage (Ibrahim and Mundlos, 2020). While their effects in coding regions can be easily predicted, their occurrence in non-coding regions requires further investigation to address their influence on gene expression, for example in the case of limb formation involving the TAD-spanning WNT6/IHH/EPHA4/PAX3 locus (Lupiáñez et al., 2016). SVs have the potential to interfere with genome architecture, causing disease phenotypes (Lupiáñez et al., 2015; Spielmann et al., 2018). Depending on the type, but also the extent, of the SV, the effect on gene regulation can vary considerably (Ibrahim and Mundlos, 2020).
Disruption of genome architecture may lead to altered gene expression in a variety of ways and, as a result, to disease phenotypes. Such disruptions can be separated into inter-TAD and intra-TAD alterations.
Inter-TAD alterations can disrupt and rewire the 3D chromatin architecture, resulting in changes of TAD boundaries, mis-regulation of important genes with deleterious effects, and relocation of regulatory elements such as enhancers and/or silencers. Inter-TAD alterations have many causes. Disruption of TAD borders leads to contacts between enhancers and genes that are otherwise insulated from each other, and thereby to the ectopic activation of those genes; this phenomenon is called "enhancer adoption" or "enhancer hijacking" (Table 1; Lettice et al., 2011; Northcott et al., 2014; Lupiáñez et al., 2016; Kaiser and Semple, 2017). Deletions result in TAD fusion (Table 1; Katainen et al., 2015; Lupiáñez et al., 2015; Flavahan et al., 2016), inversions in a swap of DNA regions (TAD shuffling), and duplications or translocations of regulatory or structural elements in new domains (neo-TADs) (Table 2; Gröschel et al., 2014; Northcott et al., 2014; Lupiáñez et al., 2015; Franke et al., 2016; Weischenfeldt et al., 2017). Furthermore, inter-TAD alterations could be caused by inversions or translocations of regulatory elements, which may result in gain-of-function events by coupling enhancers with newly associated promoters, loss-of-function events by separating enhancers from their associated promoters, or a combination of the two (Table 2; Lupiáñez et al.; Spielmann et al., 2018).

TABLE 1 | Summary of inter-TAD alterations (enhancer adoption and TAD fusion), the disease/abnormality they cause and their description.

Locus | Disease/abnormality | Description

Enhancer adoption
LMNB1 locus | Adult-onset demyelinating leukodystrophy (ADLD) | A deletion eliminates a TAD boundary, leading to new interactions between the LMNB1 promoter and three non-cognate enhancers and its subsequent activation, resulting in progressive central nervous system demyelination (Giorgio et al., 2014)
FOXG1 locus | Rett syndrome | A telomeric deletion, including the TAD border, results in the ectopic activation of FOXG1 by active enhancers in the brain (Allou et al., 2012)
GFI1B locus | Medulloblastoma | Somatic structural variants place GFI1 or GFI1B near active enhancer sites, resulting in their activation (Northcott et al., 2014)
SNCAIP locus | Group 4 medulloblastomas | A duplication of the SNCAIP gene results in the ectopic activation of the putative oncogene PRDM6 (Arabzade et al., 2020)

TAD fusion
EPHA4 locus | Brachydactyly | Deletions in the EPHA4 locus that include a TAD border result in a fusion of the neighboring TADs, which attaches a cluster of limb-associated EPHA4 enhancers to the PAX3 gene, with its concomitant mis-expression (Lupiáñez et al., 2015)
Six TAD boundaries encompassing T-ALL-related genes | T-cell acute lymphoblastic leukemia (T-ALL) or medulloblastoma | TAD disruption leads to ectopic proto-oncogene activation and abnormal cell proliferation (Northcott et al., 2014; Hnisz et al., 2016b; Weischenfeldt et al., 2017). CRISPR-engineered deletions of the TAD boundaries near the known oncogenes TAL1 and LMO2 result in new interactomes of those oncogenes with distal enhancers, leading to their aberrant activation (Hnisz et al., 2016b)
NOTCH1 locus | Ovarian cancer | Downregulation of the NOTCH1 gene due to its altered interactome as a result of mutations in the CTCF sites that disrupt the TAD boundary (Ji et al., 2016)
Various CTCF binding sites | Colorectal cancer | Frequently mutated CTCF binding sites lead to TAD boundary disruption and altered interactomes between genes and their regulatory elements (Katainen et al., 2015)
IRS4 locus | Lung squamous carcinoma, sarcomas and cervical squamous carcinoma | Deletions occurring at TAD boundaries coinciding with CTCF recruitment downstream of the IRS4 locus led to IRS4 overexpression (Weischenfeldt et al., 2017)
NEK6 locus | B cell lymphoma cell lines | Deletion of all CTCF-binding sites at the NEK6 super-enhancer borders decreased the expression of NEK6 while increasing the expression of the neighboring LHX2 gene

It should be mentioned here that, while the above studies stress the insulating role of TAD boundaries, it is important to keep in mind that TAD boundaries may not be the only component needed to maintain them (Anania and Lupiáñez, 2020). TADs did not fuse completely after serial deletions at the Sox9 locus; this occurred only after the deletion of other CTCF sites within the locus (Despang et al., 2019). Deletions of CTCF-binding sites at the Shh locus result in structural changes, but TAD insulation is maintained. Overall, these results reveal the ability of TAD borders to successfully organize the genome into distinct regulatory domains, as well as their ability to work and communicate with internal structural elements. Interestingly, in multiple myeloma, 30% of the breakpoints are located at, or close to, TAD boundaries. The number of TADs is increased by 25% and they are smaller when compared to normal B cells, indicating that genomic rearrangements and translocations are driving forces in chromatin topology, creating new TADs. The smaller size of TADs in cancer cells compared to their healthy controls is also observed in prostate cancer and therefore seems most likely to be a general phenomenon in cancer cells. In the case of prostate cancer, this smaller size is the consequence of the splitting of one TAD in two, with the majority of the TAD boundaries (∼98%) being the same between the prostate cancer cells and the normal ones (Taberlay et al., 2016). In prostate cancer, a deletion on 17p13.1 encompassing the TP53 tumor suppressor locus leads to the division of a single TAD into two distinct smaller TADs, resulting in new chromatin interactomes of the enhancers, promoters and insulators within the TADs and changing gene expression (Taberlay et al., 2016).
Similarly, in mammary epithelial cells and breast cancer cells, several TADs were divided into multiple sub-TADs while keeping the same outer boundaries, as a result of various genomic alterations (Barutcu et al., 2015). In prostate cancer cells (and probably in most cancers) the TADs are smaller (2-4 Mb) than in normal prostate cells (∼8 Mb). These new, smaller TADs reside within the normal TAD architecture rather than forming entirely new domains, with the majority of the TAD boundaries (∼98%) being the same between prostate cancer cells and normal cells (Taberlay et al., 2016).
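To make the size comparison above concrete, the sketch below shows how TAD sizes could be tallied from a list of domain coordinates (for example, TADs called from Hi-C maps); the coordinates are invented placeholders for illustration and are not data from the cited studies.

```python
import pandas as pd

def tad_size_summary(tads):
    """Summarize TAD sizes (in Mb) from a table with chrom/start/end columns."""
    sizes_mb = (tads["end"] - tads["start"]) / 1e6
    return {"n_tads": len(tads),
            "median_size_mb": round(sizes_mb.median(), 2),
            "mean_size_mb": round(sizes_mb.mean(), 2)}

# Hypothetical TAD calls (coordinates invented for illustration).
normal = pd.DataFrame({"chrom": ["chr1"] * 3,
                       "start": [0, 8_000_000, 16_500_000],
                       "end":   [8_000_000, 16_500_000, 24_000_000]})
tumor = pd.DataFrame({"chrom": ["chr1"] * 6,
                      "start": [0, 3_000_000, 6_500_000, 10_000_000, 13_000_000, 16_500_000],
                      "end":   [3_000_000, 6_500_000, 10_000_000, 13_000_000, 16_500_000, 20_000_000]})

print("normal:", tad_size_summary(normal))  # fewer, larger domains
print("tumor:", tad_size_summary(tumor))    # more, smaller domains
```

A higher domain count together with a lower median size in the tumor sample would reproduce the pattern described above for prostate cancer and multiple myeloma.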
Because of their ability to co-localize in the nucleus and/or their abundance within TAD boundaries, transposable elements (TEs) have been related to genome architecture (Dixon et al., 2012; Cournac et al., 2016). It has been shown that during the evolution of mammalian lineages, activation of retrotransposable elements triggered an increase in CTCF-binding events (Schmidt et al., 2012). As shown by changes in chromatin states, many of the new CTCF sites acted as chromatin insulators, affecting genome architecture and transcription. Consistent with this observation, integration of human T-lymphotropic virus type 1 (HTLV-1) introduced an ectopic CTCF-binding site, which could form new loops and induce transcriptional changes at the new locus (Melamed et al., 2018). HTLV-1 causes chronic inflammation in about 10% of infected hosts.
Changes in the interactome and local chromatin architecture have also been associated with single nucleotide polymorphisms (SNPs) causing intra-TAD alterations. Intra-TAD alterations lead to abnormal transcriptional control of the genes inside the TAD, without altering its overall conformation. Table 2 summarizes inter-TAD alterations (TAD shuffling, inter-TAD loss- or gain-of-function alterations, and neo-TADs), the disease or abnormality they cause, and their description.
TAD shuffling:
- Wnt6/Epha4 locus, F-syndrome: An inversion at the Wnt6/Epha4 locus misplaces the Epha4 enhancers near the Wnt6 gene, causing its mis-expression in the developing limb bud (Lupiáñez et al., 2015; Kraft et al., 2019).
- Ihh/Epha4 locus, polydactyly: Duplication of the same enhancers and their rearrangement in front of the Ihh gene induce overexpression of Ihh (Kraft et al., 2019).
- TFAP2A locus, branchio-oculofacial syndrome: Inversion of the TFAP2A TAD resulted in lower TFAP2A expression because the promoter was separated from its associated enhancers (Laugsch et al., 2019).
- Shh locus, digit syndactyly: An inversion at the Shh locus places the Shh gene in a TAD together with a limb enhancer, which induces its activation (Lettice et al., 2011).
- MEF2C locus, 5q14.3 microdeletion syndrome: Patients with balanced MEF2C translocations have been shown to be affected by the separation of promoters from their associated enhancers. The influence of these translocations was confirmed in patient-derived LCLs, which showed lower MEF2C expression (Redin et al., 2017).
- GATA2 locus, acute myeloid leukemia sub-types: A chromosomal inversion and a translocation in chromosome 3 at two different breakpoints place the GATA2 enhancer in the same TAD as the EVI1 oncogene. The enhancer is then in close proximity to the EVI1 promoter, triggering its activation, which is responsible for the development of the disease (Gröschel et al., 2014).
- IGF2 locus, colorectal cancer: Recurrent tandem duplications encompassing a TAD boundary result in new interactions between IGF2 and a cell-specific super-enhancer located in the adjacent TAD, leading to its >250-fold overexpression (Weischenfeldt et al., 2017). The duplications of this TAD boundary are tandem rather than inverted or dispersed, suggesting that the relative orientation of the enhancer and IGF2 is probably important for the activation of IGF2 (Beroukhim et al., 2016).

Inter-TAD loss- or gain-of-function alterations:
- IDH locus, gliomas: Mutations in the IDH gene result in accumulation of 2-hydroxyglutarate, which subsequently represses TET proteins. This causes hyper-methylation of CpG sites and increased methylation of CTCF sites, affecting CTCF binding and the respective TAD boundaries. New interactions are consequently established between the oncogene PDGFRA and constitutive enhancers that are normally located outside its normal TAD (Flavahan et al., 2016).
- FMR1 locus, fragile X syndrome (FXS): The CGG triplet repeat (a short tandem repeat, STR) within the FMR1 gene expands in an erratic way, and the FMR1 locus boundary is disrupted because CTCF is unable to bind, owing to abnormal DNA methylation levels. As the boundary is disrupted, FMR1 is silenced because it is separated from its associated regulatory elements, which are now located in another TAD (Anania and Lupiáñez, 2020).

Neo-TADs:
- Kcnj2 and Sox9 loci, limb malformation: A neo-TAD in which Kcnj2 interacts with the Sox9 regulatory region results in overexpression of Kcnj2 (Franke et al., 2016).
- IGF2 locus, cancer: Due to duplications of neighboring TADs, the new TAD incorporates the IGF2 gene and a lineage-specific super-enhancer, resulting in oncogenic mis-regulation of the locus (Weischenfeldt et al., 2017).

Many GWAS SNPs have now been connected to putative causative genes in hematopoietic cell types (Javierre et al., 2016; Mumbach et al., 2017). How sequence variations in putative regulatory elements lead to the gene expression alterations that drive complex illnesses is largely unknown. On one hand, SNPs could prevent TFs or architectural proteins from interacting with their regulatory elements, leading to lower expression of their associated genes (Table 3; Schoenfelder and Fraser, 2019). For example, SNPs can affect the recruitment of the LDB1 complex to the MYB enhancer, impairing its interaction with the MYB promoter, decreasing MYB expression and resulting in an increase of HbF expression (Stadhouders et al., 2014). On the other hand, SNPs could result in overexpression of target genes and/or their mis-expression in different cell types (Schoenfelder and Fraser, 2019). Gain- or loss-of-function mutations in regulatory elements such as enhancers (or silencers) can affect the transcription of their cognate gene(s) (Schoenfelder and Fraser, 2019), provided that there is no other regulatory element compensating for that gain or loss (Heinz et al., 2013).
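As a rough illustration of how such SNP-to-target-gene assignments are made in practice, the sketch below checks whether a variant falls inside a chromatin loop anchor and, if so, reports the gene at the opposite anchor; the loop table, coordinates and gene names are invented placeholders, not data from the cited studies.

```python
# Minimal sketch: assign a SNP to a candidate target gene via loop anchors.
# Loops are given as (chrom, a_start, a_end, b_start, b_end, gene_at_b);
# a full implementation would also test the second anchor and strand context.
loops = [
    ("chr6", 135_100_000, 135_110_000, 135_400_000, 135_410_000, "MYB"),
    ("chr16", 11_000_000, 11_010_000, 11_200_000, 11_210_000, "DEXI"),
]

def candidate_targets(chrom, pos, loop_table):
    """Return genes whose promoter anchor is looped to an anchor containing the SNP."""
    hits = []
    for c, a_start, a_end, b_start, b_end, gene in loop_table:
        if c == chrom and a_start <= pos < a_end:
            hits.append(gene)
    return hits

print(candidate_targets("chr6", 135_105_500, loops))  # -> ['MYB']
```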
Another study attempted to identify the causative genes at GWAS neurological disease loci by linking the SNPs with gene promoters and enhancers (Lu et al., 2020). They concluded that a SNP may have only subtle effects on its looped target gene in healthy donors, but plays a more prominent role when the locus gains a disease-specific enhancer in patients. Their results indicated that high-quality Hi-C loops have a unique value in the study of disease genetics (Lu et al., 2020). Other GWAS studies have identified mutations in regulatory elements that could contribute to inflammatory bowel disease etiology by altering gene expression (Meddens et al., 2016). Duplications can change the copy number of regulatory elements, resulting in loss- or gain-of-function mutations, similar in principle to the gene dosage alterations occurring in inter-TAD duplications and translocations (Tables 3, 4; Spielmann et al., 2018). While SNPs could alter the content of specific enhancers, resulting in abnormal expression patterns, mutations in genes encoding TFs or architectural proteins could have similar consequences (Schoenfelder and Fraser, 2019). Cohesinopathies and laminopathies, however, are the two groups of structural-protein-associated human diseases that receive the most attention. Cohesinopathies are caused by mutations in genes associated with the cohesin complex and/or its regulators (Banerji et al., 2017; Norton and Phillips-Cremins, 2017; Davis et al., 2018; Olley et al., 2018; Krumm and Duan, 2019). Table 3 summarizes intra-TAD alterations (single nucleotide polymorphisms and gain-of-function alterations), the disease or abnormality they cause, and their description.
Single nucleotide polymorphisms:
- HBS1L-MYB locus, hemoglobinopathies: SNPs affect the recruitment of the LDB1 complex to the MYB enhancer, impairing its interaction with the MYB promoter. The consequent decrease in MYB expression results in an increase of HbF expression (Stadhouders et al., 2014).
- CLEC16A locus, autoimmune disease: SNPs in intron 19 of the CLEC16A gene have been shown to promote the interaction of the intron with the adjacent DEXI gene, driving its expression (Davison et al., 2012).
- FTO locus, obesity: An intron of the FTO gene containing obesity-associated SNPs interacts with the distal IRX3 gene and thus controls its expression (Smemo et al., 2014; Schoenfelder and Fraser, 2019).
- SNCA locus, Parkinson disease: A common Parkinson disease SNP in a non-coding distal enhancer prevents two repressive transcription factors, EMX2 and NKX6-1, from binding to a regulatory element, resulting in SNCA transcriptional upregulation (Soldner et al., 2016).
- Various loci, chronic kidney disease (CKD): SNPs in both coding and non-coding regions have been discovered in studies of CKD, and dysregulated expression of the 23 genes identified to be associated with such SNPs is possibly a contributing factor in CKD pathophysiology (Brandt et al., 2018).

Intra-TAD gain-of-function alterations:
- SHH locus, polydactyly: Point mutations in the Sonic hedgehog (SHH) regulatory region ZRS result in the ectopic expression of SHH at the anterior margin in mouse. Although not formally demonstrated in this study, these mutations allow the formation of chromatin looping between the ZRS region and the SHH promoter (Lettice et al., 2003).
- MYC locus, lung adenocarcinoma: Amplification of MYC-regulating enhancers results in slightly higher MYC expression than in samples without amplification of MYC enhancers. The enhancer-amplified samples had MYC expression levels comparable to samples with amplification of the MYC coding region (Agrawal et al., 2019).
- IHH locus, craniosynostosis and synpolydactyly: Duplications of regulatory elements within the IHH locus lead to mis-expression or overexpression of IHH, thereby affecting the complex regulatory signaling network during digit and skull development, respectively (Klopocki et al., 2011).
- CTSB locus, keratolytic winter erythema: Overexpression of CTSB as a result of enhancer duplications (Ngcungcu et al., 2017).
- Various loci, prostate cancer: SNPs associated with prostate cancer co-localize with and affect regions of active histone modification and transcription factor binding sites. 15 of the 17 genes identified in these loci exhibit a substantial change in expression, suggesting that the genes physically interacting with risk loci are associated with prostate cancer (Du et al., 2016).
- Various loci, atherosclerotic disease: 294 additional candidate expressed genes for coronary artery disease (CAD) and large artery stroke (LAS) have been identified as potential factors in the pathophysiology of human atherosclerotic disease (Haitjema et al., 2017).
- Various loci, inflammatory bowel disease (IBD): Mutations in DNA regulatory elements (DREs) can contribute to IBD etiology by altering gene expression (Meddens et al., 2016).
- Pitx1 locus, Liebenberg syndrome: Deletion mutations upstream of the hindlimb-expressed Pitx1 gene result in intra-TAD conformation changes, merging a forelimb and a hindlimb Pitx1 gene enhancer (Kragesteen et al., 2018). In the same syndrome, translocation of two enhancers from chromosome 18 upstream of PITX1 on chromosome 5 (TAD shuffling) resulted in increased PITX1 expression (Spielmann et al., 2012).

CTCF- and cohesin-associated SNPs have been related to a number of human disorders and developmental defects. The significance and role of genome-organizing factors like CTCF and the cohesin complex have been highlighted for a number of diseases. For example, CTCF depletion leads to pathological effects that are quite comparable to heart failure (Rosa-Garrido et al., 2017). Altered interactions and accessibility were shown at a substantial number of enhancer regions, and the genes in the surrounding chromosomal regions were implicated in cardiac pathological pathways (Rosa-Garrido et al., 2017). Another example is the laminopathies, caused by mutations in the nuclear lamin (LMNA) and lamin B receptor (LBR) genes. Given that LADs organize a large portion of the genome, the nuclear lamina and its components appear to play an important role in genome architecture. Laminopathies are distinct from other disorders in that a variety of disorders may develop from different mutations located in the same gene (Worman and Bonne, 2007). Cancer is a particularly important disease area in which changes in the interactome matter. Alterations in TAD boundaries, which are observed in cancer, can lead to oncogene activation by affecting gene regulation in the flanking TADs via the establishment of new, unusual promoter-enhancer interactions (Figure 6). Oncogene activation by TAD disruption and consequent enhancer adoption has been described in leukemia (Gröschel et al., 2014; Hnisz et al., 2016b), neuroblastoma (Peifer et al., 2015), colorectal cancer (Weischenfeldt et al., 2017), medulloblastoma (Northcott et al., 2014), glioma (Flavahan et al., 2016), and sarcoma and squamous cancers (Weischenfeldt et al., 2017).
Notably, the most prominent alterations in binding sequences at TAD boundaries are located at CTCF binding motifs (Ji et al., 2016), although it should be noted that many CTCF binding sites are not boundaries. Approximately 11% of 922 deletion cases affect TAD boundaries in the vicinity of a disease-associated gene, resulting in "enhancer adoption" (Swaminathan et al., 2012). Table 4 summarizes intra-TAD loss-of-function alterations, the disease or abnormality they cause, and their description.
- Shh locus, preaxial polydactyly (PPD): In the ZRS, two ETV4/ETV5 binding sites have been discovered. In transgenics, a single ETV binding site is sufficient to suppress ectopic expression; the absence of both sites leads to loss of repressor activity and, as a result, to ectopic Shh expression in the limb bud (Lettice et al., 2012).
- PAX6 locus (aniridia), PTF1A locus (pancreatic agenesis) and TBX5 locus (congenital heart disease): Point mutations (disruption of binding sites) in enhancers of PAX6, PTF1A and TBX5 impair the expression of these genes. While not formally demonstrated, these studies suggest that the point mutations impair chromatin looping between these enhancers and their associated promoters (Smemo et al., 2012; Bhatia et al., 2013; Weedon et al., 2014).
- SOX9 locus, campomelic dysplasia: Sex reversal occurs when the relevant testis enhancer of SOX9 is deleted, while deletions and point mutations further upstream induce Pierre-Robin syndrome, which is characterized by cranial skeleton growth defects but normal sexual development (Benko et al., 2011).
- DYNC1I1 locus, split-hand/split-foot malformation (SHFM): Exons 15 and 17 of DYNC1I1 act as tissue-specific limb enhancers of DLX5/6. Enhancer deletions in the DYNC1I1 gene result in downregulation of the DLX5/6 genes about 1 Mb away (Allen et al., 2014; Tayebi et al., 2014).
- ATOH7 locus, non-syndromic congenital retinal non-attachment (NCRNA): A deletion that covers a distal cis-regulatory element upstream of ATOH7 is responsible for NCRNA (Ghiasvand et al., 2011).
- SHH locus, holoprosencephaly (HPE): Loss of function (disruption of binding sites) of the Shh brain enhancer-2 (SBE2) in the hypothalamus of transgenic mouse embryos was caused by a rare nucleotide variant upstream of the SHH gene found in an individual with HPE (Jeong et al., 2008).
- MYC locus, cleft lip with or without cleft palate (CL/P): Deletion of a 640-kb non-coding region at 8q24, which contains distal cis-acting enhancers that regulate Myc expression in the developing face, causes modest facial morphological changes in mice and, on rare occasions, CL/P.
- BCL11A locus, β-hemoglobinopathies: A common variant in an erythroid enhancer of BCL11A is associated with reduced TF binding, modestly diminished BCL11A expression, and elevated HbF (Bauer et al., 2013).

A comprehensive analysis among various cancer cell lines indicated that the formation of neo-TADs encompassing cancer driver genes is the result of SV alterations in cancer cells (Dixon et al., 2018). However, whether neo-TAD formation is a recurrent phenomenon in a given cancer cell type needs to be investigated further.
THE IMPORTANCE OF A REFINED IDENTIFICATION OF CHROMATIN CONFORMATION AND POTENTIAL THERAPEUTIC APPROACHES
An important question is which underlying mechanism protects TAD boundaries from deletions and disruptions. Using machine learning approaches, TAD boundaries were recently categorized based on strength (Gong et al., 2018). Strong TAD boundaries are less frequently lost in cancer, as they act as building blocks of the genome and encompass super-enhancers (Gong et al., 2018). In cancer, strong boundaries are notably protected from SVs and are co-duplicated with super-enhancer elements (Gong et al., 2018). These observations, together with the observation that enhancers lead to aberrant activation of oncogenes upon genetic or epigenetic alterations, highlight the importance of chromatin architecture integrity (Lupiáñez et al., 2015; Flavahan et al., 2016; Hnisz et al., 2016b; Weischenfeldt et al., 2017). An interesting question is whether mis-regulation of TFs causes the altered 3D chromatin organization or whether the opposite takes place. Intriguingly, studies advocate both options. A gene fusion in prostate cancer causes the overexpression of oncogenic ERG, resulting in changes in chromatin organization and territories encompassing genes associated with aggressive prostate cancer (Rickman et al., 2012). This hypothesis may also hold for other TFs whose aberrant expression is involved in many other cancers (Rickman et al., 2012). In contrast, a chromosomal inversion and a translocation in chromosome 3 at two different breakpoints, tethering the enhancer of GATA2 in the same TAD as EVI1, activate expression of EVI1 and downregulate GATA2, resulting in the development of acute myeloid leukemia (Gröschel et al., 2014).
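As an illustration of the kind of boundary-strength classification mentioned above, the sketch below trains a simple classifier on per-boundary features; the feature names, labels and data are invented for demonstration and do not reproduce the pipeline of Gong et al. (2018).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-boundary features: CTCF signal, insulation score,
# and number of overlapping super-enhancers. Labels: 1 = "strong", 0 = "weak".
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(5, 2, n),       # CTCF ChIP-seq signal (arbitrary units)
    rng.normal(0.5, 0.2, n),   # insulation score
    rng.poisson(1, n),         # super-enhancer count near the boundary
])
y = (X[:, 0] + 5 * X[:, 1] + X[:, 2] + rng.normal(0, 1, n) > 8.5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```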
Thus, is chromatin architecture characteristic of each disease, and can we predict the effect of SVs on chromatin organization? A support vector machine classifier (3D-SP) can separate leukemia sub-types based on the information contained in the chromatin architecture, specifically the interactome of the HOXA gene cluster, in various leukemia cell lines (Rousseau et al., 2014), while a recently developed approach can be used to predict in silico the altered 3D conformation resulting from structural variants (Bianco et al., 2018). Hence, the improvement of new chromatin conformation techniques can help to better understand the biological effect of newly discovered structural variants and TAD alterations in the human genome that are linked to uncharacterized genetic disorders or diseases, and to evaluate their role in chromatin architecture and transcriptional control. Interestingly, chromatin conformation capture techniques that employ oligonucleotide-based selection, like T2C (Kolovos et al., 2018) and capture-promoter Hi-C (cpHi-C) (Schoenfelder et al., 2015a), can identify the interactome of specific fragments. Especially in cases where SNPs are heterozygous in these fragments, oligonucleotides designed for the two alleles can discriminate the interactome of the wild-type allele from that of the allele containing the SNP.

FIGURE 6 | An example of TAD disruption in cancer and rewiring of promoter-enhancer proximity. The upper panel depicts two distinct TADs, the left containing a gene (green box) and two regulatory elements (RE1 and RE2, red boxes), each of which can be an enhancer, a poised enhancer or a silencer. The right TAD contains one regulatory element (RE3) that would be compatible with the gene. In the upper panel, the gene is located in a confined space with RE1 and RE2 (round black circles), resulting in its normal transcriptional activation (if RE1 and RE2 are enhancers) or its repression (if RE1 and RE2 are poised enhancers or silencers). Mutation or deletion of the CTCF sites (yellow) located at the boundary between the TADs results in the reorganization of the TAD topology and fusion of the two TADs into one. Thus, in the bottom panel, the gene is now in close proximity to (and interacts frequently with) RE3 (round black circle), leading to its expression also being driven by RE3 if RE3 is an enhancer, or to its downregulation if RE3 is a silencer. Different combinations of REs could have different effects on the expression level of the gene. If contacts with RE1 and RE2 are diminished, this could lead to less expression from those two enhancers while the overall expression level of the gene remains the same. On the other hand, the combination of RE1, RE2, and RE3 could act as a super-enhancer and yield higher expression of the gene.
Targeting chromatin interactions could potentially provide therapeutic approaches (Babu and Fullwood, 2015). Perturbing promoter-enhancer interactions would permit fine-tuning of the expression of target genes in a reversible and specific manner. However, this faces many difficulties that would need to be overcome. CTCF, cohesin and other TFs mediate many different chromatin interactions, and these TFs are frequently also involved in signaling pathways; thus, a systemic perturbation of TFs would cause many off-target effects. Moreover, proteins that mediate chromatin interactions are located in the nucleus and are therefore difficult to perturb with antibodies or small-molecule inhibitors. Various epigenetic regulators are involved in cancer, but whether they are involved in chromatin organization is poorly understood. Many drugs have been developed against epigenetic regulators, but it has not yet been examined whether they affect chromatin interactions and compartmentalization, although it is likely that many will affect genomic interactions directly, by enabling or preventing the binding of TF-type proteins (e.g., CTCF is DNA-methylation sensitive), or indirectly via changes in the transcriptome. Interestingly, a recent study has identified 50 factors that are potentially important for genome organization (Shachar et al., 2015). However, this study applied an siRNA approach, which is known to cause off-target effects. To overcome the non-specificity of targeting such proteins, a new tool (CLOuD9) for the precise manipulation of 3D chromatin structure and chromatin looping has been developed, employing the CRISPR/Cas9 approach to establish stable chromatin loops (Morgan et al., 2017). This approach may be useful in cancer diagnostics, where chromosomal rearrangements interrupt genomic organization and alter gene expression. Thus, screening studies, preferably using drugs or a CRISPR/Cas9 approach targeting alterations of chromatin conformation, could unveil new factors that mediate chromatin interactions and establish them as potential new therapeutic targets. Perhaps even more promising would be the development of genome editing tools to alter the binding sites of TFs or CTCF using CRISPR/Cas9 and homing technology to target the appropriate cells (Cruz et al., 2021).
It is clear from the studies above and many others that chromatin conformation plays a key role in cancer. Thus, understanding the modulation of chromatin interactions will unveil the underlying mechanisms of diseases, development and cancer and identify new promising therapeutic targets.
CONCLUSION AND FUTURE PERSPECTIVES
The integrity of the 3D chromatin architecture and the genome interactome is important to ensure proper transcriptional control. Alterations of this topology are often correlated with diseases such as cancer. Since the genomes of cancer cells, or of cells derived from other pathologies, are often unstable, TAD disruption is frequently observed, resulting in altered gene expression profiles that lead to tumorigenesis or other pathology. Hence, mapping the precise location of TADs, their boundaries and other structures is an integral part of deciphering the genetic basis of gene expression in cancer and other diseases, and may provide new therapeutic targets. Moreover, the recent development of the CRISPR-Cas9 technique (Ran et al., 2013) could make it possible to correct altered TAD boundaries in patient cells, offering an exciting potential therapeutic strategy. Recently developed high-resolution chromatin conformation techniques [e.g., Hi-C (Rao et al., 2014), T2C (Kolovos et al., 2018)] that offer sub-kbp resolution could unveil the precise location of TAD boundaries and their detailed features, holding the key to a better understanding of disease. Finally, we propose a model integrating recent developments in chromatin architecture with the formation of either structural or functional loops, ensuring proper transcriptional control. Understanding how these loops are formed and how they evolve is essential to identify new mechanisms triggering pathologies such as cancer and to develop new, efficient therapeutic strategies.
However, despite the spectacular recent advances in the field of chromatin architecture and gene regulation, many questions remain to be answered. Some of these are: does gene activation precede locus conformation, or vice versa? What is the underlying mechanism creating TADs and protecting TAD boundaries from deletions and disruptions; is it, for example, continuous loop extrusion? Does 3D conformation accompany cell lineage decisions? How were regulatory elements generated during evolution? Are there as yet unknown TFs that contribute to 3D genome structure, and how can we efficiently identify them? Answering these questions will further our understanding of the dynamics and forces of chromatin organization that enable all the necessary functions of cells.
Negative effect of varicocele on sperm mitochondrial dysfunction: A cross-sectional study
Abstract

Background: Varicocele is an abnormal dilation and enlargement of the scrotal venous pampiniform plexus that impairs normal blood drainage and can ultimately lead to infertility if not treated.

Objective: This study aimed to determine the impact of mitochondrial status, through mitochondrial membrane potential (MMP) and adenosine triphosphate (ATP) assessment, and its correlation with semen parameters, in order to clarify the impact of sperm mitochondrial health on normal sperm functionality.

Materials and Methods: This analytical cross-sectional study was conducted on 100 men, including 50 in the normozoospermic group (normal) and 50 in an infertile group with varicocele who had not undergone varicocelectomy (varicocele), referred to the Infertility Research and Treatment Center, ACECR Khuzestan, Iran. Routine semen analysis was performed according to World Health Organization guidelines, and the DNA fragmentation index (DFI), the MMP assay, ATP content, and apoptosis were assessed for all samples.

Results: The results showed that sperm concentration, progressive motility, normal morphology, MMP, and ATP content in the varicocele group were significantly lower than in the normal group. In addition, the sperm DNA fragmentation index was significantly higher in the varicocele group in comparison with the normal group.

Conclusion: Reduced MMP and ATP content, together with the loss of sperm parameter quality and the increase in sperm DNA fragmentation, strongly implicate sperm mitochondrial dysfunction in men with varicocele.
Introduction
Varicocele is the most common male infertility factor, involving abnormal dilation and enlargement of the pampiniform plexus, which drains blood from each testicle into a single testicular vein.

The impaired venous drainage is associated with hypoxia, which may compromise testicular spermatogenesis, testicular volume, sperm parameters, embryo implantation, and pregnancy rates. Varicocele is the second most common cause of treatable infertility and occurs in around 15-20% of the general population. The incidence of varicocele increases with age (1).

It accounts for 35% of primary infertility cases and up to 80% of secondary infertility cases (2, 3). There is controversy over the impact of varicocele, before and after surgical treatment, on sperm parameters and fertility. Numerous studies have shown that varicocelectomy improves not only semen parameters but also intracytoplasmic sperm injection outcomes (4, 5).

The exact mechanism of varicocele remains unclear. Hypoplasia and insufficient growth of the testis in adolescence due to congenital or acquired valve defects, venous obstruction, and anatomical variations contribute to its pathogenesis. These factors affect the ultrastructure of the testis and testicular hormone function through high testicular temperature, scrotal hyperthermia, and blood reflux in the testicular vein, and may cause pain and swelling of the testis that requires proper medical examination to treat infertility in varicocele cases (6, 7). Recent studies have shown a genetic downregulation of heat-shock proteins, which are required for the neutralization of oxidative stress (OS), in varicocele.

Testicular hyperthermia can induce OS, increase reactive oxygen species (ROS) products (i.e., hydroxyl, peroxyl and hydroperoxyl radicals, superoxide, and nitric oxide), and induce sperm dysfunction (8, 9). An imbalance between ROS production and antioxidant protection leads to OS and damage to lipids, proteins and nucleic acids; it induces fragmentation of sperm DNA and mitochondrial nucleic acids, damaged chromatin strands and crosslinks during spermatogenesis and nuclear protamination in varicocele patients (10, 11).

Written consent was obtained from the participants, and a questionnaire containing personal information was completed for each person.
Statistical analysis
Statistical analysis was performed using SPSS 22 (IBM Corp., Armonk, NY, USA). Descriptive statistics (mean and standard deviation) were used in the study. The Kolmogorov-Smirnov test was used to examine the normality of the quantitative variables.

The test results showed that the variables were not normally distributed; therefore, nonparametric tests were adopted to analyze the data. The Mann-Whitney U test was used to compare the statistical differences between the 2 groups. Spearman correlation was used to analyze the correlations between DFI, MMP, ATP, apoptosis, and semen parameters. The significance level was set at p < 0.05.
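For readers who want to reproduce this kind of analysis outside SPSS, the sketch below runs the same nonparametric tests with SciPy on made-up values; the arrays are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical example values (not the study's data):
# sperm MMP readings for the normal and varicocele groups,
# plus DFI values for a Spearman correlation against MMP.
mmp_normal = [78, 82, 75, 80, 84, 79, 77, 83]
mmp_varicocele = [61, 58, 66, 59, 63, 60, 64, 57]
dfi = [12, 10, 15, 11, 9, 13, 14, 8]

# Normality check (Kolmogorov-Smirnov against a fitted normal distribution)
print(stats.kstest(mmp_normal, "norm",
                   args=(np.mean(mmp_normal), np.std(mmp_normal, ddof=1))))

# Group comparison (Mann-Whitney U, two-sided)
print(stats.mannwhitneyu(mmp_normal, mmp_varicocele, alternative="two-sided"))

# Correlation between MMP and DFI within one group (Spearman)
print(stats.spearmanr(mmp_normal, dfi))
```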
Results
Among 500 men included in the current study,
Discussion
The pathogenesis of varicocele is not clearly understood. Our purpose in the present study was to show that MMP and ATP content assessments are essential tests for evaluating sperm mitochondrial function; they correlate significantly with DFI and apoptosis, and also with semen parameters.
In this study, sperm parameters such as concentration, motility, and morphology were reduced in varicocele patients compared to normal subjects.
Conclusion
The present study shows that varicocele has a negative effect on sperm concentration, motility,
Direct CP violation for $\bar{B}_{s}^{0}\to K^{0}\pi^{+}\pi^{-}$ decay in QCD factorization
In the framework of QCD factorization, working to first order in isospin violation, we study direct CP violation in the decay $\bar{B}_{s}^{0} \to K^{0}\rho^{0}(\omega)\to K^{0}\pi^{+}\pi^{-}$, including the effect of $\rho-\omega$ mixing. We find that the CP-violating asymmetry is large via the $\rho-\omega$ mixing mechanism when the invariant mass of the $\pi^{+}\pi^{-}$ pair is in the vicinity of the $\omega$ resonance. For the decay $\bar{B}_{s}^{0} \to K^{0}\rho^{0}(\omega)\to K^{0}\pi^{+}\pi^{-}$, the maximum CP-violating asymmetry can reach about 46%. We also discuss the possibility of observing the predicted CP-violating asymmetries at the LHC.
I. INTRODUCTION
CP-violating asymmetry is one of the most important topics in the decays of bottom hadrons. In the Standard Model (SM), a non-zero complex phase in the Cabibbo-Kobayashi-Maskawa (CKM) matrix is responsible for CP-violating phenomena. In recent years CP violation in several B decays, such as $B^{0} \to J/\psi K^{0}_{S}$ and $B^{0} \to K^{+}\pi^{-}$, has indeed been found in experiments [1,2]. Due to its much higher statistics, the Large Hadron Collider (LHC) will provide a new opportunity to search for more CP violation signals.
Direct CP-violating asymmetries in b-hadron decays occur through the interference of at least two amplitudes with a weak phase difference $\phi$ and a strong phase difference $\delta$. The weak phase difference is determined by the CKM matrix, while the strong phase is usually difficult to control. In order to obtain a large CP-violating signal, we have to invoke some phenomenological mechanism that produces a large $\delta$. It has been shown that the charge-symmetry-violating mixing between $\rho^{0}$ and $\omega$ can be used to obtain the large strong phase difference required for large CP-violating asymmetries. Furthermore, it has been shown that the measurement of CP-violating asymmetries can be used to remove the mod($\pi$) ambiguity in the determination of the CP-violating phase angle $\alpha$ [3-7].
The naive factorization approximation has been shown to be the leading-order result in the framework of QCD factorization when the radiative QCD corrections of order $O(\alpha_s(m_b))$ ($m_b$ is the b-quark mass) and the $O(1/m_b)$ corrections in the heavy quark effective theory are neglected [8]. In the naive factorization scheme, the hadronic matrix elements of four-quark operators are assumed to be saturated by vacuum intermediate states. Since bottom hadrons are very heavy, their hadronic decays are energetic. Hence the quark pair generated by one current in the weak Hamiltonian moves very fast away from the weak interaction point. Therefore, by the time this quark pair hadronizes into a meson, it is already far away from the other quarks and is unlikely to interact with them. This quark pair is factorized out and generates a meson [9,10]. This approximation can only estimate the order of magnitude of CP violation, neglecting QCD corrections. Furthermore, as pointed out in previous studies [5-7], in order to take into account the nonfactorizable contributions, an effective parameter, $N_c$, is introduced. The deviation of the value of $N_c$ from the color number, 3, measures the nonfactorizable effects in the naive factorization scheme. Obviously, $N_c$ should depend on the hadronization dynamics of different decay channels. In this scheme, CP violation depends strongly on the value of $N_c$, which makes the results uncertain.
In the heavy quark limit, QCD factorization [8] includes nonfactorizable strong-interaction corrections, and the decay amplitudes can be calculated at leading power in $\Lambda_{QCD}/m_b$ and at next-to-leading order in $\alpha_s$; they can be expressed in terms of form factors and meson light-cone distribution amplitudes. One can take into account the nonfactorizable and chirally enhanced hard-scattering spectator and annihilation contributions, which appear at order $O(\alpha_s(m_b))$ and $O(1/m_b)$, respectively. In this work we adopt the QCD factorization scheme, including the order-$\alpha_s$ corrections, to compute the CP-violating asymmetry of the decay $\bar{B}^{0}_{s} \to K^{0}\pi^{+}\pi^{-}$ via the $\rho-\omega$ mixing mechanism. As will be shown later, the CP-violating asymmetries in this decay channel could be large and may be observed in the LHC experiments.
The remainder of this paper is organized as follows. In Sec. II, we present the form of the effective Hamiltonian and the general form of QCD factorization. In Sec. III, we give the formalism for CP-violating asymmetries in the $\bar{B}^{0}_{s} \to K^{0}\pi^{+}\pi^{-}$ decay. In Sec. IV, we calculate the branching ratio for the decay process $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega)$ via $\rho-\omega$ mixing. We briefly discuss the input parameters in Sec. V. The numerical results are given in Sec. VI. In Sec. VII we discuss the possibility of observing the predicted CP-violating asymmetries at the LHC. Summary and conclusions are given in Sec. VIII.
II. THE EFFECTIVE HAMILTONIAN
With the operator product expansion [11], the effective Hamiltonian for bottom hadron decays is built from the Wilson coefficients $c_i$ ($i = 1, \ldots, 10, 7\gamma, 8g$), the CKM matrix elements $V_{pb}$ and $V_{pq}$, and the local operators $O_i$. The Wilson coefficients can be calculated at the high scale $M_W$ and then evolved to the scale $m_b$ using the renormalization group equations. In QCD factorization, we consider the weak decay $B_s \to M_1 M_2$ ($M_1$, $M_2$ refer to the $K^0$ and $\rho^0$ mesons, respectively) in the heavy-quark limit. Up to power corrections of order $\Lambda_{QCD}/m_b$, the transition matrix element of an operator $O_i$ in the weak effective Hamiltonian is given by the factorization formula of Ref. [8], where $F^{B\to M_{1,2}}_{j}(m^2_{2,1})$ denotes a $B \to M_{1,2}$ form factor and $\Phi_X(u)$ is the light-cone distribution amplitude for the quark-antiquark Fock state of meson $X$. $T^{I}_{ij}(u)$ and $T^{II}_{i}(\xi, u, v)$ are hard-scattering functions, which are perturbatively calculable. The hard-scattering kernels and light-cone distribution amplitudes (LCDA) depend on a factorization scale and scheme, which is suppressed in the notation of Eq. (3). Finally, $m_{1,2}$ denote the light meson masses.
We match the effective weak Hamiltonian onto a transition operator whose matrix element is expressed through the CKM factors $\lambda_p$. Using the unitarity relation, the amplitude is written as a sum in which the index $q$ runs over $q = u, d, s$, and $\bar{q}_s$ denotes the spectator antiquark. The operators $A([\bar{q}_{M_1} q_{M_1}][\bar{q}_{M_2} q_{M_2}])$ also contain an implicit sum over $q_s = u, d, s$ to cover all possible $B$-meson initial states.
Next we need to rewrite the annihilation part in terms of the quantities $b_i$, $b_{i,EW}$ and $B$. The coefficients of the flavor operators $\alpha^p_i$ can be expressed in terms of the coefficients $a^p_i$ defined in [8]. For a pseudoscalar (P) meson $M_1$, the ratios $r^{M_1}_{\chi}$ are defined in terms of quark masses; all quark masses are running masses defined in the $\overline{\rm MS}$ scheme, and $m_q$ denotes the average of the up and down quark masses. For a vector (V) meson $M_2$, the analogous ratio $r^{M_2}_{\chi}$ involves the scale-dependent transverse decay constant $f^{\perp}_V$. Note that all the terms proportional to $r^{M_2}_{\chi}$ are formally suppressed by one power of $\Lambda_{QCD}/m_b$ in the heavy-quark limit.
In the general form of the coefficients $a^p_i$ at next-to-leading order in $\alpha_s$, $N_c$ is the number of colors, and the upper (lower) signs apply when $i$ is odd (even). It is understood that the superscript 'p' is to be omitted for $i = 1, 2$. The quantities $V_i(M_2)$ account for one-loop vertex corrections, $H_i(M_1 M_2)$ for hard spectator interactions, and $P^p_i(M_1 M_2)$ for penguin contractions; $N_i(M_2)$ and $C_F$ are defined as in [8]. The vertex corrections are given in [8]; the constants $-18$, $6$, $-6$ appearing there are scheme dependent and correspond to using the NDR scheme for $\gamma_5$. The light-cone distribution amplitude (LCDA) $\Phi_{M_2}$ is the leading-twist amplitude of $M_2$, whereas $\Phi_{m_2}$ is the twist-3 amplitude. The twist-2 and twist-3 LCDAs for pseudoscalar and vector mesons are expanded in terms of $C_n(x)$ and $P_n(x)$, the Gegenbauer and Legendre polynomials, respectively, with Gegenbauer moments $a_n(\mu)$ that depend on the scale $\mu$. $\Phi^{\perp}_{V}(x, \mu)$ and $\Phi_{V}(x, \mu)$ are the transverse and longitudinal quark distributions of the polarized mesons.
At order $\alpha_s$ a correction from penguin contractions is present only for $i = 4, 6$. In the expression for $i = 4$, $n_f = 5$ is the number of light quark flavors, and $s_u = 0$, $s_c = (m_c/m_b)^2$ are the mass ratios involved in the evaluation of the penguin diagrams, which enter through the function $G_{M_2}(s)$. For $i = 6$, if $M_2$ is a vector meson, the penguin contribution involves, in analogy with Eq. (21), the function $\hat{G}_{M_2}(s)$. Electromagnetic corrections are present for $i = 8, 10$ and correspond to the penguin diagrams; the case $i = 10$ applies if $M_2$ is a vector meson. The correction from hard gluon exchange between $M_2$ and the spectator quark enters for $i = 1$-$4, 9, 10$.
A corresponding expression holds for $i = 5, 7$, with $\Phi_B(\xi)$ being one of the two light-cone distribution amplitudes of the $B$ meson.

Here $m_{M_1}$ and $\epsilon_{M_1}$ are the mass and polarization vector of the vector meson, and $F^{B\to M_1}_{0}$ is the form factor for the $B \to M_1$ transition.
We recall that the term involving $r^{M_1}_{\chi}$ is suppressed by a factor of $\Lambda_{QCD}/m_b$ in heavy-quark power counting. Since the twist-3 distribution amplitude $\Phi_{m_1}(y)$ does not vanish at $y = 1$, the power-suppressed term is divergent. We extract this divergence by defining a parameter $X^{M_1}_{H}$. The remaining integral is finite (it vanishes for pseudoscalar mesons since $\Phi_p(y) = 1$), but $X^{M_1}_{H}$ is an unknown parameter representing a soft-gluon interaction with the spectator quark. Since $X^{M_1}_{H}$ varies within a certain range (specified later) and $X^{M}_{H} \sim \ln(m_b/\Lambda_{QCD})$ [8], we treat the resulting variation of the coefficients $\alpha^p_i$ as an uncertainty. We also assume that $X^{M_1}_{H}$ is universal, i.e., that it depends neither on $M_1$ nor on the index $i$ of $H_i(M_1 M_2)$. The results for the convolution integrals can be found in Ref. [8].
The annihilation contribution can be taken from Ref. [8]. The weak annihilation kernels exhibit endpoint divergences, which we treat in the same manner as the power corrections to the hard spectator scattering; the divergent subtractions are parameterized analogously, and similarly for $M_2$ with $y \to \bar{x}$. The treatment of weak annihilation is model-dependent in the QCD factorization approach. We treat $X^{M}_{A}$ as an unknown complex number of order $\ln(m_b/\Lambda_{QCD})$ and make the simplifying assumption that this number is independent of the identity of the meson $M_1$ and of the weak decay vertex.

In the vector meson dominance model [12], the photon propagator is dressed by its coupling to vector mesons. Based on the same mechanism, $\rho-\omega$ mixing was proposed [13]. The formalism for CP violation in the decay of a bottom hadron, $B_s$, is reviewed in the following. The amplitude $A$ for $B_s \to K^0\pi^+\pi^-$ is written in terms of the matrix elements of $H^T$ and $H^P$, the Hamiltonians for the tree and penguin operators, respectively. We define the relative magnitude and phases between these two contributions through a strong phase difference $\delta$ and a weak phase difference $\phi$. The weak phase difference $\phi$ arises from the appropriate combination of CKM matrix elements, $\phi = \arg[(V_{tb}V^{*}_{ts})/(V_{ub}V^{*}_{us})]$. The parameter $r$ is the absolute value of the ratio of tree and penguin amplitudes. The CP-conjugate amplitude is defined analogously, and the CP-violating asymmetry, $a$, can then be written in terms of $r$, $\delta$ and $\phi$. We can see explicitly from Eq. (40) that both weak and strong phase differences are needed to produce CP violation. $\rho-\omega$ mixing has the dual advantages that the strong phase difference is large and well known [3,4]. In this scenario, $t_V$ ($V = \rho$ or $\omega$) is the tree amplitude and $p_V$ is the penguin amplitude for producing a vector meson $V$; $t^{a}_{V}$ is the tree annihilation amplitude and $p^{a}_{V}$ is the penguin annihilation amplitude. $g_\rho$ is the coupling for $\rho^0 \to \pi^+\pi^-$, $\Pi_{\rho\omega}$ is the effective $\rho-\omega$ mixing amplitude, and $s_V$ comes from the inverse propagator of the vector meson $V$, with $\sqrt{s}$ being the invariant mass of the $\pi^+\pi^-$ pair.
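Since the explicit asymmetry formula is not reproduced above, the following sketch only assumes the standard two-amplitude interference form, $a = -2r\sin\delta\sin\phi/(1 + 2r\cos\delta\cos\phi + r^{2})$, to show numerically how $a$, $r$, $\delta$ and $\phi$ are related; the input values are arbitrary illustrations, not the paper's fitted parameters.

```python
import math

def cp_asymmetry(r, delta, phi):
    """Direct CP asymmetry for two interfering amplitudes A ~ 1 + r e^{i(delta+phi)}.

    r     : |ratio| of the two amplitudes
    delta : strong phase difference (radians)
    phi   : weak phase difference (radians)
    """
    num = -2.0 * r * math.sin(delta) * math.sin(phi)
    den = 1.0 + 2.0 * r * math.cos(delta) * math.cos(phi) + r * r
    return num / den

# Illustrative values only: a sizeable r and a strong phase near 90 degrees
# (as rho-omega mixing can provide) give a large asymmetry.
print(cp_asymmetry(r=0.3, delta=math.radians(90), phi=math.radians(60)))
```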
B. CP violation via ρ − ω mixing
In the following we study the CP-violating asymmetries in the decay $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega) \to K^{0}\pi^{+}\pi^{-}$. With Eqs. (4), (6), (7) and (8), we can calculate the decay amplitudes in the QCD factorization scheme. In the expressions for the $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega)$ amplitudes, $F_0$ denotes the $B_s \to K^0$ form factor, $m_{\rho^0}$ and $m_\omega$ are the masses of the $\rho^0$ and $\omega$ mesons, $\varepsilon^{*}_{\rho^0}$ and $\varepsilon^{*}_{\omega}$ are the corresponding polarization vectors, and $f$ refers to the decay constant. The form of the coefficients $a^p_i$ at next-to-leading order in $\alpha_s$ is given by Eq. (12), where $M_1$ is the $K^0$ meson and $M_2$ is the $\rho^0$ meson. $\beta_i$ is the weak annihilation contribution in QCD factorization, and $\gamma_\chi$ denotes the chirally enhanced terms introduced above.
From Eqs. (6), (7) and (48), one can obtain the tree contributions. In a similar way, with the aid of the Fierz identities, we can evaluate the penguin operator contributions $p_\rho$ and $p_\omega$ from Eq. (48). It can be seen that $r'$ and $\delta_q$ depend on both the Wilson coefficients and the CKM matrix elements, as shown in Eq. (65). Substituting Eqs. (58), (61) and (65) into Eq. (49), we can obtain $r$, $\sin\delta$, and $\cos\delta$. Then, in combination with Eqs. (50) and (51), the CP-violating asymmetries can be obtained.
The matrix elements for $B_s \to P$ and $B_s \to V$ (where $P$ and $V$ denote pseudoscalar and vector mesons, respectively) can be decomposed as in Ref. [17], where $J_\mu$ is the weak current ($J_\mu = \bar{q}\gamma_\mu(1-\gamma_5)b$ with $q = u, d, s$), $p_{B_s}$ ($m_{B_s}$), $p_P$ ($m_P$) and $p_V$ ($m_V$) are the momenta (masses) of $B_s$, $P$ and $V$, respectively, $k = p_{B_s} - p_{P(V)}$ for the $B_s \to P(V)$ transition, and $\epsilon_\mu$ is the polarization vector of $V$. $F_i$ ($i = 0, 1$) and $A_i$ ($i = 0, 1, 2, 3$) in Eq. (67) are the weak form factors, which satisfy $F_1(0) = F_0(0)$. With the factorizable decay amplitudes in Eqs. (56) and (57), we can calculate the decay rate for the $B_s$ transition to a pseudoscalar meson ($P$) and a vector meson ($V$) using the expression of Ref. [18], which involves the c.m. momentum of the produced particles and the decay amplitude $A(B_s \to PV)$.
In the QCD factorization approach, $V^{T,P}_{u}$ are the CKM factors. In our case we take the $\rho-\omega$ mixing contribution into account when calculating the branching ratios, since we are working to first order in isospin violation. We can then explicitly express the branching ratio for the process $\bar{B}_s \to K^{0}\rho^{0}(\omega)$, where $\Gamma_{B_s}$ is the total decay width of the $B_s$.
V. INPUT PARAMETERS
In the numerical calculations, we have several parameters, i.e., $N_c$ and the CKM matrix elements in the Wolfenstein parametrization. For the CKM matrix elements, which should be determined from experiment, we use the results of Ref.
In the QCD factorization scheme, since power corrections have been taken into account, $N_c$ is purely the color number, hence we use $N_c = 3$. In naive factorization, $N_c$ instead absorbs the nonfactorizable effects, which are model and process dependent, cannot be evaluated accurately from theory, and have to be determined from experiment. The running quark masses are evaluated at the scale $\mu$ relevant for the $B_s$ decay. The values of the scale-dependent quantities $f^{\perp}_{V}(\mu)$ and $a^{\perp}_{1,2}(\mu)$ are given at $\mu = 1$ GeV. The values of the Gegenbauer moments are taken from [19]:

$a^{\rho}_{1} = 0$, $a^{\rho}_{2} = 0.15 \pm 0.07$; $a^{\omega}_{1} = 0$, $a^{\omega}_{2} = 0.15 \pm 0.07$; $a^{\perp\rho}_{1} = 0$, $a^{\perp\rho}_{2} = 0.14 \pm 0.06$; $a^{\perp\omega}_{1} = 0$, $a^{\perp\omega}_{2} = 0.14 \pm 0.06$; $a^{K}_{1} = 0.06 \pm 0.03$, $a^{K}_{2} = 0.25 \pm 0.15$.

For the $B_s$ meson, we use the value from [2]. The Wilson coefficients $c_i$ can be found in [8]. As discussed in detail in [8], there are large theoretical uncertainties related to the modeling of power corrections corresponding to weak annihilation effects and the chirally enhanced power corrections to hard spectator scattering. We therefore parameterize these effects in terms of the divergent integrals $X_H$ (hard spectator scattering) and $X_A$ (weak annihilation), modeled using the parameterization of [8], and similarly for $X_H$. Here $\varphi_A$ is an arbitrary strong-interaction phase, which may be caused by soft rescattering. The fitted $\varrho_A$ and $\varphi_A$ are taken from [20].
For the estimate of theoretical uncertainties, we assign an error of $\pm 0.1$ to $\varrho_A$ and $\pm 20^{\circ}$ to $\varphi_A$ [20].
The form factors associated with the weak transitions depend on the inner structure of the hadrons and are hence model dependent. Here we consider the form factors obtained in several phenomenological models. For the $B_s$ decay form factors, we use the results quoted at $q^2 = 0$.

In the above models, the $k^2$ dependence of the form factors follows the nearest-pole-dominance assumption, where $h$ could be $F_0$ and $m_h$ is the pole mass. Note that since the value of $k^2$ (which is actually the square of the mass of the factorized light meson) is much smaller than the square of the pole mass, which is of order $m^{2}_{b}$, only the values of the form factors at $k^2 = 0$ are relevant, and the $k^2$ dependence of the form factors has little effect (less than 2%). From the quoted values we see that the $B_s \to K$ form factor at $q^2 = 0$ ranges from 0.23 to 0.31.
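The monopole form implied by the nearest-pole-dominance statement above is presumably the standard one; a hedged reconstruction (the explicit expression is missing from the extracted text) reads $h(k^{2}) = h(0)/(1 - k^{2}/m_{h}^{2})$, with $h = F_0$ and $m_h$ the pole mass, which indeed reduces to $h(0)$ for $k^{2} \ll m_{h}^{2}$.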
VI. NUMERICAL RESULTS AND DISCUSSIONS
In the numerical calculations, we find that the CP-violating asymmetry, $a$, is large when the invariant mass of the $\pi^{+}\pi^{-}$ pair is in the vicinity of the $\omega$ resonance within the QCD factorization scheme.
Within the respective error ranges, the theoretical errors in QCD factorization are large, which translates into uncertainties in the results. Generally, power corrections beyond the heavy quark limit give the major theoretical uncertainties, which implies the necessity of introducing $1/m_b$ power corrections. Unfortunately, there are many possible $1/m_b$ power-suppressed effects; they are generally nonperturbative in nature and hence not calculable by perturbative methods, so additional uncertainties remain in this scheme. The first error refers to the variation of the CKM parameters. The second error comes from the form factors and decay constants. The third error corresponds to the Gegenbauer moments. The last error is associated with the wave function of the $B_s$ meson, characterized by the parameter $\lambda_B$, and with the power corrections due to weak annihilation and hard spectator interactions, described by the parameters $\varrho_{A,H}$ and $\varphi_{A,H}$, respectively. Using the central values of the above parameters, we first calculate the numerical results for the CP violation and the branching ratio, and then add errors according to the standard deviations. In Fig. 1, we give the central value of the CP-violating asymmetry as a function of $\sqrt{s}$. From the figure one can see that the CP asymmetry parameter depends on $\sqrt{s}$ and changes rapidly, due to $\rho-\omega$ mixing, when the invariant mass of the $\pi^{+}\pi^{-}$ pair is in the vicinity of the $\omega$ resonance. The CP-violating asymmetry varies from around $-37\%$ to around $45\%$. From Eq. (43), one can see that the CP-violating asymmetry parameter depends on both $\sin\delta$ and $r$. The plots of $\sin\delta$ and $r$ as functions of $\sqrt{s}$ are shown in Fig. 2 and Fig. 3. It can be seen that, when $\rho-\omega$ mixing is taken into account, $\sin\delta$ and $r$ change sharply when the invariant mass of $\pi^{+}\pi^{-}$ is around 0.782 GeV. From Fig. 2, one finds that $\rho-\omega$ mixing makes the value of $\sin\delta$ oscillate from $-0.56$ to $0.44$, so it cannot reach the value $-1$. This result is not in agreement with the conclusion from naive factorization, in which the measurement of the CP-violating parameter can be used to remove the mod($\pi$) phase uncertainty in the determination of the CKM angle $\alpha$ arising from the conventional determination through $\sin 2\alpha$ [7]. We have shown that $\rho-\omega$ mixing does enhance the direct CP-violating asymmetries and provides a mechanism for large CP violation in the QCD factorization scheme. On the other hand, it is important to see whether it is possible to observe these large CP-violating asymmetries in experiments. This depends on the branching ratio for $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega)$. We study this problem in the next section.
VII. DISCUSSION ON THE POSSIBILITY OF OBSERVING CP VIOLATING ASYMMETRIES AT THE LHC
The LHC is a proton-proton collider that has recently started operating at CERN. With its design center-of-mass energy of 14 TeV and luminosity $L = 10^{34}\,{\rm cm}^{-2}{\rm s}^{-1}$, the LHC gives access to the high-energy frontier at the TeV scale and provides an opportunity to further improve the consistency tests of the CKM matrix. The production rates for heavy quark flavours will be large at the LHC, and the $b\bar{b}$ production cross section will be of order 0.5 mb, providing as many as $0.5\times 10^{12}$ bottom events per year [26]. In particular, the LHCb detector is designed to exploit the large number of b-hadrons produced at the LHC in order to make precise studies of CP asymmetries and of rare decays in b-hadron systems. The other two experiments, ATLAS and CMS, are optimized for discovering new physics and will complete most of their B physics program within the first few years [26,27]. Obviously, the LHC has a great advantage over the current experiments on b-hadrons [28].
In the present work, we have predicted possibly large CP-violating asymmetries in the decay channel $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega) \to K^{0}\pi^{+}\pi^{-}$ via $\rho-\omega$ mixing. At the LHC, b-hadrons are produced in pp collisions. The possible asymmetry between the numbers of b-hadrons, $H_b$, and those of their antiparticles, $\bar{H}_b$, has been studied in the Lund string fragmentation model and the intrinsic heavy quark model [29,30]. It has been shown that this asymmetry can only reach values of a few percent. In the following discussion we ignore this small asymmetry and give the numbers of $H_b\bar{H}_b$ pairs needed to observe the CP-violating asymmetries we have predicted. These numbers depend on both the magnitudes of the CP-violating asymmetries and the branching ratios of the heavy hadron decays, which are model dependent. For a one (three) standard deviation signature, the required number of $H_b\bar{H}_b$ pairs is given in [31-33], where BR is the branching ratio for $H_b \to f\rho^{0}$. For the central value of the CP asymmetry in Eq. (78), we present the numbers of $B_s\bar{B}_s$ pairs needed for observing the large CP-violating asymmetries at the LHC. For the channel $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega) \to K^{0}\pi^{+}\pi^{-}$, the numbers of $B_s\bar{B}_s$ pairs are $3.8\times 10^{6}$ ($3.4\times 10^{7}$) for a $1\sigma$ ($3\sigma$) signature. At the LHC the average $B_s\bar{B}_s$ production is about 10% of the $10^{12}$ $b\bar{b}$ events [26]. From Fig. 1, one can see that the CP-violating asymmetries vary sharply over a small energy range and reach their peak value at $\sqrt{s} = 0.782$ GeV. Hence, it is quite possible to observe the large CP-violating asymmetries in the small energy range of the $\rho^{0}-\omega$ resonance, at the peak values of the CP-violating asymmetries, in the LHC experiments. Experimentally, it is possible to reconstruct the $\pi^{+}$, $\pi^{-}$ and $K^{0}$ mesons when the invariant masses of the $\pi^{+}\pi^{-}$ pairs are in the vicinity of the $\omega$ resonance. Therefore, it is very possible to observe the large CP-violating asymmetries in $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega) \to K^{0}\pi^{+}\pi^{-}$ at the LHC.
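To illustrate how such event-count estimates scale, the sketch below assumes the commonly used sensitivity estimate $N \approx n^{2}/({\rm BR}\cdot a^{2})$ for an $n\sigma$ observation of an asymmetry $a$ in a channel with branching ratio BR; the BR value used is a placeholder chosen only to reproduce numbers of the quoted order, not the paper's computed branching ratio.

```python
def pairs_needed(branching_ratio, asymmetry, n_sigma=1):
    """Rough number of b-hadron pairs needed to see an asymmetry at n_sigma,
    assuming the standard estimate N ~ n_sigma**2 / (BR * a**2)."""
    return n_sigma**2 / (branching_ratio * asymmetry**2)

br = 1.2e-6   # placeholder branching ratio for B_s -> K0 rho0(omega)
a_cp = 0.46   # peak asymmetry quoted in the text

print(f"1 sigma: {pairs_needed(br, a_cp, 1):.1e} pairs")
print(f"3 sigma: {pairs_needed(br, a_cp, 3):.1e} pairs")
# With these inputs the counts come out at a few 10^6 and a few 10^7,
# i.e. the same order as the 3.8e6 / 3.4e7 figures quoted above.
```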
VIII. SUMMARY AND CONCLUSIONS
In this paper, we have studied CP violation in $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega) \to K^{0}\pi^{+}\pi^{-}$. It has been found that, by including $\rho-\omega$ mixing, the CP-violating asymmetries can be large when the invariant masses of the $\pi^{+}\pi^{-}$ pairs are in the vicinity of the $\omega$ resonance. For the decay $\bar{B}^{0}_{s} \to K^{0}\rho^{0}(\omega) \to K^{0}\pi^{+}\pi^{-}$, the maximum CP violation can reach 46%. Furthermore, taking $\rho-\omega$ mixing into account, we have calculated the branching ratio of the decay. We have also presented the numbers of $B_s\bar{B}_s$ pairs required for observing the predicted CP violation in experiments at the LHC, and we have found that this is a likely channel in which the large CP-violating asymmetries may be observed. We expect that our predictions will provide useful guidance for future investigations and experiments.
In our calculations there are some uncertainties. We have worked in QCD factorization, which is expected to be a reliable approach in the heavy-quark limit. In the QCD factorization scheme, $\alpha_s(m_b)$ and some $1/m_b$ (annihilation) corrections are included. In this framework, the scale and renormalization-scheme dependence cancels between the Wilson coefficients and the hadronic matrix elements. However, the QCD factorization scheme suffers from endpoint singularities which are not well controlled, and the CP-violating asymmetry depends on the unknown parameters associated with such endpoint singularities. The CKM matrix elements also lead to some uncertainty in the CP-violating asymmetry, as do the weak form factors associated with the hadronic matrix elements. These effects lead to uncertainties in the CP-violating asymmetries obtained in the QCD factorization scheme and need further detailed investigation.
All-Heusler giant-magnetoresistance junctions with matched energy bands and Fermi surfaces
We present an all-Heusler architecture which could be used as a rational design scheme for achieving high spin-filtering efficiency in current-perpendicular-to-plane giant magnetoresistance (CPP-GMR) devices. A Co2MnSi/Ni2NiSi/Co2MnSi trilayer stack is chosen as the prototype of such an architecture, of which the electronic structure and magnetotransport properties are systematically investigated by first-principles approaches. Almost perfectly matched energy bands and Fermi surfaces between the all-Heusler electrode-spacer pair are found, indicating large interfacial spin-asymmetry, high spin-injection efficiency, and consequently a high GMR ratio. Transport calculations further confirm the superiority of the all-Heusler architecture over the conventional Heusler/transition-metal (TM) structure by comparing their transmission coefficients and interfacial resistances for parallel conduction electrons, as well as the macroscopic current-voltage (I-V) characteristics. We suggest future theoretical and experimental efforts in developing novel all-Heusler GMR junctions for the read heads of the next generation of high-density hard disk drives (HDDs).
Continuous evolution of the HDD read heads, with higher sensor output, lower resistance, and higher bit resolution, is essential for further increases in areal density in massive magnetic recording. Low-resistance magnetoresistance (MR) devices are urgently required for impedance matching between read sensors and the preamplifiers, for lower electric noise, and for high-frequency data transfer. 1-3 A read-sensor resistance-area product (RA) of less than 0.1 Ωμm² is necessary for a recording density higher than 2 Tbit/in². 2 This is a big challenge for current magnetic tunnel junctions (MTJs) with a high-impedance insulator spacer, but can be easily achieved using current-perpendicular-to-plane giant magnetoresistance (CPP-GMR) spin valves (SVs) composed of all-metallic layers. The RA values of CPP-GMR SVs are typically below 0.05 Ωμm². The drawback of CPP-GMR SVs based on conventional ferromagnetic (FM) materials is their low signal-to-noise ratio. For example, the resistance change-area product (ΔRA) of the CoFe-based CPP-GMR SV is ~1 mΩμm², 4,5 which must be improved substantially. The utilization of highly spin-polarized FM materials such as Co-based Heusler compounds is expected to provide large spin-dependent scattering in the FM layers and at the interfaces between the FM and the spacer layers, thereby improving ΔRA. 6 However, to the best of our knowledge, the largest room-temperature (RT) GMR ratio achieved so far is only 74.8%, which is still too low for practical application. 7 The much lower-than-expected MR value could be partially attributed to the imperfect band matching between the Heusler electrodes and the TM spacer. The performance can be improved by using an all-Heusler structure with intrinsic lattice and band matching. Efforts have been made in designing and fabricating all-Heusler GMR junctions in the past decade. [8][9][10][11][12][13][14] Even though some design principles have been proposed and some systems have been investigated, further studies are required to understand the interface physics and identify robust device structures. In the current work, we employ a Co2MnSi (CMS)/Ni2NiSi (NNS)/Co2MnSi trilayer GMR stack, of which the FM electrode and nonmagnetic (NM) spacer have good lattice matching, 11 as a prototype to illustrate the physics of the electronic structure and magneto-transport of the all-Heusler scheme and to demonstrate its advantages over the conventional Heusler/TM combination. Figure 1 shows a model of the CMS/NNS/CMS trilayer being studied. The tetragonal supercell includes two CMS electrodes of 12 atomic layers each, and an NM NNS spacer of 1.67 nm, i.e., 13 atomic layers. For comparison, we carry out a similar calculation on the CMS/Ag/CMS system with approximately the same spacer thickness (9 atomic layers of Ag). The MnSi/NiNi or MnSi/Ag interface termination was shown to be energetically stable compared to other terminations and is assumed here. The structure is optimized by relaxing the scattering region until the Hellmann-Feynman force on each atom is less than 0.01 eV/Å. First-principles electronic structure calculations based on density functional theory (DFT) are performed using the Vienna Ab-initio Simulation Package (VASP), 15 whereas the non-equilibrium Green's function (NEGF) method combined with DFT implemented in the Atomistix ToolKit package (ATK) is utilized for the transport calculation. 16,17 The spin-polarized generalized-gradient approximation (SGGA) proposed by Perdew et al. 
is employed as the exchange and correlation functional consistently throughout this work. 18 For the transport calculation, the double-ζ polarized (DZP) basis set is used for the electron wave functions. A cutoff energy of 150 Ry and a Monkhorst-Pack k-mesh of 8 × 8 × 100 yield a good balance between computational time and accuracy in the results. It should be noted that conductance calculated using the DFT-NEGF method may deviate systematically from experimental measurement for a three-dimensional metallic multilayer structure. This, however, is not a concern here, as the aim of our study is to understand the effect of the NM spacer in the FM/NM/FM trilayer structure rather than to obtain the quantitative conductance of the system. Other parameters of the first-principles calculations are the same as those in Ref. 19.
The calculated majority-spin band structures of the CMS and NNS pair and, for comparison, of the CMS and Ag pair are shown in Figs. 2(a) and 2(b), whereas the corresponding Fermi surfaces are presented in Figs. 2(c)-(e). In contrast to the rather poor energy-band and Fermi-surface matching between CMS and Ag, the Fermi surfaces and the energy bands in the vicinity of the Fermi level of CMS and NNS almost coincide with each other. Such good matching would ensure smooth propagation of majority electrons across the interface, suppress spin-flip scattering, and consequently enhance the interfacial spin-asymmetry as well as the GMR ratio. 8,10,[20][21][22] To provide further evidence for the high GMR ratio of the all-Heusler architecture and demonstrate its suitability for CPP-GMR application, we calculated the transmission spectrum and present in Fig. 3 the in-plane wave-vector (k∥) resolved transmission spectrum at the Fermi energy for the parallel majority spin within the CMS/NNS/CMS and CMS/Ag/CMS junctions. As can be seen in the figure, the majority transmission channel of the all-Heusler junction is largely enhanced in a vast region around the Γ point compared with that of the CMS/Ag junction. This significant enhancement of the transmittance in the parallel electrode magnetic configuration can be qualitatively attributed to the perfect Fermi-surface match between the Heusler pair, as shown in Figs. 2(c) and 2(d), since at the interface of perfect metallic junctions only those propagating states whose conserved in-plane wave-vector k∥ coexists in both metals can contribute to the conduction. 23 This is also confirmed by the fact that the area of enhanced transmittance and the zone where the Fermi surfaces of CMS and NNS overlap have the same shape.
The transmission mechanism stated above can be directly visualized and verified by the macroscopic conduction behaviours, e.g., the in-plane averaged voltage drop along the transmission direction (z-axis) of the whole junction, of which the results are illustrated in Fig. 4 for the all-Heusler (red dash line) and Heusler/TM (black solid line) junctions, both in the parallel magnetization configuration. It is noted that the drop is nearly linear throughout the all-Heusler stack, which behaves as a "homogeneous" junction. However, the voltage across the CMS/Ag/CMS junction exhibits obvious heterogeneous characteristics, with a couple of step-like drops located right at the Heusler/TM interfaces, indicating comparatively heavier interfacial scattering for the majority conduction electrons. As a result, the interfacial spin-asymmetry is weakened, which, in turn, decreases ΔRA according to the Valet-Fert two-current model. 24 Moreover, this point is quantitatively confirmed by the conductance difference between the all-Heusler and Heusler/TM junctions, as listed in Tab. 1. The higher parallel interfacial conductance of the all-Heusler junction leads to a GMR ratio around 30 times larger than that of its Heusler/TM counterpart.
We also calculated the bias-dependent transmission curves of the GMR devices and obtained their macroscopic current-voltage (I-V) characteristics. The currents through the two junctions under parallel and antiparallel electrode magnetization configurations are calculated based on the Landauer-Büttiker formula. As can be seen in Fig. 5, the antiparallel currents of the two systems show similar trends against the bias voltage, whereas for the parallel case there is a large disparity. The parallel current of the all-Heusler junction is, unsurprisingly, much larger than that of the Heusler/TM one, which can be attributed to its aforementioned smaller interfacial resistance. This explicitly confirms that the all-Heusler interfaces have much improved spin-valve performance compared with the widely used Heusler/TM interfacial structure.
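To make the transport picture above concrete, the following sketch evaluates a zero-temperature Landauer-Büttiker current from a bias-dependent transmission function and forms the (optimistic) GMR ratio from the parallel and antiparallel currents; the transmission functions used here are illustrative toy models, not the DFT-NEGF results of this work.

```python
# Minimal sketch of the Landauer-Buttiker current and GMR ratio,
# assuming toy transmission functions T(E, V) in place of DFT-NEGF output.
import numpy as np

E_F = 0.0  # Fermi energy (eV), taken as the reference point

def current(transmission, bias_v, n_grid=400):
    """Zero-temperature Landauer-Buttiker current (in units of 2e/h = 1):
    I(V) = integral of T(E, V) over the bias window [E_F - V/2, E_F + V/2]."""
    energies = np.linspace(E_F - bias_v / 2, E_F + bias_v / 2, n_grid)
    t_vals = np.array([transmission(e, bias_v) for e in energies])
    return np.trapz(t_vals, energies)

# Toy transmissions: the well-matched (parallel) channel transmits more
# than the spin-mismatched (antiparallel) channel.
t_parallel = lambda e, v: 0.9 / (1.0 + (e / 0.5) ** 2)
t_antiparallel = lambda e, v: 0.1 / (1.0 + (e / 0.5) ** 2)

if __name__ == "__main__":
    for v in (0.1, 0.2, 0.3):
        i_p = current(t_parallel, v)
        i_ap = current(t_antiparallel, v)
        gmr = (i_p - i_ap) / i_ap  # optimistic definition of the GMR ratio
        print(f"V = {v:.1f} V: I_P = {i_p:.4f}, I_AP = {i_ap:.4f}, GMR = {gmr:.1%}")
```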
In summary, we present an all-Heusler architecture for the rational design of high-efficiency GMR junctions for HDD read heads. The intrinsic advantages of the all-Heusler structure, i.e., the almost perfectly matched energy bands and Fermi surfaces between the electrodes and spacer, are expected to lead to much higher spin-injection efficiency at the FM/NM interface, and hence a much larger ΔRA compared with its Heusler/TM counterparts. It should be noted that we discuss the all-Heusler CMS/NNS junction only as a representative example of an optimum design scheme superior to the state of the art. Further theoretical and experimental efforts, following the all-Heusler scheme, are strongly recommended for designing and fabricating well-crystallized NM Heusler compounds as spacer materials that match the Co-based full-Heusler electrodes well.
Altered EEG Signal Complexity Induced by Hand Proximity: A Multiscale Entropy Approach
Visual short-term memory (VSTM) is an important cognitive function that acts as a temporary storage for visual information. Previous studies have shown that VSTM capacity can be modulated by the location of one's hands, where hand proximity enhances neural processing and memory of nearby visual stimuli. The present study used traditional event-related potentials (ERP) along with multiscale entropy (MSE) analysis to shed light on the neural mechanism(s) behind this near-hand effect. Participants' electroencephalogram (EEG) data were recorded as they performed a VSTM task with their hands either proximal or distal to the display. ERP analysis showed altered memory processing in the 400–700 ms time window during the memory retrieval period. Importantly, MSE analysis also showed a significant EEG difference between the hand proximal and distal conditions between scales 10 and 20, and this difference was clustered around the right parietal cortex – a region that is involved in VSTM processing and bimodal hand-eye integration. The implications of higher MSE time scales in the parietal cortex are discussed in the context of signal complexity and its possible relation to cognitive processing. To our knowledge, this study provides the first investigation using MSE to characterize the temporal characteristics and signal complexity behind the effect of hand proximity.
INTRODUCTION
Visual short-term memory (VSTM) is an important cognitive function that acts as a temporary storage for visual information. Such storage allows visual and spatial information to stay intact and accessible in the brain although the actual physical stimulus is no longer in view (e.g., occlusion, blink, saccade, etc.). Such temporary information can then be accessed to support other functions such as goal-directed actions (e.g., Baddeley, 2002;Bridgeman and Tseng, 2011).
In the laboratory, VSTM integrity is often assessed with a change detection paradigm, which is similar to the popular spot-the-difference game, but in a more controlled laboratory setting. Participants see one image briefly (on the order of a few hundred milliseconds), followed by a brief blank display, and then the image reappears, sometimes containing a slight change from its first appearance.
The participant's job is to respond whether the second image is totally identical to the first or not. Change detection tasks like these have been shown to positively correlate with one's fluid intelligence (Kane and Engle, 2002; Fukuda et al., 2010). However, studies have also consistently shown that people's VSTM performance is not as good as we subjectively think it is (Simons and Rensink, 2005; Simons and Chabris, 2011), and the capacity estimate on average is around 3 to 4 simple items (Bays and Husain, 2008; Luck and Vogel, 2013). Combining electroencephalogram (EEG) and event-related potentials (ERP) with a change detection task, Vogel and Machizawa (2004) found that EEG signals near the right posterior parietal region showed larger amplitude as people's VSTM load increased, suggesting an association between the right parietal cortex and VSTM.
Given the importance of VSTM and its link with various daily functions, the investigation of methods to boost VSTM capacity, such as memory training (e.g., Shipstead et al., 2012; Blacker et al., 2014) and brain stimulation (e.g., Tseng et al., 2012b; Hsu et al., 2014), has attracted much attention in the field. Among these, one interesting yet under-investigated factor is the placement of one's hands. That is, the closer the hands are to the things to be remembered, the better the memory is (for a review, see Tseng et al., 2012a). This is known as the effect of hand proximity, or nearby-hand effect, and has been hypothesized to alter magnocellular processing (Gozli et al., 2012; Taylor et al., 2015) and attentional selection (Reed et al., 2006; Tseng et al., 2014). Specifically, this hand-proximity effect can enhance one's change detection performance when the hands are placed on both sides of the computer monitor, and this enhanced performance is most noticeable on the right side of the screen. The effect of hand proximity, however, is likely non-specific to VSTM, since studies have also shown that placing one's hands near the visual stimuli can facilitate attentional orienting (Reed et al., 2006; Sun and Thomas, 2013), slow down visual search (Abrams et al., 2008), speed up figure-ground segregation (Cosman and Vecera, 2010), shield attention from distraction, and bias attention toward visual details. These behavioral effects have been assumed to be the byproduct of bimodal neurons located in the premotor and parietal cortex (Reed et al., 2006), whose receptive fields follow the locations of the hands (Graziano and Botvinick, 2002). ERP evidence thus far has shown an amplitude increase that is non-selective (between target and distractors) during the early sensory stage (e.g., <200 ms from stimulus onset time), which then becomes target-selective in the later time window (e.g., >300 ms; Reed et al., 2013).
Although studies have begun to look into the electrophysiological basis of the effect of hand proximity, EEG studies remain scarce in the hand-proximity literature, and no investigation has yet looked into the EEG effect of hand proximity beyond ERP. This is unfortunate because recent studies have shown that EEG, even in the absence of cognitive tasks, can deliver very promising results in neuroscience research and healthcare, such as using EEG signals to classify between patients with Alzheimer's disease, mild cognitive impairment (MCI; the prodromal stage of Alzheimer's disease), and healthy controls (Mammone et al., 2019), as well as accurately predicting which MCI patients may eventually "convert" to Alzheimer's disease in the future (Mammone et al., 2018). Therefore, to provide more insight into the electrophysiological signatures of the hand effect on VSTM, the present study aims to perform both conventional ERP analysis and a multiscale entropy (MSE) analysis to quantify possible differences in the complexity of EEG signals.
The MSE analysis is of importance here not only because of its novelty in the context of hand proximity studies, but also because it is better able to quantify and characterize the complexity and adaptability of two neural systems (i.e., hand proximal vs. hand distal in this case) than conventional ERP analyses. This is based on the assumption that biological systems need to operate across multiple spatial and temporal scales, and thus their complexity is perhaps also multi-scaled; such information would not be observable in the traditional ERP approach. Indeed, in the medical field, MSE has already been applied to the analysis of EEG signals for two decades, and has been shown to be sensitive to the EEG differences between healthy and epileptic children (El Sayed Hussein Jomaa et al., 2019), between healthy individuals and MCI and Alzheimer's disease patients (Morabito et al., 2012), and even between awake, light, and deep anesthesia (Li et al., 2010). In cognitive domains, previous research has also been able to use MSE to differentiate the EEG signals between good and poor performance in visuospatial working memory (Wang et al., 2014) and cognitive control (Huang et al., 2015), especially at higher time scales (e.g., time scales 10-20). Notably, Wang et al. (2014) were able to show that, in the context of VSTM, physically active elderly adults performed better than inactive controls, and such behavioral distinction was also observable in MSE analysis at higher time scales. Therefore, if the effect of hand proximity indeed acts on the adaptability and complexity of the system, we expect such modulatory effects to be visible in select time scales in the MSE analysis. This paper is organized as follows. Section 2 describes the VSTM task and how behavioral and EEG data were collected and preprocessed. Section 3 describes the behavioral and EEG findings, and contrasts the results from the ERP and MSE approaches. Section 4 discusses the observed results and the potential usefulness of MSE in EEG studies, and the paper ends with a brief note on the theoretical implications of the current findings for the literature on hand proximity.
Participants
Sixteen participants from National Central University with normal or corrected-to-normal vision participated in this experiment. All participants gave informed consent prior to their participation. Two participants were excluded from analysis due to excessive movement artifacts in their EEG data, leaving eight male and six female subjects (mean age = 23) for the analysis. All experimental procedures were approved by the Institutional Review Board of National Cheng Kung University Hospital.
Task and Procedures
This study used a within-subject design; thus the formal experimental session was divided into two blocks (proximal vs. distal) in counterbalanced order across all participants. Participants sat 48 cm from the monitor and, in the proximal session, placed both hands right next to the monitor with cushions below their elbows (Figure 1, solid lines). In the distal session, participants' hands were placed on their lap under the desk (Figure 1, dotted lines). In each session, participants performed a change detection task similar to that of the Tseng and Bridgeman study (2011, Experiment 2). The task consisted of 144 trials. In each trial, participants were instructed to memorize an array of 10 colored rectangles (16 × 13 mm) and compare it with a subsequent display to indicate whether any one rectangle had changed color. This is analogous to an experimental, computerized version of the "spot-the-difference" game, except that we used simple color squares and that the two displays were presented in succession. The locations of the squares were randomized on every trial. Unlike the original Tseng and Bridgeman (2011) study, which used colors of contrasting brightness, the present study used eight similarly dark colors (red, yellow, green, dark cyan, orange, blue, purple, gray) to control for the varying degrees of brightness present in the original study. Half of the trials contained a color change of one square and the other half did not. Each trial began with a 1000 ms fixation cross, followed sequentially by a 200 ms memory array, a 900 ms retention interval, and a 2200 ms test array. The entire trial consisted of one such sequence (i.e., one-shot change detection), for a total of 4300 ms per trial. There was no repetition of the memory or test arrays, nor could the participants switch back to previous displays and look again. During the test array presentation time (2200 ms), participants simply had to judge whether a color change was present or not by pressing one key for "change" and another for "no change" with their right index and middle fingers (all participants were right-hand dominant). Participants' EEG signals were recorded concurrently as they performed the change detection task.
This one-shot change detection task is designed in such a way that participants are primarily concerned with encoding information on the display during the memory array, and with retrieving the stored information for comparison during the test array, without much overlap between the two stages. Because of this well-segregated temporal structure, cognitive processes or neuroimaging signals during these two distinct time windows are often referred to as belonging to the encoding period (i.e., memory array) and the retrieval period (i.e., test array). Accordingly, the present study uses the same structure to segment the event-related EEG signals.
Participants' VSTM capacity was estimated using Cowan's K, which is computed from the VSTM memory array set size (S), which is 10 in this study, and participants' hit rate, or true positive rate (TPR), and false alarm rate, or false positive rate (FPR) (Cowan, 2001): K = S × (TPR − FPR). The hit rate, or TPR, was defined as the conditional probability that the participants responded "change-present" when a color change indeed took place. The false-alarm rate, or FPR, was defined as the conditional probability that the participants responded "change-present" when there was in fact no change in color. The difference between TPR and FPR is then multiplied by S, the set size, or memory load, of the VSTM stimuli (i.e., 10). We conducted a paired t-test to test whether hand proximity would improve participants' VSTM performance by comparing the mean K values between the proximal and distal conditions. To test for any left or right bias previously reported in the literature, where proximal hands were found to induce a bias toward the right side of the screen in right-handed participants (Le Bigot and Grosjean, 2012), participants' regional gain (or bias, B) was also computed relative to their hit rates from the distal condition: B = (Proximal true positive rate) / (Distal true positive rate). The display was divided into left, center, and right regions, and B was computed for each region to give a proportional estimate of the hand-driven bias toward a certain region over and above the distal baseline. The transformation of hit rates into the B ratio is needed because the center region is always the region with the highest hit rates, but such hit rates would only reflect participants' natural tendency to look at the middle of the screen, and would mask the lateral bias that may be introduced by hand proximity. Therefore, a ratio that takes the original hit rate from the distal condition as baseline is more suitable to reveal such a directional shift of attentional focus.
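A minimal sketch of these two behavioral measures is given below, assuming hit and false-alarm rates are already available per condition (and hit rates per screen region); the variable names and example numbers are illustrative, not the study's data.

```python
# Minimal sketch: Cowan's K and the regional bias ratio B described above.
# Input rates are illustrative placeholders, not the study's data.

SET_SIZE = 10  # number of colored rectangles in the memory array

def cowans_k(hit_rate: float, false_alarm_rate: float, set_size: int = SET_SIZE) -> float:
    """K = S * (hit rate - false alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

def regional_bias(proximal_hit_rate: float, distal_hit_rate: float) -> float:
    """B = proximal hit rate / distal hit rate, computed per screen region."""
    return proximal_hit_rate / distal_hit_rate

if __name__ == "__main__":
    print("K (proximal):", cowans_k(hit_rate=0.70, false_alarm_rate=0.34))
    print("K (distal):  ", cowans_k(hit_rate=0.68, false_alarm_rate=0.33))
    for region, prox, dist in [("left", 0.66, 0.62), ("center", 0.80, 0.78), ("right", 0.72, 0.60)]:
        print(f"B ({region}): {regional_bias(prox, dist):.2f}")
```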
Electroencephalography Recordings
Electroencephalogram activity was recorded with Ag/AgCl electrodes mounted in an elastic cap using a 32-electrode arrangement following the International 10-20 System, referenced to the left and right mastoids. Vertical and horizontal electro-oculograms were also recorded. Electrode impedances were kept below 10 kΩ for all electrodes. The online low-pass filter was set at 300 Hz. Data were recorded with Neuroscan software, with a sampling rate of 1000 Hz.
Event-Related Potential Data Analysis and Averaging
A digital low-pass filter of 30 Hz (24 dB/octave) was applied to the continuous EEG data in order to filter out high-frequency noise. The EEG data were then segmented into epochs starting 200 ms before the (memory or test) array onset and continuing until 800 ms after the same array onset. Baseline correction was performed over the pre-stimulus interval by subtracting the averaged pre-stimulus voltage from each EEG data point in the whole epoch. Epochs with artifacts exceeding ±100 µV and epochs with incorrect responses were rejected. Each trial was divided into two segmented epochs, comprising the encoding (i.e., memory array display) and retrieval (i.e., test array display) waveforms. ERP analysis was performed by averaging artifact-free trials based on stimulus type (i.e., proximal vs. distal conditions). In the encoding period, all artifact-free trials were averaged for the proximal and distal conditions. In the retrieval period, only true-positive trials were averaged for the proximal and distal conditions. In order to investigate the neurophysiological mechanism of hand proximity, a three-way repeated-measures ANOVA with the within-subject factors of hand proximity (proximal vs. distal), anterior/posterior electrodes (frontal vs. central vs. parietal regions), and laterality (left vs. middle vs. right) was conducted on the mean amplitude from 400 to 700 ms after the onset of the memory or test array in both proximal and distal sessions. Three scalp regions were chosen for the statistical analysis as the within-subject factor of anterior/posterior electrodes: frontal (F3, FZ, F4), central (C3, CZ, C4), and parietal (P3, PZ, P4). Another within-subject factor was laterality: left (F3, C3, P3), middle (FZ, CZ, PZ), and right (F4, C4, P4). The remaining within-subject factor was hand proximity (proximal vs. distal conditions). Greenhouse-Geisser correction was applied to repeated measures with more than one degree of freedom.
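The preprocessing steps just described can be expressed compactly with NumPy; the sketch below epochs a continuous recording around stimulus onsets, applies baseline correction, rejects epochs exceeding ±100 µV, and averages the survivors. The sampling rate, array names, and event times are assumptions for illustration, and filtering is omitted.

```python
# Minimal sketch of the epoching, baseline correction, artifact rejection,
# and averaging steps described above (filtering omitted for brevity).
# eeg has shape (n_channels, n_samples) in microvolts; fs = 1000 Hz is assumed.
import numpy as np

FS = 1000                      # sampling rate (Hz)
PRE, POST = 200, 800           # epoch window relative to array onset (ms)
ARTIFACT_UV = 100.0            # rejection threshold (±100 µV)

def epoch_and_average(eeg: np.ndarray, onsets_ms: list) -> np.ndarray:
    epochs = []
    for onset in onsets_ms:
        start = int((onset - PRE) * FS / 1000)
        stop = int((onset + POST) * FS / 1000)
        epoch = eeg[:, start:stop].astype(float)
        baseline = epoch[:, : int(PRE * FS / 1000)].mean(axis=1, keepdims=True)
        epoch -= baseline                         # baseline correction
        if np.abs(epoch).max() <= ARTIFACT_UV:    # reject epochs with large artifacts
            epochs.append(epoch)
    return np.mean(epochs, axis=0)                # per-condition ERP average

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_eeg = rng.normal(0, 10, size=(32, 60_000))   # 32 channels, 60 s of fake data
    erp = epoch_and_average(fake_eeg, onsets_ms=[5000, 10000, 15000])
    print(erp.shape)   # (32, 1000): channels x samples in the -200..800 ms window
```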
MSE Analysis
Complexity in EEG signals at different time scales was analyzed with MSE analysis (Costa et al., 2002, 2005; Goldberger et al., 2002). The electrode of interest here is P4, since activities in the posterior right parietal cortex have been repeatedly shown to be critically involved in the visuospatial change detection task employed here (Tseng et al., 2010, 2013; Juan et al., 2017). MSE analysis was performed from time scale 1 through 25, both for the encoding/retention stage (0-200 ms in the memory array through 0-900 ms in the retention interval) and the retrieval stage (0-1000 ms in the test array) of the change detection task (Figure 1B). This was done in two steps: first, the algorithm down-samples the EEG post-stimulus time series {x_1, …, x_i, …, x_N} for every trial in each condition. The down-sampling procedure used a coarse-graining procedure along different time scales: for time scale τ, the coarse-grained time series was obtained by averaging data points within non-overlapping windows of length τ. Thus, each element of a coarse-grained time series, denoted y_j^(τ), is calculated as y_j^(τ) = (1/τ) Σ x_i, where the sum runs over i = (j − 1)τ + 1 to jτ, for 1 ≤ j ≤ N/τ. We then compute the sample entropy for each coarse-grained time series. Sample entropy is defined as the negative natural logarithm of the conditional probability that a time series of length N/τ, having repeated itself within a tolerance r (similarity factor) for m points (pattern length), will also repeat itself for m + 1 points, without allowing self-matches. Note that the tolerance factor r is set as a percentage of the signal SD, and it is calculated for scale 1, then kept fixed for all the other scales. Due to the scarcity of MSE studies on human EEG signals, there is no gold standard or consensus on the best parameters for calculating sample entropy. However, some studies using clinical applications have suggested the parameters m = 1 or 2 and r = 0.1 to 0.25 to provide high validity for sample entropy in EEG signals (e.g., Escudero et al., 2006; Takahashi et al., 2009; Yang et al., 2013). With these suggested parameters we have also obtained good results in the past when analyzing EEG signals in the context of cognitive tasks similar to the current study (Wang et al., 2014; Huang et al., 2015). Specifically, in this study the pattern length, m, was set to 1 (i.e., one data point was used for pattern matching). The similarity criterion, r, was set to 0.30, meaning that data points were considered indistinguishable if the absolute amplitude difference between them was ≤30% of the time series standard deviation (Huang et al., 2015). Data preprocessing was performed using SPM8 (Statistical Parametric Mapping) and custom MATLAB (MathWorks) scripts. 1 Paired t-tests were conducted, scale by scale, to test the difference in sample entropy between the proximal and distal conditions among 32 channels, including Fp1, Fp2, F7, F3, Fz, F4, F8, FT7, FC3, FCz, FC4, FT8, T3, C3, Cz, C4, T4, TP7, CP3, CPz, CP4, TP8, A1, T5, P3, Pz, P4, T6, A2, O1, Oz, O2. Given the number of paired t-tests performed, p-values were adjusted for multiple comparisons by taking into account the false discovery rate (Tseng et al., 2010), with the significance level set at p < 0.05.
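The coarse-graining and sample-entropy steps described above can be sketched as follows; the parameters mirror the ones reported (m = 1, r = 0.3 of the scale-1 standard deviation), but the implementation is a plain illustration rather than the exact scripts used in the study.

```python
# Minimal sketch of multiscale entropy: coarse-grain the signal at scale tau,
# then compute sample entropy with pattern length m and tolerance r * SD(scale 1).
import numpy as np

def coarse_grain(x: np.ndarray, tau: int) -> np.ndarray:
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x: np.ndarray, m: int, r: float) -> float:
    """Negative log of the conditional probability that sequences matching
    for m points (within tolerance r) also match for m + 1 points."""
    def count_matches(length):
        templates = np.array([x[i : i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1 :] - templates[i]), axis=1)
            count += np.sum(dist <= r)  # self-matches excluded by starting at i + 1
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def mse(x: np.ndarray, scales=range(1, 26), m: int = 1, r_factor: float = 0.3):
    r = r_factor * np.std(x)          # tolerance fixed from the scale-1 SD
    return [sample_entropy(coarse_grain(x, tau), m, r) for tau in scales]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    signal = rng.normal(size=2000)    # placeholder for a single-trial EEG segment
    print(np.round(mse(signal, scales=range(1, 6)), 3))
```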
Behavioral Results
To estimate participants' VSTM capacity, Cowan's K was computed for both the proximal (mean: 3.63; range: 0.69-6.81) and distal (mean: 3.51; range: 0.97-5.83) conditions. There was no significant difference in K values between the distal and proximal conditions (t(13) = 0.519, p = 0.612) (Figure 2, left panel). Because the sample size was small, a non-parametric test was also conducted. A Wilcoxon signed-rank test showed that the effect of proximity did not elicit a statistically significant difference (Z = −0.668, p = 0.504). Indeed, the median K values of the proximal and distal conditions were 3.40 and 3.47, respectively.
To test for any lingering traces of the effect of hand proximity, we explored whether a difference in regional gains might persist. As in the Tseng and Bridgeman (2011) study, we broke down the hit rates into left, center, and right regions according to the location of change for both the proximal and the distal conditions. We then divided the proximal hit rates by the distal hit rates, which gives a proportional estimate of the hand-driven bias in relation to the distal baseline (Figure 2, right panel). These trends do show a stronger right bias (almost 20% more than the distal condition), a mild left bias, and a weak center enhancement in terms of hit rates, though they do not reach a statistically significant difference in a one-way repeated-measures ANOVA (between left, center, and right: F(2,26) = 0.360, p = 0.621). These trends are consistent with the observations from the Tseng and Bridgeman (2011) study, which in the absence of statistical tests also showed a rightward bias in the proximal condition.
For complexity, MSE analysis showed no difference between the proximal and distal conditions. Although there seemed to be a trend at lower time scales (Figure 4, lower panel), this trend did not reach statistical significance after FDR correction at any time scale (all p > 0.05, FDR corrected, as shown in Figure 4). This suggests that brain signal complexity during the encoding period was similar whether the hands were proximal or distal.
Retrieval Period
A repeated-measures 2 × 3 × 3 ANOVA was conducted on the mean amplitude from 400 to 700 ms after the onset of the test array in correct change-detection trials. The main effect of hand proximity was not significant [F(1,13) = 0.276, p = 0.608]. There was also no significant interaction between hand proximity and laterality [F(2,26) = 2.565, p = 0.097]. However, we observed a significant interaction between hand proximity and anterior/posterior electrodes [F(2,26) = 6.877, p = 0.015], as well as a significant three-way interaction [F(4,52) = 6.069, p = 0.005]. To further explore the three-way interaction, we first conducted two-way ANOVAs with hand proximity and laterality as within-subject factors in the frontal, central, and parietal regions. The interactions between hand proximity and laterality in these regions were not significant (Figure 5, channels Pz and P4). To our surprise, however, the significant differences at Pz and P4 were driven by a lower overall and peak amplitude in the proximal condition, instead of the other way around as one might suspect.

FIGURE 2 | Behavioral results. There was no difference in VSTM capacity estimates between the distal and proximal conditions (left panel). However, despite the absence of enhanced change detection performance with hand proximity, participants' attention was still biased to the right side of the display (right panel).

FIGURE 3 | The waveforms in the proximal (blue) and distal (orange) conditions during the encoding period. There was a marginally significant (p = 0.054) difference between the proximal and distal conditions, which is likely a result of a higher overall activity in the proximal condition (blue). No further post hoc tests were performed, since there was no significant interaction between hand proximity and other factors.

FIGURE 5 | The waveforms in the proximal (blue) and distal (orange) conditions during the retrieval period in all hit trials (i.e., correctly detecting a color change). There was a three-way interaction driven by the effect of hand proximity that was only significant at the right posterior sites (Pz and P4, bottom row). Note that the proximal condition actually has lower ERP amplitude and peak than the distal condition, supporting the idea of a magnocellular and hand-induced impairment that occurs specifically during the retrieval period. Bottom panel: the lower ERP amplitude introduced by hand proximity seems to be driven by the right parietal region, which is depicted on the right as a contrast between proximal and distal conditions, though we note that results at the electrode level should be interpreted with caution.
Multiscale entropy results showed that signal complexity from time scales 13-25 in the proximal condition was significantly lower than its counterpart from the distal condition, over EEG channels from mid-central to right parietal brain regions (Figure 6, lower panel). The effect of hand proximity was significant within right parietal regions. Therefore, we compared the effect of hand proximity on MSE at the P4 electrode. For scales 10 to 25, the MSE at P4 was higher in the distal condition than in the proximal condition during the retrieval period (Figure 6, upper panel). These MSE results were consistent with our ERP findings, though it should be noted that these are electrode-level findings, and thus localizations at the P4 location should be interpreted cautiously.

FIGURE 6 | Upper panel: difference in EEG-based multiscale entropy (m = 1, r = 0.3) at the P4 electrode in the proximal (blue) and distal (red) conditions during the retrieval period in all hit trials (i.e., correctly detecting a color change). The yellow region denotes a significant difference between the proximal and distal conditions (p < 0.05, FDR corrected). Lower panel: contrast of MSE between proximal and distal conditions among 32 channels. Colors represent t-values from paired-samples t-tests between the proximal and distal conditions. For each scale, the EEG electrodes enclosed by white circles denote that the difference in sample entropy between proximal and distal conditions at these electrodes was significant (p < 0.05, FDR corrected). (Because of the number of channels and scales available, it is possible that not all channels/scales are normally distributed. A cluster-based non-parametric permutation (CBnPP) test (Maris and Oostenveld, 2007; Groppe et al., 2011) was therefore also conducted to test the differences in multi-channel MSE between the two conditions during the retrieval stage. The contrast of MSE between proximal and distal conditions among the 32 channels in the CBnPP test is similar to the contrast in the paired t-test with p < 0.05 false discovery rate correction. The non-parametric test revealed the same results as in Figure 6, with complexity between the proximal and distal conditions diverging significantly from scale 10 onward at P4. In terms of topography, the non-parametric test revealed more significant channels, also in the central and parietal regions as in Figure 6. Because of the high degree of similarity between the parametric and non-parametric tests, and because the parametric tests appeared more conservative with fewer electrodes surviving FDR correction, we have kept the results from the parametric tests in the main "Results" section.)
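For the channel-by-scale statistics reported here, a minimal sketch of scale-wise paired t-tests with Benjamini-Hochberg false-discovery-rate correction is given below; the entropy arrays are random placeholders standing in for the per-subject MSE values.

```python
# Minimal sketch: paired t-tests across subjects for each (channel, scale),
# with Benjamini-Hochberg FDR correction over all tests. Data are placeholders.
import numpy as np
from scipy import stats

def bh_fdr(pvals: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Boolean mask of p-values significant under BH-FDR at level alpha."""
    p = pvals.ravel()
    order = np.argsort(p)
    thresh = alpha * np.arange(1, p.size + 1) / p.size
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros_like(p, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(pvals.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_subjects, n_channels, n_scales = 14, 32, 25
    proximal = rng.normal(1.0, 0.1, (n_subjects, n_channels, n_scales))
    distal = proximal + rng.normal(0.02, 0.05, proximal.shape)  # small simulated effect
    _, pvals = stats.ttest_rel(proximal, distal, axis=0)        # shape (channels, scales)
    sig = bh_fdr(pvals, alpha=0.05)
    print("significant (channel, scale) cells:", int(sig.sum()), "of", sig.size)
```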
DISCUSSION
The present study aimed to test whether hand proximity would alter neural processing at varying levels of complexity. To this end, we observed that at scale 10 and beyond, EEG signal complexity becomes significantly different between the hand proximal and distal conditions. To our knowledge, this is the first evidence documenting altered visual processing near the hands based on entropy and MSE analysis. This fills a gap left by traditional ERP analyses, since such average-based analyses cannot provide enough insight regarding differences in the low-frequency range at higher MSE time scales.
Effect of Hand Proximity on Neural Processing: Location and Timing
The findings from EEG data are twofold: location and timing. In terms of location, both EEG analyses suggest activities in the parietal region to be responsible for the effect of hand proximity. Although the right parietal cortex has long been hypothesized to be involved in the effect of hand proximity (e.g., Reed et al., 2006;Tseng et al., 2012a), the present study is able to provide electrophysiological evidence for the sensor locations underlying the near-hand effect. In ERP results, a significant difference between distal and proximal condition is only observed in the parietal sites ( Figure 5). MSE analysis is perhaps more specific, and shows that such altered neural processing in terms of signal complexity is also concentrated in the right parietal region (Figure 6, lower panel). Therefore, both analyses suggest a parietal involvement behind the effect of hand proximity on visual processing. This can perhaps be linked to the bimodal neuron account originally put forth by Reed et al. (2006), who proposed that the bimodal neurons whose receptive field move along the egocentric hand-centered coordinates (Graziano and Botvinick, 2002) may be a contributing factor for the effect of hand proximity on visual processing.
In terms of timing, there are two levels of temporal characteristics worth discussing: one at the cognitive stage level (encoding vs. retrieval stage), and the timing of EEG signals within a particular cognitive stage. At the cognitive stage level, both MSE and ERP results showed a pronounced proximaldistal difference in the retrieval period (Figures 5, 6), and less so in the encoding period (Figures 3, 4). Therefore, although hand positions are fixed throughout the entire trial and block (i.e., participants hands were near the display in both encoding and retrieval stages), the effect of hand proximity on EEG signals is not constant at every stage of cognitive processing that mediate VSTM. This suggests that hand proximity is not a simple additive factor to whatever cognitive process that is being carried out at the moment; rather, it interacts with the task (and its associated cognitive demand) at hand. Although counterintuitive, this observation is actually consistent with previous neuropsychological (Berryhill and Olson, 2008) and EEG (Hsu et al., 2014) studies that suggest an important role for the parietal cortex in VSTM retrieval. Specifically, boosting parietal activities prior to the experiment with external stimulation also alters parietal activities throughout the experimental session but mostly at the VSTM retrieval stage (Juan et al., 2017). In a similar vein, Reed et al. (2013) used a target detection task combined with hand proximity and found an alteration to EEG signals that is non-selective in the sensory window, and selective for task-specific targets in the later time window. Indeed, our previous MSE study comparing EEG signals between physically fit and unfit elderly adults while the participants performed a VSTM task also showed marked complexity differences in the memory retrieval period, but not the encoding period (Wang et al., 2014). Therefore, the attentional effect of hand proximity is not uniform at every stage of the task although hand positions were kept in place throughout the entire block. In this light, our findings here converge on the same conclusion, and suggest that hand proximity induces a task-dependent modulation of attentional processes during the memory-retrieval stage of VSTM processing.
Regarding finer-level temporal characteristics of EEG signals within the retrieval stage, we observed that the effect of hand proximity on neuronal processing is more evident in a later time window. In other words, we did not observe a change in early sensory components (e.g., N1, P1). This is first evident in the ERP analysis, where the distal-proximal difference is observable in the 400-700 ms window during the retrieval period, but not in the 100-200 ms sensory window. This 400-700 ms window after stimulus onset is too late for sensory processes and is mostly considered to reflect the sustained parietal contralateral negativity (SPCN), which is indicative of attentional orienting and memory retrieval in the context of a change detection task (Tseng et al., 2012b; Hsu et al., 2014). This timing and the attentional nature would be consistent with the larger P3 amplitude reported by Reed et al. (2013) using an orienting task. As such, these results strongly suggest altered attentional processing within the VSTM retrieval stage.
Lastly, it is worth noting that a hand-induced difference in encoding processes is also observed, although its marginal statistical significance prevents us from further exploration into its time windows and particular sites (Figure 3). However, it may be useful to point out that the ERP amplitude is higher in the proximal condition during the encoding period, which possibly suggests a stronger attentional engagement or encoding process. However, this attentional engagement, even if true, seems not to be very helpful, or else we would have observed an enhancement in behavioral performance. In the context of the present study, we have observed a shift of attentional bias toward the right side, but no enhanced performance in the proximal condition over the distal condition.
The Significance of Complexity in EEG Signals
The notable contribution of this study is the use of MSE as an index for characterizing the dynamic changes in EEG signals. Although the neural mechanism of such signal complexity is not yet known, it is assumed that biological systems and their related signals tend to reach an optimal level of complexity that is neither too high nor too low (Costa et al., 2008). For example, entropy measures can also be obtained from heartbeat signals, and in such cases atrial fibrillation tends to show higher signal entropy because its random high-frequency signals seem more complex. Over multiple time scales, however, these high-frequency signals get combined together and eventually show lower MSE than healthy heartbeats at scale 12 or above (Costa et al., 2002). As such, MSE has been suggested as an indicator of "meaningful structural richness" in the form of long-range correlations on multiple temporal (and probably spatial) scales, in the midst of underlying biological, chaotic deterministic dynamics (Costa et al., 2002). In a similar vein, researchers have also suggested complexity as a way of quantifying how the brain codes information within neural signals; therefore, higher signal complexity would be indicative of an information-rich biological system (Deco et al., 2010; Heisz et al., 2012).
In the field of cognitive neuroscience, the concept of "adaptability" has recently been associated with MSE in EEG signals and neural processing. This is based on the view that biological systems need to achieve rapid adaptability in the face of fast environmental changes, which presumably requires integrative multiscale functionality. In the world of cognitive neuroscience, this "environmental change" would be equivalent to the purposely designed task structure of the experiment and, most importantly, the cognitive (and neural) responses that it demands. For example, using a stop-signal task that is designed to induce inhibitory control mechanisms, studies have shown that people who are better able to suppress a motor response tend to show higher EEG complexity in MSE analysis. This is true in between-subject studies (Huang et al., 2015), as well as in within-subject studies where the same participants' EEG signals are measured pre- and post-intervention. In a VSTM study, Wang et al. (2014) also showed that physically active elderly adults had higher EEG signal complexity compared to their sedentary counterparts. Because most cognitive tasks (including the ones in our study) involve multiple fast-paced presentations of visual or auditory stimuli on the computer screen, and require an accurate and prompt response from the participants, it is plausible that such temporally and cognitively demanding interaction would require more information capacity and "structural richness" (Costa et al., 2002, 2005; Garrett et al., 2013). In EEG, this structural richness has been hypothesized to be achieved via coherence in neural oscillations and interregional communication (e.g., Buzsáki and Schomburg, 2015), where oscillatory coherence in lower frequencies is crucial for long-range interregional information transfer (Lisman and Buzsáki, 2008; Lisman and Jensen, 2013), which possibly is what high MSE time scales have preserved in our results. If this is true, it would make sense that the MSE effects in cognitive studies have tended to focus on higher time scales (Wang et al., 2014; Huang et al., 2015), whereas resting-state studies that involve no cognitive tasks have tended to focus on lower time scales (e.g., Yang et al., 2013). In the context of our observations, the proximal and distal conditions also showed a significant difference during the retrieval stage between scales 10 and 25 (Figure 6, top panel). Because high-frequency or random noise tends to get "washed out" at higher time scales, our observation here highlights the possibility that long-range, large temporal-scale neurophysiological dynamics may be a key factor underlying the effect of hand proximity. This may hint at low-frequency, long-range connectivity between parietal and other regions, which is worth investigating in future EEG studies. Nevertheless, the observed distal-proximal differences at higher time scales demonstrate the importance of multiscale analyses and highlight the value of MSE analysis.
Finally, although our ERP and MSE analyses both converged on parietal sites as the loci of the hand-proximity effect, MSE analysis of the sensor-based signals actually gave a slightly more accurate localization toward the right hemisphere. This right parietal localization is in line with previous fMRI and brain stimulation studies on VSTM, which suggest a higher involvement of the right parietal cortex in processing visuospatial information in VSTM. Therefore, it is possible that MSE can provide a better approximation of brain regions than was previously available with the traditional ERP approach. However, because the current results are based on electrode-level findings, precise localization based only on MSE results is not possible and would require further research and validation.
Theoretical Implications
So far, two mechanistic explanations have been proposed to account for the phenomenon and effect of hand proximity on visual processing. There is the bimodal neuron account, which stresses the role of bimodal neurons in the parietal and premotor cortices, whose receptive fields move with the hands (Reed et al., 2006; Tseng et al., 2014). There is also the magnocellular account, which emphasizes the enhanced processing of magnocellular information in the lateral geniculate nucleus due to hand proximity (Gozli et al., 2012; Taylor et al., 2015). In particular, the magnocellular account can offer a new interpretation of some previous findings. For example, Tseng and Bridgeman (2011) previously argued that hand proximity may have increased participants' attentional engagement with the visuospatial stimuli due to stronger bimodal neuron activities (induced by hand proximity; Reed et al., 2006). The magnocellular account, on the other hand, would argue that such an effect was perceptual (as opposed to attentional): Tseng and Bridgeman did not control for the luminance level of each stimulus on the display, and therefore some brighter colors were perceived better (and consequently remembered better) than other darker colors. This fits the color-insensitive but luminance-sensitive profile of the magnocellular pathway, and can also account for Tseng and Bridgeman's results that were previously interpreted as an attentional effect. In the context of the current study, our results seem to suggest that the two competing accounts are not mutually exclusive. This is because our behavioral data surprisingly did not show any near-hand advantage for VSTM performance, which in hindsight may have been due to the luminance control that we employed to reduce the varying degrees of brightness in the VSTM stimuli of the original Tseng and Bridgeman (2011) study. This lack of behavioral effect under better luminance control, however, would be consistent with and predicted by the magnocellular account (Taylor et al., 2015). Interestingly, despite the lack of an enhancement effect in color change detection, hand proximity still biased participants' attention to the locations near the hands (Figure 2, right panel), which is highly similar to the regional gain patterns observed by Tseng and Bridgeman (2011) and the right bias reported by Le Bigot and Grosjean (2012). Indeed, as previously mentioned, we also observe strong parietal activities induced by hand proximity in both ERP and MSE analyses, which are temporally too late for early visual processing and are more compatible with the bimodal attentional account. These consistent observations of a biased attentional shift to the right side (i.e., the dominant hand side) have been attributed to the bimodal neurons that respond both to visual and tactile stimuli, which bias one's attention to the "action space" (i.e., usually where the dominant hand is). Neurophysiological support comes from findings that the monkey's right parietal cortex also shows stronger activities toward its free-moving limbs, and that such activity can even transfer to tools held by the hand once the tool has been well practiced in use (and thus well incorporated into one's body schema; Graziano and Botvinick, 2002; Reed et al., 2006; Tseng et al., 2012a). Our results are also consistent with this account.
Taken together, our results seem to suggest a dissociable mechanism between altered magnocellular processing and attentional bias near the hands, where the absence of the brightness-driven magnocellular enhancement does not hinder the occurrence of attentional bias toward the dominant hand. That is, the two systems can operate independently: enhanced magnocellular processing (though absent here due to luminance control) is activated by hand proximity, and such information then gains biased attentional processing in the 400-700 ms time window. If true, this would imply that the attentional and magnocellular accounts may not be mutually exclusive, and such compatibility between the two accounts would explain why both have received ample empirical support in the past decade (Reed et al., 2013; Tseng et al., 2014; Taylor et al., 2015; Thomas, 2015; Thomas and Sunny, 2017a,b). However, this compatibility between the two accounts remains, for now, a speculation based on the current results, and would need experiments specifically designed to test its plausibility. Nevertheless, the present study demonstrates the utility of MSE analysis of EEG data in the context of hand proximity effects, which opens up many new questions for future investigations.
Future studies should look into the biological underpinnings of the low-frequency, long-range connectivity that is often observed in MSE at high time scales, as well as the possible spatial selectivity of MSE over ERP approaches, when applying MSE to EEG analysis in the cognitive domain.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because participants did not consent to the sharing of their data. Requests to access the datasets should be directed to tsengphilip@gmail.com.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Institutional Review Board of National Cheng Kung University Hospital. The patients/participants provided their written informed consent to participate in this study.
Epidemiological characteristics of leukemia in China, 2005–2017: a log-linear regression and age-period-cohort analysis
Background Leukemia is a threat to human health, and there are relatively few studies on the incidence, mortality and disease burden of leukemia in China. This study aimed to analyze the incidence and mortality rates of leukemia in China from 2005 to 2017 and estimate their age-period-cohort effects, which is an important prerequisite for effective prevention and control of leukemia. Methods Leukemia incidence and mortality data from 2005 to 2017 were collected from the Chinese Cancer Registry Annual Report. Joinpoint regression models were used to estimate the average annual percentage change (AAPC) and annual percentage change (APC) of the time trends. An age-period-cohort model was constructed to analyze the effects of age, period and cohort. Results The age-standardized incidence rate of leukemia was 4.54/100,000 from 2005 to 2017, showing an increasing trend with an AAPC of 1.9% (95% CI: 1.3%, 2.5%). The age-standardized mortality rate was 2.91/100,000, showing an increasing trend from 2005 to 2012 with an APC of 2.1% (95% CI: 0.4%, 3.9%) and then a decreasing trend from 2012 to 2017 with an APC of -2.5% (95% CI: -5.3%, 0.3%). The age-standardized incidence (mortality) rates of leukemia were not only higher in males than in females, but also increased more rapidly. The incidence of leukemia in rural areas was lower than in urban areas, but the AAPC was 2.2 times that of urban areas. Children aged 0–4 years were at higher risk of leukemia. The risk of leukemia incidence and mortality increased with age. The period effect on leukemia mortality risk showed a decreasing trend, while the cohort effect showed an increasing and then decreasing trend, with the turning point at the 1955–1959 birth cohort. Conclusions The age-standardized incidence rate of leukemia in China showed an increasing trend from 2005 to 2017, while the age-standardized mortality rate first increased and then decreased, with 2012 as the turning point. Differences existed by gender and region. The risk of leukemia incidence and mortality increased with age. The risk of mortality due to leukemia gradually decreased from 2005 to 2017. Leukemia remains a public health problem that requires continuous attention.
Introduction
Malignant neoplasms are the leading cause of death from disease in developed countries and the second leading cause in developing countries worldwide [1,2]. According to the latest report from the International Agency for Research on Cancer, there were an estimated 19.3 million new cancer cases and nearly 10 million cancer deaths worldwide in 2020 [3]. Cancer seriously affects and threatens human health and has become one of the most prominent major public health problems worldwide [4].
Leukemia is a common malignancy of the hematologic system, a category of malignant clonal diseases of hematopoietic stem cells characterized by uncontrolled malignant proliferation of mature leukocytes and their precursors in the blood and bone marrow [5]. It has a poor prognosis, with the 5-year survival rate of leukemia patients still ≤ 50% [6]. The Cancer Statistics Report showed that the number of new leukemia cases globally reached 475,000 in 2020, an 84.82% increase from 2000, and the number of deaths from leukemia globally reached 312,000 in 2020, a 60.0% increase from 2000 [3,7]. China accounted for 62,000 deaths due to leukemia in 2020, 19.87% of the global leukemia deaths [1]. Leukemia is extremely expensive to treat and difficult to cure, causing great hardship for patients and families [8].
Leukemia is covered by the United Nations' third Sustainable Development Goal, which aims to cut premature mortality from non-communicable diseases by one third by 2030 [9]. Tracking changes in the burden of leukemia could provide relevant data for better policy development. Considering the relatively few studies on leukemia incidence, mortality and disease burden in China, this study collected national, gender-specific and region-specific leukemia incidence and mortality data from the Chinese Cancer Registry Annual Report from 2008 to 2020, analyzed the trends of leukemia incidence and mortality using Joinpoint regression, and analyzed the effects of age, period, and cohort on the risk of leukemia incidence and mortality using an age-period-cohort model, in order to produce basic data for the control of leukemia in China and to provide a scientific basis for leukemia prevention and treatment.
The raw data were collected from cancer registries in 31 provinces (autonomous regions and municipalities) and the Xinjiang Production and Construction Corps. The data were reviewed, evaluated, collated and analyzed according to the requirements of the Chinese guideline for cancer registration and the standards of the International Agency for Research on Cancer/International Association of Cancer Registries [25].
Joinpoint regression model
Trends in leukemia incidence and mortality were analyzed with Joinpoint regression models to calculate the average annual percentage change (AAPC) and annual percentage change (APC) with their 95% confidence intervals (95% CI). The model creates a segmented regression based on the temporal characteristics of the disease distribution, divides the study period into different intervals through multiple connection points (joinpoints), and fits and optimizes the trend within each interval to characterize how the disease changes over time [26,27]. The Monte Carlo permutation test was used to determine the number of connection points, the location of each connection point and the corresponding p-value, with a test level of α = 0.05 (two-sided test). Joinpoint regression models can be linear or log-linear; the log-linear model is generally chosen when analyzing population-based trends in cancer incidence and mortality [28]:

E[y | x] = e^{β_0 + β_1 x + δ_1 (x − τ_1)^+ + ⋯ + δ_k (x − τ_k)^+},

where e is the natural base, k indicates the number of turning points, τ_k indicates the unknown turning points, β_0 is the constant term, β_1 is the regression coefficient, and δ_k is the regression coefficient of the segment function in segment k.

The APC of each segment is calculated as APC = (e^{β_1} − 1) × 100, where β_1 is the regression coefficient of that segment.

The AAPC over the whole study period is the width-weighted average of the segment slopes, AAPC = (e^{Σ_i w_i β_i / Σ_i w_i} − 1) × 100, where w_i is the width of the interval span (i.e., the number of years included in the interval) of each segmentation function and β_i is the regression coefficient corresponding to each interval.
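To make the APC/AAPC definitions concrete, here is a minimal Python sketch under stated assumptions: the rates and the single joinpoint at 2012 are invented for illustration (they are not the registry data, and the actual estimates in this study came from the NCI Joinpoint software with permutation testing).

```python
import numpy as np

# Illustrative age-standardized rates per 100,000 for 2005-2017 (made-up numbers).
years = np.arange(2005, 2018)
rates = np.array([3.9, 4.0, 4.1, 4.2, 4.3, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2])

def segment_slope(yrs, r):
    """OLS slope of ln(rate) on calendar year within one segment (log-linear model)."""
    return np.polyfit(yrs, np.log(r), 1)[0]

def apc(beta1):
    """Annual percentage change of one segment: (e^beta1 - 1) * 100."""
    return (np.exp(beta1) - 1.0) * 100.0

def aapc(betas, widths):
    """Average annual percentage change: segment slopes weighted by segment widths."""
    betas, widths = np.asarray(betas), np.asarray(widths)
    return (np.exp(np.sum(widths * betas) / np.sum(widths)) - 1.0) * 100.0

# Suppose the permutation test placed one joinpoint at 2012 (hypothetical here).
seg1, seg2 = years <= 2012, years >= 2012
b1 = segment_slope(years[seg1], rates[seg1])
b2 = segment_slope(years[seg2], rates[seg2])
print(f"APC 2005-2012: {apc(b1):.1f}%   APC 2012-2017: {apc(b2):.1f}%")
print(f"AAPC 2005-2017: {aapc([b1, b2], [7, 5]):.1f}%")
```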
Age-period-cohort model
To analyze the effects of age, period, and cohort on leukemia incidence and mortality, 5-year age bands were used, and ages 0–89 years were divided into 18 age groups. The age-period-cohort model was specified as a Poisson log-linear model and solved with the intrinsic estimator, and goodness of fit was evaluated with the Akaike information criterion. The model can be written as

ln(Y) = μ + α_i + β_j + γ_k + ε,

where μ is the intercept, α, β and γ are the age, period and cohort effects, respectively, and ε is the residual.

To avoid widening the birth cohorts and reducing the temporal precision with which incidence and mortality risks are described, age-specific data from 2005, 2010, and 2015 were used to fit the age-period-cohort model [29].
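A well-known difficulty with this model is that age, period, and cohort are linearly dependent (cohort = period − age), so the effects are not identifiable from an ordinary Poisson regression without an extra constraint; that is what the intrinsic estimator supplies. The sketch below, which uses only index bookkeeping and no real data, makes the structural collinearity visible (the 18 age groups and three periods mirror the setup described above; everything else is illustrative).

```python
import numpy as np

n_age, n_period = 18, 3                      # 18 five-year age groups; periods 2005, 2010, 2015
ages = np.repeat(np.arange(n_age), n_period)
periods = np.tile(np.arange(n_period), n_age)
cohorts = periods - ages + (n_age - 1)       # birth-cohort index, 0 .. n_age + n_period - 2

def dummies(idx, n_levels):
    """Indicator (one-hot) columns for a categorical index."""
    d = np.zeros((idx.size, n_levels))
    d[np.arange(idx.size), idx] = 1.0
    return d

# Design matrix of the log-linear model: intercept + age + period + cohort indicators.
X = np.hstack([np.ones((ages.size, 1)),
               dummies(ages, n_age),
               dummies(periods, n_period),
               dummies(cohorts, n_age + n_period - 1)])

# Because cohort is determined by age and period, X is rank deficient:
# the model ln(Y) = mu + alpha_age + beta_period + gamma_cohort has no unique
# solution unless an additional constraint (such as the intrinsic estimator's
# choice of solution) is imposed.
print("columns:", X.shape[1], "  rank:", np.linalg.matrix_rank(X))
```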
Statistical analysis
Joinpoint regression analysis was performed using the Joinpoint Regression Program version 4.9.1.0 developed by the National Cancer Institute. The age-period-cohort model was fitted using the online web analysis tool developed by Rosenberg et al. [30]. A statistically significant difference was considered at P < 0.05.
Incidence of leukemia and its trends in China, 2005-2017
From 2005 to 2017, the total number of new leukemia cases in the Chinese cancer registries was 144,997, including 82,499 cases in males (56.90%) and 62,498 cases in females (43.10%). A total of 83,343 cases (57.48%) occurred in urban areas and 61,654 cases (42.52%) in rural areas. The overall incidence rate was 5.92/100,000 (6.65/100,000 in males and 5.18/100,000 in females; 6.29/100,000 in urban and 5.49/100,000 in rural areas) and the ASIR was 4.54/100,000 (5.14/100,000 in males and 3.95/100,000 in females; 4.75/100,000 in urban and 4.18/100,000 in rural areas). Both the incidence rate and the ASIR were higher in males and in urban areas than in females and rural areas, respectively (Table 1).
Mortality of leukemia and its trends in China, 2005-2017
From 2005 to 2017, the total number of leukemia deaths in the Chinese Cancer Registry was 96,454, of which 55,944 (58.00%) were males and 40,510 (42.00%) were females. The total number of deaths was 55,144 (57.17%) in urban areas and 41,310 (42.83%) in rural areas. The overall mortality rate was 3.94/100,000 (4.51/100,000 in males and 3.35/100,000 in females; 4.09/100,000 in urban and 3.68/100,000 in rural areas) and the ASMR was 2.91/100,000 (3.40/100,000 in males and 2.44/100,000 in females; 2.89/100,000 in urban and 2.96/100,000 in rural areas). Both the mortality rate and the ASMR were higher in males than in females. The mortality rate in urban areas was higher than in rural areas, while the ASMR was lower than in rural areas (Table 1).
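The contrast above between the crude rates and the ASIR/ASMR comes from direct age standardization. The following minimal sketch shows the calculation ASR = Σ(rate_a · w_a)/Σ w_a with entirely invented age bands, counts, and standard-population weights; it is not the registry data and makes no assumption about which standard population the annual reports used.

```python
# Direct age standardization: ASR = sum_a(rate_a * w_a) / sum_a(w_a),
# where rate_a is the age-specific rate and w_a the standard population weight.
# All numbers below are invented for illustration only.
cases      = [40, 15, 10, 25, 60, 180]                   # cases per age band
population = [9.0e5, 1.1e6, 1.2e6, 1.0e6, 8.0e5, 5.0e5]  # person-years per age band
std_weight = [12000, 10000, 9000, 8000, 6000, 5000]      # standard population weights

rates = [c / p * 1e5 for c, p in zip(cases, population)]           # per 100,000
asr = sum(r * w for r, w in zip(rates, std_weight)) / sum(std_weight)
crude = sum(cases) / sum(population) * 1e5

print(f"crude rate: {crude:.2f} per 100,000")
print(f"age-standardized rate: {asr:.2f} per 100,000")
```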
Age-period-cohort model
The trends in the age effects on incidence were generally consistent at the national level and among males, females and urban areas. Among those aged 0-49 years, the risk of developing leukemia increased slowly with age. Among those aged 50-79 years, the risk of developing leukemia increased rapidly with age, peaking at 75-79 years. Among those under 18 years, the risk of developing the disease was higher in children aged 0-4 years. In the rural population aged 0-79 years, the age effects remained consistent with the national pattern, but the risk of leukemia still tended to increase after the age of 80 years. Overall, the contribution of age to the risk of developing leukemia increased gradually with age. In the period effect, the national risk of leukemia incidence gradually increased over time. The cohort effect for the risk of leukemia peaked in 1990-1994 with a relative risk (RR) of 1.37 (95% CI: 0.99, 1.69), declined slightly in 1995-1999 with an RR of 1.30 (95% CI: 1.08, 1.73), and gradually increased thereafter (Figs. 2A and 3). The trends in the age effects on mortality were basically the same at the national level and for males, females, urban and rural areas. In the 0-49 age group, the risk of mortality from leukemia was highest in children aged 0-4 years. Among those aged 50-79 years, the risk of leukemia mortality increased rapidly with age (Figs. 2B and 4).
Discussion
This study aimed to analyze the epidemiological characteristics of leukemia in China from 2005 to 2017 and estimate their age-period-cohort effects. The results showed that the ASIR of leukemia from 2005 to 2017 had an overall increasing trend, while the ASMR increased and then decreased, with 2012 as the dividing line. The ASIRs and ASMRs of leukemia in males were not only higher than those in females but also increased faster. The incidence of leukemia was lower in rural than in urban areas, but the AAPC was 2.2 times that of urban areas. The risk of leukemia incidence and mortality was higher in children aged 0-4 years. The cohort effect results showed that those born in 2015-2019 had the highest risk of developing leukemia.
The leukemia incidence in China was generally at a moderate level compared with other countries and regions worldwide [31]. The ASIR in China in 2017 was 4.83/100,000, which is lower than the estimated global leukemia incidence rate of 8.32/100,000, the 7.10/100,000 in Asia and the 8.62/100,000 in the United States based on the latest GLOBOCAN 2020 data, and higher than the 3.98/100,000 in Africa and 3.12/100,000 in India. The ASMR in China was 2.65/100,000 in 2017, lower than the global rate of 4.32/100,000, the 3.68/100,000 in Asia and the 5.22/100,000 in the United States, and higher than the 2.44/100,000 in Africa and 2.43/100,000 in India [32]. This may be related to living behaviors in different countries/regions, ethnicity, geographical location, and the input of medical resources [33]. For example, leukemia incidence and mortality rates are relatively higher in the United States than in China. In terms of dietary structure, the United States is typically known for its high consumption of processed foods and high-calorie meals, including foods high in fat, sugar, and salt [34]. The popularity of fast-food restaurants and chain restaurants in the United States has also made it easier for people to access these high-calorie foods [35,36]. This dietary pattern is significantly related to the occurrence of obesity [37]. According to the latest data from the United States Centers for Disease Control and Prevention, the adult obesity rate in the United States was about 42.4% [38], significantly higher than in other countries or regions. A previous study showed that a high body mass index (BMI) increased the risk of leukemia, especially chronic lymphocytic leukemia and myeloid leukemia [39]. High BMI significantly increases the prevalence of leukemia, with each 5 kg/m2 increase in BMI associated with a 13% increased risk of leukemia [40]. High BMI may affect the occurrence and prognosis of leukemia through a variety of pathways, such as causing a chronic low-grade inflammatory state that can impair immune system function, thereby increasing the incidence and mortality of leukemia. In addition, the high incidence of childhood leukemia in China may also be related to obesity. A study in the Lancet pointed out that China is the fastest growing country in the world in terms of fast food consumption, and its total consumption is catching up with Western countries [41]. The entry and continuing popularity of Western fast food in China has led to a significant increase in children's weight, because these fast foods, high in sugar and fat, are particularly popular among children [42]. This is consistent with another cross-sectional study conducted in China, which found that children with higher BMI consumed fast food more frequently [43]. The total ASIR of leukemia in China showed an increasing trend from 2005 to 2017. Improvements in medical testing and diagnostic techniques, such as the use of genetic testing and deep sequencing, and the standardization of diagnostic criteria have allowed leukemia to be diagnosed more accurately. Moreover, changes in lifestyle and diet, together with the progress of urbanization and industrialization and increased exposure to environmental pollution and other risk factors, have contributed to the rise in leukemia incidence [5,44]. As the world's largest consumer of pure benzene [45], China produces and uses large amounts of benzene, which increases population exposure. Numerous studies have shown that benzene exposure increases the risk of leukemia
[46,47]. Biologically, this relationship also holds up: animal and in vitro experiments have found that the target organ of benzene is the bone marrow and that its toxic metabolites can attack hematopoietic stem cells in a variety of ways, causing hematologic toxicity [48]. Benzene can also react with DNA, causing DNA damage and chromosomal abnormalities, including genetic mutations and chromosomal aberrations, which can lead to leukemia [49].
The results of this study showed that leukemia incidence and mortality rates were higher in males than in females; this gender difference in the leukemia disease burden is consistent with the findings of previous studies [50,51]. It may be related to greater exposure of males to risk factors such as environmental pollution, smoking, and unhealthy lifestyles. Studies in countries such as the United States and Canada have shown that residents living in industrial areas have a higher risk of developing leukemia than those living away from industrial areas [52,53]. Some studies have shown that chemicals in cigarettes may damage stem cells and blood-forming cells in the bone marrow, thus increasing the risk of leukemia; in addition, smoking may also increase the risk of developing leukemia by interfering with the normal function of the immune system [54,55]. Among the four risk factors for leukemia included in the Global Burden of Disease 2019 study, smoking causes the most serious leukemia disease burden in China [5]. According to the Chinese Center for Disease Control and Prevention Adult Tobacco Survey report, the current smoking rate in China is 52.1% in males and 2.7% in females [56], and the much higher smoking rate in males than in females is also an important factor [57]. This suggests that males are one of the key populations for leukemia prevention and control. The government should regularly conduct health education for males, strengthen tobacco control, and improve occupational work environments. For example, there is still a gap between the current implementation of tobacco control measures in China and the requirements of the WHO's MPOWER measures [58]. Therefore, policies such as national legislation banning smoking in public places, inclusion of smoking cessation medications in medical insurance, a total ban on tobacco advertising, promotion and sponsorship, and an increase in the tax rate and price of tobacco products (with a view to achieving a cigarette retail tax rate greater than or equal to 75%, as recommended by WHO) should be introduced and improved as soon as possible [59].
The incidence of leukemia in rural areas was lower than that in urban areas, but the AAPC was 2.2 times that of urban areas. Accelerated urbanization and the dramatic increase in population have increased urban environmental pollution (e.g., air pollution), which may explain the higher incidence in urban areas, because air pollution is a risk factor for leukemia development [60]. As shown by the national ecological quality profile in 2020, the substandard rate of urban air pollution in China was as high as 40.1% [61]. However, in recent years, large-scale infrastructure construction and the gradual implementation of industrialization policies in rural areas have led to accelerated industrialization and a gradual increase in air pollution. According to China's National Development and Reform Commission, development zones established on the basis of administrative units at the county level and below have become a new element of rural industrial development. By the end of 2002, there were 6,866 such development zones in China, with a planned area of 38,600 square kilometers, while the area of 75 urban built-up areas by the end of 2020 was only 30,500 square kilometers [62]. China's rural areas currently face increasingly serious air pollution problems. With industrialization, the rural economy continues to develop, and different forms of modern equipment are applied to rural production and life, leading to increasing amounts of automobile exhaust and industrial waste gas, which accelerate the decline in air quality [63]. These factors may explain the rapid increase in the incidence of leukemia in rural areas.
The risk of leukemia incidence and mortality was highest in children aged 0-4 years among those under 18 years. The cohort effect results also showed that those born in 2015-2019 had the highest risk of developing leukemia. Childhood leukemia is the most common malignancy in childhood and should be of concern to the public, with emphasis on prevention. Prevention based on environmental factors is a very important part of the prevention and treatment of childhood leukemia. A number of studies have confirmed that the yearly increase in the incidence of childhood leukemia is associated with environmental exposures, including radiation, air pollution, chemical exposure, traffic fumes, tobacco, etc. [64]. More than 90% of the world's population breathes dangerously high levels of air pollutants. A meta-review of the relationship between leukemia and air pollution found that traffic-related air pollution was associated with an excess risk of childhood leukemia. The dose-response analysis indicated that the highest levels of traffic indicators near the child's residence, traffic density and NO2 may be associated with an excess risk of childhood leukemia [65]. Another meta-review of maternal exposure to air pollution and the risk of leukemia at different times showed that exposure to benzene in the third trimester, as well as exposure to NO2 in the second trimester and over the entire pregnancy, could also increase the risk of leukemia [48]. This highlights the need to develop policies aimed at reducing exposure to air pollution and protecting special populations to further reduce the risks due to air pollutants. For example, when planning homes, schools or other facilities for children, policy makers should consider their distance from main roads, choose green building materials and furniture, such as solid wood flooring and formaldehyde-free panels, and use low-volatility, environmentally friendly paints to reduce volatile harmful substances. Moreover, not only children but also pregnant women should strengthen pregnancy health care and raise awareness of maternal health protection to reduce exposure to relevant risk factors. In addition to environmental pollution and lifestyle factors, studies have shown a correlation between leukemia and genetic factors [66]. A study from Switzerland also found that a family history of cancer was a risk factor for childhood leukemia: when an adult family member is diagnosed with chronic lymphocytic leukemia, the risk of acute lymphoblastic leukemia in children increases 1.40-fold [67]. This suggests that people with a family history of related cancers should undergo more frequent screening, monitoring and diagnostic workups under the guidance of a physician.
The period effect for leukemia showed a trend of increasing incidence and decreasing mortality, and the cohort effect showed a high risk of incidence but a relatively low risk of mortality in the late-birth cohorts. These changes may be due to advances in medical technology and improved treatments for leukemia in China, which have made leukemia treatment more effective. At present, chemotherapy, hematopoietic stem cell transplantation and targeted therapy are effective treatments for leukemia in China. With research on biological targets for leukemia, the "Shanghai program", the "Beijing program" and chimeric antigen receptor T-cell therapy have been widely applied, and leukemia care in China has entered the era of precision targeted therapy. Leukemia patients in China are expected to have a 5-year survival rate of 60-90% [68]. It is evident that advances in leukemia treatment have brought it into the era of chronic disease management, and lethality has abated. However, the incidence is still rising due to increasing exposure to leukemia risk factors, consistent with another study [60]. This is consistent with the cohort effect in this study, in which the late birth cohorts (born during 2005-2017) reflect the changes described above.
This study has limitations. The leukemia data were obtained from the Chinese Cancer Registry Annual Reports of 2008-2020. The original data come from the national cancer registries rather than from random sampling, so their representativeness and the extrapolation of the results to the whole population are limited. The study is also limited in terms of timeliness, as there is generally a three-year delay in the latest cancer registry data, which constrains the representativeness and comparability of the data. Finally, the subtypes of leukemia were not studied because of insufficient available data.
Conclusion
The ASIR of leukemia in China showed an overall increasing trend from 2005 to 2017, while the ASMR increased and then decreased, with significant gender and regional differences. Among those under 18 years of age, children aged 0-4 years are more likely to develop leukemia and face a higher risk of mortality. These findings suggest that the prevention and treatment of leukemia should continue to receive attention and that health education should be strengthened. Identification and screening of high-risk groups should also be reinforced according to the differences in gender and age distribution, in order to reduce the burden of leukemia in the population, improve healthy living standards, and provide basic data and scientific support for the prevention and treatment of leukemia in China.
Table 2. Trend in incidence and mortality of leukemia in China, 2005-2017 (%).
Public food procurement as a tool for more sustainable and high quality food offer in public institutions
Abstract As children spend almost a third of their day in kindergarten or school and consume a large part of their daily energy intake in educational environments, those environments are important factors in the development of healthy eating habits at one hand or unhealthy ones with possible consequence of childhood obesity on the other. Strong evidence of the importance of access to healthy and balanced nutrition in schools for children's health is available, and a nutritionally regulated school environment is associated with a lower risk of childhood obesity, which can also be facilitated by a transparent and quality-oriented procurement system. JRC has estimated the value of the European social food service market as sizeable in both reach and force of €82 billion. Progressive and targeted public procurement of food for health can reward food business operators who provide nutritionally balanced meals. According to the EU AP on childhood obesity 2014-2020, action such as use of nutritional criteria in food service procurement should be required in schools and kindergartens. JA Best-ReMaP situation analyses showed that current implementation of public food procurement across EU Member States differs substantially. Project activities aimed to support the establishment of the intersectoral working mechanisms (groups) for the public procurement of foods in participating MSs, to increase the understanding, knowledge and skills regarding public procurement of food/food products in public health and other selected public institutions, to pilot Slovene best practice tool for PFP as optional approach for implementation of the sustainable and high quality PFP procedures and to recommend further institutionalised implementation of the public procurement procedure for foods, based on quality standards, in the Member States and propose minimum criteria for sustainable public procurement, in line with the developments at the European Commission level.
Obesogenic environments stimulate the consumption of excess calories through HFSS foods.Therefore, activities aimed at reformulating foods into healthier alternatives are the first crucial step in improving the food supply for children.Harmonised EU monitoring of the reformulation actions was addressed as one of the key approaches in creating healthier options for children and their caregivers.Supported by the EU AP on childhood obesity 2014-2020, access to an improved and healthier food offer in supermarkets should be made easier.Currently, only a few European countries are monitoring processed food supply at the brand level, and this deprives European and national policy makers of an objective description of the situation to define and assess nutrition policies.JA Best-ReMaP, building on the JANPA and EUREMO outcomes, aimed to fill in the reformulation monitoring gap and to add to the improved access to high quality foods for the European citizens, especially for children.To do so, in 19 European countries, JA Best-ReMaP implemented a standardised European monitoring system for processed food reformulation.Such a tool, at a country level, enables to monitor food offerings and nutritional content of individual monitored processed foods, as well as to identify best formulation and room for reformulation of the food produced, sold, and consumed.Complex steps were taken in the project implementation, including the identification of the priority processed food groups for a European monitoring of the food supply, exploration of the new technologies and new sources of data capacities for nutrition data collection, development of the knowledge to conduct and analyse own data collection in the participating EU MSs, explorative development of an open European branded foods database, and first attempt of the European analysis of the trends of the nutritional quality of processed food and their impacts.
There is unequivocal evidence that childhood obesity is influenced by the marketing of foods and non-alcoholic beverages high in saturated fat, salt and/or free sugars (HFSS). Furthermore, there is convincing evidence that HFSS marketing in traditional media has detrimental effects on children's eating and eating-related behaviour, and HFSS marketing in digital media has similar effects, which are stronger in vulnerable groups. According to the WHO Commission on Ending Childhood Obesity, efforts must be made to ensure that children everywhere are protected against the impact of marketing and given the opportunity to grow and develop in an enabling food environment - one that fosters and encourages healthy dietary choices and promotes the maintenance of healthy weight. Children are being continuously exposed to powerful food marketing, increasingly in the digital environment. Best-ReMaP partners explored how to better monitor food marketing and advertising activities, in particular promotional activities targeted at and/or affecting children, on the one hand, and how marketing activities could be better regulated, on the other. The main actions of the Best-ReMaP consortium in this field were, among others: (1) to identify, develop and share best policy practices to reduce the exposure of children to the (digital) marketing of unhealthy foods, (2) to develop coordinated and comprehensive protocols and tools to monitor the extent and nature of children's exposure to (digital) marketing, including the upgraded WHO nutrient profile model, and (3) to support Member States with the implementation of the new EU rules on audiovisual media services.
Abstract citation ID: ckad160.436
Public food procurement as a tool for more sustainable and high quality food offer in public institutions. Mojca Gabrijelčič Blenkuš; M Gabrijelčič 1, N Fras 1, P Ožbolt 1. 1 NIJZ, Ljubljana, Slovenia. Contact: mojca.gabrijelcic@nijz.si
Abstract citation ID: ckad160.437 Sustainability elements of the Best-ReMaP Joint Action
Valentina De Cosmi; M Silano 1, S Tonello 2, D Sienkiewicz 2, M Gabrijelčič 3, M Robnik Levart 3, V De Cosmi 4. 1 Istituto Superiore di Sanità, Rome, Italy; 2 EuroHealthNet, Brussels, Belgium; 3 NIJZ, Ljubljana, Slovenia; 4 Dipartimento Sicurezza Alimentare, Nutrizione, Istituto Superiore di Sanità, Rome, Italy. Contact: valentina.decosmi@iss.it To enable sustainable and effective uptake of the recommended actions, key policy-relevant messages are conveyed: (1) enable the implementation of processed food monitoring and reformulation by ensuring personnel and resources for institutionalized actions at governmental level, and re-establish platforms at the European level, like the former HLG on nutrition and PA, to facilitate knowledge exchange; (2) reduce the marketing of unhealthy foods to children via the adoption of government-led regulatory approaches; policies should cover the reduction, preferably the removal, of marketing of unhealthy foods across a broad set of marketing types and techniques; raise the age threshold for marketing regulations to 18 years; define the food and drink products to be restricted from marketing through a strict government-led NPM, based on the WHO Europe NPM, as outlined in the revised 2018 AVMSD; and enforce a national food marketing code through an assigned national administrative body, ensuring financial and human resources to cover the workload related to food marketing monitoring; (3) support actions in public food procurement policies through inter-sectoral collaboration in MSs; facilitate policies in this field through collaboration with an EU public food procurement network to share knowledge and experiences among MSs; align the implementation of public food procurement legislation across MSs; and guarantee a sufficient budget for public food procurement and cooperate on the interventions with relevant sectors and stakeholders, such as parents and communities. A Food System Sustainability Scoreboard was proposed as a means of inserting a monitoring mechanism for food system sustainability into the European Semester. DG JRC has elaborated on the FABLE testing branded foods EU open database. MSs have expressed an overall need to re-establish high-level collaboration in the areas of nutrition, physical activity and obesity at the EU level.
Introduction:
Vaccination is one of the most effective public health interventions, preventing more than 4 million deaths each year.However, the complex and multifactorial phenomenon of vaccine hesitancy has increased over the years, causing a reduction in vaccination coverage (VC) and the resurgence of epidemics from vaccine-preventable diseases (VPD).Moreover, the COVID-19 pandemic had a major impact on the capacity of health systems to continue the delivery of essential health services, including vaccination.In this context, the objective of this study is to analyse the trend of 10 vaccinations in Italy (measles, mumps, rubella, tetanus, diphtheria, pertussis, chickenpox, hemophilus, hepatitis B, and polio), from 2000 to 2021, evaluating the impact of the introduction of the mandatory law in 2017 and the pandemic.Methods: Data were obtained from the Italian Ministry of Health.The joinpoint regression model was used to estimate changes in vaccination coverage trends for each indicator.It allows estimation of an annual percentage change (APC) in vaccination coverage, reflecting an increase or decrease over time.For each indicator, the presence of a joinpoint expressed significant changes in APC trends.APC was considered significant when p < 0.05.
16th European Public Health Conference 2023
On counting permutations by pairs of congruence classes of major index
For a fixed positive integer n, let S_n denote the symmetric group of n! permutations on n symbols, and let maj(sigma) denote the major index of a permutation sigma. For positive integers k<m not greater than n and non-negative integers i and j, we give enumerative formulas for the cardinality of the set of permutations sigma in S_n with maj(sigma) congruent to i mod k and maj(sigma^(-1)) congruent to j mod m. When m divides n-1 and k divides n, we show that for all i,j, this cardinality equals (n!)/(km).
Introduction
Denote by S n the symmetric group of all n! permutations on the n symbols 1, . . . , n. First recall some combinatorial definitions pertaining to permutations. See, e.g., [4]. Definition 1.1. Let σ ∈ S n . For 1 ≤ i ≤ n − 1, i is said to be a descent of σ if σ(i) > σ(i + 1).
Definition 1.2. The major index of σ, denoted maj(σ), is the sum of the descents of σ.
The values of the statistic maj range from 0 (for the identity) to $\binom{n}{2}$. In [1], the following result was discovered using certain representations of the symmetric group S_n, and then proved by means of a bijection as well: for each i, 0 ≤ i ≤ n − 1, (n − 1)! = |{σ ∈ S_n : maj(σ) ≡ i mod n}|.
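A brute-force check of the result stated above (which the paper later cites as Proposition 1.3): for n = 5, every residue class of maj modulo n should contain (n − 1)! = 24 permutations. The short script below is a direct implementation of the definitions.

```python
from itertools import permutations
from collections import Counter

def maj(sigma):
    """Major index: sum of the (1-based) positions i with sigma[i] > sigma[i+1]."""
    return sum(i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1])

n = 5
counts = Counter(maj(sigma) % n for sigma in permutations(range(1, n + 1)))
print(counts)                                  # each residue 0..4 appears 24 times
assert all(c == 24 for c in counts.values())
```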
The paper is organised as follows. In Section 2 the main technical lemmas are presented. In Section 3 we derive the enumerative formulas, and in Section 4 we give purely bijective proofs of special cases of Theorem 3.1 and Proposition 2.5.
Preliminaries
This section contains the main lemmas that are needed for the rest of the paper. Let γ ∈ S_n be the n-cycle which takes i to i + 1 modulo n, for all i. We will sometimes write γ_n for clarity. The circular class of σ is the set of permutations {σ, σγ, σγ^2, . . . , σγ^{n−1}}. The following observation is due to Klyachko [3]. For our purposes it is more convenient to state the result in terms of the inverse permutation. This formulation also admits an easy proof, which we give below for the sake of completeness.
Lemma 2.1. Let σ ∈ S_n. Then the function τ → maj(τ^{−1}) takes on all n possible values modulo n in the circular class of σ; more precisely, maj((τγ)^{−1}) ≡ maj(τ^{−1}) + 1 (mod n). Proof. Let τ = a_1 . . . a_n (written as a word). Then τγ = a_2 . . . a_n a_1. Note that i is a descent of τ^{−1} if and only if i appears to the right of i + 1 when τ is written as a word in {1, 2, . . . , n}.
By looking at occurrences of i to the right of i + 1, one can compare the descent sets of (τγ)^{−1} and τ^{−1} case by case; in every case the difference maj((τγ)^{−1}) − maj(τ^{−1}) is +1 modulo n.
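A quick numerical illustration of Lemma 2.1, using the convention from the proof that multiplying by γ on the right cyclically shifts the word: over the n cyclic shifts of a word, maj of the inverse increases by 1 (mod n) at each step and so hits every residue class exactly once. The starting word below is arbitrary.

```python
def maj(s):
    return sum(i + 1 for i in range(len(s) - 1) if s[i] > s[i + 1])

def inverse(s):
    """Inverse permutation, written as a word: position of each value in s."""
    inv = [0] * len(s)
    for pos, val in enumerate(s):
        inv[val - 1] = pos + 1
    return tuple(inv)

sigma = (3, 1, 4, 5, 2)                                        # an arbitrary word in S_5
shifts = [sigma[r:] + sigma[:r] for r in range(len(sigma))]     # the circular class of sigma
residues = [maj(inverse(w)) % len(sigma) for w in shifts]
print(residues)                      # increases by 1 mod 5 at each shift, e.g. [2, 3, 4, 0, 1]
assert sorted(residues) == list(range(len(sigma)))
```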
Lemma 2.2. Let σ ∈ S n−1 , and let σ i denote the permutation in S n obtained by inserting n in position i of σ, 1 ≤ i ≤ n. Then for each k between 1 and n, the values of the major index on the set {σ 1 , . . . , σ k } form a consecutive segment of integers [m + 1, m + k], and the value of maj(σ k+1 ) is either m or m + k + 1 according as k is a descent of σ or not, respectively. Note that maj(σ n ) = maj(σ).
Proof.
Let σ ′ = a 1 . . . a n−1 , with descents in positions i 1 , . . . i d . Let σ be the permutation in S n obtained by appending n to σ ′ . Hence maj(σ) = maj(σ ′ ). We shall show that the value of maj(σ ′ ) increases successively by 1 as n is inserted into σ ′ in the following order: (1) first in the positions immediately following a descent, starting with the rightmost descent and moving to the left; (2) then in the remaining positions, beginning with position 1, from left to right.
The following facts are easily verified: (1) If n is inserted immediately after a descent of σ′, i.e., if k = i_j + 1, 1 ≤ j ≤ d, then n contributes a descent in position i_j + 1, but the i_j-th element ceases to be a descent. Also the (d − j) descents to the right of n are shifted further to the right by one. Thus ∆_k = d − j + 1, and hence the difference ∆_k ranges from 1 through d.
(2) If 1 ≤ k ≤ i_1, then the d descents to the right are shifted over by 1, and thus ∆_k = d + k, and hence ∆_k ranges from d + 1 through d + i_1.
(3) If n is inserted in position k between two descents, but not immediately following a descent, then ∆_k ranges from i_d + 2 through n − 1. This establishes the claim. It also shows that as n is inserted into σ′ from left to right, the difference in major index goes up (from maj(σ′)) first by (d + 1), then up by one at each step, except when it is inserted immediately after the j-th descent, in which case it goes down to (d − j + 1). Since when n is in position n, maj(σ′) is unchanged, this establishes the statement of the lemma. Lemma 2.4. Let σ ∈ S_{n−1}, and let σ_i denote the permutation in S_n obtained by inserting n in position i of σ, for 1 ≤ i ≤ n. Then maj(σ_i^{−1}) ≡ maj(σ^{−1}) mod (n − 1). Proof. Consider the effect of inserting n on the set of descents of σ^{−1}. If n is inserted to the right of (n − 1), there is no change; if n is inserted to the left of (n − 1), then (n − 1) becomes a descent of σ_i^{−1}. In either case, the major index of the inverse permutation is unchanged modulo (n − 1).
Finally we shall need the following result, which generalises Proposition 1.3. It is perhaps known, although we do not know of a precise reference. There is an easy generating function proof which we include for the sake of completeness. In Section 4 we will give a constructive proof of the equivalent statement for inverse permutations. Proposition 2.5. Fix integers 1 ≤ k ≤ n and 0 ≤ j ≤ k − 1. Then n!/k = |{σ ∈ S_n : maj(σ) ≡ j mod k}|.
Proof. Recall the well-known formula (see [4])

(B)   Σ_{σ ∈ S_n} q^{maj(σ)} = ∏_{i=1}^{n} (1 + q + q^2 + ⋯ + q^{i−1}).

Note that Lemma 2.2 gives an immediate inductive proof of formula (B). Now fix integers 1 ≤ k ≤ n and 0 ≤ j ≤ k − 1. To show that the number of permutations in S_n with major index congruent to j mod k is n!/k, it suffices to show that, modulo the polynomial (1 − q^k), the left-hand side of (B) equals (n!/k) · (1 + q + ⋯ + q^{k−1}).
Since the factor (1 + q + ⋯ + q^{k−1}) appears on the right-hand side of (B) whenever k ≤ n, it follows from the generating function that for fixed k ≤ n, the sum on the left-hand side vanishes at all k-th roots of unity not equal to 1. Hence, modulo (1 − q^k), the left-hand side of (B) equals c · (1 + q + ⋯ + q^{k−1}) for some constant c. Putting q = 1 yields c = n!/k, as required.
Theorem 3.1. Let ℓ be a divisor of n − 1, ℓ ≠ 1, and let k be a divisor of n, k ≠ 1. Then for all i and j, the number m_n(i\k; j\ℓ) of permutations σ ∈ S_n with maj(σ) ≡ i mod k and maj(σ^{−1}) ≡ j mod ℓ equals n!/(kℓ).
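A brute-force check of this count for one small case, with n = 6, k = 3 (a divisor of n) and ℓ = 5 (a divisor of n − 1): every pair of residue classes should then contain 6!/(3·5) = 48 permutations. The helper functions are straightforward implementations of maj and of the inverse word.

```python
from itertools import permutations
from collections import Counter

def maj(s):
    """Major index: sum of the (1-based) descent positions of the word s."""
    return sum(i + 1 for i in range(len(s) - 1) if s[i] > s[i + 1])

def inverse(s):
    """Inverse permutation, written as a word: position of each value in s."""
    inv = [0] * len(s)
    for pos, val in enumerate(s):
        inv[val - 1] = pos + 1
    return tuple(inv)

n, k, l = 6, 3, 5        # k | n and l | (n - 1)
counts = Counter((maj(s) % k, maj(inverse(s)) % l)
                 for s in permutations(range(1, n + 1)))
assert all(c == 720 // (k * l) for c in counts.values())   # each class holds n!/(k*l) = 48
print(sorted(counts.items()))
```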
By examining Lemma 2.2 more closely, we obtain the following recurrence on n for these numbers in the case when k and ℓ are both divisors of (n − 1): m_n(i\k; j\ℓ) = ((n − 1)/k) · ((n − 1)!/ℓ) + m_{n−1}(i\k; j\ℓ).
Proof. Let σ ∈ S_{n−1}, and construct σ_i, i = 1, . . . , n, in S_n as in Lemma 2.2, by inserting n in position i. Since ℓ | (n − 1), we have by Lemma 2.4 that for all i, maj(σ_i^{−1}) ≡ maj(σ^{−1}) mod ℓ. Now let k | (n − 1). By Lemma 2.2, the major indices of the first (n − 1) elements σ_i, i = 1, . . . , n − 1, form a segment of (n − 1) consecutive integers, and hence the residue class i modulo k appears exactly (n − 1)/k times among them. Also note that maj(σ) = maj(σ_n).

Hence we have

m_n(i\k; j\ℓ) = ((n − 1)/k) · |{σ ∈ S_{n−1} : maj(σ^{−1}) ≡ j mod ℓ}| + m_{n−1}(i\k; j\ℓ).

Collecting terms and using Proposition 2.5, we obtain

m_n(i\k; j\ℓ) = ((n − 1)/k) · ((n − 1)!/ℓ) + m_{n−1}(i\k; j\ℓ),

as required.
We note that while the above arguments are not symmetric in k and ℓ, the numbers m n (i\k; j\ℓ) satisfy (C) m n (i\k; j\ℓ) = m n (j\ℓ; i\k).
This follows by applying the involution τ → τ^{−1}. Note that in view of Proposition 2.5, we know that, for fixed ℓ, the sum over i = 0, 1, . . . , k − 1 of the numbers m_n(i\k; j\ℓ) is n!/ℓ.
Some bijections
In this section we present bijective proofs for some of the results derived in Sections 3 and 2. Recall that this paper was originally motivated by the algebraic discovery of the formula (A). We now give a bijective proof of (A), which is the special case k = n, ℓ = n − 1 of Theorem 3.1.
Proposition 4.1. (Bijection for the case k = n, ℓ = n − 1 of Theorem 3.1.) Fix integers 0 ≤ i ≤ n − 1, 0 ≤ j ≤ n − 2. Then the number of permutations σ in S n such that maj(σ) ≡ i mod n and maj(σ −1 ) ≡ j mod (n − 1), equals (n − 2)! Proof. First note that (n − 2)! counts the number of permutations in S n−1 having (n − 1) as a fixed point. Let A n−1 be this set of permutations, and let B n be the subset of S n with major indices as prescribed in the statement of the theorem. Given σ ∈ A n−1 , by Lemma 2.1 there is a unique circular rearrangement σ ′ in S n−1 whose inverse has major index congruent to j mod (n − 1). Lemma 2.2 then shows that, for each i = 0, 1, . . . , n − 1, there is a unique position in σ ′ in which to insert n, in order to obtain a permutation σ ′′ ∈ S n such that maj(σ ′′ ) ≡ i mod n. By Lemma 2.4, the passage from σ ′ to σ ′′ does not change the major index of the inverses modulo (n − 1), and thus maj(σ ′′ −1 ) = maj(σ ′ −1 ) ≡ j mod (n − 1). Hence σ → σ ′′ gives a well-defined map from A n−1 to B n . To see that this is a bijection, given σ ′′ ∈ B n , erase the n to obtain σ ′ ∈ S n−1 , and let σ be the unique circular rearrangement of σ ′ such that σ(n − 1) = n − 1. Then σ ∈ A n−1 , and clearly the map is a bijection. Example 4.1.1. Let n = 6, i = 2, j = 3. Take σ = 21345 ∈ A 5 . Note that maj(σ −1 ) = 1. The unique circular rearrangement whose inverse has major index equal to 3(≡ 3 mod 5) is σ ′ = 34521. Now maj(σ ′ ) = 7, (descents in positions 3 and 4). Now use (the proof of ) Lemma 2.2. To obtain a permutation with major index 8 (≡ 2 mod 6), insert 6 into position 5 (immediately after the right-most descent). This gives σ ′′ = 345261 ∈ B 6 .
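The bijection of Proposition 4.1 can be reproduced mechanically. The sketch below finds the required circular rearrangement and insertion position by brute-force search (rather than by the explicit rule of Lemma 2.2) and recovers Example 4.1.1; the helper names are mine, not the paper's.

```python
def maj(s):
    return sum(i + 1 for i in range(len(s) - 1) if s[i] > s[i + 1])

def inverse(s):
    inv = [0] * len(s)
    for pos, val in enumerate(s):
        inv[val - 1] = pos + 1
    return tuple(inv)

def rotations(word):
    """The circular class of a word: multiplying by gamma cyclically shifts the word."""
    return [word[r:] + word[:r] for r in range(len(word))]

def proposition_4_1_map(sigma, i, j):
    """Send sigma in A_{n-1} (fixing n-1) to the permutation sigma'' in B_n."""
    n = len(sigma) + 1
    # Unique circular rearrangement whose inverse has major index j mod (n-1) (Lemma 2.1).
    sigma1 = next(w for w in rotations(sigma) if maj(inverse(w)) % (n - 1) == j)
    # Unique position in which to insert n so that the major index is i mod n (Lemma 2.2).
    for pos in range(n):
        candidate = sigma1[:pos] + (n,) + sigma1[pos:]
        if maj(candidate) % n == i:
            return sigma1, candidate

print(proposition_4_1_map((2, 1, 3, 4, 5), i=2, j=3))
# -> ((3, 4, 5, 2, 1), (3, 4, 5, 2, 6, 1)), matching Example 4.1.1
```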
The remainder of this section is devoted to giving a constructive proof of Proposition 2.5. A bijection for the case k = n was given in [1], using Lemma 2.1. We do not know of a bijection for arbitrary k, but a bijection for the case k = n − 1 is given in the proof which follows. Proof. Let B n denote the set {σ ∈ S n : maj(σ −1 ) ≡ j mod (n − 1)}. It suffices to show that this set has cardinality n(n − 2)! Let C n denote the set of permutations τ ∈ S n such that, when n is erased, (n − 1) is a fixed point of the resulting permutation τ ′ in S n−1 . Observe that C n has cardinality n(n−2)!, since the number of permutations in S n−1 which fix (n − 1) is (n − 2)!, and there are n positions in which n can be inserted.
We describe a bijection between C_n and B_n. If τ ∈ C_n, let τ′ be the permutation in S_{n−1} obtained by erasing n. By definition of C_n, τ′(n − 1) = n − 1. By Lemma 2.1, there is a unique circular rearrangement τ″ ∈ S_{n−1} of τ′ such that the major index of the inverse of τ″ is congruent to j mod (n − 1). Now construct τ̃ ∈ S_n by inserting n into τ″ in the same position that it occupied in τ, i.e., τ̃^{−1}(n) = τ^{−1}(n). By Lemma 2.4, maj(τ̃^{−1}) ≡ maj(τ″^{−1}) ≡ j mod (n − 1). Hence we have a map τ → τ̃ ∈ B_n. It is easy to see that this construction can be reversed exactly as in the proof of Proposition 4.1, and hence we have the desired bijection.
Proof. Let A n denote the set of permutations in S n which fix n, and let B n denote the subset of S n subject to the conditions in the statement of Part (1). Let τ ∈ A n . Consider the circular class of τ consisting of the set {τ, τ γ, . . . , τ γ n−1 }. The proof of Lemma 2.1 shows that because τ (n) = n, we have the exact equality maj(τ γ i ) = maj(τ ) + i, for 0 ≤ i ≤ n − 1. In particular, for any 1 ≤ k ≤ n, the first k circular rearrangements τ γ i , 0 ≤ i ≤ k − 1, have the property that the major indices of their inverses form a complete residue system modulo k. More generally, this observation holds for any k consecutive circular rearrangements τ γ i , a ≤ i ≤ a + k − 1, where a is any fixed integer 1 ≤ a ≤ n − k + 1.
Hence for every τ ∈ A_n, there is a unique i, a ≤ i ≤ a + k − 1, such that σ = τγ^i has maj(σ^{−1}) ≡ j mod k. Since n is in position n − i in τγ^i, clearly n − a − k + 2 ≤ σ^{−1}(n) ≤ n − a + 1. Thus τ → σ gives a well-defined map from A_n to B_n. Conversely, given σ ∈ B_n with σ^{−1}(n) = n − i + 1, a ≤ i ≤ a + k − 1, let τ ∈ S_n be defined by τγ^i = σ. Then clearly τ(n) = n, and τ ∈ A_n. This shows that our map is a bijection, and (1) is proved.
For (2), again we start with the set A n of the (n − 1)! permutations in S n which fix n. Let τ ∈ A n . Then as in the preceding proof, for i = 0, 1, . . . , sk − 1, the first sk circular rearrangements τ γ i have n in position (n − i), and maj((τ γ i ) −1 ) = maj(τ ) + i. In particular, for each J = 1, . . . , s, the major index of the inverse permutations in the subset {τ γ (J−1)k+i : 0 ≤ i ≤ k − 1} is a complete residue system modulo k. Hence the first sk rearrangements contain exactly s permutations with inverse major index congruent to j mod k. This establishes (2).
We are now ready to give a constructive proof of an equivalent restatement of Proposition 2.5, by looking at the circular classes of permutations τ ∈ S_n which fix n. Note that the statement of Proposition 4.4 (or Proposition 2.5) is invariant with respect to taking inverses, i.e., it says that n!/k is also the number of permutations in S_n with constant major index modulo k. Our constructive proof, however, works only for the inverse permutations. Proposition 4.4. Fix integers 1 ≤ k ≤ n and 0 ≤ j ≤ k − 1. Then n!/k = |{σ ∈ S_n : maj(σ^{−1}) ≡ j mod k}|.
Proof. We proceed inductively. We assume k ≤ n − 1, since the case k = n was dealt with in Proposition 1.3. It is easy to verify directly that the statement holds for n = 3. Assume we have constructed the permutations in S n−1 with inverse major index congruent to j mod k. Note that this means we can identify these permutations in the subset A n of S n . Let τ ∈ A n . We show how to pick out the permutations in the circular class of τ with inverse major index congruent to j mod k. Let n = qk + r. Taking s = q in Lemma 4.3 (2), the proof shows how to pick out the q permutations in the first qk circular rearrangements τ γ i , 0 ≤ i ≤ qk − 1. Now consider the remaining r (recall r < k) rearrangements τ γ i , qk ≤ i ≤ qk+r−1.
These will contain a (necessarily unique) permutation with inverse major index congruent to j mod k if and only if maj(τ^{−1}) ≡ j − i mod k for some i with qk ≤ i ≤ qk + r − 1, i.e., if and only if maj(τ^{−1}) ≡ j − t mod k for some t = 0, . . . , r − 1. By the induction hypothesis, for each t = 0, . . . , r − 1 there are exactly (n − 1)!/k such permutations in A_n. Hence there are r(n − 1)!/k permutations in A_n whose circular class is such that, among the last r rearrangements, there is a permutation with inverse maj congruent to j mod k. Together with the q permutations contributed by the first qk rearrangements of each of the (n − 1)! elements of A_n, this gives a total of q(n − 1)! + r(n − 1)!/k = (qk + r)(n − 1)!/k = n!/k permutations σ ∈ S_n with maj(σ^{−1}) ≡ j mod k, as required.
Photophysical Properties of Protoporphyrin IX, Pyropheophorbide-a, and Photofrin® in Different Conditions
Photodynamic therapy (PDT) is an innovative treatment of malignant or diseased tissues. The effectiveness of PDT depends on light dosimetry, oxygen availability, and properties of the photosensitizer (PS). Depending on the medium, photophysical properties of the PS can change leading to increase or decrease in fluorescence emission and formation of reactive oxygen species (ROS) especially singlet oxygen (1O2). In this study, the influence of solvent polarity, viscosity, concentration, temperature, and pH medium on the photophysical properties of protoporphyrin IX, pyropheophorbide-a, and Photofrin® were investigated by UV-visible absorption, fluorescence emission, singlet oxygen emission, and time-resolved fluorescence spectroscopies.
Introduction
Photodynamic therapy (PDT) is a targeted technique for the treatment of malignant or diseased tissues that relies on three non-toxic elements-a light-activated drug (photosensitizer, PS), light, and molecular oxygen. Illumination of the PS induces the production of the triplet excited state 3 PS* which is able to transfer protons, electrons, or energy, leading to the formation of reactive oxygen species (ROS). ROS cause apoptosis or necrosis of tumor cells by photochemical oxidation [1][2][3].
A PS should ideally possess some valuable properties including (i) absorption peak in the near-infrared (NIR) region (700-1000 nm) of the UV-visible spectrum that provides enough penetration of light into deep tissues and energy to excite molecular oxygen to its singlet state efficiently, (ii) minimal skin photosensitivity, (iii) no dark toxicity, (iv) selective uptake by cancer tissues, thereby enabling the decrease of side effects [4,5], and (v) fast elimination.
The self-assembly or aggregation of PS in aqueous environments can be caused by different reasons and is favored for amphiphilic PSs showing a negligible PDT activity due to the emission reduction of the 3 PS* state in aggregated form [6,7]. The physicochemical properties of aggregates differ from those of monomers. They exhibit a broadened Soret band and red-shifted Q bands in the UV-visible absorption spectra, low fluorescence intensity and lifetime [8][9][10][11][12][13], and low singlet oxygen ( 1 O 2 ) production.
The hematoporphyrin derivative (HPD) and its purified form Photofrin® (PF) were the first PSs used in PDT, and PF was approved for the treatment of solid tumors [14][15][16][17]. PF was also indicated as a specific and selective radiosensitizing agent by several in vitro and in vivo studies [18][19][20][21][22]. PF is being used for the treatment of esophageal, non-small-cell lung, and pancreatic cancers, as well as a possible therapy against Kaposi's sarcoma and brain, breast, skin, and bladder cancers [23].
The selective accumulation of protoporphyrin IX (PpIX) in the tumor cells following administration of 5-aminolevulinic acid (5-ALA) has made this PS precursor very popular for skin cancer PDT and fluorescent diagnostics of tumor tissues [24,25]. Topical, oral, or intravenous administration of 5-ALA prodrug in excess leading to the formation and accumulation of PpIX in vivo [26] is used by dermatologists to treat several malignant neoplasms of the skin, such as Bowen disease or actinic keratosis [27].
Pyropheophorbide-a (PPa) is a natural second-generation bacteriochlorin PS which presents a significant absorption in the far-red spectral region and high 1 O2 formation upon light illumination, suggesting it for PDT [28,29].
This study aimed to explore the different parameters (i.e., solvent polarity, concentration, temperature, and pH medium) that influence the photophysical properties (absorption, fluorescence emission, and 1 O2 formation) of the three PSs (PpIX, PPa, and PF).
Results and Discussion
Chemical structures of PpIX, PPa, and PF are shown in Figure 1. PpIX has two ionizable propionate groups and a hydrophobic ring core, which gives it amphiphilic properties leading to an aggregation through π-π stacking interaction and vesicle formation [30]. PPa has only one propionate group and a hydrophobic ring core, it can also aggregate in aqueous solutions [31]. PF is composed of monomers, dimers, and some very large oligomers [32].
Influence of the Solvent
The structure of the molecule, ionic strength, pH, and temperature all play a major role in the photophysical properties [33]. The UV-visible absorption spectra of the PSs presented in Figure 2 were recorded in different solvents. ET(30) is an empirical solvent polarity parameter that characterizes the polarity of the different solvents: the larger the ET(30) value, the more polar the solvent.
As expected, the UV-visible absorption spectra of PpIX (Figure 2A) exhibited an intense Soret band centered at around 406 nm and four weaker Q bands in the visible range in toluene, ethyl acetate (AcOEt), ethanol (EtOH), and methanol (MeOH). These similar spectra are typical of monomeric PpIX. Nevertheless, in glycerol, water, phosphate-buffered solution (PBS), and fetal bovine serum (FBS), the Soret band was split into two bands [34]. This can be explained by the fact that PpIX is aggregated in aqueous solutions. In polar solvents the QI band was red-shifted compared to the QI band in less polar solvents (629 nm and 641 nm in EtOH and PBS, respectively) (Table 1), and the intensity decreased drastically. In the literature, it is often claimed that PpIX should be excited in vitro or in vivo at 630 nm. This wavelength of excitation is based on the absorption spectrum in EtOH; as can be seen in Table 1, in water, PBS, and FBS the QI band was located at 641, 641, and 640 nm, respectively.

Table 1. Absorption band maxima (nm) of PpIX, PPa, and PF in the different solvents.

Solvent    ET(30)   PpIX (Soret, QIV, QIII, QII, QI)   PPa (Soret, QIV, QIII, QII, QI)   PF (Soret, QIV, QIII, QII, QI)
Toluene    33.9     409, 506, 540, 577, 632            415, 510, 539, 612, 671           388, 509, 539, 578, 628
AcOEt      38.1     402, 503, 536, 575, 630            410, 506, 536, 608, 667           394, 503, 536, 574, 625
EtOH       51.9     402, 503, 537, 575, 629            411, 509, 539, 609, 667           398, 503, 536, 574, 625
MeOH       55.4     401, 502, 537, 574, 628            409, 507, 538, 608, 665           394, 503, 536, 574, 625
Glycerol   57.0     404, 536, 561, 590, 642            415, 512, 544, 616, 671           371, 507, 536, 574, 625
Water      63.1     352, 532, 557, 589, 641            380, 522, 554, 625, 677           365, 507, 542, 567, 616
PBS        ≈63.1    365, 532, 557, 589, 641            379, 526, 558, 630, 677           365, 507, 542, 567, 616
FBS        -        409, 506, 537, 585, 640            405, 515, 546, 617, 675           391, 506, 538, 573, 624

The UV-visible absorption spectra of PPa in the various solvents exhibited a Soret band and four Q bands in the spectral range 300-700 nm (Figure 2B and Table 1). The Soret band in non-aqueous solution was located between 409 nm and 415 nm, whereas it was blue-shifted to 380 nm in water and PBS and 405 nm in FBS. QIV, QIII, QII, and QI were red-shifted from, respectively, 510 nm to 526 nm, 539 nm to 558 nm, 612 nm to 630 nm, and 671 nm to 677 nm, from toluene to PBS. The UV-visible absorption spectra of PPa showed a broad Soret band in glycerol, water, PBS, and FBS due to the formation of aggregates. Interestingly, all shifts of PPa in glycerol and FBS showed close values.

The UV-visible absorption spectra of PF in the different solvents were less affected by the change of polarity than those of PpIX and PPa. The Soret band was broader in toluene and AcOEt and became more intense as the polarity of the solvent increased. The Soret band was blue-shifted by 23 nm from toluene to PBS. The QI and QII bands in water and PBS were blue-shifted by 12 nm compared to toluene (Figure 2C). The maxima of the absorption bands are presented in Table 1. In FBS, due to the presence of proteins (30-45 g·L−1), the behavior was different than in water; the absorption spectra in FBS were similar to those in toluene.
On the basis of the UV-visible absorption spectra of PSs, molar extinction coefficients (ε) were calculated for all observed bands in all solvents. For PpIX and PPa, there was a single abrupt jump of ε (for the Soret band) when moving from toluene, AcOEt, EtOH, or MeOH into glycerol, water, or PBS. That was not observed for PF due to the fact that PF is a mixture of different compounds that do not all behave in the same way ( Table 2). The high value of ε for the QI band of PPa is interesting for PDT applications. ε for the QI band of PPa was 3.5 times higher than the one of PpIX and 16.5 times higher than the one of PF [35]. Fluorescence emission spectra presented in Figure 3 were recorded in different solvents at room temperature and at a concentration of 1.87 µM.
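As an illustration of how such coefficients are obtained, the minimal Python sketch below applies the Beer-Lambert law (A = ε·c·l) to a single absorbance reading; the absorbance value, concentration, and path length are placeholders, not measured data from this study.

```python
def molar_extinction(absorbance: float, concentration_M: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert law: A = epsilon * c * l  =>  epsilon = A / (c * l), in L mol-1 cm-1."""
    return absorbance / (concentration_M * path_cm)

# Placeholder values: absorbance at the Soret maximum of a ~1.9 uM solution in a 1 cm cuvette.
eps_soret = molar_extinction(absorbance=0.25, concentration_M=1.9e-6)
print(f"epsilon(Soret) ~ {eps_soret:.2e} M^-1 cm^-1")
```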
PpIX was excited at 400 nm. The higher the polarity of the solvent, the larger the blue shift of the two fluorescence emission bands, in agreement with the UV-visible absorption spectra (Figure 3A). All the maximum wavelengths are given in the Supporting Information. Moreover, the fluorescence intensity decreased with increasing solvent polarity due to the lack of solubility of PpIX in aqueous media.
PPa was excited at 415 nm. The fluorescence emission spectra presented two bands and they were blue-shifted in polar solvents in accordance with the blue shift observed in the UV-visible absorption spectra. In water, PBS, and FBS at this concentration, the fluorescence intensity was very weak ( Figure 3B), which could be explained by the aggregation [31].
PF was excited at 400 nm, and a different behavior was observed in the fluorescence emission spectra. Very weak emission maxima at 633 nm and 696 nm were observed in toluene; the PF cores might have been in a highly quenched state that did not generate fluorescence. The fluorescence intensity increased around 10 times in EtOH and decreased again in polar solvents (Figure 3C). The emission bands of PF in water and PBS were blue-shifted by 15 ± 2 nm (Table S1), like the UV-visible absorption spectra, indicating highly ordered aggregated structures [36]. In FBS, the fluorescence spectrum was similar to the one in non-polar solvent.

The fluorescence quantum yield (Φf) of PpIX was evaluated to be higher in less polar solvents than in water, PBS, and FBS (Table 3). This is in good agreement with the fact that PpIX tends to aggregate in aqueous media [37]. Among the three PSs, PPa presented the best Φf, which was 0.39 in toluene and EtOH. The Φf of PF was low (below 0.1), and the highest values were obtained in EtOH and MeOH, possibly due to a better solubilization (Table 3).

The fluorescence lifetime (τf) of the PSs was measured by time-resolved fluorescence after excitation at 408 nm. The decays were fitted with R² ≈ 1.000. PpIX exhibited mono-exponential decay in non-polar solvents (Figure 3D), whereas a bi-exponential decay in polar solvents confirmed the presence of two populations, monomers (long decay) and aggregates (short decay); the monomer-aggregate equilibrium was observed in glycerol, water, PBS, and FBS solutions, with τf values of 10.3-15.9 ns and 2.5-3.0 ns for PpIX monomers and aggregates, respectively (Table 4).
As aggregates are known to reduce the inter-system crossing (ISC) transition from 1PS* to 3PS*, the τf of aggregated PPa was shorter. PPa exhibited mono-exponential decay in toluene, AcOEt, EtOH, MeOH, and glycerol, and bi-exponential decay in water, PBS, and FBS (Figure 3E), confirming the presence of two forms: monomers (long decay) and aggregates (short decay). The τf value of PPa was between 6.1 and 7.5 ns for monomers and 0.3 to 2.1 ns for aggregates (Table 4).
The solutions of PF in toluene, EtOH, and MeOH exhibited mono-exponential decay with τf values of 8.7, 10.8, and 10.2 ns, respectively, in good agreement with literature values [38,39], and two decays in the other solvents (Figure 3F). Once again, the short decay corresponded to the aggregated fraction, with lifetimes of 2.4, 3.0, 3.4, 2.2, and 3.2 ns in AcOEt, glycerol, water, PBS, and FBS, respectively, while the longer decay corresponded to monomers (Table 4).
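To make the lifetime analysis concrete, the short Python sketch below fits a bi-exponential model to a synthetic decay and reports the long (monomer-like) and short (aggregate-like) components. The data, initial guesses, and amplitude-fraction definition are illustrative assumptions only; the actual analysis described later in Materials and Methods additionally involves deconvolution with the instrument response.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential fluorescence decay: long component (monomers) + short component (aggregates)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay standing in for a TCSPC histogram (placeholder parameters and noise).
t = np.linspace(0, 60, 600)                                   # time, ns
decay = biexp(t, 0.7, 12.0, 0.3, 2.8) + np.random.normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, decay, p0=[0.5, 10.0, 0.5, 3.0])
a1, tau1, a2, tau2 = popt
monomer_fraction = a1 / (a1 + a2)                             # amplitude fraction of the long component
print(f"tau_monomer = {tau1:.1f} ns, tau_aggregate = {tau2:.1f} ns, monomer fraction = {monomer_fraction:.2f}")
```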
The 1O2 production of the PSs in the different solvents was investigated; 1O2 emission was detected at 1270 ± 5 nm after excitation at 400 nm for PpIX and PF, and at 415 nm for PPa (Figure 4).
As expected, it was not possible to determine the Φ∆ of PpIX in aqueous solutions due to possible aggregation of the PS, but it generated 1O2 very efficiently in toluene, AcOEt, EtOH, and MeOH (Figure 4A). The same observation could be made with PPa: under our conditions, we could not detect 1O2 emission in glycerol, water, FBS, or PBS (Figure 4B). On the contrary, PF (Figure 4C) generated 1O2 in EtOH, AcOEt, and MeOH (Table 5). The detection of 1O2 was also performed in D2O, since the τ∆ value is higher than in H2O; indeed, solvents with high vibrational frequencies are more able to quench 1O2 [40]. However, no emission could be detected for PpIX and PPa, whereas a Φ∆ of 0.15 was obtained for PF. Additionally, 1O2 generation from PpIX, PPa, and PF in D2O was monitored by using the most common fluorescence probe, Singlet Oxygen Sensor Green (SOSG), which is not sensitive to hydroxyl radicals or superoxide. The fluorescence emission intensity of SOSG clearly increased over time due to the production of 1O2 after excitation of PpIX, PPa, and PF. Sodium azide quenched 1O2 very efficiently, and the fluorescence emission intensity of SOSG in the presence of the quencher decreased (Figure 5).
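The SOSG measurement can be summarized by comparing the initial growth rate of the probe's fluorescence with and without the quencher; the sketch below is only a schematic illustration with placeholder intensities, not the acquisition routine used here.

```python
import numpy as np

def sosg_rate(time_s: np.ndarray, intensity: np.ndarray) -> float:
    """Initial slope of the SOSG fluorescence increase, a relative measure of 1O2 production."""
    return float(np.polyfit(time_s, intensity, 1)[0])

t = np.arange(0, 330, 30)                      # irradiation time, s
rate_ps = sosg_rate(t, 100 + 0.8 * t)          # placeholder: PS + SOSG
rate_ps_nan3 = sosg_rate(t, 100 + 0.1 * t)     # placeholder: PS + SOSG + sodium azide (quenched)
print(rate_ps, rate_ps_nan3)                   # the quenched rate should be much lower
```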
In solution, τ∆ is governed by solvent deactivation through electronic-vibrational energy transfer [41]. If no reaction occurs between 1O2 and the PS, the τ∆ value should be the same for the three PSs in each solvent. Our observations are in good agreement with the literature data (Table 6).
Influence of the Medium Viscosity
To evaluate the influence of the viscosity on the photophysical properties, a water/glycerol (W/G) mixture at various ratios was used. The higher the glycerol concentration, the higher the viscosity of the medium. The UV-visible absorption and fluorescence emission spectra of all PSs are shown in Figure 6.
For the three PSs, fluorescence emission decreased with the addition of water: the higher the viscosity, the lower the non-radiative decay.
The Soret band became broader and split in the PpIX solutions with high water content, but the maximum wavelengths of the four Q bands were not affected. A narrow Soret band at 406 nm was observed only in 100% glycerol. This might be because the increase in viscosity reduces the movement of the molecules and thus the formation of aggregates, or simply because aggregation occurs in water (Figure 6A).
UV-visible absorption spectra of PPa in water and in the mixture of water/glycerol showed a blue-shifted Soret band and weak, red-shifted Q bands. These band shifts might have been a result of the viscosity, which reduced the molecule's mobility for aggregate formation ( Figure 6B).
A totally different behavior was observed for PF. The intensity of the Soret band of PF decreased with increasing viscosity of the medium, and the Soret band became broader in 100% glycerol, with a red shift of the absorption maximum (Figure 6C).

The intensity of fluorescence emission of PpIX in the W/G mixtures increased with the viscosity of the medium (Figure 6D). The Φf value of PpIX in the W/G mixtures increased in highly viscous media because the formation of aggregates was less important. PPa in glycerol showed two emission bands located at 675 nm and 724 nm (Figure 6E); its fluorescence emission intensity increased with the viscosity of the medium, as the viscous medium might prevent non-radiative deactivation. The fluorescence emission intensity of PF also increased with the viscosity of the medium (except in water) and was red-shifted by 10 nm (Figure 6F). It is interesting to note that for PF, when the viscosity increased, fluorescence emission increased but absorption decreased. The highest Φf value for all PSs was calculated for the solution in glycerol (Figure 7).

Fluorescence emission decays presented in Figure S1 were measured in the different media, and the τf values were evaluated and are presented in Table 7. In all mixtures, two lifetimes were detected, probably because of the presence of both monomers and aggregates. The τf value of PpIX increased with the viscosity (Figure S1A). The solution of PPa exhibited two decays at W/G ratios of 100/0, 80/20, and 60/40, but showed mono-exponential decay from a ratio of 40/60, with τf increasing in line with the medium viscosity (Figure S1B). The solution of PF exhibited bi-exponential decay in all W/G mixtures (Figure S1C). Unfortunately, no correlation could be established between the fraction of monomers/aggregates and the viscosity of the medium. One reason might be that the polarity of the medium also changes when different amounts of glycerol and water are mixed; therefore, the changes observed in W/G mixtures cannot be attributed only to the solution viscosity.
Influence of the Concentration
The influence of the concentration of PSs in PBS and FBS on photophysical properties was evaluated. As expected, the increase in concentration induced an increase in intensity, but no change in the absorption band maximum wavelength was observed in this concentration range for all PSs (Figure 8A-C).
The concentration increase led to a decrease of the fluorescence emission intensity for all PSs, as aggregation was higher in concentrated solutions (Figure 9D-F): the higher the concentration, the lower the fluorescence. The Φf values of all PSs in PBS at different concentrations were measured and were all less than 1%. However, the results obtained in FBS turned out to be the opposite of those in PBS: as the concentration of the PSs increased, their fluorescence emission increased. The aggregation process might be lower in FBS than in PBS due to the interaction with the proteins.

Fluorescence decays were recorded (Figure S2A) and the τf values were evaluated (Table 8). PpIX in PBS or FBS at different concentrations exhibited bi-exponential decay; the longest τf likely corresponded to the monomer decay time and the shorter lifetime to the aggregates' decay time. We could observe a slight increase of the aggregate/monomer ratio with increasing concentration in PBS but not in FBS. For PPa, only one population was observed in PBS, between 5.6 and 6.8 ns, whereas in FBS a bi-exponential decay suggested the presence of both aggregates and monomers. For PF, no effect of the concentration could be observed: in PBS, 8% of aggregates and 92% of monomers could be evaluated, whereas it was 14-17% of aggregates and 83-86% of monomers in FBS.
Influence of the Temperature
The influence of temperature on the UV-visible absorption, fluorescence emission, and lifetime of PpIX, PPa, and PF in aqueous media was evaluated. UV-visible absorption and fluorescence emission spectra of the PSs in PBS and FBS after heating from 10 °C to 40 °C are presented in Figure 10.
For PpIX in PBS, the intensity of the Soret band decreased with increasing temperature (Figure 10A), whereas the intensity of the Q bands increased, except QI. In FBS (Figure 10D), a different behavior could be observed, with a decrease of all band intensities except QI. For PPa in PBS (Figure 10B), the intensity of the Soret band decreased with increasing temperature, and the QI band changed shape and was red-shifted from 680 nm to 712 nm with an isosbestic point at 685 nm. In FBS (Figure 10E), the intensity of the Soret band increased with increasing temperature, as did the QI band, with a change of shape and an isosbestic point at the same 685 nm. For PF, almost no change could be observed in PBS (Figure 10C), whereas in FBS a blue shift of the Soret band and a decrease of its intensity were observed with increasing temperature, as well as an increase of QI intensity (Figure 10F).

Fluorescence emission spectra were recorded in PBS and FBS at different temperatures (Figure 11). Whatever the PS, the fluorescence emission intensity increased when the temperature rose from 10 to 40 °C. This might have been because more monomers were in solution exhibiting fluorescence. For PpIX in PBS, a slight red shift could be observed for the first band (Figure 11A), whereas in FBS it was a slight blue shift. No shift was detected for PPa (Figure 11B,E). Concerning PF, a red shift was observed both in PBS and FBS.

Fluorescence decays were recorded (Figure S3) and the τf values were evaluated (Table 9). PpIX in PBS or FBS at different temperatures exhibited bi-exponential decay; the longest τf likely corresponded to the monomer decay time and the shorter lifetime to the aggregate decay time. A decrease of the shortest τf could be observed with increasing temperature in both solutions, and the aggregate/monomer ratio also seemed to decrease with increasing temperature. For PPa, only one τf was calculated in PBS. At low temperature (10 and 20 °C), both monomers and aggregates were present in FBS, whereas aggregates disappeared at high temperature (30 and 40 °C). For PF, no effect of temperature was detected in PBS, whereas both short and long τf decreased with temperature increase in FBS.
Influence of pH Medium
pH could also have an influence on photophysical properties. The UV-visible absorption, fluorescence emission, and lifetime of all PSs in PBS with a concentration of 3.1 µM were measured under different pH conditions (pH 5.0-8.0). For PpIX, increasing pH from 5 to 8 led to a red-shifted Soret band from 354 nm to 375 nm whereas the Q bands were pH-independent ( Figure 12A).
By increasing the pH, PPa showed an increase of the Soret and QI band intensities (Figure 12B). Two isosbestic points could be observed, at 415 nm and 685 nm, exactly the same as those observed by changing the temperature. This is in good agreement with the presence of two different species, which could be monomers and aggregates. The Soret band of PF increased with pH, and the absorption maximum and Q bands were not affected (Figure 12C).
For all PSs (Figure 12D-F), fluorescence increased with increasing pH, in relation to the formation of monomers and the disappearance of aggregates [44-46], but we could also observe a decrease of the band at 717 nm for PPa (Figure 12E). The Φf values of all PSs in PBS at the different pH values were below 0.01. The τf values of PpIX, PPa, and PF in PBS were also measured at different pH (Figure S4A). For PpIX, no aggregation could be observed at pH 5, whereas aggregation occurred at pH 6-8 with the appearance of a short decay. At pH 5 and 6, the height of the fast decay of PPa was higher than at pH 7 and 8, indicating that PPa was more aggregated, with a low τf value; therefore, only the τf of the longer decay is given in Table 10. For PF, the τf value of the fast decay was around 3.0 ns and that of the long decay 14.5 ns (Table 10).
Materials and Methods
Protoporphyrin IX, Pyropheophorbide-a, and Porfimer sodium (Photofrin ® ) were purchased from Sigma (Saint-Louis, MO, USA), BOC Sciences (Shirley, NY, USA), and Oncothai (Lille, France), respectively, and used without further purification. The stock solutions of PpIX and PPa were prepared in dimethylsulfoxide (DMSO), and that of PF in methanol (MeOH). FBS was purchased from Sigma (Saint-Louis, MO, USA). PBS was prepared by mixing the exact volumes of 0.2 M sodium phosphate dibasic dihydrate and 0.2 M sodium phosphate monobasic monohydrate, and the pH was adjusted to 7.4. The stock solution of SOSG in methanol was prepared by dissolving the content of a 100 µg vial in 33.0 µL of methanol, and the sodium azide solution was prepared in water at a concentration of 0.15 M.
Spectroscopic Measurements
UV-visible absorption spectra were recorded on a UV-3600 UV-visible double beam spectrophotometer (Shimadzu, Marne La Vallee, France). Fluorescence spectra were recorded on a Fluorolog FL3-222 spectrofluorimeter (Horiba Jobin Yvon, Longjumeau, France) equipped with a 450 W xenon lamp, a thermostated cell compartment (25 °C), a UV-visible photomultiplier R928 (Hamamatsu, Japan) and an InGaAs infrared detector (DSS-16A020L, Electro-Optical System Inc., Phoenixville, PA, USA). The excitation beam was diffracted by a double-ruled grating SPEX monochromator (1200 grooves/mm blazed at 330 nm). The emission beam was diffracted by a double-ruled grating SPEX monochromator (1200 grooves/mm blazed at 500 nm). The 1O2 phosphorescence was detected with a HORIBA SpectraLED emitting at 415 nm, using a Multi-Channel Scaling (MCS) technique. The excitation pulse length was 102 µs and 600,000 pulses were averaged. 1O2 emission was detected through a double-ruled grating SPEX monochromator (600 grooves/mm blazed at 1 µm) and a long-wave pass filter (780 nm). All spectra were measured in 4-face quartz cuvettes. All the emission spectra (fluorescence and 1O2 luminescence) were displayed with the same absorbance (less than 0.2), with lamp and photomultiplier correction.
Fluorescence quantum yield (Φf) was calculated with tetraphenylporphyrin (TPP) in toluene as reference (Φf = 0.11) [47], using the following Equation (1):

Φf = Φf0 × (If/If0) × (DO0/DO) × (n/n0)²    (1)

where Φf and Φf0, If and If0, DO and DO0, and n and n0 are the quantum yields, fluorescence emission intensities, optical densities, and refraction indices of the sample and reference, respectively. The 1O2 quantum yield (Φ∆) was measured with TPP in toluene (Φ∆ = 0.68), rose bengal in ethanol (EtOH) (Φ∆ = 0.68) and in MeOH (Φ∆ = 0.76) as references [48,49], by Equation (2):

Φ∆ = Φ∆0 × (I/I0) × (DO0/DO)    (2)

where Φ∆ and Φ∆0, I and I0, and DO and DO0 are the luminescence quantum yields of singlet oxygen, the luminescence intensities, and the optical densities of the sample and references, respectively.
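A minimal sketch of how Equations (1) and (2) can be evaluated is given below; the reference quantum yields are those quoted above, while the intensities, optical densities, and refractive indices are placeholders standing in for measured values.

```python
def fluorescence_qy(I, I0, OD, OD0, n, n0, phi_f0=0.11):
    """Relative fluorescence quantum yield, Equation (1) (TPP in toluene as reference)."""
    return phi_f0 * (I / I0) * (OD0 / OD) * (n / n0) ** 2

def singlet_oxygen_qy(I, I0, OD, OD0, phi_d0=0.68):
    """Relative singlet-oxygen quantum yield, Equation (2) (e.g., TPP in toluene as reference)."""
    return phi_d0 * (I / I0) * (OD0 / OD)

# Placeholder integrated intensities and optical densities (absorbances kept below 0.2, as above).
print(fluorescence_qy(I=1.8e6, I0=1.2e6, OD=0.10, OD0=0.12, n=1.361, n0=1.496))
print(singlet_oxygen_qy(I=4.0e4, I0=6.5e4, OD=0.10, OD0=0.12))
```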
Fluorescence and Luminescence Decays
Time-resolved experiments were performed using, for excitation, a pulsed laser diode emitting at 408 nm (LDH-P-C-400M, FWHM < 70 ps, 1 MHz) coupled with a PDL 800-D driver (both PicoQuant GmbH, Berlin, Germany) and, for detection, an avalanche photodiode SPCM-AQR-15 (EG&G, Vaudreuil, QC, Canada) coupled with a 550 nm long-wave pass filter. The acquisition was performed by a PicoHarp 300 module with a 4-channel router PHR-800 (both PicoQuant GmbH, Berlin, Germany). The fluorescence decays were recorded using the single photon counting method. Data were collected up to 1000 counts accumulated in the maximum channel and analyzed using the Time Correlated Single Photon Counting (TCSPC) software Fluofit (PicoQuant GmbH, Berlin, Germany), based on iterative deconvolution using a Levenberg-Marquardt algorithm. 1O2 lifetime (τ∆) measurements were performed on a TEMPRO-01 spectrophotometer (Horiba Jobin Yvon, Palaiseau, France). The apparatus was composed of a pulsed diode excitation source SpectraLED-415 emitting at 415 nm, a cuvette compartment, a Seya-Namioka-type emission monochromator (between 600 and 2000 nm) and a H10330-45 near-infrared photomultiplier tube with a thermoelectric cooler (Hamamatsu, Massy, France) for the detection. The system was monitored by a single-photon counting controller FluoroHub-B and the software DataStation and DAS6 (Horiba Jobin Yvon, Palaiseau, France).
Conclusions
This study focused on three PSs that are used clinically (PpIX and PF) or for in vivo experiments (PPa). Our team has proposed PPa coupled to folic acid to treat ovarian metastases by PDT (Patent WO/2019/016397).
By analyzing the photophysical properties of these three PSs in different conditions, we highlighted the fact that each PS is unique and reacts very differently depending on its chemical structure and concentration.
While the change of the medium polarity does not greatly affect the UV-visible absorption spectrum of PF, there is a drastic change for PpIX and PPa. In the literature, it is often claimed that PpIX should be excited at 630 nm in vitro or in vivo; this excitation wavelength is based on the absorption spectrum in ethanol. In FBS and PBS, which are aqueous media closer to physiological media, the QI band is located at 641 nm.
Depending on the localization of the PS in the cells, the local viscosity can be very different. We also observed that modifying the solvent viscosity did not greatly affect the absorption maximum of the QI band of PpIX and PF, but it was blue-shifted by 10 nm for PPa (from 678 nm to 668 nm).
Temperature change slightly affected the UV-visible absorption spectra of PpIX and PF but drastically modified the UV-visible absorption of PPa in the range of 10 to 40 °C.
Finally, modifying the pH also induced a 25 nm shift of the QI band of PPa (from 704 nm to 679 nm).
Perhaps the most interesting results are the Φ∆ values obtained in the different solvents. Depending on the solvent, the values were totally different. In toluene, we could not detect any 1O2 for PF, whereas the Φ∆ values were quite good for PpIX and PPa (0.68 and 0.49, respectively). In EtOH, the Φ∆ was 0.92, 0.53, and 0.80 for PpIX, PPa, and PF, respectively. In D2O, we could not detect any 1O2 for PpIX or PPa, and the Φ∆ was 0.15 for PF. Moreover, in real-life applications, the PS is ideally in a cellular context; the presence of proteins, lipids, and other biomolecules will also affect the photophysics of the PS. This raises the question of which type of experiments and which solvent should be used for solution studies when preparing in vitro studies.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Pandemic politics—lessons for solar geoengineering
Responses to the COVID-19 emergency have exposed break-points at the interface of science, media, and policy. We summarize five lessons that should be heeded if climate change ever enters a state of emergency perceived to warrant stratospheric aerosol injection.
Five lessons: 1. Narrow metrics seem user-friendly, but can create new problems.
Using narrow metrics to define a problem, or the success of a policy, can obscure other important goals. The response to the pandemic has been guided by calculable parameters, such as case count, or the reproduction number, R 0 . This bears the danger that the metric becomes the target of policy, and thereby the metric starts to define the solution.
With the pandemic, we could have weighed the public health effects of lockdown measures themselves, including missed screenings and treatments, delayed vaccinations, mental health impacts, and food insecurity, as well as the economic and social impacts. In addition, the blanket strategies aimed at particular metrics missed the equity dimensions of who bears the harms from the measures: much of the burden of lockdowns falls upon the global poor 4 . Moreover, longer-term aims of dealing with root causes of zoonotic disease transmissions were eclipsed. The media's interest in trackable metrics may exacerbate such narrowing of policy goals. But the challenge of translating model knowledge into real-world actions also contributes. For example, the reproduction number has been mobilized in policy as a population-wide metric in some jurisdictions 5 .
Climate policy too has become focused on a single metric, global average temperature 6 . This number is merely a proxy for a multitude of desired outcomes including human security and sustainable development, biodiversity, intergenerational justice, etc. If this metric begins to define the solution, a strategy that is measured in terms of global mean temperature, like stratospheric aerosol injection, looks attractive. However, for climate change as for the pandemic, whole-of-society strategies that take in a broader range of goals are also needed. We must strive to understand the second-order effects of various measures, from mitigation and adaptation policies to climate engineering. And the root causes must not be forgotten. We are in the early days of understanding complex socioecological interactions. But the remedy here is not simply to perfect a single, dominant methodology. Rather, multiple methods are needed to inform whole-of-society responses to wicked problems (complex social problems that are interdependent on other problems and have no single or final solution 7 ), such as climate change.

2. Global governance is fragmented or missing.
The diverse responses to COVID-19 in different countries sharply illustrate that we do not have a single global society. Different societies have different priorities, interests, and cultures of knowledge and policy, and game-theoretic modeling is not sufficient to explain them. As a result, the COVID-19 responses have been a mosaic of different strategies and controversies. The rush to compete for vaccines, medicines, and equipment, along with attacks on the World Health Organization (the only international organization available), show that global governance and common interests are ideals to aim for, but cannot be assumed to be in place 8 .

This could be dangerous if the mainstream media position emerges as a blanket votum, regardless of whether it is for or against stratospheric aerosol injection: for any narrative to become unquestionable dogma is against the core idea of scientific inquiry. In addition, as for COVID-19, a sense of anxiety around the idea of climate emergency could influence decision-making on stratospheric aerosol injection in unexpected, and potentially unhelpful, ways. What remains to be studied is how the dynamics of social media are feeding back into the conduct of the science itself. For example, the high-profile retraction 10 at The Lancet of a paper indicating that hydroxychloroquine was not effective provoked questions in the media about the politics of the journal. The decision to publish the article in the first place highlights the structural question of how media and political implications might influence research in unhelpful ways.
Consider the case of a model, published 11 in Science, which suggested a herd immunity threshold for COVID-19 at 43%. The editors were concerned that the finding would be used to downplay concerns about COVID-19, and discussed whether publishing the results was in the public interest 12 .

Similarly, being seen to carry out an adequate or even aggressive response to climate change may become a part of maintaining regime legitimacy. Stratospheric aerosol injection may be a similarly performative measure that a politician can introduce before there is a strong evidence base supporting or detracting from it. In a limited attention economy, stratospheric aerosol injection may then distract from a regime's failure to adapt or mitigate, as well as draw focus away from other climate goals.
Buy time only with a plan in hand.
Stopgap measures to buy time for longer-term action carry the particular risk that the initial objective is forgotten, and eventually maintaining the stopgap becomes the goal. Alternatively, there is a risk that the time that is bought is not used efficiently, which makes it necessary to perpetuate the stopgap. The definitions and conditions of ill-thought-out stopgaps can morph as time passes. With the pandemic, lockdown measures were introduced as a way to "flatten the curve". They were intended to buy time to scale up testing and contact-tracing capacity, procure protective equipment, and learn how to treat the virus. This strategy was effective in some nations. However, in the US context, the time that was bought with the lockdowns in March and April of 2020 was not used well, and by the summer of 2020, the US faced a strong resurgence in cases.
When it comes to climate change, stratospheric aerosol injection has been discussed as a stopgap measure that can buy time for more systemic solutions 13 . Experience with COVID-19 illustrates how, especially under poor leadership, publics may misunderstand the goal, duration, and nature of the stopgap measure, and politicians may not be held accountable for failing to make use of the time. For stratospheric aerosol injections, ideally, the bought time could be used to decarbonize, bringing emissions to net-zero and developing capacities to remove carbon from the atmosphere. But the mechanism for holding politicians accountable to those goals has yet to be developed, and future politicians may decide to change the goals. With stratospheric aerosols, widespread public discussion well in advance may help mitigate some of the risks of it becoming an interminable stopgap.
Towards anticipatory research

COVID-19 has been a stress test for the interactions between science, media, and politics both nationally and globally, and it has revealed complex and potentially harmful dynamics in the links between these spheres. The pandemic response further highlights the need not just for anticipatory governance, but for transdisciplinary, anticipatory research ahead of an actual emergency. For the case of stratospheric aerosol injection in a climate change emergency, we need research that is reflexive about how its implementation may be attempted by real-world (instead of imagined) policymakers in sub-optimal situations, for example, as a performative measure or as a shifting stopgap. Some of the open questions and governance challenges identified here cannot be addressed by scientists alone. Others, however, are well within the influence of individual research groups and institutions. We need a very broad range of expertise, including psychologists, sociologists, economists, development practitioners, International Relations experts, and others working together to produce this reflexive research.
It would be desirable to have a pre-developed policy tool that helps foresee complex socio-economic consequences, can be employed by a transdisciplinary network and is legible to diverse publics. Such a process cannot be summoned at will during a crisis. Given the centrality of scenario analysis in the climate discourse, international, transdisciplinary scenario research (combining climate science, impact assessment, and integrated assessment) would be highly desirable 14 . It is important to have a diversity of thought within-not only between-disciplines, to avoid groupthink and bandwagoning. Researchers can inoculate against this risk by using what's been called a "red team/blue team" approach, where some research groups work on best-case use scenarios while other teams systematically look for failure modes, as David Keith and others have discussed 15 .
Despite what COVID-19 has revealed about the dysfunction of the science-media-policy ecosystem, it also contains a hopeful lesson: people are willing to take radical action to save the lives of the vulnerable. Around the world, there has been wide compliance with social distancing during the first months of the pandemic even though many groups bear little risk from the virus themselves. The experience with COVID-19 suggests that the possibility of an altruistically motivated climate intervention should not be discounted.
T2FNorm: Extremely Simple Scaled Train-time Feature Normalization for OOD Detection
Neural networks are notorious for being overconfident predictors, posing a significant challenge to their safe deployment in real-world applications. While feature normalization has garnered considerable attention within the deep learning literature, current train-time regularization methods for Out-of-Distribution (OOD) detection are yet to fully exploit this potential. Indeed, the naive incorporation of feature normalization within neural networks does not guarantee substantial improvement in OOD detection performance. In this work, we introduce T2FNorm, a novel approach to transforming features to hyperspherical space during training, while employing non-transformed space for OOD-scoring purposes. This method yields a surprising enhancement in OOD detection capabilities without compromising model accuracy on in-distribution (ID) data. Our investigation demonstrates that the proposed technique substantially diminishes the norm of the features of all samples, more so in the case of out-of-distribution samples, thereby addressing the prevalent concern of overconfidence in neural networks. The proposed method also significantly improves various post-hoc OOD detection methods.
Introduction
The efficacy of deep learning models is contingent upon the consistency between training and testing data distributions; however, the practical application of this requirement presents challenges when deploying models in real-world scenarios, as they are inevitably exposed to OOD samples. Consequently, a model's ability to articulate its limitations and uncertainties becomes a critical aspect of its performance. While certain robust methodologies exist that endeavor to achieve generalizability despite domain shifts, these approaches do not always guarantee satisfactory performance.
OOD detection approaches can be broadly grouped into three approaches: post-hoc methods, outlier exposure, and training time regularization. Post-hoc methods, deriving OOD likelihood from pre-trained models, have significantly improved while outlier exposure, despite the challenges in predefining OOD samples ideally, is prevalently adopted in industrial contexts. Another approach involves training time regularization. This line of work due to its capacity to directly impose favorable constraints during training, potentially offers the most promising path to superior performance. The training-time regularization method, LogitNorm [1], employs L2 normalization at the logit level to mitigate overconfidence, leading to an increased ratio of ID norm to OOD norm compared to the results from simple cross-entropy baseline or Logit Penalty [1]. Nonetheless, the importance • We propose T2FNorm -a surprisingly trivial yet powerful plug to regularize the model for OOD detection. We quantitatively show that train time normalization approximately projects the features of ID samples to the surface of a hypersphere differentiating it from OOD samples thereby achieving significantly higher separability ratio. • We show T2FNorm is equally effective across multiple deep learning architectures and multiple datasets. It also works well in conjunction with multiple post-hoc methods. • We perform both qualitative and quantitative analysis showing our method's ability to reduce overconfidence and also perform a sensitivity study to show the robustness of our model to the temperature parameter τ . • We show that skipping normalization during OOD scoring time is a key contributor to our method thus paving the way for exploring the effectiveness of other forms of normalization discrepancies during OOD scoring.
Preliminaries: Out of Distribution Detection
Setup Let X be the input space, Y the output space, and P_XY a distribution over X × Y. Let P_in be the marginal distribution of X, which represents the distribution of inputs we want our classifier to be able to handle. This is the in-distribution (ID) of the inputs x_i.
Supervised Classification
In supervised classification, the goal is to minimize the empirical loss L, formulated as min_θ (1/N) Σ_{i=1}^{N} L(f_θ(x_i), y_i), over the input dataset, which is sampled i.i.d. from the in-distribution P_in. Here, θ denotes the model parameters and f_θ(x_i) is the classification predicted for input x_i by the model with parameters θ.
OOD Detection During test time the environment can present samples from a different distribution P_out instead of from P_in. The goal of out-of-distribution detection is to differentiate between samples from the in-distribution P_in and the out-of-distribution P_out. In this work we treat OOD detection as a binary classification in which a scoring function S(x) and a corresponding threshold λ provide a decision function that performs OOD detection: an input x is classified as ID if S(x) ≥ λ and as OOD otherwise. The simplest scoring function S(x) is the Maximum Softmax Probability (MSP), obtained by passing the logits from the final layer of the network to the softmax function and taking the maximum value. Samples with MSP exceeding the threshold λ are classified ID and the rest are OOD. The threshold λ is usually chosen so as to have a true positive rate of 95% over the input dataset.
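As a concrete illustration of this decision rule, the following Python sketch computes the MSP score and selects λ at a 95% true positive rate on ID data; the tensor shapes and score arrays are placeholders, not the paper's actual evaluation code.

```python
import numpy as np
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum Softmax Probability S(x): higher values indicate 'more in-distribution'."""
    return F.softmax(logits, dim=1).max(dim=1).values

def pick_threshold(id_scores: np.ndarray, tpr: float = 0.95) -> float:
    """Choose lambda so that the desired fraction of ID samples scores above it."""
    return float(np.quantile(id_scores, 1.0 - tpr))

# Placeholder logits for ID validation samples and a test batch.
id_logits, test_logits = torch.randn(1000, 10), torch.randn(8, 10)
lam = pick_threshold(msp_score(id_logits).numpy())
is_id = msp_score(test_logits) >= lam   # decision function: ID if S(x) >= lambda, OOD otherwise
```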
Motivation
A recent work, LogitNorm [1], directly aims to address the overconfidence issue by decoupling the effect of the norm of the logits through L2 normalization of the logits. Since the fully connected (FC) layer is directly responsible for logit computation and normalization is performed at the logits, empirical observation (Figure 1) suggests that the optimization process induces smoother, more uniform weight values closer to zero in the FC layer. However, a recent work [5] has also shown that non-trivial dependence on unimportant weights (and units) can directly contribute to the brittleness of OOD detection. The presence of smoother weights implies that irrelevant features contribute non-trivially to the classification for some predictions, resulting in higher output variance for OOD samples. Furthermore, suppressing the logit norm by forcefully learning predominantly near-zero FC weights might only suboptimally address overconfidence at the feature level. Hence, we address normalization in the feature space to avoid unwanted implications on the FC weights.
Feature normalization
Our work proposes T2FNorm, a method to improve the robustness of the network itself for OOD detection, which can be used in conjunction with any downstream scoring function. We perform feature normalization to alleviate the issue of overconfident predictions at the feature level. In particular, we normalize the feature vectors in the penultimate layer and scale them by a factor 1/τ. The normalized vector is then passed on, as usual, to the classification FC layer and to the cross-entropy loss function. Importantly, this normalization is performed only during training and inference (Algorithm 1); we skip the normalization for OOD detection (Algorithm 2). Figure 2 shows the schematic diagram of our method. The proposed approach is simple and easy to implement and, as we will show later, it produces improved performance for OOD detection while maintaining predictive abilities.
Algorithm 1 T2FNorm: Training
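As an illustration of Algorithms 1 and 2, the following PyTorch sketch shows one plausible implementation of the scheme described above (L2-normalize the penultimate features and scale by 1/τ during training and inference, and skip the normalization when computing OOD scores). The module name, feature dimension, and τ = 0.1 (the value used in our experiments) are illustrative assumptions, not the exact training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class T2FNormHead(nn.Module):
    """Classification head with train-time feature normalization (sketch)."""

    def __init__(self, feat_dim: int = 512, num_classes: int = 10, tau: float = 0.1):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)
        self.tau = tau

    def forward(self, feat: torch.Tensor, ood_scoring: bool = False) -> torch.Tensor:
        if not ood_scoring:
            # Training / ID inference: project features onto the unit hypersphere, scale by 1/tau.
            feat = F.normalize(feat, p=2, dim=1) / self.tau
        # OOD scoring: use the raw features so that the smaller OOD feature norm is preserved.
        return self.fc(feat)

# Training step (cross-entropy on normalized features) ...
head = T2FNormHead()
feats, labels = torch.randn(32, 512), torch.randint(0, 10, (32,))
loss = F.cross_entropy(head(feats), labels)
# ... and un-normalized logits when computing a downstream OOD score such as MSP.
ood_logits = head(feats, ood_scoring=True)
```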
Significance of Feature Norm As observed by recent works [3,4,6], generally ID samples have a more significant penultimate feature norm in comparison to OOD data. In CNN models, high-level spatial features are generated by convolution operations. The penultimate feature is derived from globally pooling post-ReLU spatial features. ReLU activation signifies the presence of specific in-distribution features, while their absence corresponds to smaller norms, often seen in out-of-distribution samples. Therefore, a neural network having better ID/OOD separability should demonstrate a higher relative norm for in-distribution versus out-of-distribution samples, enhancing discriminability.
Working Principle and Details Our operational hypothesis is that the network learns to produce high-level semantic ID features lying on the hypersphere due to the normalization performed during training. However, this happens only for ID samples, as the network was trained with them; for OOD samples, high-level semantic ID features are not activated because of their absence, causing OOD feature representations to lie significantly beneath the hypersphere's surface. The greater the degree to which this occurs, the greater the distinction between ID and OOD data samples. Quantitatively, we can formalize this distinction as the ratio of ID norm to OOD norm, which we term the separability ratio (S) in this work. We observe that, depending upon the separability ratio, LogitNorm [1] and LogitPenalty [1] can perform OOD detection, supporting the preferability of a higher separability ratio.
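A small sketch of how the separability ratio can be computed from penultimate features is shown below; the feature tensors here are random placeholders.

```python
import torch

def separability_ratio(id_feats: torch.Tensor, ood_feats: torch.Tensor) -> float:
    """S = mean L2 norm of ID penultimate features / mean L2 norm of OOD features."""
    return (id_feats.norm(dim=1).mean() / ood_feats.norm(dim=1).mean()).item()

# Placeholder feature batches; a higher S indicates better ID/OOD separability.
print(separability_ratio(torch.randn(1000, 512), 0.3 * torch.randn(1000, 512)))
```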
As most out-of-distribution (OOD) detection metrics concentrate on logits, and prior research has primarily focused on these logits, we investigate the impact of normalization at the feature representation level on both the separability ratio and the norm in feature and logit spaces for OOD detection. Given that feature-level normalization implies normalization within a higher-dimensional space than logit-level normalization, we postulate that high-dimensional normalization of training ID samples would enable the network to significantly reduce OOD norm relative to ID norm while preserving ID-specific features. As a result, we anticipate a substantial decrease in overconfidence, which is intrinsically linked to logit and feature norms, primarily since overconfidence is addressed at the penultimate feature level, inherently tackling the norm at the logit level. Confirming this, recent work, ReAct [2], has observed that the penultimate layer is most effective for OOD detection due to the distinct activation patterns between ID and OOD data.
On the significance of avoiding normalization at OOD scoring Should feature normalization be adopted during OOD scoring, it erroneously activates the feature for OOD samples, causing them to mimic the behavior of ID samples within the feature space. But, the removal of normalization during OOD scoring helps to preserve the difference in response of the network towards OOD and ID samples in the feature space.
Experiments
In this section, we discuss the experiments performed in various settings to verify the effectiveness of our method.
Metrics and OOD scoring: We report the experimental results in three metrics: FPR@95, AUROC, and AUPR. FPR@95 gives the false positive rate when the true positive rate is 95%. AUROC denotes the area under the receiver operating characteristic curve and AUPR denotes the area under the precision-recall curve. We use multiple OOD scoring methods, including parameter-free scoring functions such as maximum softmax probability [17], the parameter-free energy score [18], and GradNorm [3], as well as hyperparameter-based scoring functions such as ODIN [19] and DICE [5]. We use the recommended value of 0.9 for the DICE sparsity parameter p and the recommended τ = 1000 and ϵ = 0.0014 for ODIN. Training pipeline: We perform experiments with three training methods: Baseline (cross-entropy), LogitNorm [1], and T2FNorm (ours), following the training procedure of the open-source framework OpenOOD [16]. Experiments were performed across ResNet-18, WideResNet (WRN-40-2), and DenseNet architectures with an initial learning rate of 0.1 and a weight decay of 0.0005 for 100 epochs, based on the cross-entropy loss function. We set the temperature parameter τ = 0.04 for LogitNorm, as recommended in the original setting [1], and τ = 0.1 for T2FNorm. Please refer to Figure 9 for the sensitivity study of τ. Five independent trials are conducted for each of the 18 training settings (across 2 ID datasets, 3 network architectures, and 3 training methods). We trained all models on NVIDIA A100 GPUs.
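For reference, the three metrics can be computed from ID and OOD score arrays as in the sketch below; the score distributions are synthetic placeholders, and this is not the OpenOOD evaluation code.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def fpr_at_95_tpr(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    """FPR@95: fraction of OOD samples above the threshold that keeps 95% TPR on ID."""
    thr = np.quantile(id_scores, 0.05)          # 95% of ID scores lie above this threshold
    return float(np.mean(ood_scores >= thr))

def auroc_aupr(id_scores: np.ndarray, ood_scores: np.ndarray):
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])  # ID = positive
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)

# Synthetic MSP-like scores for illustration only.
id_s, ood_s = np.random.beta(8, 2, 5000), np.random.beta(2, 2, 5000)
print(fpr_at_95_tpr(id_s, ood_s), *auroc_aupr(id_s, ood_s))
```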
Results
Superior OOD Detection Performance Quantitative results are presented in Table 1. It shows that our method is consistently superior in the FPR@95, AUROC, and AUPR metrics. Our method reduces the FPR@95 metric by 34% compared to Baseline and 7% compared to LogitNorm using DICE scoring for ResNet-18. Figures 3 and 4 show the FPR@95 values across different OOD datasets using MSP scoring in ResNet-18, where our method reduces FPR@95 by 33.7% compared to the baseline and by 4.4% compared to LogitNorm. Interestingly, for both ID datasets, we can also observe the incompatibility of LogitNorm with DICE scoring in the DenseNet architecture, where it underperforms even when compared to the baseline. On the other hand, our method is more robust regardless of architecture or OOD scoring method. Architecture Agnostic without Compromising Accuracy Our experiments across three architectures, as reported in Table 1, show the compatibility of our method with various architectures, evidencing the agnostic nature of our method to architectural designs. An essential attribute of OOD methods employing regularization during training is the preservation of classification accuracy on ID datasets, independent of their OOD detection performance. The evidence supporting these assertions can be found in Table 2.
Significant Reduction in Overconfidence In Figure 5 we show the comparison between Baseline, LogitNorm, and T2FNorm in terms of the distribution of maximum softmax probability. It can be observed that overconfidence is addressed by T2FNorm to a greater extent in comparison with the baseline. Though the issue of overconfidence is also reduced in LogitNorm, the separability ratio is significantly higher in T2FNorm, as we show in Figures 6 and 15. Norm and Separability Ratio The statistics of the norm and separability ratio for the ResNet-18 model trained with the CIFAR-10 dataset are given in Table 3. The average ID norm of 0.9 ∼ 1 for the penultimate feature implies empirically that ID samples approximately lie on the hypersphere even at the pre-normalization stage. Again, the average norm for OOD samples is found to be 0.15, implying that OOD samples lie significantly beneath the hypersphere as ID-specific features are not activated appreciably. This depicts a clear difference in the response of the network towards OOD and ID. Similar observations can be made on the logits, as the feature representation has a direct implication on them. More importantly, from the comparison of the various methods, we observe that the separability factor S induced by our method is highly significant. For instance, we achieve S = 6.01 at the end of training in the penultimate feature space. The progression of S over the epochs in both the feature and logit space can be observed in Figure 6.
Compatibility with existing OOD scoring methods T2FNorm is compatible with various existing OOD scoring functions. Figure
Discussion
Ablation Study of Normalization Imposing normalization during OOD scoring enforces a constant magnitude constraint on all inputs, irrespective of their originating distribution. This effectively eradicates the very characteristic (the magnitude property) that could potentially differentiate whether an input originates from the training distribution or not. It results in the trained network incorrectly assuming OOD samples as ID samples. As demonstrated in Figure 10, the separability of the nature of input distribution is compromised by normalization during OOD scoring. Quantitatively, for trained ResNet-18 architecture with CIFAR-10 as ID, this degrades the mean FPR@95 performance from 19.7% (T2FNorm) to 48.66%.
Sensitivity Study of Temperature τ Figure 9 shows that the classification accuracy and OOD detection performance (FPR@95) are not very sensitive over a reasonable range of τ. We found the optimal value of τ to be 0.1. While the performance is good for τ ∈ [0.05, 1], both accuracy and the FPR@95 score degrade substantially for τ > 1 and τ ≤ 0.01. Implication on FC Layer Weights Figure 11 shows the weights of the final classification layer corresponding to the Airplane class for T2FNorm and LogitNorm. In comparison to the smoother weights of LogitNorm, the weights of T2FNorm have higher variance and are sharply defined. Quantitatively, we find the average variance to be about 10 times higher in T2FNorm as compared to LogitNorm. Roughly speaking, it can be inferred that T2FNorm encourages the clear assignment of important features for a given category's classification. It necessitates the activation of the specific important features for ID sample predictions. Conversely, OOD samples, which lack these important features, fail to activate them, leading to lower softmax probabilities. Table 4 further shows that the means of both the negative weights and the positive weights are greater in magnitude for T2FNorm.
Related Works
OOD Detection Numerous studies have emerged in recent years focusing on OOD detection. A straightforward method for OOD detection is the maximum softmax probability [17]. However, it remains an unreliable scoring metric because of the inherent overconfidence imposed by training with one-hot labels [20]. OOD detection has primarily been tackled with three lines of approach in the literature: (a) post-hoc methods, (b) outlier exposure, and (c) train-time regularization. Post-hoc methods [5,17,18,21,22,23,24,25,26] aim to improve ID/OOD separability with pretrained models trained solely for accuracy. Outlier exposure is a less studied line in academic research, since assumptions about the nature of OOD data limit its applicability, although it is commonly used in industrial settings. Train-time regularization [27,28,29,30,31,32] employs some form of regularizer in the training scheme; because it can directly impose favorable constraints during training, this line of work potentially offers the most promising path to superior OOD detection performance. For instance, LogitNorm [1] employs logit normalization as train-time regularization to address the overconfidence issue and thereby improve OOD detection. LogitNorm [1] also shows that overconfidence can be addressed, albeit sub-optimally, with a logit penalty. In contrast to LogitNorm [1], our work addresses overconfidence in the feature space, thereby automatically addressing it in the logit space as well; our work therefore deals with normalization in the high-dimensional feature space.
Normalization The utility of normalization in ensuring consistent input distributions and reducing covariate shift has proven beneficial in various subareas of deep learning [33,34,35,36]. Normalization layers with learnable parameters, such as Batch Normalization [37], Layer Normalization [38], and Group Normalization [39], have been effective in mitigating training issues of neural networks.
On the other hand, the strategic placement of L2 normalization has also been a popular recipe for training more effective deep learning models. Similar to our work, [36] constrains the features to lie on a hypersphere of fixed radius for face verification purposes, but does so in both the training and testing phases and without scaling. Later works in deep metric learning, such as ArcFace [40], CosFace [41], and SphereFace [42], also exploit the effectiveness of normalization. Specifically, [43] presents a hyperparameter-free OOD detection method introducing a cosine loss, taking inspiration from NormFace [44], in which both the penultimate feature and the fully connected layer are normalized. Our approach differs from the cosine loss in three ways. a) The temperature parameter is learned in the cosine loss method, whereas we set a fixed temperature across all six settings; while this may appear to add an extra hyperparameter, we find a single value of τ to be both architecture- and dataset-agnostic. b) Unlike the cosine loss, we avoid normalizing the classification layer, freeing it to learn non-smooth weight values, which in turn boosts compatibility with downstream OOD scoring methods that rely on magnitude-based ID/OOD separability. c) Importantly, we remove the constraint of hyperspherical embeddings in the OOD scoring phase, while [43] uses cosine similarity and is therefore not compatible with other OOD scoring functions. [45] showed that modern neural networks are poorly calibrated and proposed temperature scaling as a post-hoc method to improve calibration. Platt scaling [46] is another simple post-processing calibration technique. Label smoothing [47] helps avoid overconfidence by adding uncertainty to the one-hot encoding of labels.
Conclusion
In summary, our work introduces a novel training-time regularization technique, termed T2FNorm, which mitigates the challenge of overconfidence by enhancing ID/OOD separability. We empirically show that T2FNorm achieves a higher separability ratio than prior works. This study delves into the utility of feature normalization to accomplish this objective. Notably, we apply feature normalization exclusively during the training and inference phases, deliberately omitting its application during the OOD scoring process. This strategy improves OOD performance across a broad range of downstream OOD scoring metrics without impacting the model's overall accuracy. We provide empirical evidence demonstrating the versatility of our method, establishing its effectiveness across multiple architectures and datasets, and we show empirically that it is not very sensitive to its hyperparameter.
Broader Impact and Limitations
OOD detection is a crucial task for AI safety. The accuracy of OOD detection directly impacts the reliability of many AI applications. Safe deployment is crucial in areas such as healthcare and medical diagnostics, autonomous driving, malicious use or intruder detection, fraud detection, and others. In such cases, OOD detection can play a crucial role in identifying unknown samples and increasing robustness against them. Further, OOD detection also helps increase the trustworthiness of AI models and thus their public acceptance. We demonstrate versatility across multiple datasets and architectures; however, due to limited compute, our experiments are restricted to lower-resolution images from CIFAR-10 and CIFAR-100, and the results cannot be guaranteed to generalize to higher-resolution images or real-world scenarios.
A Optimality of L2 normalization
In addition to L2 normalization, we investigated various other normalization types, including L1, L3, and L4. While these alternative forms of normalization also enhance performance, L2 emerges as the most effective. As demonstrated in Table 5, all of the investigated normalizations outperform the baseline, underscoring the efficacy of normalization in general. Although the separability factor in feature space is higher for L1 normalization, L2 normalization still achieves a superior FPR@95, along with a higher separability factor in logit space.
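For reference, a generic Lp projection (of which the L2 case is the one used in the main method) can be written as in the sketch below; the 1/τ scaling and the epsilon guard are assumptions carried over from the L2 setting:

```python
import torch

def lp_normalize(x: torch.Tensor, p: float = 2.0, tau: float = 0.1) -> torch.Tensor:
    """Project features onto the unit Lp sphere, then scale by 1/tau.
    p = 1, 2, 3, 4 correspond to the variants compared in Table 5."""
    norm = x.abs().pow(p).sum(dim=1, keepdim=True).pow(1.0 / p).clamp_min(1e-12)
    return (x / norm) / tau
```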
B Adverse effect of OOD scoring-time normalization
As previously discussed in Section 5, applying normalization at OOD scoring time is detrimental, primarily because it obscures the inherent magnitude distinction between ID and OOD samples. Using the ResNet-18 model trained with CIFAR-10, the results on various OOD datasets are summarized in Table 6 using three OOD metrics.
C Feature norm Penalty
Suppressing the feature norm, which is directly linked to overconfidence [1], is also feasible by adding the L2 norm of the features as a regularization loss. Using the joint optimization of L_S + λ·L_FP (with L_S the supervised loss and L_FP the feature-norm penalty loss), and setting λ = 0.01 so as not to affect accuracy, we indeed find a small improvement over the baseline, but only on the FPR@95 metric. The optimization objective is satisfied, with relatively small average norms for both ID and OOD samples; however, the desired ID/OOD separability is not achieved, as seen in Table 7.
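A minimal sketch of this objective, assuming standard cross-entropy for L_S and the mean per-sample L2 feature norm for L_FP (the exact reduction used in the appendix is not stated), is:

```python
import torch
import torch.nn.functional as F

def feature_norm_penalty_loss(logits: torch.Tensor, features: torch.Tensor,
                              targets: torch.Tensor, lam: float = 0.01) -> torch.Tensor:
    """L_S + lambda * L_FP: cross-entropy plus a penalty on the penultimate
    feature norms (lambda = 0.01 as in the text)."""
    supervised = F.cross_entropy(logits, targets)
    norm_penalty = features.norm(p=2, dim=1).mean()
    return supervised + lam * norm_penalty
```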
D Ablation study on Different Layers
The ResNet-18 architecture is primarily comprised of four residual blocks: Layer1, Layer2, Layer3, and Layer4. We conducted an ablation study to understand the impact of applying T2FNorm to the representation obtained after each of these layers and discovered that the pooled representation obtained from Layer4 was the only effective place to boost OOD detection performance (Table 8). This finding aligns with observations documented in [22], which noted that high-level features have substantial potential for distinguishing ID from OOD data, since the earlier layers primarily handle low-level features while the later layers process semantic-level features. Furthermore, as Table 8 also shows, applying normalization at the earlier layers results in performance even poorer than the baseline.
E FC Weights Visualization
We show the weight visualization for all classes of the fully connected layer in Figure 12.
The statistics of the norms in both feature space and logit space, along with the separability ratio, obtained from the ResNet-18 model trained on the CIFAR-100 dataset with various methods are given in Table 9. The observations are similar to those made earlier with CIFAR-10 as ID; for instance, the separability ratio achieved with our method in the penultimate feature space is 2.9, with a significantly lower norm of 0.32 for OOD data in comparison to other methods. We use SVHN as OOD data for the purpose of illustrating the statistics in all settings unless otherwise noted.
CIFAR-10 CIFAR-10 is one of the most commonly used datasets for benchmarking computer vision performance, especially for classification tasks. It contains 10 categories of images.
MNIST The MNIST dataset comprises 70,000 grayscale images, each representing a handwritten digit from 0 to 9 at a resolution of 28×28 pixels. The dataset consists of 60,000 training images and 10,000 testing images.
SVHN SVHN is a real-world digit recognition dataset obtained from house numbers in Google Street View images. It is similar to MNIST, but recognition is somewhat harder for machine learning algorithms.
LSUN The LSUN dataset variants are designed for large-scale scene understanding.
Places365 Places365 is a large-scale scene dataset developed for training deep-learning models to understand scenes.
Texture The Textures dataset contains images of various textures. It provides a collection of images distinct from the widely available object and scene images.
TinyImageNet TinyImageNet is a smaller version of the ImageNet dataset, consisting of 200 classes. It was created to make research on a rich set of categories computationally feasible with relatively modest computing infrastructure.
H FPR@95 across various OOD datasets
The FPR@95 metric across various architectures, with both CIFAR-10 and CIFAR-100 as ID, is shown in the radar plots in Figures 18, 19, 20, and 21.
Observations show that T2FNorm is as competitive as LogitNorm, if not better, in terms of the FPR@95 metric.
I Compatibility with various OOD scoring functions
Table 10 shows the comparison of various methods in terms of the FPR@95, AUROC, and AUPR metrics. For instance, using the parameter-free EBO scoring function, our method achieves significantly superior performance in comparison with the others.
J Distribution of Norm
The distributions of the feature norm for the ID (CIFAR-10) and OOD (SVHN) datasets under each of the three methods (Baseline, LogitNorm, T2FNorm), extracted from the ResNet-18 architecture, are shown in Figures 22 and 23. In comparison to the baseline, both LogitNorm and T2FNorm show less overlap between ID and OOD samples.
Representing and Integrating Linguistic Knowledge
This paper describes a theory of the representation and use of linguistic knowledge in a natural language understanding system. The representation system draws much of its insight from the linguistic theory of Fillmore et al. (1988). This models knowledge of language as a large collection of grammatical constructions, each a description of a linguistic regularity. I describe a representation language for constructions, and principles for encoding linguistic knowledge in this representation. The second part of the theory is a conceptual analyzer which is designed to model the on-line nature of the human language understanding mechanism. I discuss the core of this analyzer, an information-combining operation called integration which combines constructions to produce complete interpretations of an utterance.
Representation
A natural way to model knowledge of language is as a large collection of facts or regularities about the language. In this theory I use a single representational device, the grammatical construction, to represent these regularities. Our entire model of linguistic knowledge consists of a database of these constructions, uniformly representing lexical knowledge, syntactic knowledge, and semantic knowledge.
Although the notion of grammatical construction I use is based on that of Fillmore et al. (1988), it differs in a few respects. Fillmore et al. define a construction as a structure which represents a "pairing of a syntactic pattern with a meaning structure". Such a construction is a sign, in the sense of Saussure, or a rule-pairing in the sense of Montague. I use a somewhat extended notion: a construction is a relation between one information structure and one or more others. These information structures can be semantic, syntactic or both. Thus, whereas the "sign" expressed a relation between a set of ordered phonemes and a meaning structure, the construction abstracts over this by replacing 'ordered sets of phonemes' with abstractions over them. These abstractions can be syntactic or semantic ways of expressing more abstract categories to which these phoneme sequences belong. The construction is then a part-whole structuring relating these categories.
* I want to express my thanks to Peter Norvig, Nigel Ward, and Robert Wilensky for many helpful comments and discussions on these ideas. This research was sponsored by the Defense Advanced Research Projects Agency (DoD), monitored by the Space and Naval Warfare Systems Command under N00039-88-C-0292, and the Office of Naval Research, under contract N00014-89-J-3205.
An Example Construction
To make this idea more concrete, consider some specific examples from the grammar, which currently includes about 30 constructions. In the examples I will be focusing on the knowledge that is necessary to handle the sentence: "How can I find out how much disk space I am using?" The top-level construction for the sentence is the WhNonSubjectQuestion construction. This construction is a sub-type of the WhQuestion construction. The wh-questions are those which begin with a wh-element --an element which is semantically questioned. In the WhNonSubjectQuestion this questioned element begins the sentence but does not function as the grammatical subject of the sentence. Following are two other examples of WhNonSubjectQuestions: Why did you run the wrong way?
What did you pick up at the store? The construction is represented in figure 1 below: There is a construction in English called WhNonSubjectQuestion. It consists of two constituents, $t and $v, and a "whole", $q. (In the example sentence, the $t constituent consists only of the word "How", while the $v constituent consists of the phrase "can I find out how much disk space I am using.") These three elements consist of statements in a knowledge representation language based on Wilensky (1986). The representation language is relatively perspicuous, not differing greatly from other popular representation languages (such as KL-ONE (Brachman & Schmolze 1985)). The operator "a" creates an existentially quantified variable, and is followed by the variable name and a set of statements in its scope.
The infix operator "AIO" (An Instance Of) establishes one element as an instance of another, and the infix operator "AKO" (A Kind Of) establishes one element as a sub-type of another.
The first constituent, $t, is an instance of the Identify concept, which will be described later. This concept contains two relations, Identify-Specified and Identify-Presupposed. (Note that they are referred to simply as -Specified and -Presupposed.) The second constituent, $v, must be an instance of the construction SubjectSecondClause. This is a clause where the subject follows an initial verbal auxiliary. The name SubjectSecondClause was chosen to replace the traditional term Aux-Inversion in order to avoid employing terminology that uses the "movement" metaphor.
The construction builds the $q element, which is an instance of the Question concept. Note that the variable which fills the -Queried relation of the Question is the same element that fills the -Specified relation in $t. The -Presupposed relation is filled with the integration of the semantics of $t and $v. The integration process is discussed below.
Each constituent (enclosed in square brackets) is composed of a set of semantic relations. These constituents correspond to what are called informational elements in the unification literature. Each consists of a group of semantic relations expressing some information about some linguistic entity.
The relation among these informational elements is represented by the right-arrow "→" symbol. In the current version of the representational system, the right-arrow indicates the conflation of part-whole and ordering relations. That is, the default is for constructions to be ordered. In order to indicate a part-whole relation without the ordering relations, the right-arrow is followed by the keyword "UNORDERED". Figure 2 below provides an example of an unordered construction.
Features of the Representation
The representation language and the grammar offer a number of distinguishing features. First is the ability to define constituents of constructions semantically as well as syntactically. For example, note that the first constituent of the WhNonSubjectQuestion, $t, was defined as any informational element which is an instance of a certain Identify concept. The Identify concept indicates some question about the identity of the individual filling the Identify-Specified relation. The information that is known about the individual fills the Identify-Presupposed relation. The Identify concept is the main characteristic of the lexical semantics of the wh-words. Thus this constituent is specified semantically, simply by requiring the presence of the Identify semantics. In more traditional grammars, the constituent would be defined syntactically, as some sort of WhPhrase.
Capturing the syntax of the WhPhrase would entail duplicating huge parts of the grammar or introducing significant syntactic apparatus. Since the semantics of Identify must be in the grammar anyhow, using it to specify the construction simplifies the grammar at no cost.¹ A construction's constituents can be constrained to be instances of other constructions. For example, the second constituent in figure 1 is constrained to be an instance of SubjectSecondClause.
But these constraints are the only form of syntactic knowledge that this representational system allows. That is, constituency and ordering are the only syntactic relations in the system. All others are semantic. In keeping with these last two ideas, this representation dispenses with any enrichment of surface form. Thus phenomena which might traditionally be handled by gaps, traces, or syntactic coindexing are handled with semantic relations. Thus, as we saw above, the first constituent of the WhNonSubjectQuestion construction is specified semantically, and no gaps or traces are involved.²
Relations Among Constructions
As with other kinds of knowledge, linguistic knowledge includes an abstraction hierarchy. An example of two constructions related by abstraction is given in figure 2. Constructions are also augmented by information concerning their frequency. The use of frequency information in comprehension (suggested in Bresnan 1982) is discussed in Wu (1989), and is not discussed further here.
I noted above that a construction may constrain one of its constituents to be an instance of some other construction. In a similar manner, a construction may require that several of its constituents participate together in a second construction. This is represented by including specifications for additional constructions after the keyword WITH. The relevant constituents of these extra constructions are then marked with the proper constituent variables from the main construction.
To illustrate the last few points, consider the VerbParticle1 construction in figure 3. It is one of two constructions in which discontinuous phrasal verbs like "find out" or "look up" occur. These two constructions specify the ordering of the verb and particle, differing in that VerbParticle2, which covers phrases like "find it out", includes an extra constituent. But VerbParticle1 and VerbParticle2 both require that their constituents be filled by phrasal verbs like "find out". This fact is represented by requiring that these constituents be instances of the unordered LexicalVerbParticle construction, which is the ancestor of all these phrasal verbs. In the representation of the VerbParticle construction below, constituents $v and $p are constrained to be the $verb and $particle respectively of a LexicalVerbParticle construction. Since FindOut is a subtype of LexicalVerbParticle, it meets the necessary constraints, and can integrate with VerbParticle1.
¹ This makes it more difficult for the parser to index constructions to consider. Because a construction's constituents may be defined by any set of semantic relations rather than by a small set of syntactic categories, it is no longer possible to simply look for a syntactic handle in the input. Indeed, the "construction access problem" becomes much more like the general memory access problem. In this sense this analyzer resembles the Direct Memory Access Parser of Riesbeck (1986).
² Jurafsky (1988) discusses how other phenomena classically handled by transformations and redundancy rules can be represented as constructions.
The Basic Integration Operation
Associated with the representational system is an information-combining operation called integration, used here in two ways. The more complex one is discussed in the next section. The simpler version of integration is used to match informational elements to constituents of a construction. The set of relations which constitute a constituent act as a set of constraints on any candidate constituents. For example, consider what semantics must be present in an informational element for it to be a constituent of the HowScale construction.³ The construction has two constituents, the lexical item "how" and a second constituent which is semantically specified to be some sort of semantic scale. This constituent may be an adjective, an adverb, or a quantifier so long as it has the proper semantics. In the cases above, the scales range over such things as width, strength, speed, and amount. The construction takes a constituent with these semantics, and builds an instance of the Identify concept. The -Presupposed relation of this Identify concept is filled by the semantics of the constituent. The -Specified relation is bound to the -Location of the presupposed object on this scale. That is, the meaning of the construction is something like "The location of object $z on scale $s is in question". In order for an informational element to integrate with the $s constituent of the HowScale construction, its semantics must already include an instance of a scale, with some object on the scale. As in standard unification, the constraints are matched with the elements in a recursive fashion, binding variables in the process. Unlike unification, a match cannot succeed if a candidate constituent is merely compatible with the required information. For example, the element [(a $x ($x AIO Scale))] would unify with the $s constituent. However, it would not integrate, because it lacks the -On relation.
³ Note that a parse which uses WITH constraints does not have a parse tree, but a parse graph. This is because both VerbParticle and LexicalVerbParticle would occur in the parse tree above the phrase "find out". However, this grammar obviates the need to keep a parse tree at all, as the grammar itself specifies how semantic integration is to be done.
We can summarize the simple integration algorithm as follows:
Simple Integration Algorithm: Unify the set of constituent relations with the relations present in the candidate, subject to the constraint that every relation in the constraint must already be instantiated in the candidate.
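To make the constraint-checking behaviour concrete, here is a toy sketch (not the original implementation) that models a constituent's relations as a flat dictionary; the variable names beginning with "$" and the AIO/-On relations follow the examples in the text, but the data structures are simplified assumptions.

```python
def simple_integrate(constraints: dict, candidate: dict):
    """Toy version of the Simple Integration Algorithm described above.

    Every relation required by the constituent must already be instantiated
    in the candidate; unlike plain unification, a merely compatible candidate
    (one that simply lacks a required relation) does not integrate.
    """
    bindings = {}
    for relation, required in constraints.items():
        if relation not in candidate:            # required relation missing
            return None
        value = candidate[relation]
        if isinstance(required, str) and required.startswith("$"):
            if required in bindings and bindings[required] != value:
                return None                      # inconsistent variable binding
            bindings[required] = value
        elif required != value:
            return None                          # constant mismatch
    return bindings

# The HowScale example from the text: a bare Scale instance unifies but does
# not integrate, because it lacks the -On relation.
scale_constraint = {"AIO": "Scale", "-On": "$z"}
print(simple_integrate(scale_constraint, {"AIO": "Scale", "-On": "disk-space"}))  # {'$z': 'disk-space'}
print(simple_integrate(scale_constraint, {"AIO": "Scale"}))                       # None
```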
There are times when we want the semantics of unification rather than integration. That is, we may want to ensure that a certain relation is in the semantics of a construction regardless of which constituents it may also occur in. We can accomplish this by putting information in the "whole" of the construction's part-whole structure. For example, in figure 4, the information "($x -Location $z $s)" is added to the final semantics whether or not it was present in the $s constituent.
Thus there is an asymmetry in the application of the integration operation, caused by the part-whole structure of the construction. The relations in the constituent slots of the construction definition are viewed as constraints on candidate constituents. The relations in the "whole" element, on the other hand, are used as instructions for creating a whole semantic structure.
Note that this interpretation of the construction is limited to its use in comprehension. When a construction is used in generation the exact opposite situation holds --the "whole" element imposes constraints, while the constituents give instructions.
More Complex Integration
The algorithm above is sufficient to match candidate constituents to the constraints of a construction. But this sort of simplistic combination is not common in natural language when combining constructions.
Integrating constituents usually involves modifying some of the structure of one of the constituents. A particularly common type of modification involves binding the value of one constituent to some open variable in another constituent.⁴ That is, we must decide whether two relations are at the same level, or whether one should fill some semantic gap in another.
In some cases, deciding the level and finding an appropriate semantic gap can be quite simple. For example, part of the verb phrase construction is shown in figure 5 below. Note that the variable $/v is marked with a slash. This indicates that the verb $v is the matrix structure, while the complement $c is to be integrated to some gap inside of $v. This slash can be used for any variable to indicate that it is the matrix for some integration. For a more complex example, consider figure 1 (duplicated below), examining how the two constituents of the WhNonSubjectQuestion are integrated for the sentence "How can I find out how much disk space I am using?". I will not discuss the details of the $v constituent except to note that its semantics involve an AbilityState predicated of some person asking a question, and concerning some finding-out action.
The first constituent ($t) of the WhNonSubjectQuestion is filled here by the lexical construction how6, one of the various "how" constructions. How6 is concerned with the means of some action, asking for a specification of the means or plan by which some goal is accomplished.
⁴ This occurs in such common phenomena as WH-movement, Y-Movement, Topicalization, and other phenomena classically analyzed as movement rules, where there is some long-distance link between some element and the valence-bearing predicate into which it integrates.
[(a $q ($q AIO Question) ($p -Queried $q) ($pr -Presupposed $q) ($pr AIO PlanFor) ($p -Plan $pr) ($g -Goal $pr) ($g AIO AbilityState))] Figure 7
Here, in integrating $/v and $/presup, the algorithm finds a gap inside the second structure. That is, the integration algorithm integrated the PlanFor of the first constituent with the AbilityState of the second, in effect unifying the variable $g in the PlanFor structure with $v, the AbilityState structure. Thus the complete algorithm might be expressed as follows: Full Integration Algorithm: Integrate the set of constituent relations with the relations present in the candidate by finding an appropriate gap (variable) in one of the two structures to integrate with the other.
Integration is an augmentation of unification. In order to handle more complex constructions, the operation would need to be augmented further, adding more inferential power.
For example, a certain class of inference is required to integrate constructions like DoubleNoun (Wu 1990), where the relation between the constituents can be quite indirect and contextually influenced. Such an integration algorithm might also need to make the kind of metaphoric inferences studied by Martin (1988), or the abductive inferences of Charniak & Goldman (1988) or Hobbs et al. (1988). But rather than making these inferences in the pipelined fashion that these other mechanisms use, augmenting the integration algorithm allows these inferences to be made in an on-line manner.
Previous Research
The theory I have presented draws many elements from other theories of grammar, especially Fillmore et al. (1988). The use of ordered and unordered constructions is based on the ID/LP notation of Gazdar et al. (1985). The theory differs from these and most other grammars (such as Bresnan (1982), Marcus (1980), Pereira (1985)) in allowing semantic constraints on constituents (indeed in emphasizing them), and in disallowing syntactic relations other than ordering and constituency. It also differs by constraining the grammar to be semantically informative enough to allow the analyzer to produce interpretations in an on-line fashion.
As for the tradition which has concentrated on semantic analysis, this theory owes much to such systems as ELI (Riesbeck & Schank 1978) and the Word Expert Parser (Small & Rieger 1982). It differs from these in its commitment to representing high-level linguistic constructions, to the use of declarative representations, and to capturing linguistic generalizations.
A number of earlier systems have integrated linguistic knowledge with other knowledge, including PSI-KLONE (Bobrow & Webber 1980) and Jacobs's (1985) ACE/KING system, on which this theory draws heavily. The use of constructions here draws also on the pattern-concept pair of Wilensky & Arens (1980) and Becker's 1975 work on the phrasal lexicon.
The integration operation is based on unification in ways discussed above. Unification was first proposed by Kay (1979), and has since been used and extended by many other theories of grammar.
Conclusion
I draw the following conclusions for the representation and use of linguistic knowledge: 1. Allowing constructions to include semantic as well as syntactic constraints removes a great deal of complexity from the syntactic component of a grammar. This is useful because no corresponding complexity is added to the semantic component. Semantic knowledge which must be in the system anyway is employed.
2. Committing to exploring semantically rich constructions like HowScale from a semantic perspective produces a grammar of sufficient richness to allow lexical semantics to influence the integration process, and thus the interpretation, in an on-line way.
3. Using a single representational device, the grammatical construction, avoids the proliferation of syntactic devices and simplifies the representational theory. This may also simplify the corresponding learning theory.
4. Extending unification-style approaches from syntax to semantics requires augmenting the feature-based unification operation to richer, relational knowledge representation languages. I have shown how one such extension has been implemented, allowing some gap-seeking intelligence to guide the integration process.
Integrated single-cell sequencing, spatial transcriptome sequencing and bulk RNA sequencing highlights the molecular characteristics of parthanatos in gastric cancer
Background: Parthanatos is a novel form of programmed cell death based on DNA damage and PARP-1 dependency. Nevertheless, its specific role in the context of gastric cancer (GC) remains uncertain. Methods: In this study, we integrated multi-omics algorithms to investigate the molecular characteristics of parthanatos in GC. A series of bioinformatics algorithms were utilized to explore the clinical heterogeneity of GC and further predict clinical outcomes. Results: Firstly, we conducted a comprehensive analysis of the omics features of parthanatos in various human tumors, including genomic mutations, transcriptome expression, and prognostic relevance. We successfully identified 7 cell types within the GC microenvironment: myeloid cell, epithelial cell, T cell, stromal cell, proliferative cell, B cell, and NK cell. When compared to adjacent non-tumor tissues, single-cell sequencing results from GC tissues revealed elevated scores for the parthanatos pathway across multiple cell types. Spatial transcriptomics, for the first time, unveiled the spatial distribution characteristics of parthanatos signaling. GC patients with different parthanatos signals often exhibited distinct immune microenvironment and metabolic reprogramming features, leading to different clinical outcomes. The integration of parthanatos signaling and clinical indicators enabled the creation of novel survival curves that accurately assess patients' survival times and statuses. Conclusions: In this study, the single-cell and spatial transcriptomic molecular characteristics of parthanatos in GC were revealed for the first time. Our model based on parthanatos signals can be used to distinguish individual heterogeneity and predict clinical outcomes in patients with GC. Keywords: Mental health... [sic] — no keywords are reproduced here.
drug therapy are the standard treatment methods for GC. According to statistics, with the development of targeted drugs, endoscopy and surgery, the global incidence and mortality of GC have decreased year by year, but the prevalence of GC in East Asia is still very high, accounting for more than 70% of the new diagnoses and deaths of GC in the world [4,5], and it is worth noting that the prevalence of GC in people under 50 years old has increased year by year globally [4]. This may be linked to genetics, obesity and dysregulation of the microbiome [6]. Additionally, GC is a highly heterogeneous disease in terms of clinical phenotype and molecular patterns. The heterogeneity of the tumor microenvironment and interaction with the host can potentially influence the disease progression, making the prognosis prediction for GC patients very challenging [7,8]. Therefore, it is crucial to urgently identify sensitive and effective methods to evaluate clinical data in GC patients in order to optimize their treatment process and prognosis.
In 2007, Ted Dawson discovered a novel form of programmed cell death that is dependent on DNA damage and PARP-1 activation, and named it parthanatos [9]. Subsequent research has increasingly confirmed that the parthanatos pathway is widely involved in the occurrence and development of various diseases, including Parkinson's disease, diabetes, heart failure, cerebral ischemia-reperfusion injury, and others [10]. PARP-1 (Poly (ADP-Ribose) Polymerase 1) is a DNA repair enzyme that mainly exists in the nucleus of eukaryotic cells, accounting for over 90% of cellular PARP. Under normal physiological conditions, PARP-1 monitors the DNA replication process, identifies and approaches DNA damage sites, promotes the recruitment of DNA repair effector proteins, and plays a role in repairing DNA damage. However, under pathological conditions with substantial DNA damage, PARP-1 is excessively activated, catalyzing the breakdown of intracellular nicotinamide adenine dinucleotide (NAD) into nicotinamide and poly ADP-ribose (PAR), resulting in significant depletion of NAD and accumulation of PAR. PAR then migrates to the mitochondria, leading to inhibition of the tricarboxylic acid cycle, impaired mitochondrial energy metabolism, release of apoptosis-inducing factor (AIF) and migration inhibitory factor (MIF) to the nucleus, and chromatin condensation and degradation, ultimately resulting in parthanatos [11,12]. In the development and progression of various cancers, parthanatos has demonstrated elevated activity [13]. The expression levels of PARP-1 and associated genes are generally higher in various cancers, such as breast cancer, ovarian cancer, endometrial cancer, lung cancer, and prostate cancer, when compared to normal tissues [14]. Furthermore, studies have shown that mice with PARP-1 gene knockout exhibit varying degrees of inhibition of tumorigenesis, particularly in pancreatic cancer and colorectal cancer [13,15]. In GC, the upregulation of PARP-1 expression has been shown to be associated with poor prognosis [16]. However, due to the complex cascade reactions and the involvement of multiple signaling factors in parthanatos, the specific mechanism of the parthanatos pathway in the occurrence and development of GC has not been elucidated. Therefore, our research starts from here to explore the potential link between parthanatos and GC, aiming to provide scientific guidance strategies for the clinical treatment of GC.
In this study, we first selected parthanatos-related genes from the GeneCards database. Based on the TCGA and GEO databases, we collected multi-omics data of parthanatos-related genes in various human cancers. The analysis of integrated single-cell and spatial transcriptome data contributes to the understanding of the structure of cell type distribution and the cellular communication mechanisms that underpin this structure. We introduce this section to explore the differences in the expression of the parthanatos signal between GC tissues and normal tissues. Subsequently, we established a GC patient classification model based on parthanatos-related gene expression patterns using unsupervised cluster analysis. This model can distinguish GC patients based on their different parthanatos characteristics. Furthermore, we used the differentially expressed genes between subtypes to develop a specific parthanatos-related prognostic model for GC, and co-predicted the clinical outcome of patients together with their clinical characteristics. In summary, through a series of bioinformatics analyses, we not only explored the single-cell and spatial transcriptomic molecular characterization of parthanatos in GC, but also provided personalized guidance for the clinical treatment and prognosis of GC patients.
Sample source and parthanatos-related gene source
Based on the TCGA platform, we downloaded and curated multi-omics data of pan-cancer in humans, including CNV, SNV, and methylation data, as well as mRNA expression profiles and corresponding clinical information at the transcriptome level. The processing methods for these data were similar to previous studies [17,18]. Visualization of the results was achieved using the R language and TBtools software. In addition to the pan-cancer data, we focused on in-depth analysis and exploration of the transcriptome data of GC. Specifically, we downloaded the corresponding data from the TCGA-STAD cohort, which included a total of 407 samples, including 32 adjacent non-cancer samples and 375 GC samples. After excluding samples with a survival time of less than 30 days, we analyzed the remaining 312 GC samples with complete follow-up information. Furthermore, to obtain more accurate and convincing results, we collected additional publicly available GC transcriptomic data (GSE84437). After integrating the prognostic information and expression profiles, we obtained a total of 431 GC samples with complete follow-up information. Since different data sources may have batch effects, we performed batch correction using the methods from previous studies [19,20]. GeneCards is a comprehensive online gene information database that provides detailed information about human genes, including gene function, expression, disease associations, mutations, protein information, and drug association [21]. Users can easily search and browse thousands of genes to gain a deeper understanding of their important roles in biology, medicine, and drug development. All parthanatos-related genes in this study were obtained from the GeneCards platform, where we retrieved 32 parthanatos-related genes for subsequent in-depth analysis.
Analysis of single-cell sequencing and spatial transcriptome data
The single-cell GC dataset is derived from GSE163558, which includes 3 GC tumor samples (GSM5004180, GSM5004181, and GSM5004182), 1 tumor-adjacent sample (GSM5004183) and 6 metastasis samples (GSM5004184, GSM5004185, GSM5004186, GSM5004187, GSM5004188, GSM5004189) [22]. In order to perform quality control on our raw count data, the following criteria were set: A) min.cells = 3, min.features = 200; B) nCount_RNA >= 1000; C) 200 <= nFeature_RNA <= 8000; D) percent.mt <= 20. To address the issue of differing total counts among cells, we applied a global scaling normalization method (LogNormalize) to normalize the data. This involved normalizing the feature expression measurements of each cell by their total expression, followed by multiplication by a scaling factor (10,000), and finally transforming the values using the natural logarithm. Based on the "vst" algorithm, we identified feature genes that exhibited high intercellular differences in the dataset and normalized all genes for further analysis.
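The preprocessing described above is Seurat-based; purely as an illustration of the same thresholds and normalization, here is a rough Python/scanpy analogue (the input path is hypothetical, and scanpy's default highly-variable-gene selection stands in for the "vst" step):

```python
import scanpy as sc

adata = sc.read_10x_mtx("GSM5004180_filtered_feature_bc_matrix/")  # hypothetical path

sc.pp.filter_genes(adata, min_cells=3)       # min.cells = 3
sc.pp.filter_cells(adata, min_genes=200)     # min.features >= 200
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
adata = adata[(adata.obs["total_counts"] >= 1000)
              & (adata.obs["n_genes_by_counts"] <= 8000)
              & (adata.obs["pct_counts_mt"] <= 20)].copy()

# "LogNormalize": scale each cell to 10,000 total counts, then take the natural log.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata)           # rough analogue of the "vst" feature selection
```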
To simplify computations and remove data noise, we performed a joint analysis of PCA and Harmony, aiming for batch correction and dimensionality reduction. Harmony applies principal component analysis to embed the transcriptomic expression profile into a lower-dimensional space and then iteratively removes dataset-specific effects. Based on the study of Jiang et al., we manually annotated the single-cell data after quality control and identified a total of 7 cell types, including myeloid cell, epithelial cell, T cell, stromal cell, proliferative cell, B cell, and NK cell [22]. We visualized the above cell subgroups in the form of UMAP [23]. Additionally, we employed six algorithms to score gene sets in the single-cell dataset: AUCell [24], UCell [25], Singscore [26], ssGSEA [27], AddModuleScore [28,29], and Scoring [17]. The Scoring value is the sum of the scores from the previous five methods and serves to assess the overall distribution of parthanatos gene set scores more stably and comprehensively.
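Continuing the same scanpy-style sketch (again only an analogue of the Seurat/R workflow described above), dimensionality reduction with Harmony and a module-score-style gene-set score could look like this; the batch key and the five-gene list are placeholders, not the study's actual 32-gene set:

```python
import numpy as np
import scanpy as sc

# Continuing from the preprocessed object above; Harmony requires the harmonypy package.
sc.pp.pca(adata, n_comps=20)
sc.external.pp.harmony_integrate(adata, key="sample")   # batch key "sample" is an assumption
sc.pp.neighbors(adata, use_rep="X_pca_harmony")
sc.tl.umap(adata)

parthanatos_genes = ["PARP1", "AIFM1", "MIF", "NAMPT", "PTEN"]   # illustrative subset only
genes = [g for g in parthanatos_genes if g in adata.var_names]

# score_genes is scanpy's analogue of Seurat's AddModuleScore: mean expression of the
# gene set minus the mean of a size-matched random control set, per cell.
sc.tl.score_genes(adata, gene_list=genes, score_name="parthanatos_addmodule")

# A plain mean-expression score as a crude stand-in for the rank-based methods
# (AUCell/UCell/singscore/ssGSEA) used in the study.
adata.obs["parthanatos_mean"] = np.asarray(adata[:, genes].X.mean(axis=1)).ravel()
```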
The spatial transcriptomics data were obtained from the GSE186290 dataset. It is important to note that this dataset consists of spatial transcriptomic sequencing of tissue samples from GC mice, and there are currently no publicly available human GC spatial transcriptomics data for analysis. The Read10X_Image function was used to read the spatial distribution information of tissue images and cells, while the Load10X_Spatial function integrated the spatial expression profile with spatial localization information. Similar to the single-cell analysis, we used the aforementioned six gene set scoring methods to evaluate the parthanatos scores of each cell. Finally, we used the SpatialFeaturePlot function to visualize the spatial distribution of parthanatos scores.
Construction of parthanatos molecular classifier for GC
As research progresses, the scientific community is gradually realizing the significant molecular heterogeneity both between and within tumors. It is due to this heterogeneity that individuals with the same disease often have different treatment response strategies and clinical outcomes. In light of this, we have developed a novel molecular classifier for GC based on parthanatos-related genes. Our goal is to clearly distinguish GC patients with different parthanatos features.
First, we merged the GC transcriptomic data from the TCGA-STAD cohort and the GSE84437 cohort, resulting in a total of 743 GC samples. Molecular clustering analysis was performed using the "ConsensusClusterPlus" R package developed by Wilkerson et al. [30], with the following specific parameters: reps=50, pItem=0.8, clusterAlg="km", distance="euclidean", maxK=9. The optimal number of clusters was determined based on the consensus cumulative distribution function and delta area plot. The clustering of GC patients was based on unsupervised clustering of parthanatos-related genes. To evaluate the classification performance and clinical relevance of this classifier, we further assessed the parthanatos scores and clinical prognostic differences among different subtypes of GC patients. Additionally, we generated a heatmap to visually depict the expression characteristics of each parthanatos-related gene in this molecular classifier.
Identification of internal molecular characteristics of the parthanatos molecular classifier
To fully explain the different clinical outcomes among GC patients with different molecular subtypes, we conducted in-depth research and exploration of their intrinsic molecular features. Firstly, based on the KEGG database, we collected 42 classical metabolic pathways and 24 classical immune pathways. The "GSVA" package was used to evaluate the metabolic and immune signaling strength in the 743 GC samples. Finally, we depicted the distribution of the intensity of each metabolic and immune signal in the form of a heatmap. In general, metabolic reprogramming and the immune microenvironment are classical molecular markers of tumors, and different forms of metabolic reprogramming and immune microenvironment features may be potential reasons for the different prognostic outcomes of the different parthanatos subtypes.
Apart from the immune pathways, there are many other algorithms in bioinformatics that can evaluate the tumor immune microenvironment. Therefore, we subsequently applied various immune-related algorithms to explore the inherent relationship between the parthanatos subtypes and the tumor immune microenvironment. A) The "Estimate" package is used to predict the content of stromal cells and immune cells in malignant tumor tissues based on gene expression data. This algorithm is based on enrichment analysis of individual sample gene sets and produces four scores: a) stromal score (indicating the presence of stroma in tumor tissue), b) immune score (representing the infiltration of immune cells in tumor tissue), c) ESTIMATE score (stromal score + immune score), and d) tumor purity. Using the "Estimate" package, we calculated the aforementioned four scores for the 743 GC patients and conducted corresponding comparisons. B) The TIMER2.0 online platform provides seven algorithms for predicting immune cell infiltration in tumor samples, including TIMER, CIBERSOFT, CIBERSOFT-ABS, QUANTISEQ, XCELL, EPIC, and MCPCOUNTER [31]. The specific procedure is as follows: based on the R language, the 743 GC samples were divided into 5 groups, with each of the first four groups containing 150 samples and the last group containing 143 samples. The expression matrices of these five groups were uploaded to the Immune Estimation module of the TIMER2.0 platform for prediction, and the results of immune cell infiltration were then downloaded and merged. Heatmaps were generated to show the immune cell infiltration of each sample and calculate the corresponding statistical differences [32]. C) Immune checkpoints are the main limiting factors for immune cells to exert antitumor functions. Therefore, we collected classical and recognized immune checkpoints from previous literature [33], and compared the expression characteristics of immune checkpoint-related genes in patients with different parthanatos subtypes. D) The association between each parthanatos-related gene and the GC immune microenvironment was calculated. Based on previous literature reports [34], we identified a set of 29 genes related to immune cells and immune-related functions. The ssGSEA algorithm was used to evaluate the immune cell and immune function scores of the 743 GC samples, and Spearman correlation analysis was performed to explore the potential association between each parthanatos-related gene and immune cell infiltration and immune function. At the same time, the correlation between the parthanatos score and immune cell infiltration and immune function was calculated.
Identification of potentially sensitive drugs for GC patients based on parthanatos molecular classifier
Danielle Maeser and colleagues developed a novel tumor drug sensitivity prediction scheme, namely the "oncoPredict" package [35]. This R package connects in vitro and in vivo drug screening, allowing easy prediction of tumor response to a large number of drugs screened in cancer cell lines. In this study, we utilized the "oncoPredict" package and the GDSC2 dataset to predict the response of the 743 GC patients to each drug and ultimately analyzed potential beneficial drugs for GC patients of different parthanatos subtypes.
Developing a novel GC prognostic assessment model based on parthanatos molecular classifier
Based on the "limma" package, differential gene expression between the different parthanatos subtypes was analyzed. The "clusterProfiler" package was used to assist in Gene Ontology (GO) enrichment analysis and KEGG pathway enrichment analysis [36][37][38]. Subsequently, the differentially expressed genes were used to construct a GC prognostic model. Firstly, the 431 GC samples from the GSE84437 dataset were randomly divided in a 6:4 ratio: 60% of the GC samples were considered as the training cohort for the model (260 GC samples), while the remaining 40% of the GC samples were considered as internal validation set 1 (test1 cohort, consisting of 171 GC samples). Additionally, all samples from the GSE84437 dataset were treated as internal validation set 2 (test2 cohort, consisting of 431 GC samples). The GC samples from the TCGA-STAD dataset were considered as the external validation set for the model (test3 cohort, consisting of 312 GC samples).
Firstly, in the training cohort, the model was constructed through the following steps: A) Univariate Cox regression was used to select genes that are related to GC prognosis. B) LASSO regression was applied to filter genes in order to avoid gene collinearity and overfitting of the model [39]. C) Multivariate Cox regression was used to build the prognostic model (based on the predict function to calculate the risk value for each GC patient). D) Patients were divided into high-risk and low-risk groups based on the median risk value. E) The "survival" and "survminer" packages were used to evaluate the modeling effect of the prognostic model by comparing the survival differences between the different risk groups of GC patients [40]. F) The "survivalROC" package was utilized to plot the ROC curve and calculate the AUC value to evaluate the predictive accuracy of the model [41]. Subsequently, in internal validation set 1, internal validation set 2, and the external validation set, the same model genes selected from the training cohort were subjected to multivariate Cox regression. The predict function was used to predict the risk value for each sample. Based on the median risk value from the training cohort, all samples from the different datasets were grouped accordingly. Finally, the stability of the model was verified through internal and external validation strategies. To further facilitate the wide application of the prognostic model in clinical settings, clinical features were introduced as variables in the model [42]. Based on the "rms" package, a nomogram was constructed, integrating the patient's risk group, grade, stage, gender, and age [32]. Calibration plots and ROC curves were used to evaluate the predictive accuracy of the nomogram.
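As a loose illustration of steps A)-D) in Python (the study itself used the R packages named above), a lifelines-based sketch might look as follows; the "time"/"event" column names and the candidate gene list are assumptions, and an L1-penalized multivariate Cox fit stands in for the separate LASSO step:

```python
import pandas as pd
from lifelines import CoxPHFitter

def build_risk_model(train_df: pd.DataFrame, candidate_genes: list, p_cutoff: float = 0.05):
    """Univariate screening followed by a penalized multivariate Cox fit."""
    kept = []
    for gene in candidate_genes:
        cph = CoxPHFitter()
        cph.fit(train_df[["time", "event", gene]], duration_col="time", event_col="event")
        if cph.summary.loc[gene, "p"] < p_cutoff:
            kept.append(gene)

    # L1-penalized multivariate Cox as a rough stand-in for the LASSO step.
    model = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
    model.fit(train_df[["time", "event"] + kept], duration_col="time", event_col="event")
    return model, kept

# Risk value per patient and a median split into high-/low-risk groups:
# model, genes = build_risk_model(train_df, candidate_genes)
# risk = model.predict_partial_hazard(train_df[genes])
# train_df["risk_group"] = (risk > risk.median()).map({True: "high", False: "low"})
```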
Differences in immune microenvironment between high- and low-risk groups
To investigate the potential reasons for the different outcomes between the high- and low-risk groups, we subsequently conducted immune microenvironment analysis on samples from both groups. Based on the immune-related scores obtained during molecular clustering, we compared the corresponding differences between the high- and low-risk groups. Specifically, we used the TIMER2 platform to predict immune cell infiltration abundance using seven immune algorithms, and the statistical differences in immune cell infiltration between the high- and low-risk groups were evaluated using the wilcox.test function. Additionally, we also compared the expression of immune checkpoint genes between the different risk groups.
Availability of data and materials
The datasets analyzed in this work may be found in the Supplementary Materials or the first author may be contacted.
Pan-cancer analysis based on genomics and transcriptomics
To study the variations and expression changes of parthanatos-related genes in various human tumors, we determined the mRNA expression as well as the frequency of copy number variations (CNVs) and single nucleotide variations (SNVs) of parthanatos-related genes in different tumors, using sample data from the TCGA database. Genes such as PARP1, NAMPT, AIMP2, MCL1, and TOMM20 exhibited widespread amplifications of CNVs in multiple cancers (Figure 1A), while genes like RNF146, GPX4, ESR1, CUL4A, and PTEN showed widespread deletions (Figure 1B). Figure 1C shows the SNV status of parthanatos-related genes, with genes such as PTEN, DDB1, ESR1, PARP1, and AIFM1 displaying high levels of SNVs in multiple cancers, especially in UCEC, STAD, and SKCM, providing valuable guidance for subsequent experimental research (Figure 1C). In terms of expression, genes like PARP1, FEN1, and GAS5 showed widespread high expression in cancers such as BRCA, LUAD, and LUSC, while genes like PTUD1, ESR1, and ESR2 showed widespread low expression (Figure 1D). Based on survival-related data, we also summarized the risk value of parthanatos-related genes in various cancers (Figure 1E). Additionally, Figure 1F shows the differential methylation levels of parthanatos-related genes between cancer and normal tissues. As shown in the figure, genes like RIPK1, RAB33A, and ESR1 displayed significantly higher methylation levels in cancer tissues compared to normal tissues (Figure 1F).
Single cell transcriptomic analysis of GC
Our single-cell data consist of 3 GC tumor samples (GSM5004180, GSM5004181, and GSM5004182), 1 tumor-adjacent sample (GSM5004183) and 6 metastasis samples (GSM5004184, GSM5004185, GSM5004186, GSM5004187, GSM5004188, GSM5004189). To perform quality control on our raw count data, we first filtered out unsatisfactory cells based on sequencing depth, number of genes, mitochondrial content, and ribosomal content; there were 53,940 cells before quality control, and 41,264 cells were left after quality control (Supplementary Figure 1). Subsequently, we conducted a joint analysis of PCA and Harmony to remove batch effects and reduce dimensionality, ultimately obtaining 20 principal components (PCs) for further analysis. During the determination of the resolution value, we observed that increasing the resolution allowed us to clearly observe which cell clusters were continuously dividing into subclusters, revealing the relationships between cell clusters at different resolutions. When the resolution was set to 2, we observed significant interweaving between cells, so we chose a resolution of 2 and obtained 34 unknown cell clusters (Figure 2A); the specific marker genes expressed in each cell cluster are shown in bubble charts (Figure 2B). Then we combined all the single-cell samples and performed dimensionality reduction on the merged samples using the tSNE nonlinear clustering algorithm (Supplementary Figure 2); we displayed the 34 unknown cell clusters in the UMAP map and found that they were expressed and distributed differently in normal, tumor, and metastatic tissues (Figure 2C-2E). Based on cell annotation strategies developed by previous studies, we identified 7 cell types within the GC microenvironment: myeloid cell, epithelial cell, T cell, stromal cell, proliferative cell, B cell, and NK cell; the specific genes expressed by each cell type are shown in the form of bubble maps (Figure 3A), and we also labeled the cell clusters on the UMAP map (Figure 3B). Figure 3C depicts the cellular distribution characteristics within normal tissue, tumor tissue, and metastasis tissue.
Parthanatos gene set scoring based on single-cell and spatial transcriptome data
Cells do not rely solely on one or a few genes to carry out their functions; many genes in the upstream and downstream parts of functional pathways vary in expression as those functions vary in strength. Therefore, we used five algorithms (AUCell, UCell, singscore, ssGSEA, and AddModuleScore) to score the parthanatos gene set in the single-cell data. The overall Scoring value is the sum of the scores from the five algorithms. Figure 3D shows the expression of parthanatos-associated genes in the 7 cell clusters for each algorithm. Myeloid cells, epithelial cells, and proliferative cells show strong signals, while NK cells and T cells show weaker signals (Figure 3D).
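As a rough Python analogue of this per-cell gene-set scoring, scanpy's score_genes plays the role of Seurat's AddModuleScore, and a simple rank-based score can approximate the idea behind AUCell. The gene list shown is an illustrative subset, not the full parthanatos gene set used in the study.

```python
# Minimal sketch of scoring the parthanatos gene set per cell; the AUCell-style score below
# is a simplified rank-based approximation, not the original R implementation.
import scanpy as sc

parthanatos_genes = ["PARP1", "AIFM1", "MIF", "OGG1"]   # illustrative subset only

# AddModuleScore-like score, stored in adata.obs
sc.tl.score_genes(adata, gene_list=parthanatos_genes, score_name="parthanatos_modulescore")

# Crude AUCell-like score: fraction of the gene set found among each cell's top-ranked genes
expr = adata.to_df()                                    # cells x genes matrix from .X
top_n = int(0.05 * expr.shape[1])                       # "top 5% of genes" threshold (assumption)
ranks = expr.rank(axis=1, ascending=False)
genes_present = [g for g in parthanatos_genes if g in expr.columns]
adata.obs["parthanatos_aucell_like"] = (ranks[genes_present] <= top_n).mean(axis=1).values
```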
To compare the differences in parthanatos-related gene expression among normal tissue, tumor tissue, and metastasis tissue, we displayed the gene set scores for each cell cluster under the five algorithms in Figure 3F. The different algorithm scores indicate that, compared to normal tissue, almost all cell types within the tumor tissue and metastasis tissue show higher parthanatos gene scores. Notably, the gene set scores for epithelial cells, T cells, and proliferative cells in tumor tissue and metastasis tissue are significantly higher (Figure 3F). Furthermore, using the UMAP algorithm, we mapped the 7 cell clusters and their respective parthanatos gene set scores onto the merged sample tissue. By comparing the Scoring scores, we can clearly observe that epithelial cells, T cells, and proliferative cells in tumor tissue and metastasis tissue have higher parthanatos gene set scores, which is consistent with our previous observations (Figure 4A). In addition, the control data from mouse GC tissue revealed, for the first time, the spatial distribution of parthanatos signals at the tissue level (Figure 4B).
Cluster analysis based on parthanatos score
We developed a GC classification model based on parthanatos-related genes, which can clearly distinguish GC patients with different parthanatos features. Using an unsupervised consensus clustering algorithm [43], we divided a total of 743 GC patient samples from the TCGA and GEO databases into two subtypes (C1 and C2). The consistency matrix heatmap, cumulative distribution curve, and delta area curve all confirmed that k = 2 is the optimal cluster number (Figure 5A-5C). Violin plots show the enrichment scores of the two subtypes: the C2 subtype had higher parthanatos scores, indicating higher activity of parthanatos-related genes, while the C1 subtype showed the opposite trend (Figure 5D). The heatmap displays the expression patterns of the parthanatos-related genes between the C1 and C2 subtypes. With a few exceptions, most genes, such as RIPK1, DDB1, CUL4A, AIMP2, and PARP1, were expressed at higher levels in the C2 subtype compared to the C1 subtype, further confirming the higher expression activity of parthanatos-related genes in C2 subtype patients (Figure 5F). We also studied the prognosis differences between the two subtypes using the "survival" and "survminer" packages in R. Patients with the C2 subtype had better prognosis and higher survival rates, suggesting that a higher parthanatos score is associated with better prognosis in GC patients (Figure 5E).
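A hedged Python analogue of the survival comparison between the two subtypes (performed in the paper with the R "survival"/"survminer" packages) is sketched below; the input file and column names are assumptions.

```python
# Kaplan-Meier curves and a log-rank test between the C1 and C2 subtypes.
# `clinical` is assumed to hold per-sample survival time, event status (1 = death) and subtype.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

clinical = pd.read_csv("gc_subtypes_clinical.csv")   # hypothetical columns: time, event, subtype

ax = plt.subplot(111)
for label, grp in clinical.groupby("subtype"):
    KaplanMeierFitter(label=label).fit(grp["time"], grp["event"]).plot_survival_function(ax=ax)

c1 = clinical[clinical["subtype"] == "C1"]
c2 = clinical[clinical["subtype"] == "C2"]
res = logrank_test(c1["time"], c2["time"],
                   event_observed_A=c1["event"], event_observed_B=c2["event"])
print(f"log-rank p-value: {res.p_value:.3g}")
```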
Metabolic reprogramming and immune microenvironment characteristics of the parthanatos scoring model
In order to fully explore the intrinsic molecular features of GC patients in the different subtypes, we collected 42 classic metabolic pathways and 24 classic immune pathways from the KEGG database. We evaluated the metabolic and immune signaling intensities in the 743 GC samples and depicted the distribution of each metabolic and immune signal as heat maps. In the C1 subtype, signals of metabolic pathways such as retinol metabolism, drug metabolism by cytochrome P450, and metabolism of xenobiotics by cytochrome P450 were enhanced, while signals of metabolic pathways such as purine metabolism, pyrimidine metabolism, cysteine and methionine metabolism, and riboflavin metabolism were weakened; the opposite was observed in the C2 subtype (Figure 5G). Regarding immune-related pathways, signals of pathways such as the TGF-beta signaling pathway, chemokine signaling pathway, and intestinal network for IgA production were enhanced in the C1 subtype, while signals of pathways such as the p53 signaling pathway, proteasome, progesterone-mediated oocyte maturation, oocyte meiosis, RNA degradation, spliceosome, and nucleotide excision repair were enhanced in the C2 subtype (Figure 5H). Using the ESTIMATE algorithm, we then calculated the immune score, stromal score, ESTIMATE score, and tumor purity for both the C1 and C2 subtypes. The graphs show that the C1 subtype had more stromal and immune cells, while the C2 subtype had higher tumor purity (Figure 6A-6D). This phenomenon may be caused by immune function suppression and abnormal accumulation of immune cells in the tumor microenvironment of the C1 subtype. To further analyze the differences in the immune microenvironment between the two subtypes, we used seven immune cell infiltration prediction algorithms provided by the TIMER2.0 online platform (TIMER, CIBERSORT, CIBERSORT-ABS, QUANTISEQ, XCELL, EPIC, and MCPCOUNTER) to analyze the extent of immune cell infiltration in the C1 and C2 subtypes. Regardless of the algorithm used, the number of B cells in the C1 subtype was significantly higher than in the C2 subtype, while the opposite was observed for neutrophils; other immune cells also showed certain differences between the two subtypes (Figure 6E). Finally, Figure 6F shows the expression patterns of immune checkpoint-related genes in the C1 and C2 subtypes. Genes such as CD8A, CD44, NRP1, TNFSF14, TNFSF15, CD27, and CD48 were expressed at significantly higher levels in the C1 subtype (Figure 6F). This result suggests an excessive activity of immune checkpoints in the tumor microenvironment of the C1 subtype, which may lead to tumor immune escape to a certain extent and subsequently cause excessive accumulation of immune cells. This is consistent with our previous findings on the abnormal accumulation of immune cells in the C1 subtype and the differences in immune microenvironment characteristics between the two subtypes.
Significance of parthanatos scoring model for GC targeted drug therapy
To further explore the potential value of parthanatos in the clinical treatment of GC patients, we used the "oncoPredict" package and the GDSC2 dataset to predict the response of the 743 GC patients to various targeted therapies. We analyzed the potential beneficial drugs for GC patients in the C1 and C2 subtypes and plotted box plots to determine the impact of parthanatos on the sensitivity to 12 common targeted drugs in GC. Patients in the C1 subtype showed higher sensitivity to dasatinib, while patients in the C2 subtype showed higher sensitivity to the 11 other drugs, such as carmustine, sorafenib, rapamycin, and sepantronium (Supplementary Figure 3A-3L). This result may provide precise guidance for future individualized targeted drug therapy for GC patients.
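A simple hedged sketch of comparing predicted drug sensitivities between the two subtypes is shown below, assuming the oncoPredict/GDSC2 IC50 estimates have been exported to a table; file names, column names, and drug labels are assumptions.

```python
# Compare predicted IC50 values between C1 and C2 for a few drugs with a Wilcoxon rank-sum test.
import pandas as pd
from scipy.stats import mannwhitneyu

ic50 = pd.read_csv("predicted_ic50.csv", index_col=0)            # samples x drugs (assumed export)
subtype = pd.read_csv("gc_subtypes_clinical.csv", index_col=0)["subtype"]

for drug in ["Dasatinib", "Sorafenib", "Rapamycin", "Carmustine"]:
    c1 = ic50.loc[subtype == "C1", drug].dropna()
    c2 = ic50.loc[subtype == "C2", drug].dropna()
    stat, p = mannwhitneyu(c1, c2, alternative="two-sided")
    print(f"{drug}: median IC50 C1={c1.median():.2f}, C2={c2.median():.2f}, p={p:.3g}")
```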
Relationship between parthanatos and immune cell infiltration in GC tissues
The characteristics of GC include a high degree of heterogeneity in tumor cells and the tumor immune microenvironment. To investigate this, we evaluated the immune cell and immune function scores of the 743 GC samples based on the ssGSEA algorithm. We explored the potential associations between parthanatos-related genes and 29 types of immune cells or functions using Spearman correlation analysis. We also calculated the correlation between parthanatos scores and immune cell infiltration. We found that different parthanatos-related genes have varying degrees of correlation with various immune cells or functions. Among them, RAB33A, ESR1, and ESR2 were positively correlated with most immune cells or functions, while TOMM20, DDB1, and CUL4A were negatively correlated (Figure 7A). The bubble plot displays the correlation between classic immune infiltration-related cells and parthanatos, including both positive and negative correlations (Figure 7B). We then selected the three immune cells with the highest correlation (Treg, Th2 cells, Th1 cells) and performed correlation analysis with parthanatos scores. The R-values were 0.38, 0.28, and 0.27, respectively, indicating positive correlations (Figure 7C-7E). This finding is consistent with our previous results, where we observed a higher quantity of Th1 and Th2 cells in the C2 subtype compared to the C1 subtype using the XCELL algorithm.
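The Spearman correlation analysis described here can be sketched as follows; the two input tables (gene expression per sample and ssGSEA immune scores per sample) are assumed to exist with matching sample indices.

```python
# Spearman correlations between parthanatos-related gene expression and immune cell/function scores.
import pandas as pd
from scipy.stats import spearmanr

expr = pd.read_csv("parthanatos_gene_expression.csv", index_col=0)   # samples x genes (assumed)
immune = pd.read_csv("ssgsea_immune_scores.csv", index_col=0)        # samples x cell types (assumed)

rows = []
for gene in expr.columns:
    for cell in immune.columns:
        rho, p = spearmanr(expr[gene], immune[cell])
        rows.append({"gene": gene, "cell": cell, "rho": rho, "p": p})

corr = pd.DataFrame(rows).pivot(index="gene", columns="cell", values="rho")
print(corr.round(2))   # matrix of correlation coefficients, e.g. for a heat map
```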
Construction and verification of survival and prognosis model of GC patients based on parthanatos-related genes
Based on the "limma" and "clusterProfiler" packages, we screened for differentially expressed genes between the different parthanatos subtypes and performed GO and KEGG enrichment analyses (Supplementary Figure 4). These genes were then used to construct a prognostic model for GC patients (SGCA, JAM2, SHISA3, DES, PDK4, SFRP2, GRP, TNC, PAEP, FBLN5, GLDC, CCDC80, HAND2, and PPP1R14A); Figure 3E shows the expression of the 14 model genes in the seven cell types of normal tissue, tumor tissue, and metastatic tissue. In the train cohort, we performed univariate Cox regression, LASSO regression, and multivariate Cox regression analysis on the differentially expressed genes to build the prognostic model (Supplementary Figure 5). We divided the GC patients in the train cohort into high- and low-risk groups based on the median risk value, compared the prognosis differences, and analyzed the ROC curve to evaluate the predictive accuracy of the model (Figure 8A and Supplementary Figure 6). Subsequently, in the internal validation set 1 (test1 cohort), internal validation set 2 (test2 cohort), and external validation set (test3 cohort), we performed multivariate Cox regression using the model genes selected from the train cohort. We used the predict function to compute the risk value for each sample and used the median risk value from the train cohort as the cut-off to divide the three validation sets into high- and low-risk groups, and then conducted prognosis comparisons and ROC curve analysis. In the different cohorts, the high-risk group consistently showed worse prognosis compared to the low-risk group, and the 3-year and 5-year AUC values ranged from 0.6 to 0.8, indicating the stability and potential generalizability of the prognostic model (Figure 8B-8D and Supplementary Figure 6). To explore the differences in the immune microenvironment between the high- and low-risk groups, we used the TIMER2 platform to predict immune cell infiltration abundance with seven immune algorithms in the four cohorts, and compared the expression differences of immune checkpoint genes between the risk groups (Supplementary Figure 7). The results showed that, across all algorithms and cohorts, the low-risk group had higher numbers of plasma cells, NK cells, and certain CD4+ T cells (such as activated memory CD4+ T cells, Th1 cells, and Th2 cells), while cancer-associated fibroblasts (CAFs) showed the opposite trend. Additionally, the box plot results showed higher expression of immune checkpoint-related genes in the high-risk group compared to the low-risk group. These differences in the immune microenvironment may be one of the main intrinsic factors leading to differences in prognosis and clinical data between the high- and low-risk groups.
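A hedged Python sketch of the univariate-Cox → LASSO → multivariate-Cox pipeline and the median risk-score split is given below, implemented with lifelines rather than the R packages used in the paper; the input table and the selection thresholds are assumptions.

```python
# Prognostic-model sketch: univariate screening, LASSO-penalised Cox, refit, and risk grouping.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("train_cohort_expression_survival.csv")   # hypothetical: gene columns + time + event

# 1) Univariate screening: keep genes with p < 0.05 in single-gene Cox models
candidates = []
for gene in [c for c in df.columns if c not in ("time", "event")]:
    cph = CoxPHFitter().fit(df[[gene, "time", "event"]], "time", "event")
    if cph.summary.loc[gene, "p"] < 0.05:
        candidates.append(gene)

# 2) LASSO-penalised Cox for feature selection, 3) refit a multivariate Cox on the survivors
lasso = CoxPHFitter(penalizer=0.1, l1_ratio=1.0).fit(df[candidates + ["time", "event"]], "time", "event")
selected = lasso.params_[lasso.params_.abs() > 1e-6].index.tolist()
final = CoxPHFitter().fit(df[selected + ["time", "event"]], "time", "event")

# Risk score = partial hazard; split at the training-cohort median
df["risk"] = final.predict_partial_hazard(df[selected])
df["group"] = (df["risk"] > df["risk"].median()).map({True: "high", False: "low"})
print(df["group"].value_counts())
```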
To facilitate the clinical application of the prognostic model, we introduced clinical features of GC as variables in the model and used the "rms" package to construct a column line chart for accurate prediction of GC patient prognosis. The column line chart consists of 10 parallel lines. Each row represents a score, with gender in the second row, risk type in the third row, grade in the fourth row, stage in the fifth row, and age in the sixth row. The total score in the seventh row is obtained by adding the scores of age, grade, stage, and risk type. With this chart, we can easily estimate the survival rates of GC patients at 1, 3, and 5 years (Figure 9A). Furthermore, calibration plots and ROC curves were used to assess the predictive accuracy of the column line chart. The calibration curves for 1-, 3-, and 5-year survival closely align with the diagonal line (Figure 9B), and the AUC values were 0.691, 0.683, and 0.709, respectively (Figure 9C). This indicates that both our prognostic model and the column line chart are accurate and have predictive value for prognosis.
DISCUSSION
Unlike other known forms of cell death, parthanatos is a cell death pathway that relies on PARP1 instead of caspases, and its occurrence involves the participation of multiple factors such as NAD and AIF [44]. In the development of cancer, parthanatos interacts closely with other forms of cancer cell death, such as apoptosis and autophagy, playing a significant role in breast cancer, colon cancer, ovarian cancer, and other cancers [10]. As one of the most common malignancies worldwide, GC has been shown to be closely associated with cell death processes such as ferroptosis, pyroptosis, and immunogenic cell death [45-47]. We believe that targeting parthanatos may provide a new approach to precision treatment for cancer. Therefore, this study aims to explore the role of parthanatos in GC and to search for new biomarkers for the treatment and prognosis of GC based on the molecular characteristics of parthanatos.
Firstly, we explored the mutation and expression patterns of parthanatos-related genes in various human cancers. We downloaded clinical data from the TCGA database and analyzed the multi-omics data of parthanatos-related genes in different human cancers. Additionally, to determine whether parthanatos could be a potential target for GC treatment, we also analyzed the risk and methylation status of parthanatos-related genes in different types of cancer. Our analysis revealed varying degrees of mutations and expression differences in parthanatos-related genes across multiple cancers, which significantly impacted prognosis risk. These findings open up several potential avenues for future research on parthanatos in human cancers.
ScRNA-Seq is a high-throughput technology that allows quantitative sequencing of gene expression profiles at the single-cell level, aiding in the deciphering of hidden heterogeneity within cell populations [48]. In a recent study, Shen et al. utilized scRNA-Seq to explore the expression characteristics of mesenchymal stem cells in GC and their role in treatment and prognosis [49]. To investigate the correlation between parthanatos and single cells in GC, we included a dataset from the GEO database (GSE163558) for scRNA-Seq analysis. We identified and classified cells in the samples, resulting in 7 cell clusters expressing cell-specific genes: T cells expressing CD3D, CD3E, and CD2; NK cells expressing KLRD1, GNLY, and KLRF1; stromal cells expressing PECAM1 and VWF; epithelial cells expressing EPCAM, KRT19, and CLDN4; B cells expressing CD79A, IGHG1, and MS4A1; proliferative cells expressing MKI67, STMN1, and PCNA; and myeloid cells expressing CSF1R, CSF3R, and CD68. We further scored the parthanatos gene set for each cell cluster, finding different gene set scores among the clusters, with significant differences between normal, tumor, and metastatic tissues. The control data revealed the spatial distribution characteristics of parthanatos in GC tissue for the first time. However, due to the limited availability of control data, statistical analysis could not be conducted. We hope that more control data will become available in the future to further uncover the differences in parthanatos signals between GC tissue and adjacent non-cancerous tissue at the spatial resolution level. Nonetheless, the above discoveries aid our analysis of tumor heterogeneity in GC at the single-cell level and spatial resolution, providing effective insights into the genetic and functional analysis of GC cells.
The heterogeneity of GC is one of the main reasons why it is challenging to diagnose and treat accurately [50]. To classify GC patients appropriately and explore the role of parthanatos in GC, we merged the gene expression data from the TCGA-STAD cohort and the GSE84437 cohort, resulting in 743 GC samples. Based on the expression levels of parthanatos-related genes, we divided all samples into two subtypes, C1 and C2: C1 had lower parthanatos scores, while C2 had higher parthanatos scores. In previous studies, parthanatos and its related components have been reported to exhibit anti-tumor effects [51, 52]. For example, PARP1 negatively regulates epithelial-to-mesenchymal transition (EMT) and inhibits crucial processes such as tumor cell invasion [53]; AIF, as a cell death inducer, prevents the inactivation of the tumor suppressor PTEN by inhibiting its oxidation, thereby suppressing tumor metastasis [54]. Our survival curves showed that patients in the C2 subtype had better overall survival than those in the C1 subtype, suggesting that protective genes may predominate among the parthanatos-related genes, which is consistent with previous research findings. On the other hand, our results showed that pathways such as metabolism of xenobiotics by cytochrome P450 were more active in the C1 subtype than in the C2 subtype. Studies have shown that cytochrome P450 plays a central role in the oxidative activation of various carcinogens and is important in tumor development and response to chemotherapy [55]. We believe that the abnormal activation of certain metabolic pathways may be one of the reasons for the poorer prognosis in the C1 subtype. Similarly, the activity levels of immune-related pathways such as the TGF-beta signaling pathway, chemokine signaling pathway, and intestinal network for IgA production have been proven to be associated with GC treatment and prognosis [56-58]; our results also confirmed significant differences in these pathways between the C1 and C2 subtypes, validating the accuracy of the GC classification model based on parthanatos scores that we constructed.
The tumor immune microenvironment plays a crucial role in the progression of malignant tumors, involving both host anti-tumor immune responses and the destruction of normal tissues. Increasing evidence suggests that immune cell infiltration, such as by CD4+ T cells and B lymphocytes, plays a key role in various cancers, including GC [59-61]. However, the potential correlation between the development of GC and the immune cell infiltration landscape has not been fully determined. Based on the GC clustering model we established, we attempted to explore the relationship between parthanatos and the immune microenvironment within GC. The ESTIMATE analysis can calculate the proportions of immune cells, stromal cells, and tumor cells within tumor tissues. Our results showed that the C1 subtype had a higher proportion of stromal cells and immune cells in tumor tissues, while the C2 subtype had higher tumor purity. Using the algorithms provided by the TIMER2.0 platform, we found differences in the level of immune cell infiltration between the C1 and C2 subtypes, with a significantly higher number of B cells in the C1 subtype. It has been reported that tumor-infiltrating B cells can regulate the pro-angiogenic effect of bone marrow cells through the secretion of soluble mediators, and they can also promote tumor growth by blocking T cell-mediated immune responses through the production of lymphotoxins [62, 63]. This again supports why patients in the C1 subtype have a worse prognosis. Similar differences were found in the expression of immune checkpoint-related genes, such as CD8A, CD44, NRP1, and TNFSF14, which were significantly higher in the C1 subtype. It is well known that immune checkpoints control immune responses, such as those of effector T cells and NK cells, through various mechanisms [64]. When immune checkpoint-related genes are upregulated, the activity of immune cells is suppressed, and more immune cells are recruited into the tumor microenvironment to participate in anti-tumor immune processes under the influence of chemokines and other cytokines [65, 66], which is consistent with the results of our ESTIMATE analysis. Furthermore, using the ssGSEA algorithm, we also analyzed the relationship between parthanatos scores and immune cell populations and functions in the 743 GC samples. We found that different parthanatos-related genes had varying degrees of correlation with various immune cells or functions, and that parthanatos scores were also correlated with classic immune cell infiltration, primarily in a positive manner.
Currently, drugs such as sorafenib, rapamycin, vincristine, and MG-132 have been shown to have certain anti-cancer effects in GC. Among them, sorafenib significantly increases the expression of caspase-3, Bax, and cyt-c proteins in a dose-dependent manner and reduces the expression of Bcl-2 protein; inactivation of ERK protein phosphorylation is one of the mechanisms by which sorafenib inhibits GC [67]. Rapamycin effectively blocks the activation of S6K1, 4E-BP1, and HIF-1α in GC cells in vitro, significantly inhibiting tumor cell migration [68]. MG-132, as a ubiquitin-proteasome inhibitor, can significantly inhibit telomerase activity in GC cells, induce apoptosis, and cause G1 arrest [69]. Based on the "oncoPredict" package and the GDSC2 dataset, we conducted a tumor drug sensitivity prediction analysis, predicting the responses of the 743 GC patients to various targeted drugs, and analyzed potential beneficial drugs for the different parthanatos subtypes of GC patients. Ultimately, we found that patients of subtype C1 were more sensitive to dasatinib, while patients of subtype C2 were more sensitive to carmustine, sorafenib, rapamycin, sepantronium, vincristine, MG-132, lapatinib, epirubicin, osimertinib, cytarabine, and docetaxel. This result will provide precise guidance for the rational use of drugs in GC patients.
Finally, based on the "limma" package, we selected differentially expressed genes between subtypes C1 and C2. We used univariate Cox regression to screen for genes related to GC prognosis, LASSO regression to filter genes, and multivariate Cox regression to build a prognostic model. We thereby obtained a GC prognostic risk model consisting of 14 parthanatos-related genes (SGCA, JAM2, SHISA3, DES, PDK4, SFRP2, GRP, TNC, PAEP, FBLN5, GLDC, CCDC80, HAND2, and PPP1R14A). Using the predict function, we can calculate the risk score for each GC patient, where higher risk scores often correspond to poorer survival outcomes. Among these genes, a considerable proportion has been proven to be closely associated with GC development. For example, JAM2 belongs to the immunoglobulin superfamily of cell adhesion molecules and plays a crucial role in maintaining cell-cell junction integrity; imbalanced JAM2 gene expression has been correlated with GC staging, differentiation, and progression [70]. The pyruvate dehydrogenase kinase encoded by the PDK4 gene is a crucial enzyme that maintains a high rate of glycolysis in cancer cells, promoting resistance to apoptosis; it has been shown to enhance the proliferation and invasion of GC tumor cells and is associated with infiltration by B cells, CD4+ T cells, and dendritic cells, making it an adverse prognostic factor in GC [71], which aligns with our findings. Similarly, SFRP1, GLDC, HAND2, and other genes have also been found to play critical roles in the development and prognosis of GC [72-75].
To evaluate the modeling performance and predictive accuracy of the model, we compared the prognostic differences between different risk groups in the four cohorts, including the train cohort. We plotted ROC curves and calculated the AUC values, and the stability of the model was validated through internal and external validation strategies. For the high- and low-risk groups classified by the model, we also conducted a comparative analysis of immune infiltration and immune checkpoint-related genes. We found that there were still differences between the two groups, indicating the significant role of the immune microenvironment. Among them, we observed a significantly higher number of cancer-associated fibroblasts (CAFs) in the high-risk group than in the low-risk group, which may be one of the important reasons for the decreased number of immune cells and the upregulation of immune checkpoint-related genes in the high-risk group. It has been reported that CAFs and their related biomarkers are associated with poor prognosis in various types of cancer [76, 77]; apart from directly promoting tumor growth, metastasis, and angiogenesis, CAFs may also mediate tumor immune escape by directly inhibiting immune cell infiltration and activity or by promoting the recruitment of immunosuppressive cells [78, 79]. In addition, based on this model, we also generated column line charts to predict the survival rates of GC patients at 1, 3, and 5 years by integrating risk group, grade, stage, gender, and age data.
Furthermore, our research has certain limitations, primarily because it relies solely on bioinformatics analysis. A detailed understanding of the mechanism of action between parthanatos and GC will require further validation from in vivo and in vitro experiments to support our conclusions. Additionally, finding effective predictive biomarkers for diagnosis and prognosis in malignant tumors remains a challenging task, and future studies should include larger sample sizes to strengthen our findings.
CONCLUSIONS
Through integrating a series of bioinformatics methods, we explored the potential link between parthanatos and GC. Single-cell analysis combined with spatial transcriptomics highlighted the difference in parthanatos signal expression between cells in GC samples, with almost all cells within tumor tissues and metastatic tissue displaying higher parthanatos signals compared to normal tissues. Based on the expression levels of parthanatos-related genes, we divided GC patients into two subtypes, which showed significant differences in prognosis, immune infiltration, and tumor purity, suggesting a relationship between the development of GC and an aberrant parthanatos pathway. We also created and validated a novel prognostic risk model based on parthanatos-related genes, which showed good predictive capability. Higher risk scores were associated with poorer survival outcomes, potentially providing a more targeted and accurate new strategy for the treatment and prognosis of GC patients.
Figure 1. Pan-cancer analysis of parthanatos-associated genes. (A) CNV amplification data of parthanatos-related genes in different tumor types; the fan color represents the amplification frequency. (B) CNV deletion data of parthanatos-related genes in different tumor types; the fan color represents the deletion frequency. (C) SNV mutation data of parthanatos-related genes in different tumor types; the fan color represents the frequency of SNV. (D) Expression data of parthanatos-related genes in different tumor types; the color of the squares represents the value of log2(FC), and the size of the squares represents the value of -log2(FC). (E) The risk profile of parthanatos-related genes in different tumor types, with pink representing risky, green representing protective, and gray representing no statistical difference. (F) Comparison of methylation of parthanatos-related genes between different tumor types and normal tissues; the color of the circle represents the methylation difference, and the size of the circle represents the statistical significance.
Figure 2. Single-cell dimensionality reduction. (A) Visualization, using the "clustree" package, of the division relationships between cell populations under different resolutions. (B) Annotation of the cell characteristics of the 34 clusters. (C-E) Distribution of the 34 cell clusters from the perspective of the UMAP dimensionality reduction algorithm.
Figure 3. Single-cell annotation and parthanatos score prediction. (A) Expression of specific marker genes in the 7 cell types. (B, C) Dimension reduction and annotation of cell clusters based on the UMAP algorithm. (D) Gene set scores of the 7 cell clusters under the 5 algorithms. (E) Comparison of the expression differences of the 14 model genes in the 7 cell types of normal tissue, tumor tissue, and metastatic tissue. (F) Comparison of the gene set scores of the 7 cell clusters among normal tissue, tumor tissue, and metastatic tissue.
Figure 4. Distribution of parthanatos scores in the single-cell atlas and at spatial resolution. (A) Gene set scores of the 7 cell clusters displayed in the combined samples (including normal tissue, tumor tissue, and metastatic tissue), based on the UMAP algorithm. (B) Spatial transcriptome data of GC: gene set scoring under 6 gene set scoring algorithms.
Figure 5. Parthanatos-based clustering analysis. (A) Consensus matrix heat map defining k = 2 clusters and their associated regions. (B) Cumulative distribution function (CDF) curves under different cluster numbers. (C) The relative change in area under the CDF curve for different values of k. (D) Violin plot showing the enrichment scores of the two clusters (C1 and C2), p < 0.001. (E) Survival curves of the C1 and C2 clusters, purple for C1, green for C2. (F) Expression differences of parthanatos-related genes between the C1 and C2 subtypes. (G) Differences in the activity of metabolic pathways between the C1 and C2 subtypes. (H) Differences in immune pathway activity between the C1 and C2 subtypes.
Figure 6. Analysis of the immune microenvironment between different subtypes. (A-D) Differences between the two clusters in immune score, stromal score, ESTIMATE score, and tumor purity; P-values are represented by *. (E) Seven algorithms were used to analyze immune cell or functional differences between the C1 and C2 subtypes. (F) Differences in the expression of immune checkpoint-related genes between the C1 and C2 subtypes; P-values are indicated by *. *: p < 0.05, **: p < 0.01, ***: p < 0.001, "ns" indicates no significant difference.
Figure 7. Relationship between the parthanatos score and the immune microenvironment in GC. (A) Heat map depicting the relationship between parthanatos-related genes and levels of immune cell infiltration; positive correlations are shown in red, negative correlations in blue, and p-values by *. (B) Bubble plot showing the correlation between the parthanatos score and the infiltration levels of various immune cells, with the size of the circle indicating the magnitude of the correlation and the color of the circle indicating statistical significance. (C-E) Correlation between the parthanatos score and the infiltration levels of three kinds of immune cells. *: p < 0.05, **: p < 0.01, ***: p < 0.001, "ns" indicates no significant difference.
Figure 8. Survival curves and ROC curves for the high- and low-risk groups in the four cohorts. (A-D) Comparison of survival curves and ROC curves for the train cohort, test1 cohort, test2 cohort, and test3 cohort.
Generalized Riemann curvature corrections to type II supergravity
We observe that the replacement of the Riemann curvature with the generalized Riemann curvature into the corrections to the type II supergravity at order $\alpha'^3$ which are in terms of the contractions of four Riemann curvatures $R^4$, is not fully consistent with the S-matrix elements in the superstring theory. In particular, they produce non-zero S-matrix elements for odd number of B-field strengths which are not consistent with the string theory results. Using the consistency of the couplings with the linear T-duality as a guiding principle, we consider all T-duality invariant couplings and fix their coefficients by requiring them to be consistent with the S-matrix elements. The new Lagrangian density is then equivalent to the replacement of the generalized Riemann curvature into the expression $t_8t_8R^4$.
Many aspects of string theory can be captured at low energy by the Wilsonian effective action for the massless fields. The leading $\alpha'$-order terms of the effective action of type II superstring theory are given by the supergravity (1), which contains the couplings in the NS-NS sector. The next-to-leading-order couplings of gravity are given by the curvature couplings at order $\alpha'^3$, which have been found by analyzing the sphere-level four-graviton scattering amplitude in superstring theory [1]. The result, in the eight-dimensional transverse space of the light-cone formalism, is a polynomial in the linearized Riemann curvature tensors,
$$Y \sim \bar t^{\,i_1\cdots i_8}\,\bar t^{\,j_1\cdots j_8}\, R_{i_1 i_2 j_1 j_2}\cdots R_{i_7 i_8 j_7 j_8} + \cdots \qquad (2)$$
where $\bar t_8$ is a tensor in eight dimensions which includes the eight-dimensional Levi-Civita tensor $\epsilon_8$ and the tensor $t_8$ that was first introduced in [2]. The contraction of $t_8$ with four arbitrary antisymmetric matrices $M_1,\cdots,M_4$ is defined in (3). The dots in (2) represent terms containing the Ricci and scalar curvature tensors, which cannot be captured by the four-graviton scattering amplitude as they are zero on-shell. These terms can be absorbed into the supergravity (1) by an appropriate field redefinition [1].
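For orientation, the $t_8$ contraction of four antisymmetric matrices referred to in (3) is commonly written in the literature in the following standard form (the normalization conventions of the paper may differ):
$$t_8 M_1 M_2 M_3 M_4 = 8\big[\operatorname{tr}(M_1M_2M_3M_4)+\operatorname{tr}(M_1M_3M_2M_4)+\operatorname{tr}(M_1M_3M_4M_2)\big] - 2\big[\operatorname{tr}(M_1M_2)\operatorname{tr}(M_3M_4)+\operatorname{tr}(M_1M_3)\operatorname{tr}(M_2M_4)+\operatorname{tr}(M_1M_4)\operatorname{tr}(M_2M_3)\big].$$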
The $t_8 t_8 R^4$ part of the Lagrangian (2) has also been found in the covariant path integral formalism in [3] and in the pure spinor formalism in [4]. The Levi-Civita tensors in $\bar t_8 \bar t_8$ give rise to the covariant coupling $\epsilon_{10}\cdot\epsilon_{10} R^4$ [5, 6], which has its first non-zero contribution at the five-graviton level [7]. The presence of this term has been dictated by the sigma-model beta function approach [5, 6]. Using the definitions of $t_8 t_8$ and $\epsilon_{10}\cdot\epsilon_{10}$, one finds the combination given in (4) [5, 6, 10], where the dots represent the specific form of the off-shell Ricci and scalar curvature couplings which reproduce the sigma-model beta function [5, 6]. It has been shown in [8] that the above Lagrangian is not consistent with the standard form of the T-duality transformations; it should instead be invariant under a non-standard form of T-duality transformation which receives quantum corrections.
The $t_8 t_8 R^4$ part of the Lagrangian (2) is given by equation (5) [9, 10, 4, 11].

(Footnote 1: We use only subscript indices, and repeated indices are contracted with the inverse of the metric.)
It has been shown in [10] that, up to field redefinitions, the difference between the above two Lagrangians is the couplings $\epsilon_{10}\cdot\epsilon_{10} R^4$. The supersymmetric extension of the above Lagrangians has been studied in [12, 13, 14].
The B-field and dilaton couplings have been added to the Lagrangians (2) and (5) by extending the Riemann curvature to a generalized Riemann curvature (6), where the bracket notation denotes antisymmetrization of the bracketed indices of the B-field strength $H$ and the comma denotes the partial derivative. We will see that while the two Lagrangians (2) and (5) are identical for the linearized Riemann curvature, they are not identical for the generalized Riemann curvature. As a result, only one of them can be consistent with the S-matrix elements.
The action corresponding to the Lagrangian (5) in the Einstein frame is given in (7) [9]. To study the T-duality of this action, one should go to the string frame, in which the linearized $\bar R_{abcd}$ becomes [15]
$$\bar R_{abcd} = e^{-\Phi/2}\, R_{abcd} \qquad (8)$$
where $R_{abcd}$ is the string-frame expression given in (9). It has the symmetries $R_{bacd} = -R_{abcd}$ and $R_{abdc} = -R_{abcd}$. The action (7) in the string frame then becomes (10). The dilaton appears only in the overall factor $e^{-2\Phi}\sqrt{-G}$, which is invariant under standard T-duality. It has been shown in [15] that the Lagrangian $L_1(R)$ is also invariant under the standard linear T-duality transformations.
The Lagrangian $L_1(R)$ contains the couplings $R^4$, $H^4$ and $H^2R^2$, which are exactly reproduced by the string theory S-matrix elements [9]. However, it also contains the couplings $R^3H$ and $RH^3$, which are not reproduced in string theory. One can easily verify that the supergravity does not produce scattering amplitudes for an odd number of B-field strengths. Therefore, the string theory S-matrix element which reproduces the supergravity results at the leading order of $\alpha'$ is zero for an odd number of H.
To check that the Lagrangian $L_1(R)$ produces non-zero couplings for an odd number of H, one should first replace the generalized Riemann curvature (9) in (10). This produces, for example, the 24 couplings (11) between three H and one Riemann curvature, where the dots represent the other 23 terms. To verify that the above couplings do not simplify to zero, one may write the linearized Riemann curvature and the field strength H as in (12), where as usual the comma represents partial differentiation. The graviton $h_{\mu\nu}$ and the antisymmetric tensor $b_{\mu\nu}$ may be written as in (13), where $\psi$ and $\zeta$ are two vector fields. Then one may transform the couplings to momentum space. To this end, one should label the antisymmetric fields by 1, 2, 3 and the graviton by 4, and then add the 6 permutations of the antisymmetric fields. Performing all these steps, one finds that the result is not zero. Even if one uses the on-shell relations $k_i\cdot k_i = 0$ and $k_i\cdot \epsilon_i = 0$, where $\epsilon_i$ is the polarization of the i-th particle, the couplings still do not vanish. Doing the same steps for the $HR^3$ couplings, one again finds non-zero couplings. In fact, the couplings of an odd number of H resulting from the terms in the second line of (5) remain non-zero at the linearized level.
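As a reminder of the standard linearized expressions that the discussion above relies on (the overall normalizations and conventions of (12) may differ from those used here), one may take
$$R_{abcd} \simeq \tfrac{1}{2}\big(h_{ad,bc} + h_{bc,ad} - h_{ac,bd} - h_{bd,ac}\big), \qquad H_{abc} = b_{ab,c} + b_{ca,b} + b_{bc,a},$$
for the linearized Riemann curvature of the graviton $h_{\mu\nu}$ and the field strength of the antisymmetric tensor $b_{\mu\nu}$.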
Since the Lagrangian $L_1(R)$ is not fully consistent with the string theory S-matrix elements, one expects there must be another Lagrangian with the following properties:

1- It should produce no couplings $H^4$, $R^4$ or $R^2H^2$.

2- It should produce the couplings $R^3H$ and $RH^3$ which cancel the corresponding couplings in $L_1(R)$.

3- It should be consistent with the standard T-duality.

One may consider all possible contractions of four generalized Riemann curvatures and assign an unknown coefficient to each of them. Then one may find the coefficients by forcing them to satisfy the above constraints.
To impose the T-duality constraint, we note that under linear T-duality the Riemann curvature with two Killing indices transforms as [8]
$$R_{\mu y\nu y} \to -R_{\mu y\nu y} \qquad (14)$$
where y is the Killing index. So, under the dimensional reduction on a circle, the couplings with structure $R\,R_{yy}R_{yy}R_{yy}$, where $R_{yy}$ denotes the Riemann curvature with two Killing indices, are not consistent with the linear T-duality. To avoid such couplings, we consider the contractions of the generalized Riemann curvature in which the first two indices of the curvatures contract among themselves and the second two indices contract among themselves, as in the couplings in (5). Using the symmetries of the curvature $R_{abcd}$, one finds that there are eight independent such couplings. Considering them with unknown coefficients, and constraining them to satisfy conditions 1 and 2, one finds the couplings listed in (15). In fact, $L_2(R) = 0$ for the ordinary Riemann curvature; however, it is no longer an identity for the generalized Riemann curvature. The Lagrangian $L_1(R) + L_2(R)$ now has only the couplings $H^4$, $R^4$ and $R^2H^2$. It has been shown in [15] that such couplings at the four-field level are consistent with the linear T-duality.
Since there are eight independent couplings in which the first two indices of the Riemann curvatures contract among themselves, two of the above couplings must have the same structure as the terms in (5). The first and the third terms in (15) have the same structure as the terms in the second line of (5), but their coefficients are different. Adding the Lagrangian $L_1$ to $L_2$, one finds that all independent contractions of the four generalized Riemann curvatures which are consistent with the linear T-duality have non-zero coefficients. Therefore, the action in the string frame which is consistent with the linear T-duality and is fully consistent with the four-point functions of string theory is given in (16), where the Lagrangian density (17) has eight independent terms. Note that the couplings in the first and the last lines of (17) are the same couplings as in (5). Now let us compare the above Lagrangian with the replacement of the generalized Riemann curvature (9) into $t_8 t_8 R^4$. Using the definition of the tensor $t_8$ in (3), and using the fact that the generalized Riemann curvature $R_{abcd}$ has the same symmetries as the Riemann curvature except the symmetry under $(ab)\leftrightarrow(cd)$, one finds after some algebra the relation (18). Therefore, the replacement (9) in the Lagrangian $t_8 t_8 R^4$ is consistent with the S-matrix elements of four NS-NS vertex operators and with the linear T-duality.
One can easily verify that the last term of the Lagrangian (17) has the symmetries of the Riemann curvature, so the above replacement in (17) does not produce an odd number of H, as expected. It would be interesting to compare the couplings $H^2R^3$ and $H^4R$ resulting from the above replacement with the contact terms of the corresponding sphere-level S-matrix elements in string theory. At the one-loop level of type IIA theory, it has been shown in [16] that the above replacement in $t_8 t_8 R^4$ and $B_2\wedge X_8$ is consistent with the S-matrix calculation and with T-duality; however, this replacement in $\epsilon_{10}\cdot\epsilon_{10} R^4$ is not consistent with the S-matrix calculation.
Offshore Wind Power Forecasting—A New Hyperparameter Optimisation Algorithm for Deep Learning Models
The main obstacle against the penetration of wind power into the power grid is its high variability in terms of wind speed fluctuations. Accurate power forecasting, while making maintenance more efficient, leads to the profit maximisation of power traders, whether for a wind turbine or a wind farm. Machine learning (ML) models are recognised as an accurate and fast method of wind power prediction, but their accuracy depends on the selection of the correct hyperparameters. An incorrect choice of hyperparameters makes it impossible to extract the maximum performance of the ML models, which weakens the forecasting models. This paper uses a novel optimisation algorithm to tune the long short-term memory (LSTM) model for short-term wind power forecasting. The proposed method improves the power prediction accuracy and accelerates the optimisation process. Historical power data of an offshore wind turbine in Scotland is utilised to validate the proposed method and compare its outcome with regular ML models tuned by grid search. The results revealed the significant effect of the optimisation algorithm on the forecasting models' performance, with improvements in the RMSE of 7.89, 5.9, and 2.65 percent compared to the persistence and conventional grid-search-tuned Auto-Regressive Integrated Moving Average (ARIMA) and LSTM models.
Introduction
Undoubtedly, to accelerate economic growth, power production through renewable energy sources needs to increase because conventional methods such as using fossil fuels have irreparable consequences, including pollution, climate change, and the depletion of the ozone layer [1].
In recent decades, various renewable energies, such as wind, solar, waves, etc., have received increasing attention. Among all these energies, wind power has played the most important role in replacing fossil fuels [2]. As reported by the World Wind Energy Council, the installed global capacity of wind energy in the world in 2021 has reached 837 GW, with an increase of 92 GW compared to 2020 [3]. Figure 1 shows the global wind power installed capacity increment over the past 21 years [3]. In this figure, the blue columns represent the capacity of installed wind power on land, while the red columns represent the offshore installed wind energy.
One main obstacle hindering the increase of wind power penetration into the power grid is the production uncertainty due to fluctuations in wind speed [1]. Therefore, adequate planning in electricity distribution to meet consumers' demand, determining the best time for operation and maintenance, and the fairest pricing on the market requires accurate wind power forecasting in the upcoming time steps.
Hanifi et al. [1] categorised wind power forecasting into three main methods, including physical, statistical, and hybrid approaches. Physical methods utilise numerical weather prediction (NWP) data, wind turbine geographic descriptions, and weather information to predict wind power [1]. These methods are computationally complex and very sensitive to initial information [2]. On the other hand, statistical methods work based on building an accurate mapping between input variables (such as NWP data, historical data, etc.) and target variables (wind speed or wind power). These methods include two main approaches: time-series-based methods and machine learning (ML) approaches [1]. Time-series-based methods can predict wind speed or wind power based on the history of the predicted variable itself. They can recognise the concealed random features of wind speed and are used for very short-term (minutes to a few hours) forecasting. The Auto-Regressive Integrated Moving Average (ARIMA) model proposed by Box and Jenkins is one of the common statistical methods which is used in various research. For example, in Western Australia, Yatiyana et al. [5] applied the ARIMA model for wind speed and direction forecasting. They proved that their proposed model could predict wind speed and direction with a maximum of 5% and 16% error, respectively. Firat et al. [6] proposed an autoregressive (AR) wind speed prediction model for a wind farm in the Netherlands. They used six years of hourly wind speed and achieved a high accuracy for 2-14 h ahead. In another study, De Felice et al. [7] applied 14 months of temperature readings in Italy to train an ARIMA model for electricity demand prediction. Their proposed method demonstrated higher accuracy, particularly in hot locations, compared with persistence methods. Duran et al. [8] proposed a method to combine AR and exogenous variable (ARX) models to predict the wind power generation in a wind farm located in Spain up to one day in advance. They used different model orders and training periods to prove that the application of the AR models presents lower errors than a persistent model. Kavasseri et al. [9] examined the application of fractional ARIMA models to predict wind farm hourly average wind speed for one- and two-day-ahead time horizons. The results of the predictions showed a 42% improvement compared to persistent methods. Later, the predicted wind speeds were applied to the power curve of an operating wind turbine to predict the relevant wind powers. In another study, Torres et al. [10] used the ARMA and the persistence model for hourly average wind speed forecasting up to 10 h ahead. The ARMA model demonstrated a better performance compared to the persistence method, with a 12% to 20% lower root mean square error (RMSE) when forecasting 10 h in advance.
ML methods such as neural networks (NNs) can establish deductive models by learning dependencies between input and output variables. These methods are easy to create, do not require further geographic information, and can predict over longer timeframes. One of the common ML methods is the LSTM model, which can address long-term dependency issues [11]; this is important when forecasting time series with long input sequences [12]. LSTM has been used in various studies for wind power prediction. For instance, Zhang et al. [13] proposed an LSTM wind power forecasting model for three wind turbines of a wind farm in China. They utilised three months of wind speed and historical power data and achieved the highest forecasting accuracy for one to five time steps ahead compared to the radial basis function (RBF) and deep belief network (DBN) models. Fu et al. [14] applied LSTM and gated recurrent unit (GRU) models for one-to-four-step-ahead forecasting of a 3 MW wind turbine in China, based on the first three-month dataset of 2014, with a resolution of 15 min. The comparison with ARIMA and support vector machine (SVM) methods showed the superiority of their proposed methods. Cali and Sharma [15] proposed an LSTM-based model with one hidden layer for 1 to 24 h ahead wind power forecasting. The model was trained with 9-month data and evaluated on the last three months of 2016. They used nine combinations of input data, including wind speed at various levels, wind direction, temperature, and surface pressure. They demonstrated that temperature, wind speed, and direction positively impacted model performance; however, adding surface pressure to the input features led to worse performance.
As well as the training data, ML models' accuracy strongly depends on the adequate selection of their parameters and hyperparameters. The parameters of ML models (e.g., the weights of each neuron) are determined during the training process of the algorithm. In contrast, hyperparameters are not directly learnt by the learning algorithm and need to be specified outside the training process. The main role of the hyperparameters is to control the capacity of the models in learning dependencies. They also prevent overfitting and improve the generalisation of the algorithm. Hyperparameter optimisation or tuning improves forecasting accuracy and reduces models' complexity [16].
The most common hyperparameter tuning methods in the literature are grid search and random search. Grid search can be used for simple models with a few parameters, but the calculation becomes extremely time-consuming as the number of parameters increases and the space of possible configurations expands [17]. Therefore, researchers usually consider a narrow range of hyperparameters during grid search [16]. On the other hand, a random search algorithm evaluates a randomly sampled subset of combinations rather than exhaustively searching all of them.
Both these search methods generate all candidate combinations of hyperparameters upfront and then evaluate them in parallel. Based on the evaluation of all combinations, the best hyperparameters can be selected. Trying all possible combinations is very costly; as a result, it is vital to develop advanced techniques to intelligently select which hyperparameters to assess and then decide where to sample next after evaluating their quality.
The advanced optimisation of ML-based time-series forecasting models for wind turbine-related predictions has remained largely unexplored. However, a few studies have proposed methods for optimising ARIMA and LSTM models in applications other than wind power forecasting. For example, Al-Douri et al. [18] designed a genetic algorithm (GA) to find the best parameters of an ARIMA model for better cost prediction of fans used in Swedish road tunnels, and provided results which proved a significant improvement in data forecasting. In another study, F. Shahid et al. [19] employed GA to optimise the window size and neuron numbers of LSTM layers. This approach improved the power prediction accuracy of wind farms in Europe by up to about 30% compared to existing methods such as support vector regressors.
As the review of the literature indicates, several examples use linear and nonlinear regression models for challenges related to predicting wind power. Each study provides the use of one model type or a comparison of various model types. Nevertheless, without the tuning and selection of the hyperparameters, it is not possible to obtain their maximum benefit [16]. This advanced tuning plays an important role when the hyperparameter search space grows exponentially and the use of exhaustive grid search becomes extremely time-consuming. This paper proposes a framework for developing accurate and robust ML models for wind power forecasting. The framework outlines the model development procedure from data engineering to precision evaluation and fine-tuning. Furthermore, an advanced algorithm is utilised to optimise wind power forecasting models to reduce time calculation costs, as well as to improve accuracy. For the case study, two ML models were selected: the LSTM model, which is proven to have remarkable prediction performance on time-series-based data, and ARIMA, a traditional model, for the purpose of benchmarking.
The novelty of this work lies in developing a short-term wind power forecasting model through an intelligent application of the long short-term memory (LSTM) model, while a new optimisation algorithm tunes its main hyperparameters. In addition, the distinguished aspects of the methodology are summarised, based on importance, as follows: • LSTM is used on a wind power dataset to take advantage of its ability to learn nonstatic features from nonlinear sequential data automatically. • The ARIMA model is applied as a forecasting model because of its short response time and ability to capture the correlations in time series. • Instead of the trial-and-error method to select the best hyperparameters of the ARIMA and LSTM forecasting models, which require a great deal of time, grid search is used to tune both these models.
• The new Optuna optimisation framework is employed to optimise the hyperparameters of the LSTM model, including the number of lag observations, the quantity of LSTM units for the hidden layer, the exposure frequency, the number of samples inside an epoch, and the difference order used for making a nonstationary dataset stationary.
• Unlike most previous studies, which focus on onshore wind turbines, the forecasting assessments in this study are carried out for an offshore wind turbine.
• How to deal with the negative values of wind power (which are normally found in active power observations), in terms of removal or replacement, has been thoroughly investigated and the results have been discussed.
• After a detailed discussion of the reasons for having outliers, three different methods, including isolation forest (IF), elliptic envelope (EE), and the one-class support vector machine (OCSVM), are used to detect and treat them. A comparison of the results will help researchers choose the best outlier detection method for future studies.
• The proposed Optuna-LSTM model is assessed by comparing its forecasted power with the actual values and with predictions by the persistence and ARIMA models, based on the RMSE statistical error measure.
The rest of this paper is organised as follows: Section 2 discusses the optimisation process, the forecasting models, and the studied supervisory control and data acquisition (SCADA) data. This section includes the steps taken for preprocessing, resampling, and outlier treatment. Section 3 presents the results of the trained, optimised LSTM model in terms of model accuracy and the time cost compared to other prediction methods. Finally, Section 4 summarises the paper's contributions.
Methodology
The proposed procedure of this study is illustrated in Figure 2. At the beginning of this study, three required features, including the time stamps, wind speeds, and active wind powers, are selected to improve the computational time. At the next step, negative power values are removed or replaced. This data preprocessing is followed by resampling the dataset and removing outliers in three different ways. After finishing the data preprocessing and providing proper data for forecasting, data predictability and stationarity are assessed as two important specifications for accurate power forecasting. Afterwards, three different approaches are employed for forecasting, and their best performance is gained by the selection of their most appropriate hyperparameters.
ARIMA Model
In this study, the standard approach of the Box-Jenkins method [20] was traced for the ARIMA model development. The ARIMA model is a widely used set of statistical models for analysing and predicting time-series data [21]. For a series differenced d times, the model can be expressed as [22]:

y_t = c + φ_1 y_{t−1} + … + φ_p y_{t−p} + ε_t + θ_1 ε_{t−1} + … + θ_q ε_{t−q} (1)

where φ and θ are coefficients, ε_t is the error term, and p, q, and d are the lag number of observations in the model, the order of the moving average, and the degree of difference, respectively. Degree of difference (d) values greater than 0 imply that the data was nonstationary but became stationary after some degree of differencing.
The ARIMA model combines the AR, moving average (MA), and integrated (I) components, the last of which denotes replacing the data with the difference between its values and the preceding values [23]. The forecasting accuracy of the ARIMA model depends on selecting the most appropriate combination of p, d, and q. Normally, for small datasets, the autocorrelation function (ACF) and partial autocorrelation function (PACF) can be used to determine which AR or MA component should be selected in the ARIMA model [24].
These two factors, which can be graphically plotted, are widely used elements in analysing and predicting time-series. They highlight the relationship between an observation and the observations' values at prior time steps. The difference between ACF and PACF is that, in PACF, while assessing the relationship between observations of two time steps, the relationships of the intervening observations are removed. Figure 3a,b show the observations' ACF and PACF plots. An appropriate ARIMA model can be selected based on the simple explanations in Table 1 [9], and the value of d (degree of difference) depends on the number of differencing operations needed until the data is stationary.
Table 1. ACF and PACF behaviour used to select the ARIMA components.

Model        Autocorrelation          Partial autocorrelation
AR(p)        Tails off gradually      Cuts off after p lags
MA(q)        Cuts off after q lags    Tails off gradually
ARMA(p, q)   Tails off gradually      Tails off gradually

The ARIMA model forecasting steps after resampling and outlier treatment can be seen in Figure 4. The first step is assessing the stationarity of the time-series. Stationarity is one of the assumptions of time-series modelling, reflecting the consistency of the summary statistics of the observations. When a time-series is stationary, the statistical properties of the time-series (such as mean, variance, and autocorrelation) do not change over time. This property can be violated by any trend, seasonality, or other time-dependent structure. There are two main methods for the stationarity assessment of time-series: the visualisation approach and the augmented Dickey-Fuller (ADF) test. The visualisation method uses graphs to show whether the standard deviation changes over time. On the other hand, the ADF method is a statistical significance test that compares the p-value with critical values and performs hypothesis testing. This test establishes the stationarity of the data at different levels of confidence.
Regarding the data used in this study, due to the high number of observations and wide dispersion, it is not possible to check stationarity through the visualisation method. Therefore, in this study, the ADF method was used.
The ADF test provides a p-value which, by comparison with a threshold (such as 5% or 1%), identifies the stationarity of the data. Nonstationary data at this step need to be made stationary by methods such as differencing. After ensuring the time-series is stationary, a persistence method is created as a baseline. Then, through a detailed grid search, the best hyperparameters for the ARIMA forecasting of each preprocessed dataset were found. The last step is ARIMA forecasting and comparing its error with the error of the persistence method.
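The sketch below illustrates how such an ADF check and a first differencing step could look with statsmodels; the file name and column name are assumptions, and the 0.05 threshold is one of the commonly used significance levels mentioned above.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Assumed file and column names: a datetime-indexed series of 10-min averaged active power
power = pd.read_csv("power_10min.csv", index_col=0, parse_dates=True)["active_power"]

result = adfuller(power.dropna())
adf_stat, p_value = result[0], result[1]
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.4f}")

# If the p-value exceeds the chosen threshold, difference once (d = 1) and retest
if p_value > 0.05:
    differenced = power.diff().dropna()
    print("Series appears non-stationary; first difference applied.")
```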
LSTM Model
The recurrent neural network (RNN) is a model in which the connections between its units create cycles. The RNN has a high ability to represent temporal dynamics. However, its effectiveness is affected by limitations of the learning process. The main limitation of gradient-based methods that use back-propagation is the time dependence of the error signal on the weights along the propagation path [13]. When the time lag between the input signal and the target signal increases to more than 5-10 time-steps, a standard RNN loses its learning ability, and the back-propagation error either vanishes or explodes. This error elimination raises the question of whether standard RNNs offer practical benefits over feed-forward networks. To address this problem, the LSTM has been developed based on memory cells. The LSTM consists of a recurrently connected linear unit known as the constant error carousel (CEC). CECs, by keeping the local error backflow constant, mitigate the vanishing gradient problem [25]. They can be trained by back-propagation through time as well as the real-time recurrent learning algorithm [26]. Figure 5 shows the typical structure of the LSTM.
As can be seen, there are three gate units in a basic LSTM cell: the input, output, and forget gates. The gate activation vectors i_t, o_t, and f_t for the input, output, and forget gates, respectively, are calculated in Equations (2)-(4):

i_t = σ_l(U_i x_t + W_i h_{t−1} + b_i) (2)
o_t = σ_l(U_o x_t + W_o h_{t−1} + b_o) (3)
f_t = σ_l(U_f x_t + W_f h_{t−1} + b_f) (4)

In these equations, U_i, U_o, U_f and W_i, W_o, W_f represent the assigned weights, and b_i, b_o, and b_f represent the biases used in conjunction with the relevant activation function σ_l. In addition, x_t is the neuron input at time step t, and the cell output vector at time step t − 1 is h_{t−1}. As shown in Equation (5), the next candidate value of the state, S̃_t, is calculated with the relevant activation function σ_s:

S̃_t = σ_s(U_s x_t + W_s h_{t−1} + b_s) (5)

In Equation (6), the newly assessed value S̃_t and the prior cell state S_{t−1} are used to calculate the cell state S_t, which in turn is used with the output gate control signal o_t and the activation function σ_lh to obtain the overall output h_t according to Equation (7) (⊙ denotes element-wise multiplication):

S_t = f_t ⊙ S_{t−1} + i_t ⊙ S̃_t (6)
h_t = o_t ⊙ σ_lh(S_t) (7)

As can be seen in Equations (6) and (7), the output h_t depends on the state S_t of the LSTM cell and the activation function σ_lh, which is usually tanh(x). The state S_t depends on the state of the prior step S_{t−1} as well as the newly computed candidate state S̃_t.
In accordance with all the relations mentioned above, the function of the LSTM model can be summarised as follows:
• The input gate (i_t) controls the extent to which the new candidate state flows into the memory.
• The output gate (o_t) regulates the extent to which S_t contributes to the output (h_t).
• The forget gate (f_t) controls the extent to which S_{t−1} (i.e., the previous state) is kept in the memory.
Specifying the best LSTM model for wind power forecasting requires the determination of the neural network's best combination of hyperparameters. LSTMs have five main hyperparameters, including the number of lag observations as inputs of the model, the quantity of LSTM units for the hidden layer, the model exposure frequency to the whole training dataset, the number of samples inside an epoch in each weight updating, and finally, the used difference order for making nonstationary data stationary.
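A minimal sketch of a univariate LSTM forecaster of this kind, written with Keras, is shown below; the hyperparameter values and the input file are placeholders, since the actual values are selected by the grid search and Optuna procedures described later.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_supervised(series, n_lags):
    """Frame a univariate series as (samples, n_lags, 1) inputs with next-step targets."""
    X, y = [], []
    for i in range(len(series) - n_lags):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags])
    return np.array(X).reshape(-1, n_lags, 1), np.array(y)

# Hypothetical hyperparameter values; in the study these are tuned by grid search / Optuna
n_lags, n_units, n_epochs, n_batch = 12, 100, 50, 72

# Assumed preprocessed, single-column series of 10-min averaged power values
series = np.loadtxt("power_10min.csv", delimiter=",")
X, y = make_supervised(series, n_lags)

model = Sequential([
    LSTM(n_units, input_shape=(n_lags, 1)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=n_epochs, batch_size=n_batch, verbose=0)
```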
Grid Search for ARIMA and LSTM Models
ARIMA model factors (i.e., p, d, and q) can be estimated through iterative trial and error by inspecting the ACF and PACF plots. This part of defining the ARIMA forecasting model can be very challenging and time-consuming, and may lead to prediction errors. As a result, researchers attempt to find these hyperparameters using an automatic grid search approach. Similar to the ARIMA model, specifying the best LSTM model for wind power forecasting requires determining the best combination of hyperparameters in this neural network. This study therefore also specified a grid of LSTM parameters to iterate over. An LSTM model is created for each combination, and its forecasting accuracy is assessed by calculating its RMSE.
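A simple version of such an automatic grid search over the (p, d, q) orders could look as follows; the helper below is an illustrative sketch, assuming a chronological train/test split and RMSE as the selection criterion, as used elsewhere in this paper.

```python
import itertools
import warnings
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error

def grid_search_arima(train, test, p_values, d_values, q_values):
    """Fit ARIMA for every (p, d, q) combination and keep the order with the lowest RMSE."""
    best_order, best_rmse = None, np.inf
    for order in itertools.product(p_values, d_values, q_values):
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                fitted = ARIMA(train, order=order).fit()
            forecast = fitted.forecast(steps=len(test))
            rmse = np.sqrt(mean_squared_error(test, forecast))
            if rmse < best_rmse:
                best_order, best_rmse = order, rmse
        except Exception:
            continue  # skip orders that fail to converge
    return best_order, best_rmse

# Example call with illustrative ranges:
# best_order, best_rmse = grid_search_arima(train, test, range(0, 4), range(0, 2), range(0, 4))
```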
Persistence Method
It is vital to create a baseline for any time-series prediction approach. As a reference, for comparing all modelling approaches, this baseline can show how well a model makes predictions. Models which perform worse than the performance level of the baseline can be ignored.
Benchmarks for forecasting problems need to be very simple to train, fast to implement, and repeatable. The persistence model is one of the most commonly used references for wind speed and power prediction (short-term forecasting methods in particular). By definition, the wind power in the future is assumed to be equal to the power generated at present [27], as given by Equation (8):

P̂_{t+k|t} = P_t (8)

where P_t is the measured wind power at time t and P̂_{t+k|t} is the predicted wind power for the future time t + k. This model performs better than most short-term physical and statistical forecasting methods. Therefore, it is still widely used in very-short-term prediction [28]. This research uses the persistence model to compare the performance of the ARIMA and LSTM models on the different datasets.
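A minimal sketch of the persistence baseline and its RMSE, assuming a one-dimensional array of power values, is given below; the numeric values are purely illustrative.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def persistence_rmse(series, horizon=1):
    """k-step persistence forecast: the future value equals the current value, P_hat(t+k|t) = P(t)."""
    actual = series[horizon:]
    predicted = series[:-horizon]
    return np.sqrt(mean_squared_error(actual, predicted))

# Illustrative short power series (kW); real data would be the 10-min averaged values
power = np.array([3200.0, 3350.0, 3100.0, 2950.0, 3050.0, 3300.0])
print(persistence_rmse(power))
```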
Hyperparameter Optimisation with Optuna
This study uses the Optuna optimisation method to optimise the forecasting models.
Optuna is an open-source optimisation package with several advantages over other optimisation frameworks [29]. Optimisation tools usually differ in the algorithm used to select the parameters. For example, GPyOpt and Spearmint [30] apply Gaussian processes, SMAC [31] employs random forests, and Hyperopt [32] uses a tree-structured Parzen estimator (TPE). These methods have three main drawbacks. Firstly, they need the parameter search space to be statically defined by the user, a process that is extremely hard for large-scale experiments with many possible parameters. Furthermore, they do not have an efficient pruning strategy for high-performance optimisation when access to resources is limited. In addition, they cannot handle large-scale experiments with minimal setup requirements. On the other hand, Optuna, with a define-by-run design, enables the user to create the search space dynamically. This optimisation framework is an open-source, easy-to-set-up package that benefits from effective sampling and pruning algorithms [29]. Optuna optimises the model by minimising/maximising an objective function (here, the RMSE of the forecasted wind power with respect to the real generated values) that takes a group of hyperparameters as input and returns its validation score. The optimisation process is called a study, and each evaluation of the objective function is called a trial [29].
At the beginning of the optimisation, the user is asked to provide the search space for the dynamic generation of the hyperparameters for each trial. Then, the model builds the objective function by interacting with the trial object. After this step, the next hyperparameter selection is based on the history of previously evaluated trials. This algorithm optimises ML models in two steps. First, a search strategy determines a set of parameters to be examined, and second, a performance assessment strategy known as a pruning algorithm excludes the improper parameters based on the estimation of the value of the currently investigated parameters [29].
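The sketch below shows what a define-by-run Optuna study for this problem could look like; the search ranges are illustrative (the actual ranges are those listed in Table 2), and train_and_evaluate_lstm is a hypothetical helper that trains the LSTM with the proposed hyperparameters and returns its validation RMSE.

```python
import optuna

def objective(trial):
    # Define-by-run search space; these ranges are illustrative only
    n_lags = trial.suggest_int("n_lags", 1, 48)
    n_units = trial.suggest_int("n_units", 10, 500)
    n_epochs = trial.suggest_int("n_epochs", 10, 200)
    n_batch = trial.suggest_int("n_batch", 8, 256)
    diff_order = trial.suggest_int("diff_order", 0, 2)

    # Hypothetical user-supplied helper: trains the LSTM and returns validation RMSE
    rmse = train_and_evaluate_lstm(n_lags, n_units, n_epochs, n_batch, diff_order)
    return rmse  # Optuna minimises the returned validation RMSE

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params, study.best_value)
```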
Since the initial prediction accuracy assessment of the ARIMA and LSTM models (both tuned by grid search) highlighted the better performance of the LSTM model compared to ARIMA, it was decided to apply the optimisation framework only to the LSTM model.
In this way, the hyperparameter ranges of the LSTM model increased from what was examined in its grid search to wider ranges, as shown in Table 2. In other words, the hyperparameter combinations increased from 48 combinations to more than a million combinations.
Wind Power Dataset
The source SCADA data are measured at a 1 Hz frequency from the Levenmouth Demonstration Turbine (LDT), an offshore wind turbine which is located just 50 m from the coast at Leven, a seaside town in Fife, Scotland [33]. This wind turbine was acquired by the Offshore Renewable Energy (ORE) Catapult in 2015, while its construction was completed by Samsung in October 2013 [34].
ORE Catapult's wind turbine is a three-bladed upwind turbine installed on a jacket structure [25]. The turbine is rated at 7 MW but, to decrease noise, is limited to operating at a maximum power of 6.5 MW [33]. This turbine's rotor diameter is 171.2 m, and its hub height is 110.6 m. Each blade measures 83.5 m and weighs 30 tons. The defined cut-in speed for this turbine is 3.5 m/s, meaning that electricity generation starts when the wind reaches this speed. It shuts down if the wind blows too hard (roughly 25 m/s) to prevent equipment damage. Its operating temperature is between −10 °C and +25 °C, and it has been designed to work for 25 years [35]. Figure 6 shows the configuration and main parameters of the LDT.
Feature Selection
This study used SCADA data recorded over five months, from 1 January 2019 to 31 May 2019, at a 1 Hz frequency (one-second intervals). Each timestamp in this time-series includes 574 different observations, including the generated power, wind speed at different levels, blade pitch angle, nacelle orientation, etc. At the beginning of the data processing, a feature selection was carried out to decrease the size of the dataset and reduce the computation time by excluding unnecessary variables; this step was vital to making the study feasible. All variables except the time stamp, wind speed, and active power were removed at this stage, as they are not used by the ARIMA and univariate LSTM forecasting methods. Keeping the wind speed variable was vital in this project, as it verified the accuracy of the generated power. For example, a failure to generate power when high wind speeds were recorded was recognised as a stop in power generation due to reasons such as maintenance. After removing the redundant information, the observations of wind speed and active power were plotted as shown in Figure 7a,b.
The histograms of this dataset for wind speed and active power are presented in Figure 8a,b, and Table 3 shows their statistical descriptions.
Obvious Outlier Removal
An initial assessment of Figure 7b showed that a large part of the recorded generated power at the end of this time-series (May 2019) equals zero. Usually, the generated power of a turbine can be zero when no wind blows. However, Figure 7a shows continuous wind with fluctuations similar to the previous months. Therefore, it is speculated that the turbine was out of production during this period. Based on this assumption, it was decided that this month (May 2019) should be removed entirely from the dataset. The time-series after this omission was reduced to four months, from 1 January 2019 to 30 April 2019. A closer look at the active power, as shown in Figure 9, revealed another obvious error in the SCADA data: the existence of negative values. Negative values have no practical meaning in wind power generation. Shen et al. [36] believe that these values represent time stamps when the turbine blades do not rotate but the turbine's control system needs electricity [36]. These values need to be eliminated, along with the corresponding parameters of the same timestamp, for better forecasting results [25]. Since eliminating these negative values disrupts the time continuity of the time-series and can possibly lead to errors in wind power prediction, it was decided at this stage to create and assess three types of datasets based on different actions against the negative values. Assessing the impact of these actions on forecasting accuracy became another goal of this study.
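The three actions against the negative power values could be expressed, for example, as in the following pandas sketch; the file and column names are assumptions, and forward-filling is used here as one way to approximate replacement with the nearest preceding positive value.

```python
import numpy as np
import pandas as pd

# Assumed file/column names: the feature-selected SCADA time-series with an 'active_power' column
df = pd.read_csv("scada_selected.csv", index_col=0, parse_dates=True)

neg = df["active_power"] < 0

# Case A: drop the negative rows entirely (breaks the time continuity of the series)
df_dropped = df[~neg].copy()

# Case B: replace negatives with the mean of the valid (non-negative) power values
df_mean = df.copy()
df_mean.loc[neg, "active_power"] = df.loc[~neg, "active_power"].mean()

# Case C: replace negatives with the nearest preceding non-negative value
df_nearest = df.copy()
df_nearest.loc[neg, "active_power"] = np.nan
df_nearest["active_power"] = df_nearest["active_power"].ffill()
```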
Resampling
The effect of wind turbulence, one of the obstacles to increasing wind energy penetration in energy markets, is more significant in horizontal-axis wind turbines. This is because the wind speed and direction change rapidly after hitting the swept blade rotor. Therefore, the wind speed measured by the installed anemometers is not equal to the speed of the wind flow hitting the turbine blades [25]. These differences, which decrease the correlation between the measured wind speed and the output power and scatter the power curve, can be resolved by averaging the samples over a reasonable averaging period [25]. The SCADA data for this study were recorded at a 1 Hz frequency; as a result, it was possible to create multiple averaged sets to remove the mentioned obstacle. According to a review conducted by Hanifi et al. [1], the maximum sampling rate used for wind speed and power forecasting in previous research is 10 min. This is equivalent to the averaging time that the international standard for power performance measurements of electricity-producing wind turbines (IEC 61400-12-1) establishes for large wind turbines [37]. Based on IEC 61400-12-1 and the reviewed literature, the data presented here were averaged over each 10 min of data collection. Figure 10a,b show the wind power curves for the original and 10-min resampled data.
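A minimal pandas sketch of this 10-min averaging, assuming a datetime-indexed SCADA table with the column names used below, is:

```python
import pandas as pd

# 1 Hz SCADA records with a datetime index; the file and column names are assumptions
scada = pd.read_csv("scada_selected.csv", index_col="timestamp", parse_dates=True)

# Average wind speed and active power over non-overlapping 10-minute windows (IEC 61400-12-1)
scada_10min = scada[["wind_speed", "active_power"]].resample("10min").mean()
```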
Anomalies Detection and Treatment
Outliers in a dataset are specific data points that are different or far from most other regular data points [38]. Undetected or improperly treated anomalies can adversely affect wind power forecasting applications. They may be biased with high prediction errors [38].
There are various reasons for having outliers among wind turbine and wind farm measurements, including wind turbine downtime [36], data transmission, processing or management failure [39], data acquisition failure [40], electromagnetic disturbance [36], wind turbine control system faults (such as a pitch control system fault) [41], damage to the blades or the presence of ice or dust [42], the shading effect of neighbouring turbines, fluctuation of air density [43], etc. Figure 11 shows four different types of anomalies in the current SCADA data. Category A points have negative, zero, or low values of generated power at speeds larger than the cut-in speed [25]. The leading causes of these outliers are wrong wind power measurements, wind turbine failure, and unexpected maintenance. Wind speed sensor and communication errors cause category B outliers. The mid-curve outliers (category C) represent power values lower than ideal; this is caused by the down-rating of the wind turbines and data acquisition issues. Outliers in category D are scattered irregular points caused by faulty sensors, exacerbated during harsh weather circumstances [36].
There are different methods for anomaly detection in machine learning, such as density-based spatial clustering of applications with noise (DBSCAN), IF, the local outlier factor, and EE. In this study, three methods commonly used for wind power forecasting are investigated. EE is used based on the assumptions described in [44]. IF, an unsupervised learning algorithm, recognises anomalies by isolating them in the data; it works based on two main features of anomalies, that they are few and different. The one-class support vector machine (OCSVM) is a common unsupervised learning algorithm for outlier detection, which assumes that most of the data lie within a learned boundary and considers data points outside that boundary as outliers [45]. For all three detection methods, the treatment applied to the detected outliers in this study was removal.
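The three detectors are available in scikit-learn, and removing the points they flag could be sketched as follows; the contamination/nu value of 0.05 is an assumed setting for illustration, not necessarily the value used in this study, and scada_10min refers to the resampled table from the previous sketch.

```python
from sklearn.ensemble import IsolationForest
from sklearn.covariance import EllipticEnvelope
from sklearn.svm import OneClassSVM

# X is the (n_samples, 2) array of [wind_speed, active_power] from the resampled data
X = scada_10min[["wind_speed", "active_power"]].dropna().to_numpy()

detectors = {
    "IF": IsolationForest(contamination=0.05, random_state=0),
    "EE": EllipticEnvelope(contamination=0.05),
    "OCSVM": OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"),
}

cleaned = {}
for name, detector in detectors.items():
    labels = detector.fit_predict(X)  # +1 for inliers, -1 for outliers
    cleaned[name] = X[labels == 1]    # keep only the inliers (outliers removed)
```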
Experimental Results and Discussion
This research employs packages and subroutines written in Python to implement the proposed algorithms. A PC with an Intel Core i5-7300 32.6 GHz CPU and 8 GB RAM (without any GPU processing) was used to run the experiments. The three outlier detection methods described in Section 2.8 were used to detect and remove the outliers of the resampled dataset; the results of these treatments can be seen in Figures 12-14. This study considers six different preprocessing methods based on applying three different outlier detection methods and three approaches against the negative power values (Table 4). The different cases of preprocessed data are fed to the ARIMA and LSTM forecasting models. The grid search method is applied for the initial hyperparameter tuning; Table 4 shows the selected hyperparameters for the ARIMA and LSTM models. As expected, the values of the hyperparameters vary depending on the different employed preprocessing methods (Table 4). After selecting the best ARIMA and LSTM prediction methods, both models were trained on the first 95% of the dataset (as training data) to make predictions for the last 5% of the dataset. The predicted values were compared with the measured values to determine the RMSE of each forecasting process. Table 5 provides the RMSE values of the ARIMA, LSTM, and persistence methods. Comparing the RMSE values of all three models (Table 5) for case data 1, 2, and 3 clarifies that the complete elimination of the negative values (without any replacement) leads to worse forecasting. The highest RMSE value for case 3 means that removing the negative values decreases the forecasting accuracy. One of the reasons for this performance drop can be the creation of discontinuity in the dataset.
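The 95%/5% chronological split and the RMSE comparison used throughout the remainder of this section can be sketched as follows; the commented lines indicate how the persistence, ARIMA, and LSTM forecasts would be scored against the held-out test segment.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def split_train_test(series, train_fraction=0.95):
    """Chronological split: first 95% for training, last 5% for testing."""
    split = int(len(series) * train_fraction)
    return series[:split], series[split:]

def rmse(actual, predicted):
    return np.sqrt(mean_squared_error(actual, predicted))

# train, test = split_train_test(power_series)
# rmse_persistence = rmse(test[1:], test[:-1])      # persistence baseline
# rmse_arima = rmse(test, arima_forecast)           # ARIMA forecast for the test segment
# rmse_lstm = rmse(test, lstm_forecast)             # LSTM forecast for the test segment
```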
Regarding the best specific value to be considered instead of negatives, a comparison of case data 1 and 2 proves that replacing the negative values with the average wind power values has a better impact than replacing them with the nearest (neighbour) positive value. Replacing the negative values with the average values can lead to about a 15% forecasting improvement for ARIMA and 11% for the LSTM models.
The results also highlight the importance of dealing with outliers in wind power forecasting. Cases 4, 5, and 6, representing the outlier removed data, show a significant enhancement of the accuracy rather than the other cases, without any action against the anomalies. Comparing the error levels of case data 3 with cases 4, 5, and 6 (for both ARIMA and LSTM models) shows a 30% to 38% forecasting improvement by the elimination of the outliers, either by isolation forest, elliptic envelope, or the one-class SVM outlier detection methods.
The assessment of the RMSE values of cases 4, 5, and 6 shows that the IF and EE outlier detection methods outperform the OCSVM method. The elliptic envelope can improve forecasting performance by up to 9.61% and 8.92% relative to OCSVM for the ARIMA and LSTM methods, respectively. This performance enhancement reaches 9.96% and 9.64% for ARIMA and LSTM, respectively, when applying the isolation forest.
As shown in Table 5, the ARIMA and LSTM methods for all the treated case data have better performances than the persistence methods. This is understandable if one remembers that, in the persistence method, only one preceding step data is used for forecasting, whilst the ARIMA and LSTM models consider a more extensive range of prior data.
It is also clear that the LSTM performs better than the ARIMA for almost all approaches against the negative values and outliers. This is probably due to the fact that LSTMs are better equipped to learn long-term correlations. In addition, the LSTM can better capture the nonlinear dependencies between the features.
In this study, because of the better prediction performance of the LSTM model compared to the ARIMA model, the proposed optimisation algorithm is applied to the LSTM model to tune its hyperparameters even more. As discussed in Section 2.5, the hyperparameter ranges of the LSTM model are increased from what was examined in its grid search to the wider ranges shown in Table 2.
The six preprocessed case datasets are again divided into the first 95% as the training dataset and the remaining 5% as the test data. These divisions were kept to establish the same conditions and allow a logical comparison between the new and previous methods. The developed optimisation algorithm, with the two described strategies of searching and pruning, started selecting different combinations to minimise the RMSE value. Table 6 shows the new hyperparameters found by the Optuna optimisation algorithm, and Figure 15 shows the measured power values of the turbine and the prediction results of all the forecasting methods, including ARIMA, LSTM-grid, and LSTM-Optuna, for one of the datasets (data 4: removed negative values and outliers removed with the EE method). As can be seen in Figure 15, the LSTM model optimised by Optuna can predict more accurately by better learning the wind power's short-term and long-term dependencies. The diagram illustrated in Figure 16 is plotted to better compare the error levels of the different wind power forecasting methods. It can be recognised from this diagram that the LSTM-Optuna approach follows rules similar to the ARIMA and LSTM-grid models: to achieve a higher prediction accuracy, it is essential to eliminate the outliers and replace the negative power values with the average wind power value.
Building the LSTM models based on the new values of the hyperparameters, as shown in Table 6, improves the prediction accuracy of the LSTM model in a range from 1.22% to 2.65% for different cases of preprocessed data. These accuracy improvements can be seen in Table 7.
The results show that the highest accuracy improvement is related to case 5, a case in which negative values were replaced with the mean power value and the outliers were removed through the IF method. A comparison of the required search times to find the best combination of the hyperparameters in LSTM-grid and LSTM-Optuna proves the faster performance of the proposed method, as it spends from 13.79% to 20.59% less time adjusting the model for the most accurate prediction ( Table 8).
Conclusions
This study addresses issues regarding inaccurate wind power prediction using ML approaches. As discussed in the reviewed literature, most previous research applied ML without advanced model optimisation. In contrast, this paper reports a novel Optuna-LSTM concept that expedites the process of selecting the hyperparameters and tuning the wind power forecasting models. This model not only reduces the time complexity of creating reliable models, but also improves the accuracy of the predictions.
To accurately evaluate the proposed model, the SCADA data of an offshore wind turbine were preprocessed by eliminating the negative values and outliers in order to find the best preprocessing method. The performance of the proposed forecasting approach was demonstrated through comparisons with the persistence, ARIMA, and LSTM models, the latter two already tuned by grid search. This comparison proved the better performance of the proposed model, with improvements of up to 7.89%, 5.9%, and 2.65% compared to the persistence model and the conventional grid-search-tuned ARIMA and LSTM models, respectively.
This study also highlights the importance of eliminating negative values in the power recordings. The results of this study confirmed that replacing the negative values with the average power value has the most positive effect on the forecasting accuracy. In addition, comparisons between several data cases showed the significant impact of the outlier treatment methods on the forecasting performance. The results proved that removing the outliers, regardless of the detection method used, substantially improves the forecasting accuracy.
Increasing Embryonic Morphogen Nodal Expression Suggests Malignant Transformation in Colorectal Lesions and as a Potential Marker for CMS4 Subtype of Colorectal Cancer
Nodal, an embryonic morphogen in TGF-β family, is related with tumorigenicity and progression in various tumors including colorectal cancer (CRC). However, the difference of Nodal expression between CRC and colorectal polyps has not yet been investigated. Besides, whether Nodal can be used as a marker for consensus molecular subtype classification-4 (CMS4) of CRC is also worth studying. We analyzed Nodal expression in patients of CRC (161), high-grade intraepithelial neoplasia (HGIN, 28) and five types of colorectal polyps (116). The Nodal expression difference among groups and the association between Nodal expression and clinicopathological features were analyzed. Two categories logistic regression model was used to predict the odds ratio (OR) of risk factors for high tumor-stroma percentage (TSP), and ROC curve was used to assess the diagnostic value of Nodal in predicting high TSP in CRC. We found that Nodal expression was significantly elevated in CRC and HGIN (p < 0.0001). The increased expression of Nodal was related with high TSP, mismatch repair-proficient (pMMR) status, lymph node metastasis and advanced AJCC stage (p < 0.05). Besides, Nodal expression was the only risk factor for high TSP (OR = 6.94; p < 0.001), and ROC curve demonstrated that Nodal expression was able to efficiently distinguish high and low TSP. In conclusion, different expression of Nodal between CRC/HGIN and benign lesions is suggestive of a promoting role for Nodal in colorectal tumor progression. Besides, Nodal might also be used as a potential marker for CMS4 subtype of CRC.
INTRODUCTION
As is well known, colorectal cancer (CRC) is a heterogeneous disease with a complicated molecular profile, and the overall survival of CRC patients at advanced stages remains poor, with an approximately 50% overall 5-year survival rate [1]. Vogelstein et al. described the model of gradual step-wise accumulation of epigenetic and genetic events leading to adenoma and carcinoma occurrence, and also put forward a new review of the function of 'driver' alterations in tumor suppressor genes such as SMAD4 and APC, and in oncogenes such as PIK3CA, KRAS and BRAF, which confer a selective growth advantage and result in CRC progression [1,2]. More and more research focuses on the molecular changes in CRC tumorigenesis and progression in order to identify novel diagnostic and therapeutic targets for CRC.
The human Nodal gene, located on chromosome 10q22, is a member of the transforming growth factor beta (TGF-β) superfamily and plays a critical role in maintaining the pluripotency of human embryonic stem cells (hESCs) and in mesodermal differentiation, including the epithelial-to-mesenchymal transition (EMT) [3]. Normally, Nodal expression is largely limited to embryonic tissues and is not usually found in adult tissues [4]. Recently, more and more findings have revealed that Nodal re-emerges in a number of tumors such as CRC, melanoma, breast cancer and prostate cancer [5][6][7][8]. Initial research revealed that Nodal expression was higher in human colon cancer tissues than in adjacent noncancerous colon tissues, and Nodal was shown to accelerate the self-renewal of human colon cancer stem cells via the Smad2/3 signaling pathway [8]. In addition, Nodal and aldehyde dehydrogenase-1 can serve as prognostic markers for CRC [9]. However, Nodal expression in colorectal polyps remains unclear. Therefore, we examined and compared Nodal expression in a wide spectrum of intestinal polyps and CRC using the immunohistochemistry (IHC) method.
In 2015, the consensus molecular subtype (CMS) classification of CRC was reported [10], which represented the best description of CRC heterogeneity and might be applied to guide the target therapy in the future. There have been established four CMSs: CMS1 (MSI Immune, 14%) is characterized as exhibiting microsatellite instability (MSI), immune activation and CpG island methylation phenotype (CIMP); CMS2 (Canonical, 37%) is characterized as showing somatic copy number analysis (SCNA), WNT/MYC signaling pathway activation and microsatellite stable (MSS) status; CMS3 (Metabolic, 13%) is characterized as exhibiting evident metabolic dysregulation and KRAS and APC mutations; and CMS4 (Mesenchymal, 23%) is characterized as showing TGF-β activation, stromal invasion and angiogenesis [11]. Particularly, CMS4 usually occurs at advanced stage with poorer prognosis than the other subtypes [12,13]. The recently reported tumor-stroma percentage (TSP) was used to evaluate the proportions of tumor area infiltrated by stroma, and high TSP (>50%) is an appropriate marker to determine CMS4 subtype in the clinicopathologic diagnosing work [14]. Nevertheless, when tumor specimens contain more necrosis or mucus tissue, grading TSP is very difficult and it is meaningful to seek a new marker for assessing CMS4 subtype. As a member of TGF-β superfamily, whether Nodal expression has any relationship with TSP and thus can be used as a marker for CMS4 subtype? The present study also aimed to answer this question.
Patients and Specimens
Tissue samples were obtained from the department of pathology of Guangzhou First People's Hospital. A total of 305 lesions were collected in this study between June 2017 and March 2018, including 18 juvenile polyps (JPs), 22 hyperplastic polyps (HPs), 22 inflammatory polyps (IPs), 24 sessile serrated adenoma/polyps (SSA/Ps), 30 tubular adenomas (TAs), 28 high-grade intraepithelial neoplasms (HGINs), 143 primary CRCs and 18 metastatic CRCs (pulmonary and liver metastases). In addition, a total of 20 normal colon tissues were collected and used as controls. All the CRC patients had undergone surgical operation and none of them had received radiotherapy or chemotherapy before surgery. Tumor type and grade were evaluated according to the 2016 World Health Organization (WHO) classification system. The pathological tumor stage was defined according to the seventh edition of the tumor-node-metastasis (TNM) staging of the American Joint Cancer Committee (AJCC). Primary colon cancer was classified into left-sided (including descending colon and sigmoid) and right-sided (including caecum, ascending and transverse colon) tumors.
Immunohistochemistry Staining
Formalin-fixed and paraffin-embedded samples were cut into 4μm thickness, deparaffinized in xylene, washed and dehydrated with graded ethanol, followed by antigen retrieval using highpressure method with EDTA 9.0. The slides were then pretreated with 3% H 2 O 2 for 10 min to block endogenous peroxidase activity and washed by PBS for three times. Afterward, the tissues were incubated with mouse monoclonal anti-Nodal antibody (1:300 dilution; ab55676; Abcam) at 37°C in water bath kettle for 45 min, following a 20 min incubation with biotin-linked secondary antibody (1:1000 dilution; ab47844; Abcam) at 37°C according to the manufacturer's instructions. Then the sections were stained in DAB (diaminobenzidine tetrahydrochloride, Dako) solution and counterstained with hematoxylin for 1 min. Slides were then washed and dehydrated in graded ethanol, ultimately mounted under a cover slip. Each slide was observed under a light microscope (magnification, ×100) by two pathologists to obtain coincident results.
Cytoplasmic staining was considered Nodal-positive. The immunostaining of Nodal was scored according to the positive cell rate and the staining intensity. Two senior pathologists (XPW and HD) independently recorded the IHC results. The staining intensity was graded as follows: 0, no staining; 1, weak staining (light brown); 2, intermediate staining; 3, strong staining (dark brown). The percentage of positive cells was scored as follows: 0 for no positive cells; 1 for <10% positive cells; 2 for 10-50%; and 3 for >50% positive cells. The staining index (SI) was calculated as (score for staining intensity) × (score for percentage of positive cells) [15,16]. The SI was divided into a low group (total score ≤ 4.5) and a high group (total score > 4.5) for further analysis.
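As a small illustration of this scoring scheme (not part of the original workflow), the SI calculation and the 4.5 cut-off can be expressed as:

```python
def staining_index(intensity_score, positive_cell_score):
    """SI = intensity score (0-3) x positive-cell percentage score (0-3)."""
    return intensity_score * positive_cell_score

def nodal_group(si, cutoff=4.5):
    """Dichotomise the SI into 'low' (<= 4.5) and 'high' (> 4.5) expression groups."""
    return "high" if si > cutoff else "low"

# Example: intermediate staining (2) in 10-50% of cells (2) gives SI = 4 -> low expression
si = staining_index(2, 2)
print(si, nodal_group(si))
```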
Histopathological Scoring for TSP
The evaluation of TSP was conducted using microscopic analysis of 4-µm hematoxylin and eosin (H&E) stained sections from the most invasive part of CRC as previously described [17,18]. All the tissues were fixed, dehydrated, paraffin-embedded and sectioned, followed by hematoxylin staining, washing under running water, hydrochloric acid alcohol differentiation, dehydration using graded ethanol and vitrification by dimethylbenzene. Then, the slides were covered with a glass slip. Using a × 5 objective, the invasive area with the desmoplastic stroma was selected. Subsequently, the fields where the stroma was infiltrated with small tumor nest were calculated using a × 10 objective, meanwhile ensuring that tumor cells were present at all four sides of the image (north-east-south-west) and excluding areas of necrosis or mucin. Two pathologists (XPW and HD) estimated all the samples respectively. The TSP value was divided into "stroma-high" (>50%) and "stroma-low" (≤50%) groups with a 50% cut-off value [14].
Statistical Analysis
Statistical analysis was carried out using SPSS 22.0 software (SPSS Inc., United States). The difference in Nodal expression among the various types of colorectal lesions was compared using the Mann-Whitney U test. The Pearson chi-square test was used to assess the correlation between Nodal expression and clinicopathological parameters. A binary logistic regression model was used for univariate and multivariate analysis to estimate the odds ratio (OR) of individual factors for high TSP. Only variables with a p value of less than 0.1 in the univariate model were included for further analysis in the multivariate logistic model. The ROC curve was plotted to evaluate the diagnostic value of Nodal in predicting high TSP. A p value of less than 0.05 was considered statistically significant.
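Although the analysis was performed in SPSS, the logistic regression and ROC steps can be illustrated with an equivalent Python sketch; the arrays below are hypothetical, and selecting the cut-off by Youden's index is an assumption about how the optimal point might be chosen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical arrays: Nodal SI scores and binary TSP labels (1 = stroma-high, 0 = stroma-low)
nodal_si = np.array([2, 6, 4, 9, 3, 7, 5, 8, 1, 6], dtype=float).reshape(-1, 1)
tsp_high = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(nodal_si, tsp_high)
odds_ratio = np.exp(model.coef_[0][0])            # OR per one-unit increase in SI

probs = model.predict_proba(nodal_si)[:, 1]
auc = roc_auc_score(tsp_high, probs)
fpr, tpr, thresholds = roc_curve(tsp_high, probs)
youden_cutoff = thresholds[np.argmax(tpr - fpr)]  # assumed optimal cut-off by Youden's index
```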
Nodal Expression in Various Types of Colorectal Lesions and TSP in CRC Tissues
Results of Nodal expression in various types of colorectal lesions are summarized in Table 1. Nodal was expressed at low levels in normal colon tissue (SI 0.75 ± 0.38), JP (SI 0.89 ± 0.42), HP (SI 0.86 ± 0.37), and IP (SI 0.75 ± 0.31) ( Figures 1A-D), and there were no statistic differences among the four groups (all p > 0.05). SSA/P (SI 2.17 ± 1.09) ( Figure 1E) showed a weak to moderate Nodal expression which was comparable with TA (SI 2.40 ± 1.22) ( Figure 1F) whereas higher than the former four types (all p < 0.0001). Nodal expression was significantly increased in HGIN (SI 4.18 ± 1.81), primary tumor of CRC (SI 4.97 ± 2.27) as well as metastases of CRC (SI 4.61 ± 2.55) ( Figures 1G-J) compared with other lesions (all p < 0.0001), however, no significant differences were found between the three groups. Representative high (>50%) and low (≤50%) TSP in CRC tissues were displayed in Figures 1K,L).
Relationship between Nodal Expression and Clinicopathologic Features in CRC Patients
We then investigated whether Nodal expression was related to other clinical features in CRC patients. In the CRC cells, Nodal was predominantly located in the cytoplasm and was evaluated by the SI criteria. As shown in Table 2, Nodal expression was significantly correlated with the primary tumor site, AJCC stage, node stage, MMR status and TSP (all p < 0.05). Higher Nodal expression was more frequently found in left-sided colon cancer (compared with right-sided colon and rectum cancer, p = 0.044), advanced AJCC stage (stage III + IV compared with stage I + II, p = 0.001), lymphatic metastasis (positive compared with negative, p = 0.011), pMMR status (compared with mismatch repair-deficient (dMMR) status, p = 0.025) and high TSP (compared with low TSP, p < 0.001). There were 29 CRC cases (29/143, 20.3%) with high TSP (>50%), among which only five cases showed low expression of Nodal (5/29, 17.2%).
Risk Factor Analysis for High TSP
As Nodal expression significantly correlated with high TSP, we next conducted univariate and multivariate logistic regression analyses to explore the risk factors for high TSP. As shown in Table 3, pMMR status and high Nodal expression were the two risk factors for high TSP in the univariate model (all p < 0.1). However, in the multivariate model only high Nodal expression remained an independent risk factor for high TSP.
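For readers who want to reproduce this kind of analysis outside SPSS, a minimal Python sketch of the two-step procedure (univariate screening at p < 0.1, then a multivariate binary logistic model) might look as follows; the predictor names are hypothetical stand-ins for the variables in Table 3 and are assumed to be coded 0/1.

```python
# Minimal sketch of univariate screening followed by multivariate binary
# logistic regression for high TSP; predictors are assumed to be 0/1 coded
# and their names are hypothetical, not the authors' variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

crc = pd.read_csv("crc_cases.csv")  # hypothetical file; tsp_high coded 0/1

candidates = ["male_sex", "left_sided", "advanced_stage", "pmmr_status", "nodal_high"]
selected = []
for var in candidates:
    uni = smf.logit(f"tsp_high ~ {var}", data=crc).fit(disp=0)
    or_uni = np.exp(uni.params[var])
    p_uni = uni.pvalues[var]
    print(f"univariate {var}: OR = {or_uni:.2f}, p = {p_uni:.3f}")
    if p_uni < 0.1:  # screening threshold used in the paper
        selected.append(var)

# Variables passing the screen enter the multivariate model together.
multi = smf.logit("tsp_high ~ " + " + ".join(selected), data=crc).fit(disp=0)
print(multi.summary())
print("multivariate odds ratios:")
print(np.exp(multi.params).round(2))
```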
The Diagnostic Value of Nodal Expression in Predicting High TSP
As high Nodal expression was the only significant risk factor for high TSP, and high TSP is thought to be a good marker for determining the CMS4 subtype of CRC in routine pathology, we then plotted the ROC curve to assess the diagnostic value of Nodal in predicting high TSP or determining the CMS4 subtype of CRC. As shown in Figure 2, Nodal expression was able to efficiently distinguish high from low TSP (AUC 0.773, 95% CI 0.676-0.869, p < 0.001). The optimal cut-off point of Nodal expression was 5.50, at which a sensitivity of 75.9% and a specificity of 67.5% were obtained.
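A sketch of the corresponding ROC analysis in Python is shown below; the AUC, the Youden-index-based cut-off and the sensitivity/specificity it prints would only match the reported values (AUC 0.773, cut-off 5.50, 75.9%/67.5%) when run on the authors' data, and the file and column names are again hypothetical.

```python
# ROC analysis of Nodal SI as a predictor of high TSP, with an optimal
# cut-off chosen by the Youden index (sensitivity + specificity - 1).
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

crc = pd.read_csv("crc_cases.csv")
y_true = crc["tsp_high"]   # 1 = stroma-high (>50%), 0 = stroma-low
score = crc["nodal_si"]    # continuous Nodal staining index

auc = roc_auc_score(y_true, score)
fpr, tpr, thresholds = roc_curve(y_true, score)

youden = tpr - fpr
best = youden.argmax()
print(f"AUC = {auc:.3f}")
print(f"optimal cut-off = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```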
DISCUSSION
Nodal is a member of the TGF-β superfamily and is active in both embryological development and tumorigenesis. A growing body of research has reported that Nodal is expressed in various tumors. Nodal plays an important role in angiogenesis, invasion and progression of cancer: upregulation of Nodal caused a loss of E-cadherin and upregulated mesenchymal markers including N-cadherin, Twist1 and Vimentin, inducing EMT via the ERK pathway and thus playing a tumor-promoting role [19], while suppression of Nodal expression reduced the clonogenicity, tumorigenicity and metastatic ability of cancer cells [20,21].
In the present study, we found that Nodal expression was prominently increased in HGIN and CRC, was at a lower level in SSA/P and TA, and was hardly expressed in normal colon tissue, JP, HP and IP. Previous research on Nodal expression in cutaneous melanocytic lesions showed that Nodal expression was significantly increased in malignant lesions, including malignant melanoma, metastatic melanoma and melanoma in situ, compared with benign melanocytic lesions (no or low expression), indicating a role for Nodal in melanoma progression [15]. Nodal expression was also increased in triple-negative breast cancer biopsies, whereas it was hardly detectable in benign breast lesions [22]. Based on our findings and previously reported studies, we speculate that increased Nodal expression might also play an important role in the tumorigenesis and progression of CRC, although further research is needed to confirm this.
We further investigated the relationship between Nodal expression and clinicopathological characteristics in CRC, and found that Nodal was related to advanced node stage and AJCC stage in CRC, which is consistent with the fact that Nodal promotes tumor growth in CRC [23]. Guinney et al recently proposed four subtypes (CMS classifications) for CRC based on the consolidation of six different large genomic subtyping studies [10]. Among the four CMSs, CMS4 (mesenchymal, 23%) mainly manifests as MSS status, is an SCNA-high and CIMP-negative phenotype, occurs at advanced stages, has a poorer prognosis, and exhibits TGF-β activation, EMT activation, stromal infiltration, angiogenesis and matrix remodeling [10,12,24]. However, the CMSs depend on specialized genetic testing and are difficult to determine by routine pathological methods. Recent studies have shown that TSP is an independent prognostic factor for CRC, is associated with poor overall survival, and can be regarded as a good marker for determining the CMS4 subtype of CRC in routine pathological examination [18]. As a member of the TGF-β superfamily, Nodal was presumed to be overexpressed in, and to be a diagnostic marker of, CMS4 CRC. In the present study, we found that high Nodal expression was the only independent risk factor for high TSP in CRC. The ROC curve confirmed the diagnostic value of Nodal for predicting high TSP. As high TSP is thought to be a valuable marker for determining CMS4, we speculate that Nodal could be another reliable diagnostic index for CMS4 CRC, especially when tumor specimens contain abundant necrosis or mucinous tissue, which makes grading TSP very difficult. Moreover, high Nodal expression correlated with pMMR status, left-sided colon cancer and advanced stages, which are in accordance with the features of CMS4 CRC and further support the above hypothesis.
Several limitations exist in our research. Accurate diagnosis of the CMSs relies on specialized genetic testing, which is too costly and cumbersome to be introduced into routine pathology, so we took TSP as an alternative "gold standard" for diagnosing CMS4. In addition, we inferred that Nodal might participate in the malignant transformation of CRC based on the finding that Nodal was overexpressed only in HGIN and CRC and not in other benign lesions or normal colon tissue; however, mechanistic work, such as determining which signaling pathway Nodal acts through, was not performed and needs further investigation in the future. Taken together, based on our research, Nodal might play a role in colorectal tumor progression and could be used, under certain circumstances, as a diagnostic marker to distinguish benign from malignant colorectal lesions. Nodal might also be used as a marker for determining the CMS4 subtype of CRC. We thus identified a potential driver of malignant transformation in colorectal lesions and a relatively simple and feasible method to determine the CMS4 subtype of CRC.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Research Ethics Committee of Guangzhou First People's Hospital. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
XW, SL, HD, and HS designed and performed the research; XL and GL did immunohistochemical experiments; XL, GL, and YR collected and analyzed the data; XW, SL, and HC analyzed the results and wrote the manuscript. All authors read and approved the final manuscript.
Molarization of Mandibular Second Premolar
ABSTRACT Macrodontia (megadontia, megalodontia, macrodontism) is a rare shape anomaly that has been used to describe dental gigantism. Mandibular second premolars show an elevated variability of crown morphology, as do their eruptive potential and final position in the dental arch. To date, only eight cases of isolated macrodontia of second premolars have been reported in the literature. This case report presents the clinical and radiographic findings of an unusual and rare case of isolated unilateral molarization of the left mandibular second premolar. How to cite this article: Mangla N, Khinda VIS, Kallar S, Brar GS. Molarization of Mandibular Second Premolar. Int J Clin Pediatr Dent 2014;7(2):137-139.
INTRODUCTION
Dental organogenesis disorders manifest as alterations in the number, size or form of teeth. 1 When dental size and anatomy present characteristics that deviate from the accepted range of normality, they are termed anomalies. 2 Mandibular second premolars show an elevated variability of crown morphology, as do their eruptive potential and final position in the dental arch. 3 Macrodontia (megadontia, megalodontia, macrodontism) is a rare shape anomaly that has been used to describe dental gigantism. 4,5 O'Sullivan 6 reported the prevalence of macrodontia to be 1 to 2% in males and 0.9% in females, but macrodontia of mandibular second premolars affected males and females equally. Canoglu et al 4 reported an overall prevalence of macrodont permanent teeth of 0.03 to 1.9%, with a higher frequency in males. In most cases, macrodontia of mandibular second premolars has been reported in children. 7,8 All the reported unilateral cases of macrodontia involved the right mandibular second premolar. 3 Macrodontia is usually associated with systemic disturbances or syndromes, such as insulin-resistant diabetes, otodental syndrome, facial hemihyperplasia, KBG syndrome, Ekman-Westborg-Julin syndrome and 47 XYY syndrome. 4,5 The isolated form of macrodontia has rarely been reported. 4,9 To date, only eight cases of isolated macrodontia of second premolars have been reported in the literature, five of which showed bilateral occurrence. 1,4 No case report of a unilateral erupted macrodont mandibular second premolar has been reported in the literature.
This case report presents clinical and radiographic findings of unusual and rare case of isolated unilateral molarization of left mandibular second premolar.
CASE REPORT
A 14-year-old male patient reported to our clinic for a routine dental check-up. No relevant family or medical history was elicited. Extraoral examination revealed no abnormalities. On intraoral examination, dental caries was present involving the buccal pits of the permanent mandibular right and left first and second molars. The patient had extrinsic stains and calculus in the mandibular anterior region (Fig. 1). He presented with an Angle's class I molar relation (Fig. 2).
The mandibular left second premolar had an abnormal ovoid molariform crown with an irregular crescent-shaped crater-like fissure whose convex aspect faced the lingual side; it was in occlusion (fully erupted), which resulted in severe crowding in the mandibular anterior region (Fig. 3). Clinically, the mandibular left second premolar presented with a mesiodistal diameter of 11 mm and a buccolingual diameter of 11.5 mm.
To establish the diagnosis of the anomaly, an intraoral periapical radiograph was taken. Radiographically, the tooth presented with an abnormal size and shape and a single tapering root of normal length (Fig. 4). Oral prophylaxis was performed, and composite restorations were placed in the cariously involved teeth (Fig. 5). The patient was referred for fixed orthodontic therapy.
DISCUSSION
The molar-like morphology of the premolars consists of a reduction of the single vestibular cusp, the shoulders of which appear as small extra cusps. The resulting appearance is the same as that of a mandibular first molar, with three vestibular cusps and three, two, one or no lingual cusps. In studies of dental anthropology and hominid evolution, descriptions are found, such as that of Australopithecus robustus, in which the premolars are shaped like molars, with large occlusal surfaces and one, two or three roots. 3 The etiology of dental anomalies remains largely unclear, but some anomalies in tooth structure, shape and size result from multiple factors arising from disorders during the morphodifferentiation stage of development. Identification of specific patterns of associated dental problems could be related to certain genetic and environmental factors contributing to different dental anomaly subphenotypes. 7 Macrodontia can be classified as true generalized, where teeth are larger than normal and are associated with pituitary gigantism; relative generalized, where normal or slightly larger than normal teeth are present in small jaws; and macrodontia involving a single tooth. 10,11 True macrodontia of a single tooth should not be confused with fusion of teeth, in which, early in odontogenesis, the union of two or more teeth results in a single large tooth. 10 According to this classification of macrodontia, the present case corresponds to an isolated macrodontia. It is uncommon to see localized macrodontia alone, because it is generally associated with a syndrome. 7 This type of macrodontia is more frequently found in incisors and canines, 5,8,12 and has rarely been reported to involve premolars and molars. All the reported unilateral cases of macrodont mandibular
second premolars demonstrate involvement of the right tooth, but in the present case the left tooth was involved. 8 The mesiodistal (11 mm) and buccolingual (11.5 mm) diameters of the mandibular second premolar were greater than its normal dimensions of 7 and 8 mm respectively. 13 Thus, the appearance of severe crowding was a predictable consequence of the increased size of the mandibular second premolar. Under normal conditions, the mesiodistal size of the premolar is less than that of its deciduous predecessor, particularly in the case of mandibular second premolars. 3,14,15 A problem of plaque accumulation is found in such cases because of surface notching, as in the present case. 16 The large crown size causes problems with eruption and disrupts the dentition. There are consequent inherent difficulties in the extraction of these teeth. Once erupted, their anatomy predisposes them to caries. 8
CONCLUSION
The dental anomaly of unilateral macrodontia of a mandibular second premolar appears to be extremely rare. Dental professionals should acquire deeper knowledge of this anomaly and carry out careful treatment planning to avoid unexpected problems during dental treatment procedures caused by unfamiliarity with this morphology.
THE ECONOMIC OUTLOOK FOR ENTREPRENEURIAL WOMEN AND WOMEN-OWNED ENTERPRISES IN THE U.S. AND POLAND
Abstract. The U.S. is an unquestionable cradle of entrepreneurship and small business. This paper is concerned with developments in women's entrepreneurship and women-owned enterprises in the U.S. The issues of growth, access to finance and development perspectives of women-owned businesses are discussed. The paper presents the sector profile and identifies the most important barriers to its development as well as the elements of a sustainable entrepreneurship ecosystem for women (a holistic approach). Last but not least, it offers a short characterization of Poland with regard to entrepreneurship among women.
Introduction
The U.S. is an unquestionable cradle of small business and entrepreneurship. In order to identify the countries most favorable to entrepreneurship, one must pay attention to several factors that have the greatest impact on running a business. These key factors include: the tax and economic environment, the legal system, labor costs, commercial law, industry experience, access to finance, and innovation. In almost every case, the U.S. can be mentioned first. The reputation of this country as a cradle of entrepreneurship lies deep in the minds of business people all over the world. Among entrepreneurs and managers in the U.S. (and Poland) men predominate. The same is true for pay levels. However, entrepreneurial women and their businesses also play a significant role, though they must still face significant hardships their male counterparts do not face (21st Century Barriers…, 2014). Over the past few decades entrepreneurial women have made significant strides in growing businesses and creating new jobs. They have built businesses that are relevant, innovative, and responsive to market demand, playing a significant role in rebuilding the middle class. Female entrepreneurship is an important challenge for modern societies, but at the same time a complex phenomenon. On the one hand there is a clear economic context, but on the other hand, the activity of self-employed women and their taking on the role of business owners is strongly culturally conditioned. The main aim of this publication is to broaden knowledge of women's entrepreneurship in the US and in Poland, drawing on empirical research and industry reports, and to formulate recommendations for supporting women entrepreneurs and reducing barriers to entrepreneurship. The additional objective is to present the current determinants of the development of entrepreneurial attitudes among women in the U.S. and in Poland, as well as an assessment of the conditions in which women conducting economic activity function in contemporary socio-economic circumstances. The following assumption (hypothesis) was made in the publication: for the development of the economy it is necessary to promote the entrepreneurship of women and to increase their participation among business owners.
Analysis of the subject literature, statistical sources and empirical research reports indicates that women's entrepreneurship contributes to economic development by creating new products, services and jobs. The growing number of female business owners demonstrates that women exhibit entrepreneurial traits, effectively break social barriers and participate actively in economic processes.

1. The increased role of women's entrepreneurship and women-owned businesses in the U.S.
Women's entrepreneurship has turned into an integral contributor to innovation and job growth. It is expected that the number of women-owned and women-led businesses will increase by more than 50% over the next 5 years (2015 Annual Report…). Between 1997 and 2007, women-owned enterprises grew by 44% -twice as fast as male-owned firms (National Women's Business Council, 2008). The U.S. Census Bureau collected the first nationally representative data on women's business ownership in 1972, and the survey indicated that there were only 402,025 women-owned firms operating in the U.S. (U.S. Census Bureau, 1976), whereas currently there are over 11.3 m enterprises, employing nearly 9 m people and generating $1.6 trillion in sales (Census Bureau Reports, 2015). Between 2007 and 2016, the number of women-owned firms grew (increased by 45%) at a rate of five times the national average (Women Fueling Post-Recession…, 2016). The greatest increase in the number of enterprises has been in more traditional industry sectors. One in five enterprises with revenues of $1 m or more is women-owned, and 4.2% of all women-owned firms have revenues of $1 m or more. 90% of women-owned businesses have no employees other than the business owner (Women-owned businesses carving…, 2014). Women-owned enterprises account for 31% of all privately held business ventures and contribute 14% of employment and 12% of revenues. There has been an increase of 8.3 m (net) new jobs over the past 7 years, comprising a 9.2 m increase in employment in large, publicly traded corporations combined with a nearly 0.9 m decline in employment among smaller, privately held companies (2015 Annual Report…). The six fastest-growing states for women-owned firms since 2007 in terms of growth in number, employment and revenue are: North Dakota, South Dakota, Texas, Iowa, Indiana and Wyoming (Women Fueling Post-Recession…, 2016).
Business environment and entrepreneurial ecosystem for entrepreneurial women
The entrepreneurial spirit among women is strong thanks to a favorable legal environment as well as a positive market response, with innovative and alternative strategies for capital and market access for women-owned businesses. The U.S. Congress proposed legislation in support of women's entrepreneurship and the White House committed to the empowerment of women and girls. Moreover, the U.S. Small Business Administration (SBA) focused on achieving the 5% federal procurement goal for women-owned businesses (though it has never met it). Among the other factors that are improving the business climate for women are the following: low interest rates, easier access to expansion capital, earlier-stage investment opportunities, women in finance serving as business angels, VC investors and fund managers, and increased demand from women consumers for products and services. They contribute to ample business opportunities and plenty of drive among entrepreneurial women. Moreover, the cultural landscape has changed and there has been a shift in the conversation about women's entrepreneurship. This results from an increase in women launching their companies and the recognition of their impact on the economy by the business press (2015 Annual Report..., p. 9).
One of the best ways to support entrepreneurs is through interactions across a community of organizations, institutions, actors and processes. The SBA and the federal government should improve their performance in the following areas supporting women's entrepreneurship (Entrepreneurial Ecosystems…, 2017): more information on federal contracting and on taking advantage of new programs; access to capital, especially for start-ups; increased mentoring opportunities specific to industry, technical training, contracting and financial literacy; promotion of SBA programs and resources to women entrepreneurs; and networking opportunities with special emphasis on industry-specific means of accessing sources of capital and reaching new customers. The aforementioned interactions should correspond with a well-balanced and sustainable entrepreneurship ecosystem based on a holistic approach that recognizes the various factors of the business climate, their influence and their interconnectedness. A vibrant and sustainable ecosystem strongly considers the connections between its elements (Measuring An Entrepreneurial Ecosystem, 2015); the major components include finance, culture, human capital, community building, innovation, markets, policy and resources, and investments in these areas should improve the business climate. The key system entities include policymakers, individuals, organizations and institutions (2015 Annual Report..., p. 32). The active pro-business attitude(s) of the participants can contribute significantly to increasing access to capital at various stages of business development.
Troubled access to capital for women and its recent significant shift
Historically, women have not had equal access to financial markets due to systemic bias against women business owners (Research by the Global Initiative for Women's Entrepreneurship Research). Moreover, they often did not obtain funding because they did not fully understand the process and frequently did not ask (Beesley, 2016). Women entrepreneurs face numerous barriers to the growth of their businesses, including an inability to compete with large businesses. Less than 3% of women-owned enterprises cross the million-dollar threshold in revenue. Moreover, women entrepreneurs still face a significant wage gap. They frequently have smaller amounts of start-up capital than their male peers (on average, women start their business with half as much capital as men -$75,000 vs. $135,000).
In the case of high-growth-potential enterprises, the capital differences between women-owned and men-owned entities (at the time of founding) are also considerable -$150,000 vs. $320,000 (Problem: Women Entrepreneurs Need Greater Access to Capital, 2017). Among the most significant challenges related to business growth are a lack of growth capital as well as a lack of new business opportunities. According to the Women in Small Business Survey 2013 (Ready to Grow: A Snapshot of Women Small Business Owners, 2013), one in four respondents (26.2%) considered financing a barrier to entry or growth in the small business market, and about 6% noted difficulty in obtaining funding. External funding is secured after an average of 2.7 attempts. In 2015, 40% of businesses sought outside funding and 60% of them succeeded in securing it (median loan sizes in 2015 were $332,000). Women are more likely to use personal savings to begin a new business, and bank loans are underutilized. Only 5.5% of women-owned firms use business loans from banks or other financial intermediaries to start their businesses, compared to 11.4% of men-owned firms. There is a gender gap in equity financing, and women receive 1% of VC financing compared to 4% for men. There is a noteworthy example of a capital source focusing on women -Golden Seeds. It is one of the nation's most active early-stage investment firms, focused on the vibrant opportunities of women-led businesses. Since its founding in 2005, it has invested over $90 m in over 85 women-owned enterprises (www.goldenseeds.com).
Women are more likely to outperform their male counterparts in meeting crowdfunding goals. They are more likely to succeed at a crowdfunding campaign, are more active on social media and are more collaborative when they invest (Greenberg, Mollick, 2016). Women business owners are very engaged with new ICT developments: 92% report that they try to stay updated on innovations that could be incorporated into their enterprises (Business women speak out on the issues, 2016).
Since 2015 there has been a significant shift in funding sources, ranging from collateralized bank loans to lines of credit, and the funding is most commonly used to finance the existing business (Business women speak out on the issues, 2015). Women-owned enterprises have been trending towards alternative lending sources for access to capital. A remarkable shift and increase was visible in sources of capital such as crowdsourcing (e.g., peer-to-peer lending, reward-based funding, etc.) and hybrid models. There are many possible ways to increase access to greater amounts of capital for women-owned and -led firms, e.g.: increasing the resources available to business owners on capital/financing strategies; increasing lending by credit unions and smaller community banks; and addressing creditworthiness and capital challenges for startups (exploring new ways of credit scoring and promoting crowdfunding).
In order to realize the above goals, key government and non-government actors have to be involved (2013 Annual Report).
The unrecognized considerable potential of female entrepreneurship and the economic outlook for entrepreneurial women in Poland
In the case of Poland, over the last two decades some crucial socio-cultural norm changes regarding women's approach to business have been progressing. Entrepreneurship has always been regarded as a male attribute, but the newest generation of women is observed to be more and more interested in seizing new business opportunities, and risk aversion among them has been decreasing. What is more, women value entrepreneurial qualities such as diligence, creativity and innovation. These are undoubtedly elements of a contemporary approach to running a business. Statistics show that women in Poland are generally better educated than men, and many of them successfully combine professional life with the role of a mother.
Despite the fact that in Poland (as in many other countries) entrepreneurship remains mostly a male domain, recently more and more women have become entrepreneurs and owners of small businesses. There is also a growing number of women who manage large corporations or act as CEOs, facing severe foreign competition (Szepelska, 2013). Poland is at the forefront of countries with the highest percentage of women in leadership positions. The statistics show that the self-employment rate among women remains one of the highest in Europe, revealing a large gap between Poland and the U.S. (17.6% and 5.3% respectively in 2015) - Figure 2 (www.data.oecd.org). The figure has dropped significantly, from 20.1% in 2010 to 17.6% in 2015, but still remains the highest among the presented countries. Nonetheless, it is much more difficult for women to successfully run a business in Poland, and female entrepreneurs meet many more obstacles than their male counterparts. The barriers limiting entrepreneurship among both women and men include high non-wage labor costs, complicated procedures for establishing, running and closing businesses, numerous formalities, and unstable, frequently changing labor law (Borowska, 2013). Moreover, businesswomen experience additional difficulties such as socio-cultural factors and stereotypes determining their life choices (and therefore their business decisions), the necessity of reconciling a career with bringing up children, and poorer access to financing. The general level of economic activity in Poland has also been hampered by the unstable political and economic situation in the Eurozone as well as in Poland. There is also a higher fear of failure among Polish businessmen and businesswomen compared to other countries (Dziedzic, 2017).
The potential of female entrepreneurship is considerable, though, for some reason, it is not fully recognized and exploited. Most Polish women do not seize business opportunities due to low self-esteem and self-confidence. As a consequence, they evaluate their entrepreneurial competences as poor and insufficient to set up and run a new business. Moreover, they are much more afraid of failure than their male counterparts. The above-mentioned factors continue to make a significant difference between women and men in starting a business. Nevertheless, the number of startups set up by women has been growing (Tarnawa, Węcławska, Zbierowski, Bratnicki, 2012).
A growing interest in women's activity on the labor market in terms of entrepreneurship, together with a growing number of Polish institutions and organizations promoting entrepreneurship among women and lowering unemployment by supporting the development of women-owned business ventures, may positively influence the entrepreneurial ecosystem for women.
Conclusions
Female entrepreneurship is becoming an increasingly important part of economic life. Over the past few decades entrepreneurial women in the U.S. have made significant strides in growing businesses and creating new jobs, and it is expected that the number of women-owned businesses will increase by more than 50% over the next 5 years. Entrepreneurial women and their business ventures are the fastest growing segment, despite significant hardships, including a lack of growth capital. The business climate for women is positive and could be further improved by low interest rates, easier access to expansion capital, earlier-stage investment opportunities, women in finance serving as business angels, VC investors and fund managers, as well as increased demand from women consumers for products and services. Women business owners are very engaged with new ICT developments and they are more likely to outperform their male counterparts in meeting crowdfunding goals. A remarkable shift has been visible towards peer-to-peer lending, reward-based funding and hybrid models. One of the best ways to support women entrepreneurs is through interactions across a community of organizations, institutions, actors and processes corresponding with a well-balanced and sustainable entrepreneurship ecosystem built with the use of a holistic approach.
As far as women's entrepreneurship in Poland is concerned, some crucial socio-cultural norm changes regarding the approach to business have been progressing and risk aversion has been declining. Poland is at the forefront of countries with the highest percentage of women in leadership positions, and the self-employment rate among women remains one of the highest in Europe, revealing a large gap between Poland and the U.S. Poland holds a leading position in the share of self-employed women in total employment. The specific determinants influencing the participation of women among entrepreneurs in Poland include technological development, economic factors, unemployment, cultural factors, and institutional and demographic factors. Women are less often active than men in the advanced technology sector, and the increase in wealth is accompanied by demand for services that provide space for women's entrepreneurship. Unemployment has a greater impact on women's entrepreneurship, as women are mainly "pushed out" of work during a crisis and have to look for sources of income in their own businesses. It is important to accept women as entrepreneurs, taking into account important institutional and demographic factors (e.g., fertility, having a partner). The main limitation on the presence of women in the labor market is the lack of possibility to reconcile work obligations with family, especially with care functions.
From Cultural Resistance to Cultural Commitment: A New Path for Cultivating Cultural Confidence Based on the Law of College Students' Cultural Cognition
Cultural self-confidence is essentially a psychological concept, which belongs to the categories of "cognition" and "will", and its formation has to go through four stages: cultural resistance, cultural compliance, cultural respect, and cultural belief. Colleges and universities should carry out differentiated training based on the cognitive stage of college students, so as to enhance the effect of cultivating cultural self-confidence.
Introduction
College students shoulder the important mission of national rejuvenation and national prosperity, and need to cultivate cultural self-confidence in order to resist the invasion of bad thoughts. Cultural self-confidence is in essence a psychological concept, and its formation must go through four stages: cultural resistance, cultural compliance, cultural respect, and cultural belief. Therefore, colleges and universities should grasp the laws of college students' cultural cognition, so that the work of cultivating cultural self-confidence is more accurate and effective.
Cognitive Laws
Cognition refers to "the process by which people acquire knowledge or apply knowledge, or the process of information processing" [2]. Humans mainly build cognition through feeling, perception, memory, thinking, imagination and language. The cognitive development of human beings follows rules. At present, the cognitive development theory of the Swiss psychologist Jean Piaget is widely used in the field of psychology. Piaget held that the essence of cognitive development is that the cognitive subject adapts to the object through actions. The "action" mentioned by Piaget here includes both the "movement" embodied in the body and the "thinking" in the human brain. In his view, the formation of cognition is not achieved overnight, but is divided into various stages of development. Each stage is interlinked, and the level of cognition is continuously strengthened. Therefore, different cultivation methods need to be adopted for each stage of cognitive development.
The laws of cultural cognition
Culture refers to "the sum total of material and spiritual wealth created by human beings in the whole process of social and historical development". The law of cultural cognition is the law that reflects the psychological process by which people perceive culture, internalize culture, and pursue culture, that is, the law of how people "process" culture. As with people's cognition of other things, people's cognition of culture cannot be treated one-sidedly or generalized as a single whole; it is divided into multiple stages of development, and each stage is progressive. From the viewpoint of psychology, people's cognition of a culture can be divided into four stages: cultural resistance, cultural compliance, cultural respect, and cultural belief. Among them, cultural resistance is the lowest stage, which can also be called a "dangerous stage", and cultural belief is the highest stage, which can also be called an "ideal stage" and is the highest goal of cultivating cultural self-confidence.
Grasping the Law of Cultural Cognition Makes the Cultivation of Cultural Self-confidence More Accurate
According to the law of cultural cognition, people's cognition of culture is staged, which is embodied in "stage differences". In the past, in the process of cultivating college students' cultural self-confidence, colleges and universities often ignored the existence of stage differences in cognition. Whether they organize cultural lectures or carry out cultural practice activities, participation is basically arranged by class or by a specific group, such as party activists and league members. In this way, students at different cognitive stages are trained with the same content, and the effect of training is greatly reduced. By grasping the laws of cultural cognition, students can be assigned to different stages, and different cultivation methods can be adopted for each stage, so that cultivation behaviors can be more accurate and effective.
Grasping the law of cultural cognition makes the cultivation of cultural self-confidence more focused
The research foothold of the cultivation of cultural self-confidence should be the college students themselves. Since the 18th National Congress of the Communist Party of China, General Secretary Xi Jinping has mentioned cultural self-confidence many times, and major colleges and universities have extensively explored paths for cultivating college students' cultural self-confidence through curriculum design, activity creation, teacher training, campus culture construction and many other aspects. Such innovation helps college students to form a "deeper and broader" cultural self-confidence, but the implementation effect in some colleges and universities is unsatisfactory. The reason is that the main subject of cultivation, that is, the college students themselves, is ignored. Grasping the law of cultural cognition and the various stages of students' cultural cognition can encourage colleges and universities to take college students as the orientation in the process of cultivating cultural self-confidence. According to the actual differences among college students, a personalized cultivation method can be proposed to make the cultivation behavior more focused.
Grasping the law of cultural cognition well to make the cultivation of cultural self-confidence more systematic
Cultivating college students' cultural self-confidence is a systematic project. The law of cultural cognition divides students' awareness of culture into four stages: cultural resistance, cultural compliance, cultural respect, and cultural belief. Therefore, the cultivation of cultural self-confidence means taking relevant measures based on these four stages, so as to move college students from cultural "unconfidence" to cultural self-confidence. College students at different stages have different levels of understanding and acceptance of culture. By grasping the law of cultural cognition well, colleges and universities can draw up an overall plan for cultivating cultural self-confidence that integrates the stages from the low-end stage of cultural resistance to the high-end stage of cultural belief, making the cultivation behavior more systematic and scientific.
Cultural resistance stage: Be a "compass" for good college students' cultural self-confidence
Cultural resistance refers to a negative psychological state such as the subject's disapproval of, sneering at, or even hostility towards Chinese culture. For example, some college students one-sidedly believe that Chinese culture is "outdated", "feudal" and "conservative", and that Western culture is "advanced", "novel" and "exciting" [3]. As a result, in terms of cultural choice, their attitude is: "I am obsessed with Westernization without deeply understanding the background of Western culture, and I hate tradition without understanding what Chinese tradition is." For students at this stage, ideological and political workers should point out their unhealthy cultural mentality in a clear-cut way, prevent the continued spread of such ideas, and point out the direction for college students to form a correct cultural outlook.
Cultural compliance stage: Be a "catalyst" for good college students' cultural self-confidence
Cultural compliance refers to the subject being "unable", "unwilling" or "not daring" to oppose a culture because of the constraints of external objective conditions. For example, China promulgated the "Law of the People's Republic of China on the Protection of Heroes and Martyrs" in 2018, which pursues legal responsibility for acts of insulting, slandering or otherwise infringing upon heroes and martyrs. Based on this law, some acts under the guise of "self-reflection" and "self-criticism" that actually distort and deny revolutionary culture and promote historical nihilism have been effectively curbed, forming compliance with revolutionary culture and advanced socialist culture. Although the subject avoids wrong behavior through compliance, due to a lack of knowledge this behavior is blind, passive and unstable, and cultural choices may be voluntary or forced. Therefore, at this stage it is necessary to further increase education and publicity, carefully select the bright spots in traditional culture, and continue to convey the "good voice of China". Through specific cultural cases, college students can truly perceive the unique charm of Chinese culture, which catalyzes their heartfelt identification with Chinese culture.
Cultural Respect Stage: Be the "Stability Instrument" for good college students' cultural self-confidence
Cultural respect refers to the phenomenon in which the subject tends to be consistent with the culture in terms of cognition, emotion and behavior, resulting in voluntary compliance with the culture. The mark distinguishing cultural respect from cultural compliance lies in whether the subject's recognition of the culture is "voluntary". College students at this stage can perceive the charm of Chinese culture and identify with it. However, due to a lack of understanding of Chinese culture and a lack of emotional support for this culture in their hearts, they cannot form a resonance with Chinese culture, and they will not reshape their original values because of it. So the respect that has formed may change over time or under some external disturbance. At this stage, the task of ideological and political workers is to be a stabilizing instrument for this "sense of respect", and to spend time explaining the connotation of Chinese culture thoroughly and clearly, so that students' cognition of the value orientation of Chinese culture becomes stable at this stage, this knowledge enters their minds and hearts, and they are guided to gradually form a belief in Chinese culture.
Cultural Belief Stage: Be a "thermo bottle" for good college students' cultural self-confidence
Cultural belief means that the subject has a deep understanding of Chinese culture and its value principles, has a positive emotional experience of it, makes it a belief of their own, integrates it with their original values, and establishes a normative motivation system and monitoring system. Cultural belief is the goal of cultivating cultural self-confidence. Once college students form a cultural belief, Chinese culture will permeate their world outlook, outlook on life and values, and guide their behavior at all times; that is, college students come to understand the value of Chinese culture, and the goal of "internalizing it in the heart and externalizing it in action" is achieved. At this stage, what ideological and political workers need to do is to "keep warm" the cultural emotions formed by college students through a series of means such as practical activities, so that college students can remain in the stage of cultural self-confidence for a long time.
Conclusion
In his important speech at the celebration of the 100th anniversary of the founding of the Communist Party, General Secretary Xi Jinping pointed out that "cultural self-confidence is a more basic, broader and deeper self-confidence". College students shoulder the important mission of national rejuvenation and national prosperity, and need to cultivate cultural self-confidence in order to resist the invasion of bad thoughts. Cultural self-confidence is in essence a psychological concept, and its formation has to go through four stages: cultural resistance, cultural compliance, cultural respect, and cultural belief. Therefore, colleges and universities should grasp the laws of college students' cultural cognition and make the cultivation of cultural self-confidence more precise and effective.
The music of Iannis Xenakis’ estranged Kassandra
Iannis Xenakis (1922–2001) was a radically innovative composer. Violently persecuted for his leftist activism in the Greek Civil War that followed the Second World War, he fled Greece to live the rest of his life in Paris. One of the most explicit expressions of his resultant feelings of trauma, guilt, and displacement can be found in the vocal piece he called Kassandra (1987). This demanding work requires its two performers to enact and explore the alienation experienced by the prophet Cassandra in Aeschylus’ Agamemnon. Xenakis’ Kassandra is defined by a symbiotic relationship between percussionist and vocalist, by a simultaneously controlled and improvisatory score based on extracts from Aeschylus’ Agamemnon, and by a striking use of the baritone singer’s falsetto as well as his chest voice. These features of the piece mark out Cassandra as ‘other’ in her origins, her sex, and her language, while also hinting that her characteristics are not entirely foreign, but are in fact understood or even shared by the very communities that initially seemed to exclude her.
displacement in time as much as space. He is quoted as having said, 'I felt I was born too late -I had missed two millennia . . . But of course there was music and there were the natural sciences. They were the link between ancient times and the present, because both had been an organic part of ancient thinking'. 1 Titles such as Evryali, Medea, Psappha, and Palimpsest abound in Xenakis' oeuvre, though not all perform an obvious or easily identified act of reception. 2 It is through some of the composer's more explicit engagements with ancient Greek texts, placed in the context of his own biography and the broader social and historical conditions that followed the Second World War, that it becomes possible to make sense of the tension between Xenakis' forward-looking iconoclasm and his compulsive evocation of the past. This article focuses on how these contradictory forces operate in Xenakis' short vocal work Kassandra (1987), which is based closely on the Cassandra scene in Aeschylus' Agamemnon.
Xenakis' feelings about ancient Greece -a time and place with which he strongly identified but which he could never visit -were doubtless informed by events in his life that left him dislocated from his own home. Xenakis began life as the outsider implied by his name, born in 1922 to a Greek family that had long been settled in Romania. 3 When he was 10 he was sent to boarding school on the Aegean island of Spetzes, where he claimed to have been mocked for his odd Greek accent. After he finished high school he moved to Athens to study civil engineering at the National Technical University of Athens, but at that point the Second World War broke out and Xenakis joined the communist underground resistance. With universities scarcely operative, he scrambled through some semblance of study over the next few years, while committing most of his energies to resisting both the occupying forces and the Greek royalist militias. But it was the disastrous events during and after the end of the war that triggered Xenakis' ostracism from Greece. In 1944 Xenakis was critically injured by a British mortar explosion as he was resisting the British-backed efforts to restore the Greek monarchy -part of the early skirmishing that would ultimately develop into the Greek Civil War. He lost an eye, and only narrowly survived after making it through twelve hours without emergency treatment, and then undergoing multiple operations to reconstruct his face. Xenakis struggled on in Athens for just long enough to complete his engineering degree, but as the Civil War dragged on and communists were increasingly persecuted he was ultimately forced to flee the country. The Greek authorities condemned him to death in absentia, brutally affirming his status as an outcast for the next thirty years. employed in the atelier of Le Corbusier, where he found common ground with the controversial Brutalist architect. In their different ways, both Le Corbusier and Xenakis were trying to think across scientific, artistic, and aesthetic fields. In the 1950s Le Corbusier's firm was designing huge apartment blocks called 'Unités d'habitation', miniature cities within a single building, designed as a new way to house those displaced after the ravages of the Second World War. For the Unité commissioned to be built in Nantes (1955), Xenakis designed the nursery that sits on the top of the building. He peppered its walls with rectangular windows placed at irregular intervals to mimic the appearance of neumes, an early system of musical notation. For the Dominican priory Le Couvent de Sainte Marie de La Tourette , Xenakis designed a chapel in the shape of a grand piano, and masterminded the construction of long stretches of glass panes along the external walls. The layout of these panes was determined by the same mathematical ratios that Xenakis was simultaneously using to structure his first major musical composition, Metastaseis (1953-54) -a work that Xenakis described as recalling the sounds of Athenian anti-Nazi protests and gunfire. 4 As Metastaseis shows, in Paris Xenakis had also begun to explore and develop seriously his interests in music composition. For this he had found support from the titan of French contemporary music composition, Olivier Messiaen. Messiaen not only recognized Xenakis' unique set of skills, but also understood that he was trying to process and express the trauma of war and exile in ways that had little to do with conventional musical training at the time. 
Though a pacifist, Messiaen had also experienced the horrors of the Second World War from his time in the prisoner-of-war camp Stalag VIII-A, where he most famously composed his Quatuor pour la fin du temps -'Quartet for the End of Time'.
Although Xenakis gradually built up enough of a following in the musical world that he could stop working in architecture and devote himself to full-time music composition, he never abandoned his interest in synthesizing space and time, sculpture and sound. This is most apparent in his Polytopes, the huge-scale pieces of performance art that he designed to animate specific spaces around the globe. Even their title transcends disciplinary borders: a neologism, rooted in Greek words evoking multiplicity (poly) and place (topos), it is a mathematical term that refers to a geometric object operating in multiple dimensions. Xenakis was not working in a vacuum when he created his first Polytope (for the French Pavilion in Montreal, 1967). In any case, his fascination with the worlds of antiquity increasingly drew him to the creative reimagining of spaces whose links with the past were visible. This reached its fullest expression in his later Polytopes, in which he applied sons et lumières to electrify ruins in Persepolis (1971) and Mycenae (1978), resuscitating the distant past by applying futuristic media -and contemporary communities of performers and audiences -to ancient material structures and landscapes. 6

This interest in revisiting and recreating the ancient world clearly had a personal component. Intersecting with Xenakis' interests in mathematics, engineering, and music was his identification with the world of ancient Greece, and his understanding of how this identification counterbalanced a more general sense of alienation from the world around him. Consider, for example, how he presents his interaction with music from countries and cultures that are new to him, and how this transforms his appreciation of Greek history: '… in the 1950s, I discovered music beyond the European tradition: from India, Laos, Vietnam, Java, China and Japan. Suddenly I found myself in a world that felt my own. At the same time, Greece appeared to me in a new light, like the crossroads of remnants from a very ancient musical past'. 7 Xenakis appears to have discovered what was meaningful to him about the musical past of Greece by acknowledging his temporal and spatial distance from that world -that is, by embracing both anachronism and a global sonic perspective. At another time he wrote in jest to his wife: 'I am not a Roman decadent but a classical Greek living in the twentieth century'. 8 It is in his vocal works that Xenakis explores this dislocation most explicitly. Creating music to accompany Greek texts, in particular, forced Xenakis to confront his own unique experience of Hellenic identity, an identity that was dependent on his lived -and subsequently lost -experience of Greek space, time, and language. 9 Xenakis clearly intended to signal through his compositions a connection with the Greek past, with its places and its people, but he was also conscious of how much of an imaginative leap this communication process required for him, as 'a classical Greek living in the twentieth century' -and a classical Greek living in France, to boot. Echoing his memory of being mocked for sounding strange to his fellow Greek pupils at boarding school, Xenakis described another disconcerting realisation of his own alterity when he finally went back to Greece in the mid-1970s, after the military junta fell and his death sentence was rescinded. After nearly three decades of absence, he found that the Greece to which he had returned was unrecognizable to him, and that his own native Greek speech had become antiquated and occasionally even incomprehensible. 10

Perhaps it should come as no surprise to find that as Xenakis' ties to the modern country of Greece frayed, his attachment to the world of ancient Greece only grew stronger. We can track Xenakis' growing interest in ancient Greece from the 1960s to the 1990s through his work on Aeschylus' Oresteia -a trilogy whose ancient complexities proved fascinating to many avant-garde composers. Before the twentieth century Aeschylus was not popular with composers of music. Static staging, difficult Greek verse abounding in flights of open-ended metaphorical imagery, and overt political messaging tended to deter all but the most committed of classically educated composers. 11 Yet the wars and political upheavals of the twentieth century triggered a renewed interest in Aeschylus' plays, as Ferrario explains: 'In a post-monarchical world that has experienced warfare on an unprecedented scale, dramas that expand beyond the human emotions to question absolutism, show the brutality of conflict, and perhaps even advocate for a just society have found an increasingly hospitable home'. 12 Ferrario also identifies the concomitant aesthetic shifts that made the challenge of Aeschylus more tempting to composers. As sound worlds became more radical, partly in response to the destabilizing trauma of the twentieth century, so the violence, surrealism, and sheer foreignness of Aeschylus' language grew in appeal. Ferrario describes Orff's use of Aeschylus' Greek in his Prometheus (1968), linking it back to Stravinsky's use of a Latin libretto in setting Sophocles' Oedipus Rex (1927). The strangeness of the ancient languages in these works, along with the screaming, the lamenting, and the 'virtuosic, unconventional vocal delivery', creates an alienating effect that reflects the disorientating uncertainties of the century. 13 This wider appreciation of Aeschylean possibilities may explain why Xenakis reworked his settings of the Oresteia so many times. 14 In 1966 Xenakis wrote the music for an English-language production of the Oresteia at a festival in Ypsilanti, Michigan. 15 After the event he cut the piece down to become a concert suite, in the process reverting to Aeschylus' original ancient Greek text for a libretto intended to be sung by a chorus of either children or adults. In 1987 Xenakis revisited the composition while on a visit to Sicily, in which he stayed not far from Aeschylus' burial site at Gela. There he produced a new episode for his Oresteia, called Kassandra; it was a piece based on the Cassandra scene from Aeschylus' Agamemnon, the 250 lines in which the Trojan prophet attempts to communicate with the play's chorus before she follows Agamemnon into the palace offstage to face her murder (Ag. 1072-1330). A few years later Xenakis returned to his Oresteia one final time to add La Déesse Athéna (1992), a piece in which he explored another passage of powerful female speech, setting to music Athena's lines from the Eumenides in which she establishes the court of justice in Athens (Eum. 681-708).

6 On the Mycenae Polytope see Kotzamani (2014) with further bibliography. Chardas (2016: 91) quotes Xenakis describing the work as 'an artistic revival' in the programme notes.
7 … dans les années 50, j'ai découvert les musiques extraeuropéennes, de l'Inde, du Laos, du Vietnam, de Java, de Chine et du Japon. Je me suis trouvé tout à coup dans un monde qui était le mien. En même temps, la Grèce m'apparut sous un autre jour, comme le carrefour des survivances d'un passé musical très ancient. Xenakis, interviewed in Montassier (1980: 221).
Translations from the French are mine, except where noted. Vagopoulou (2006: 4) notes the influence of Japanese Noh theatre on Kassandra.
10 Mâche (1993: 197).
11 Ewans (2018: 205) notes how difficult Greek choruses are to integrate into opera, and how Aeschylean language adds a further layer of complication. Ewans (2006) explores Aeschylus' profound influence on Wagner, but even Wagner was inspired only to adapt Aeschylean dramatic forms, not to set Aeschylus' own plays to music.
12 Ferrario (2016: 211).
13 Ibid., pp. 208-9.
14 Vagopoulou (2006: 4) notes how unusual it was for Xenakis to return repeatedly to the same work.
15 Dir. Alexis Solomos, using Richmond Lattimore's translation of the Oresteia. For details see Foley (2005: 209-10).
Aeschylus' Agamemnon is the first extant ancient Greek text to develop in detail the unique features of the prophet Cassandra and the situation in which she finds herself when she arrives in Argos in the aftermath of the Trojan War. Years earlier, back in Troy, Apollo had granted Cassandra true visions of the future, but after she refused his sexual advances the god stripped her of the ability to communicate that knowledge. Apollo ensured that whenever Cassandra was possessed by prophetic frenzy, her language effectively became incomprehensible, foreign-sounding, to her interlocutors. 16 In the Agamemnon Cassandra has been forcibly transported from Troy to Greece as part of Agamemnon's spoils of war. This means that, as a Trojan, she is quite literally a foreigner in a Greek-speaking world. She is also now an enslaved woman in a ruling household, and one who will become collateral damage in the generational violence that has engulfed the house of Atreus. Aeschylus emphasizes how Cassandra's life-story has been one of repeated victimisation and marginalisation in every respect: sexual, social, cultural, and linguistic. Her rambling and confusing narratives, wandering backwards and forwards through time and space, are presented as the natural product of a prophet who is also always tragically displaced in time and space. Yet Aeschylus gives Cassandra the opportunity to use that same freedom from narrative convention as a form of resistance to the oppression she faces in Greece. 17 Her voice, both heightened and hobbled in its reach, is the weapon with which she asserts her authority as someone who can tell - if not sell - the truth.
The Cassandra scene in the Agamemnon, then, reflects the challenges faced by social outcasts when they try to communicate their insights, no matter how truthful or profound those insights might be, and the range of communicative strategies that they might adopt to defy these challenges. 18 As such, the scene has appealed to many ancient and modern artists interested in voices of alterity and subversion. 19 Virginia Woolf, for example, responded powerfully to the figure of Aeschylus' Cassandra, as Prins has demonstrated. 20 For Woolf, the trouble that Cassandra has with making herself understood in the Agamemnon can, and should, be read as a radical portrayal of the difficulties involved in translating, particularly translating an ancient language whose riches had been co-opted by a patriarchal education system that largely excluded Woolf. From this, the scene ultimately comes to stand for the difficulties involved in all acts of communication, linguistic or otherwise. Woolf writes of Aeschylus' words: 'we know instantly and instinctively what they mean, but could not decant that meaning afresh into any other words'. 21 She finds this phenomenon at its most evident in Cassandra's first incoherent cry in the Agamemnon, in which the prophet produces a string of untranslatable noises that make it unclear whether she is speaking Greek or not, or indeed whether she is even speaking a human language at all. As Woolf wrote in her essay 'On Not Knowing Greek': 'No splendour or richness of metaphor could have saved the Agamemnon if either images or allusions of the subtlest or most decorative had got between us and the naked cry ὀτοτοτοῖ πόποι δᾶ / ὤπολλον ὤπολλον'. 22 Cassandra's 'naked cry' is sound, but not sense - or rather, its sense is its unmediated, untranslated sound.

With the twentieth century's growing appreciation of Aeschylus, creative artists working in a variety of media joined Woolf in responding to Cassandra's extraordinary speech in Aeschylus' Agamemnon with a similar combination of bafflement, fascination, and identification. 23 In each rewriting of Cassandra's role the representation of her linguistic estrangement maps onto a more contemporary experience of political, social, or cultural isolation. Cassandra is exotic, foreign, a Trojan in Greece; she is divinely inspired and cursed in a world of limited mortal understanding; she is a prisoner of war being brought into the house of her captors. She is therefore a figure for foreign exiles, for resistance fighters, for creative artists, for minorities, for feminists, for leftists, for the oppressed and marginalised and misunderstood. It is not hard to see why Xenakis might have felt impelled to return to his Oresteia suite to fill out the character of Cassandra. Xenakis found in Cassandra a figure who allowed him to explore in sound the frustrations and unexpected insights granted to those who have been uprooted from their homes, especially those for whom language, or the ability to communicate politically or artistically, has become fraught.

20 Prins (2005); see also Pillinger (2017).
21 Woolf (1984: 30) [1925].
22 Ibid., p. 31.
23 Goudot (1999).
The first striking feature of Xenakis' Kassandra is its instrumentation. This is all the more noticeable when the piece is performed as part of the Oresteia suite. The Agamemnon movement in the suite depends on a mixture of orchestral instruments and choral voices, all of which mark the play's concern with public display and the ostensible welcoming of the king Agamemnon as he returns to his people. Just before Kassandra is to be performed, there is a formal trumpet fanfare (marked a blaringly loud fortississimo), reinforcing an atmosphere of official, martial, civic activity. By contrast, Kassandra begins with an austere, if energetically syncopated, beating of skin drums, introducing the spare and intimate instrumentation that the work will employ throughout. 24 Xenakis scored the piece for only two performers: a percussionist playing woodblocks and those skin drums, and a baritone singer who would also play the Javan psaltery. He wrote it for two particular performers, the singer Spyros Sakkas and the percussionist Sylvio Gualda, so the sense of unusual intimacy is further reinforced by the ease of the three artists' collaboration.
The versatility of Sakkas' vocal talent inspired Xenakis to make the most dramatic innovation in this piece. Whereas in the rest of the Oresteia suite both choral and individual roles are performed by a chorus, or at least by small groups of singers, in Kassandra a single voice performs the part of both the chorus and the isolated prophet. Cassandra's speech is represented by the baritone singing in falsetto, while the chorus of old Argive men - or at least the chorus-leader - is represented by the same singer using his chest voice. 25 A similar move in La Déesse Athéna, also written with Sakkas in mind, sees the baritone hop in and out of falsetto to represent the male and female aspects of the goddess Athena. 26 Sakkas' vocal abilities allowed Xenakis to use one voice to dramatize a female-gendered insight (prophetic or divine) in attempted conversation with a male-gendered response. 27 In an interview with Vagopoulou, Sakkas identified his collaboration with Xenakis as part of a compositional process analogous to Cassandra's inspiration: 'I believe that [Xenakis'] kind of inspiration goes hand in hand with the performer's dexterity … . Kassandra comes as a delirium; no matter if there is a text behind it, it is in fact a delirium.' 28

Xenakis' inspiration for Kassandra certainly manifests itself in a carefully calibrated sharing of the interpretative process with his performers. He directs precisely the rhythms of the percussion, as well as the untempered tuning of the psaltery, and he marks the relative pitches of both drums and woodblocks. 29 Xenakis denotes the voice of Cassandra with a treble clef, and that of the chorus with a bass clef. The melodic line of the voice within those distinctions, however, is marked purely in terms of graphics that indicate rising or falling pitch, with further directions as to vocal attack and timbre. In his foreword to the score Xenakis describes the notation as being 'neumatic' in fashion - like the windows he had designed in the nursery of the Unité d'habitation at Nantes - but in fact the wandering line (literally a line, rather than dots, on the musical stave) is more impressionistic still than even the relative pitches marked by medieval neumes. 30 The singer is instructed to match his semi-improvisational melismatics to the tetrachords he has selected to play on the psaltery, which in turn he is encouraged to choose on the basis of his own interpretation of the character of each passage. 31 The electronic amplification of the singer's voice, suggested in parentheses in the score, adds one further layer of technological anachronism, even as it reinforces the power and intimacy of the words voiced. 32

All of these moves combined serve to remove the piece that much further from the conventions of the European classical music tradition. Xenakis and his performers construct a radical sound that avoids as far as possible the tonal patterns of any other piece of music - including those of any other performance of Kassandra, since every iteration of the piece will depend on the singer's interpretation of the notation. The connections that the work builds are, instead, those that arise in the moment of performance. As Sakkas observes: 'The work is expressed according to the way it will be played by the performers and by the rapport that will develop between the musicians and the public, both of whom are participants in a ritual'. 33 And there is one more sonic connection that Xenakis embraces in the piece. The composer insists that the singer must attempt to replicate the pronunciation of fifth-century Attic Greek. His notion of this ancient Greek pronunciation is fairly idiosyncratic, as can be seen in both the Latin transliteration he provides in the Kassandra score and the more detailed instructions for pronunciation found in the foreword to the Oresteia score. 34 Still, the sound of Aeschylus' language, articulated as it is by the solo singer's male-female voice, is identified as one of the few acoustic constants in Kassandra, an anchor within each performance of a piece that otherwise swirls in anarchically jumbled sounds of past, present, and future. 35

This combination - sometimes conflict - of rigidly controlled text and limitedly controlled music enables a unique fidelity to Aeschylus' Agamemnon; it allows each performance of the work to portray several of the most important aspects of Cassandra's communicative difficulties in the ancient play. This begins from the very opening sequence of Kassandra, in which the audience is introduced in swift succession to the various parts that construct the work. Firstly the percussionist beats rhythmically on the drums, accelerating into a tremolo that fades away to nothing. Next the singer enters playing the role of Cassandra and plucking the psaltery. Then, after the first phrase, the singer abruptly switches into the voice of the chorus to deliver a response to Cassandra. (Sound clip 1.) This back and forth, punctuated by episodes of solo woodblock, will continue for much of the work.

24 The score instructs the performer to play with hands (mains), but the instruction is often ignored by performers, including by Gualda in the first recording of the piece, which is the recording referred to throughout this article.
25 Le baryton est successivement Cassandre dans son registre aigu et coryphée des vieillards d'Argos dans son registre grave - 'the baritone is alternately Cassandra in his upper register and chorus-leader of the old men of Argos in his lower register', as Xenakis writes in the foreword to the score of Kassandra (Editions Salabert, 1987). In the score for Xenakis' Oresteia (Boosey and Hawkes, 1996), which is the final revision of the work but includes neither Kassandra nor La Déesse Athéna (as they are exclusively published by Editions Salabert), the front matter oddly suggests that the baritone's role in Kassandra might be sung by a baritone plus one or two children instead. The details are not explained anywhere else in the Oresteia score, and the idea is not found anywhere in the score to Kassandra. It is hard to know what Xenakis had in mind. Would the children play the role of the chorus leader or of Cassandra? If two children were involved, how would their voices manage the highly improvisational melodic shaping - would they be expected to sing in unison or to present another kind of vocal fragmentation?
26 Wolff (2010: 298); Harley (2004: 45). Wolff (2010: 299) notes that the instrumentation of the rest of the Oresteia tends toward the extremes of pitch also found in Kassandra, stretching as it does from piccolo to tuba with little in the mid-range.
27 On the female gendering of prophecy in the ancient world, particularly in relation to prophets granted their visions by the god Apollo, see Fowler (2002), Brault (2009), Miller (2009), and Pillinger (2019: 12-16).
28 Vagopoulou (2007: 213).
29 Wolff (2010: 289) notes how often, since Milhaud's Les Choéphores (1915), twentieth-century composers have used abrasive, stark percussion sounds to evoke ancient Greek tragic performance, despite the lack of evidence for percussion in the original productions. Wolff attributes this to composers' desire to evoke the imagined acoustics of archaic ritual and to 'other' the sound-world of their compositions. Brown (2004: 286) discusses related attempts to engage with the musical traditions of Africa or East Asia - as Xenakis also does - to defamiliarize, at least for modern Western audiences, the performance world of ancient Greece.
30 La notation est du type neumatique afin de tenter une approche nouvelle de la voix qui sous-tend le texte d'Aeschyle - 'The notation is neumatic in style, to try out a new approach to the voice that supports the text of Aeschylus'. Xenakis, foreword to the score of Kassandra (1987).
31 Le baryton accorde les mouvements de sa voix sur l'un des tetracordes qu'il choisit selon les séquences du texte et leur caractère - 'The baritone pitches the movements of his voice to each one of the tetrachords that he selects according to the development and character of the text'. Xenakis, foreword to the score of Kassandra (1987). See Harley (2004: 188-9). It should be noted that Sakkas (2010) strongly resists the suggestion that a good performance of Xenakis' vocal works is ever truly improvisational.
32 Connor (2000: 38) explores the intimacy of electronic amplification: 'The microphone makes audible and expressive a whole range of organic vocal sounds which are edited out in ordinary listening; the liquidity of the saliva, the hissings and tiny shudders of the breath, the clicking of the tongue and teeth, and popping of the lips … '.
33 Sakkas (2010: 312).
34 Xenakis' instructions combine elements of Erasmian and modern Greek pronunciations without any clear rationale. One of the anonymous readers for CRJ helpfully pointed out that Xenakis' instructions would make a Greek singer like Sakkas less comprehensible to a modern Greek audience than if he were left to sing the words with a regular modern Greek pronunciation.
35 Chardas (2016: 110) explores Greek twentieth-century composers' broader habit of using the language of ancient Greek to signal the 'unending significance' of their subject matter.
The first words sung are a version of Cassandra's first words in the Agamemnon, the very same stutterings that Woolf found so powerful: ὀτοτοτοῖ πόποι δᾶ / ὤπολλον ὤπολλον - 'otototoi popoi da; / Ahpollo Ahpollo' (Ag. 1072-3). 36 In Aeschylus' Agamemnon these first sounds uttered by the prophet are anticipated by an exchange between the chorus and Clytemnestra in which they speculate on the kind of speech the as-yet silent Cassandra might deliver, and wonder if the Trojan princess might need a translator. Among those exchanges Clytemnestra associates Cassandra's barbarian speech with that of a swallow (Ag. 1050-2):

ἀλλ' εἴπερ ἐστὶ μὴ χελιδόνος δίκην
ἀγνῶτα φωνὴν βάρβαρον κεκτημένη,
ἔσω φρενῶν λέγουσα πείθω νιν λόγωι.
But unless she is, like a swallow, possessed of an unintelligible foreign voice, by speaking within her mind I am persuading her with my argument.
From the moment the voice enters in Xenakis' Kassandra we are reminded of Clytemnestra's speculative characterization of Cassandra. Xenakis may not specify the exact pitch of the falsetto voice, but he demands an absolutely precise attack throughout the work, which results in a birdlike articulation reminiscent of Messiaen's Catalogue d'oiseaux - 'Catalogue of Birds'. The breathless staccato of 'otototoi' is carefully marked by separate curved lines on the score. The voice must then transition into a glissando, marked by a curving line, then a wide vibrato, marked by a wiggly line, and finally into an unusual fluting sound, marked by a broken line. Xenakis brings the Greek accentuation into play, too: simple stress marks ('/') above the stave replace the polytonic accents of Aeschylus' text, but the fluctuations in the pitch line map approximately onto the rising and falling indicated by the ancient accents that Xenakis has omitted. The result is a simultaneously stressed (modern, monotonic) and pitched (ancient, polytonic) version of Aeschylus' Greek (Fig. 1).
This falsetto line is immediately followed by the chorus' response, which the singer delivers in chest voice: τί ταῦτ' ἀνωτότυξας ἀμφὶ Λοξίου; - 'Why do you cry out "otototoi" to Loxias [Apollo]?' (Ag. 1074). This jump from falsetto to chest voice virtually without a breath introduces the most astonishing feature of the piece: the combination, and virtual overlap, of multiple characters in one single singer's overstretched voice. Even Sakkas, whose remarkable vocal agility helped to inspire and shape the piece in the first place, is pushed to his limits, so that his voice cannot help but express the strain of representing both Cassandra and the chorus that is trying to understand her. Sakkas describes Xenakis as leading the performer 'into highly dangerous conditions for one's spiritual and bodily integrity'. 37 This strain, which begins here in the first lines but will continue and indeed increase over the course of the work, operates in two competing directions. On the one hand, the dramatic fragmentation of the singer's voice into multiple characters suggests that the piece is exploring a breakdown of communication so powerful that it splits the individual performer at the centre of the piece into a broken embodiment of that communicative failure. This fragmentation ripples out beyond the singer, as the relentlessly precise, rhythmic drumming is jarringly juxtaposed with the glissandoing, wandering movement of the voice(s) and the gentle plunking of the psaltery. Even the two performers appear initially to be pitted against each other in their articulation of quite different kinds of musical language.

36 Xenakis writes Ἀπόλλω rather than ὤπολλον. All translations from the ancient Greek are mine.
On the other hand, there are similarities in the ways Xenakis constructs the percussion and the vocal lines. Both employ sudden shifts of dynamics and attack, and tremolos of different speeds, and the percussionist switches between drums and woodblocks in the same way that the singer switches between chest voice and falsetto. As Xenakis describes it, 'The percussion consists of skin drums and woodblocks punctuating or commenting on the text'. 38 The vocalist and the percussionist may be using different musical languages, but the languages complement each other and appear to be mutually comprehensible. Even their physical efforts match: Sakkas notes that by the end of a performance of the piece he and Gualda were 'both gasping for breath'. 39 He observes that the audience, too, is led 'breathless to the work's end'. 40 This symbiosis of the performers, along with the sympathetic engagement of the listeners, encourages a more positive reading of the fragmentation experienced by the singer playing both Cassandra and the chorus. The singer is not an individual who is dissociating and disintegrating over the course of the performance; he is, on the contrary, the incarnation of shared experience. The singer is two separate but allied voices, and he represents within one single human body, one single vocal tract, the struggle of both the individual Cassandra and the community of the chorus to breach their mutual foreignness. It is fitting that their first exchange is triangulated through an appeal to Apollo, the god who has created such communicative mayhem for Cassandra, and who is here identified as the god of linguistic confusion by his cult title 'Loxias' - 'the riddler'.
As Kassandra develops, it builds upon this fraught but collaborative communication that is taking place between the musicians and within the body of the singer. Soon the work widens its scope to welcome the integration of entire communities that are foreign to each other. In Xenakis' work, as in Aeschylus' play, Cassandra's birdlike sounds are closely linked to her identity as a barbarian, a non-Greek. 41 At one point in the Agamemnon the chorus expresses its astonishment at Cassandra's clear knowledge of past events that took place in Argos, even though the prophet was living in Troy at that time (Ag. 1198-1201):

καὶ πῶς ἂν ὅρκου πῆγμα γενναίως παγὲν
παιώνιον γένοιτο; θαυμάζω δέ σου,
πόντου πέραν τραφεῖσαν ἀλλόθρουν πόλιν
κυρεῖν λέγουσαν ὥσπερ εἰ παρεστάτεις.
And yet how could the binding security of an oath honestly secured be helpful? Then again I am indeed amazed at you: how, brought up beyond the sea and talking about a foreign-speaking city, you are as accurate as if you had been present here.

The moments when the chorus can see the truth in Cassandra's speech - that is, when she describes events in the past rather than the future - are striking to them not because they come from a prophet, but because they come from a foreigner. How does Cassandra know about things that happened in the house of Atreus when she was living on the other side of the Aegean? And how can she express the narrative of the past in such clear Greek when her incomprehensible prophecies make her sound so foreign? Xenakis decided to highlight this moment in which the chorus is surprised and impressed by Cassandra's knowledge by making some drastic cuts to the text that precedes it. Xenakis instructs his singer to deliver Cassandra's cry ἰώ (Ag. 1136) in the form of a drawn-out groan marked fendu - 'cracked' - in the score, then has the cry fade into an extended passage on the psaltery. The composer cuts the following sixty lines of stumbling communication between Cassandra and the chorus in the Agamemnon, and instead jumps directly to the chorus' lines quoted above (with a couple of small changes). The chorus' respectful appreciation of Cassandra's insight now responds not to the prophet's words, but to her broken howl and to the twanging strings that follow it. Cassandra is being validated and embraced by the chorus in all her alien incomprehensibility. (Sound clip 2.)

The prominence of the psaltery at this point is significant because, as Xenakis wrote in the foreword to Kassandra, he believed that its sound could signal both a spatial and a temporal shift: 'The Psaltery, a copy of a 20-stringed instrument from Java belonging to Maurice Fleuret, is a remarkable descendant of the ancient lyre. It is strung in 6 perfect fourths with two intermediary pitches, creating a global scale that is non-tempered and non-diatonic'. 42 Lifted (appropriated) from its Southeast Asian origins, the psaltery in its geographical and cultural displacement allowed Xenakis to represent his translation of Aeschylus' original text across space and time. The instrument is avowedly inauthentic, but it is an attempt to be truthful to the sounds of ancient Greece. In taking over from Cassandra's cracked and inarticulate cry, the psaltery offers its own version of Cassandra's 'otototoi'; beyond verbal communication and outside notions of past and future, its unfamiliar sounds (in a European context) nonetheless convey an integrity that the chorus recognizes even though there are no specific words to which they can respond. As they say to Cassandra, but also to the psaltery: 'you speak and are as accurate as if you had been present here'.
Having expanded a single voice to encompass multiple bodies, and having used the Greek language and the Javan lyre to celebrate the value of foreign speech and sound, Xenakis goes on to develop one further feature of Cassandra's position: her role as a woman communicating, for the most part, with men. Cassandra's vulnerability as a mortal woman is crucial in the ancient Greek myth: it is Apollo's attraction to her, and her denial of him, that brings upon her the curse of being both truthfully prophetic and doomed never to be understood. 43 This situation is revealed in its most explicit physical dimension when Cassandra finally accepts her fate at the very same time as she prophesies it (Ag. 1256-94). 44 At this point in the play, as Apollo mentally assaults the prophet with his inspiration, Cassandra hurls away the accoutrements of her prophetic skill (Ag. 1264-70) in a grotesque act of undressing that may be read either as her ultimate submission to Apollo or as a defiant (second) act of rejection. Hall notes the additional frisson that male actors cross-dressing could bring to such a scene from antiquity onwards, particularly in a twentieth century when directors were able to represent more diverse and fluid genders and sexualities on stage. 45 Xenakis' singer, presenting now as male and now as female, has already performed this diversity and fluidity through his vocal modulations. Here he gets to reinforce it by gesture too: this action of discarding Apollo's symbols is, significantly, one of the few stage directions that Xenakis marks in the score.

In Xenakis' piece the tension leading up to this moment builds through an increasingly frantic exchange between Cassandra and the chorus. It begins with them acknowledging the limited success of their current communications (Ag. 1239-45), and then accelerates through several lines of stichomythia (Ag. 1246-55) in which Cassandra cannot help but continue to articulate the events to come while the chorus attempts some quibbling interjections. Finally, Cassandra takes over with an extended prophecy. Xenakis sets almost all of this speech, concluding only at the moment where Cassandra foresees the arrival of her avenger, Orestes. Cassandra identifies him, significantly, as another exile: φυγὰς δ' ἀλήτης τῆσδε γῆς ἀπόξενος - 'a refugee, a wanderer, an exile from his land' (Ag. 1282). This is the point in Kassandra where the singer, who has perforce been leaping in and out of falsetto during the stichomythia, brings his portrayal of the inspired prophetess to a climax. His voice, now almost screaming the constant falsetto of Cassandra's inspiration, starts to break. The chest voice begins to make itself heard through the strain, and the pretence of the female persona starts to break down. Finally, in a howling cry that follows the vision of Orestes' arrival, the male voice emerges from concealment. The singer is instructed to produce a fluting glissando which moves in a quite extraordinary slow slide through all the vocal registers, from the treble to the bass clef, from Cassandra's falsetto into the chorus' baritone. (Sound clip 3.)

The glissando is a remarkable moment that reveals the bare bones of the performance, while also exposing and then conflating all the apparent dichotomies explored by the work as a whole. These seconds are totally given over to what Barthes describes as the 'grain' of the voice, 'the body in the voice as it sings'. 46 The singer is no longer a prophet and a chorus, a local and a foreigner, a man and a woman. The singer is a single body with a single voice. This reinforces the work's earlier hints that the communicator Cassandra is ultimately not so different from her receptive interlocutors, that the foreign female prophetic voice is not so distinguishable from the native male choral voice. The individual visionary cannot be extricated from the community that depends on such a figure; everyone is implicated in the outsider's struggle to be heard.

42 Le Psaltérion, copie d'un instrument à 20 cordes de Java appartenant à Maurice Fleuret, est un succédant remarquable de la lyre antique. Il est accordé en 6 quartes justes conjointes avec deux notes intermédiaires formant une échelle globale non-tempérée et non diatonique. Xenakis, foreword to the score of Kassandra (1987). Maurice Fleuret (1932-1990) was a composer, critic, and ethnomusicologist who championed contemporary and global music.
43 For a couple of different approaches to the possible events that may have led to Apollo's curse on Cassandra see Kovacs (1987) and Morgan (1994).
44 On the ineluctable performativity of Cassandra's prophetic voice, and the connection between her uttering and accepting the future, see Pillinger (2019: 16).
45 Hall (2004: 15).
Kassandra explores the prophet's distance from, and yet enduring connection with, her interlocutors, her home (and its language), and her sex. Underpinning this tension between disruption and continuity is Cassandra's defining superpower: her ability to wander away from, and then return to, her own moment in time. This the figure of Cassandra does, first and foremost, through her prophecies, for her visions give her the ability to transcend time even though her body is trapped in the mythic past. But she also transcends time through her reception in the works of artists such as Xenakis. If the music of Kassandra at first strikes its audience as alien, unfamiliar, and futuristic in ways that take Cassandra ever further away from her ancient Greek past, every detail can also be understood as an attempt to resituate Cassandra in the fifth-century Greece of Aeschylus - or rather, in the Bronze Age world filtered through Aeschylus' drama. In Kassandra Xenakis aspires to an authentic voicing of the past through the compositional techniques of the present, while he also consciously defers to the interpretative decisions of future performers, collaborators, and audience members. There is no attempt to disguise the temporal layers that lie beneath the voice of Cassandra in Kassandra, no attempt to trick the audience into feeling comfortably based in either the present or the past. In a way that suits the figure of Cassandra - and Xenakis - so well, the composer is demonstrating how one's language (musical, verbal, cultural) can feel at odds with one's spatial and temporal environment, and yet still engage meaningfully with others in that environment. As Wolff says of so much of twentieth-century music's engagement with the classical world: 'The distant past has partially become timeless, and offers an enticing combination of being both distant and other, and somehow also part of us'. 47

In a collection of responses to Xenakis' work published some years before Kassandra was written, the novelist Milan Kundera reveals that he developed a particular appreciation for Xenakis' music in the late 1960s, when the Soviet Union invaded his home country, Czechoslovakia. 48 Kundera describes the strange kind of comfort that the music granted him, and dubs Xenakis a prophète de l'insensibilité: something like a 'prophet of detachment'. Kundera explains that emotions are too easily mobilized in the cause of violence and repression. By contrast, Xenakis seemed to him to have successfully broken with the history of European music in order to pursue an objective, rather than subjective, description of the brutal turmoil of the post-war years in Eastern Europe, and this rationalism spoke far more powerfully and truthfully to Kundera than any artistic histrionics.

46 Barthes (1977: 188).
47 Wolff (2010: 304).
48 Kundera (1981).
In the foreword to Kassandra, six years later, Xenakis writes only one instruction for performance: 'The performance must avoid all emotional expression. For there is a serious danger of imposing modern clichés on Aeschylus' text.' 49 The prophète de l'insensibilité demands that his musicians should restrain their emotions in performing his work, in order to avoid anachronistic perversions of Aeschylus' Greek words, whose ancient pronunciation he was so determined to replicate. In this effort to remain faithful to the Cassandra of the past, Xenakis takes what Kundera sees as the only legitimate approach to their turbulent contemporary world. Xenakis' presentation of Cassandra's estranged voice neither translates nor explains Aeschylus' text for his listeners. Instead he allows the ancient prophet to speak for herself through, over, against, and alongside the music that delivers her words.
At the same time, any performance demonstrates how impossible it is to channel the voice of this Cassandra without making a huge physical effort that inevitably takes an emotional toll on the performers and the audience alike. Cassandra's voice remains that of the swallow, the barbarian woman, the political exile, and the anachronistic prophet, but her voice is 'also part of us'. It is part of a sympathetic wider community that cannot stay detached, but identifies with the 'naked cry' of Woolf's reading, the delirium of Sakkas' singing, and the visionary sound of Xenakis' creating.
her work on Xenakis' Polytopes, and Richard Rawles took the trouble to check a score for me. Above all I am grateful to my parents, Edward Pillinger and Suzanne Cheetham Pillinger, for sharing their musical world with me.
The relative importance of plasticity versus genetic differentiation in explaining between-population differences: a meta-analysis
Both plasticity and genetic differentiation can contribute to phenotypic differences between populations. Using data on non-fitness traits from reciprocal transplant studies, we show that approximately 60% of traits exhibit co-gradient variation, whereby the genetic and plasticity-induced differences between populations are of the same sign. In these cases, plasticity is about twice as important as genetic differentiation in explaining phenotypic divergence. In contrast to fitness traits, the amount of genotype-by-environment interaction is small. Of the 40% of traits that exhibit counter-gradient variation, the majority appear to be hyperplastic, whereby non-native individuals express phenotypes that exceed those of native individuals. In about 20% of cases plasticity causes non-native phenotypes to diverge from the native phenotype to a greater extent than if plasticity were absent, consistent with maladaptive plasticity. The degree to which genetic differentiation versus plasticity can explain phenotypic divergence varies considerably between species, but our proxies for motility and migration explain little of this variation.
INTRODUCTION
When environmental conditions vary in space, individuals of the same species often differ in phenotype in a way that increases their fitness in the local environment (Hereford 2009). These phenotypic differences arise through two different mechanisms: phenotypic plasticity, in which phenotypic expression is a direct response to the environment without genetic change (Pigliucci 2001), and local adaptation, wherein phenotypic differences are determined by genetic differences (Kawecki & Ebert 2004). Although many studies implicitly assume that phenotypic differences between populations are mainly, if not completely, genetic (Brommer 2011), in reality the relative importance of these two processes in driving spatial differentiation in phenotype is currently unclear.
Reciprocal transplant studies have been widely used to estimate the contribution of plasticity and genetic differentiation to spatial phenotypic divergence (Turesson 1922), and to our knowledge four studies have synthesised their findings. Leimu & Fischer (2008) and Hereford (2009) both conducted meta-analyses that focused on traits positively associated with fitness, and found strong evidence for local adaptation. Using studies on plants, Leimu & Fischer (2008) found that the average performance of a native population is 0.16 within-population standard deviations greater than the performance of a non-native population, and using studies of both plants and animals, Hereford (2009) found a 45% increase in performance of native individuals. Such home-versus-away comparisons, when averaged over all possible reciprocal transplants, measure the genotype-by-environment interaction for fitness (Blanquart et al. 2013). Presumably differences in performance also exist because of the main effects of genotype and environment, but the magnitude of these differences was not characterised. In the context of fitness this is understandable, as environmental differences in fitness probably reflect between-site differences in habitat quality rather than an active plastic response on the part of the organisms (Blanquart et al. 2013).
For traits other than fitness, while genotype-by-environment interactions may exist, it is also meaningful to consider environmentally induced variation in local optima, and therefore genetic differentiation and phenotypic plasticity in those traits in response to divergent selection. Palacio-López et al. (2015) synthesised data from reciprocal transplant studies of non-fitness traits in plants. For those traits they classified as plastic (the 48% of traits that exhibited a >53% change in phenotype between home and away), 49.4% exhibited 'perfectly' adaptive plasticity, 19.5% 'partially' adaptive plasticity and 31% maladaptive plasticity (see Fig. 1). The adaptive plasticity categories represent situations where plasticity causes trait values to be closer to their putative optima compared to the situation where plasticity is absent (Ghalambor et al. 2007). In contrast, maladaptive plasticity covered those traits in which the plastic response was more than twice the difference in putative optima ('too-steep'; 9.8%) or opposite in sign ('wrong-sign'; 21.2%). While the classification system of Palacio-López et al. (2015) has merits, it suffers from the implicit assumption that the observed phenotypic divergence between populations is equal to the difference in their respective optima. An alternative but overlapping classification system is to distinguish co-gradient variation, where plasticity causes differences in trait value in the range 0-100% of the phenotypic divergence, from counter-gradient variation, in which the plastic response is either greater than the phenotypic divergence (hyperplasticity) or opposite in sign (wrong-sign plasticity) (Levins 1968; Conover & Schultz 1995). Surprisingly, an informal review suggests that for traits in which spatial differentiation in phenotype can be attributed to both genetic and plastic responses, 84% show counter-gradient variation (Conover et al. 2009). The related question of whether changes in mean phenotype within a population over time are due to plasticity or genetic adaptation was also touched on in Conover et al. (2009), but several more focused taxonomic reviews have appeared subsequently (Gienapp et al. 2008; Merilä & Hendry 2014). The consensus from these papers is that the contribution of plasticity outweighs that of genetic adaptation, although this conclusion is largely based on a failure to reject the null hypothesis that all phenotypic change is due to plasticity.
The conditions under which plasticity or genetic differentiation is favoured have been explored extensively in a theoretical context. When there is no cost to plasticity and the environmental cue is perfect, all spatial differentiation is predicted to arise from a direct plastic response to spatial variation in the environment (Via & Lande 1985). However, as the cost of plasticity increases (van Tienderen 1997) or cue reliability decreases (Gavrilets & Scheiner 1993), the plastic response is reduced and genetic differences contribute to spatial differentiation (de Jong 1999; Tufto 2000). As the scale of environmental variation increases relative to the scale of dispersal, genetic differences start to play an increasingly dominant role (Hadfield 2016). Although not well developed theoretically (but see Edelaar et al. 2017), it has also been suggested that species without active motility (such as plants) do not have the capacity to move to environments to which they are suited, and are therefore exposed to a greater range of environmental variation that requires a plastic response (Bradshaw 1972; Huey et al. 2002).
Low gene flow between environments, whether through active habitat choice or low migration, combined with high costs of plasticity and low cue reliability, is therefore expected to favour genetic differentiation over plasticity as a cause of spatial differentiation. Given the difficulty of measuring the cost and accuracy of plasticity, empirical work has mainly concentrated on the explanatory power of gene flow. Meta-analyses have been used to show that the absolute strength of plasticity is greater in plants than in animals (Acasuso-Rivero et al. 2019), which, while consistent with active habitat choice reducing the need for plasticity, could also be because the continually growing modular structure of plants is more developmentally labile (De Kroon et al. 2005). In contrast, the absolute magnitude of genetic differentiation does not appear to increase with increasing geographic distance (Leimu & Fischer 2008), which is inconsistent with reduced gene flow promoting genetic differentiation. Outside of a meta-analytic framework, Jacob et al. (2017) used an elegant experimental evolution approach to test the effects of motility and migration. As predicted, they showed in their ciliate microcosms that active habitat choice increases the amount of local adaptation but, surprisingly, that reducing gene flow by reducing the rate of random migration had little effect.
While previous syntheses give some insight into the relative contributions of plasticity and genetic differentiation to phenotypic divergence in space (Conover et al. 2009; Palacio-López et al. 2015) and time (Merilä & Hendry 2014), they suffer from several limitations. First, the use of informal, extreme or arbitrary inclusion criteria makes it hard to know whether their findings are general. Second, the unnecessary (and arbitrary, in the case of perfect/partial adaptive plasticity) discretisation of a continuous metric makes it hard to judge in quantitative terms the relative importance of plasticity versus genetic differentiation. Third, the reliance on significance testing may mean that observed patterns simply reflect statistical power rather than biological effect. Finally, without correcting for measurement error, the number of studies falling into rarer classes, such as maladaptive plasticity, is likely to be inflated. In this paper, we collate data on non-fitness traits from reciprocal transplant studies and avoid the above issues by using meta-analytic techniques to determine the average relative strength of genetic differentiation versus plasticity. In addition, we quantify the degree to which the strength of the two processes varies over species and traits, and test leading hypotheses about what factors might promote plasticity over genetic differentiation.

Figure 1 Classification of plastic responses under the scheme of Palacio-López et al. (2015) (left) and under a classification scheme based on co/counter-gradient variation (right). In both cases the black dots represent the mean phenotypes of two populations raised in their home environments, such that their difference is the in situ divergence (P_A E_A − P_B E_B, where P_i E_j refers to the average phenotype of individuals from population i raised in the environment of population j: see Fig. 2). The solid black line represents the scenario where 100% of the divergence is due to plasticity with no genetic differentiation, and in this case the difference in mean phenotype between the same population assessed in the two environments (either P_A E_B − P_A E_A or P_B E_B − P_B E_A) would track this line. Coloured regions denote plasticity-induced phenotypes (P_A E_B − P_A E_A) that fall into the different classes, which for perfect adaptive plasticity must lie in the region 47-153% of the phenotypic divergence, and between 0-47% or 153-200% for partial adaptive plasticity.
METHODS

Literature search
Data were collected using the search term 'reciprocal transplant experiment' on the ISI Web of Science database on the 26th January 2018 and the 22nd June 2018. Reciprocal transplant experiments are those in which two populations are assayed in their own and each other's environment (Fig. 2) to test whether phenotypic differences between populations are due to genetic differences or a plastic response to environmental variation. Two reciprocal transplant studies were also collected using the search term 'common garden experiment', as initially common garden studies were being screened as well; there proved to be a lack of suitable common garden studies, however, so this type of study was excluded from the analysis. In total, 682 studies published between 1981 and 2018 were screened, comprising the total number of studies returned by the search. Studies were chosen for inclusion in the meta-analysis based on the following criteria: a phenotypic trait measurement for each of the four treatment groups in the reciprocal transplant (Fig. 2) was reported; standard errors for these measurements could be extracted or calculated; the distance between the studied populations could be determined; the populations involved in the reciprocal transplant were of the same species; and each phenotypic measurement corresponded to one population. Studies were excluded if they used lab populations or replicated natural conditions in a laboratory or greenhouse, or if the reciprocal transplant did not take place at the sites where individuals were collected. For 218 studies it was determined from the abstract that the inclusion criteria were not met, and the remaining 464 studies required a full reading. Of those, 375 did not meet the inclusion criteria (a summary of the reasons is given in the Supplementary materials Data S1) and 87 studies were selected for inclusion in this meta-analysis. Of the 87 species studied, only three had been the subject of multiple independent reciprocal transplant experiments.
Phenotypic means and their standard errors were either extracted from the text or tables, calculated from publicly available raw data, or extracted from graphs using Web Plot Digitizer (Rohatgi 2012). For studies in which phenotypes were measured over multiple time periods, the last time period was used for consistency, except in cases where no standard error, or no measurement, was reported for one or more groups at the final time point, in which case the previous time point was used. In studies that performed reciprocal transplants with more than two populations, only the first two listed populations were used to avoid issues with non-independence during analysis. All traits were used unless they were calculated from the same information (e.g. leaf width and leaf area), in which case the first listed trait was used. In cases where the standard deviation and sample size were reported, these were used to calculate the standard error if it was not reported directly. Fitness traits - traits that are inextricably tied to fitness (i.e. measures of survival and/or fecundity) - were excluded. In total 200 traits were included, and there was little evidence of any publication bias (see Supplementary materials Data S1).
Effect size
The effect size extracted from these studies is the component of the in situ phenotypic divergence between populations that can be explained by plasticity, as opposed to genetic differentiation. It is obtained from the four phenotypic measurements collected from each study (Fig. 2) by calculating the plastic component of the phenotypic difference (ΔE) and dividing it by the total in situ phenotypic divergence (ΔH, the difference when the populations are in their home environments) to give the plasticity metric (PL). The plastic component is determined as follows. Individuals from population A in environment A (P_A E_A) and from population A in environment B (P_A E_B) share a common genetic background but experience different environmental conditions, and so the difference in phenotype can be ascribed to plasticity (ΔE_A = P_A E_A − P_A E_B). Likewise, the difference in phenotype between individuals from population B in environment A (P_B E_A) and from population B in environment B (P_B E_B) can be ascribed to plasticity (ΔE_B = P_B E_A − P_B E_B). We take the average of these as the plastic component of the phenotypic difference (ΔE = (ΔE_A + ΔE_B)/2). If there is no genotype-by-environment interaction, such that the reaction norms of the two populations differ only in intercept and not in slope, we expect ΔE_A = ΔE_B. The in situ phenotypic divergence (ΔH) is simply the difference in phenotype between the two populations in their home environments: the difference between population A in environment A (P_A E_A) and population B in environment B (P_B E_B) (Box 1). The plasticity metric is then PL = ΔE/ΔH, which lies between zero and one if there is co-gradient variation, but may be negative if there is wrong-sign plasticity, or greater than one if there is hyperplasticity. It should be noted that 1 − PL can be interpreted as the component of the in situ phenotypic divergence between populations that can be explained by genetic divergence.

Figure 2 The different populations involved in a reciprocal transplant experiment and the way in which they can be used to determine the plastic component of phenotypic differences between populations. The green boxes indicate populations in their home environment, and orange boxes are populations that have been transplanted. The phenotypic difference observed between individuals of the same population in different environments is identified as the difference due to plasticity.
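To make the calculation concrete, the following minimal sketch in R (the language used for the analyses below) computes PL from the four treatment means of Fig. 2; the numerical values are hypothetical.

# Four treatment means from a reciprocal transplant (hypothetical values);
# PAEA is population A assayed in its home environment A, and so on.
PAEA <- 12.0   # population A at home
PAEB <- 10.5   # population A transplanted to environment B
PBEA <- 11.4   # population B transplanted to environment A
PBEB <- 10.0   # population B at home

dE_A <- PAEA - PAEB        # plastic response of population A
dE_B <- PBEA - PBEB        # plastic response of population B
dE   <- (dE_A + dE_B) / 2  # average plastic component
dH   <- PAEA - PBEB        # in situ phenotypic divergence
PL   <- dE / dH            # proportion of divergence explained by plasticity
PL                         # 0.725 here; 1 - PL is the genetic component
# 0 < PL < 1 indicates co-gradient variation; PL > 1 hyperplasticity;
# PL < 0 wrong-sign plasticity.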
Our PL metric is similar to that used by Palacio-López et al. (2015) in being a relative, rather than absolute, measure of plasticity (cf. Leimu & Fischer 2008); absolute measures face the risk of confounding the capacity to respond to an environmental difference with the magnitude of the environmental difference itself. For example, if researchers are better able to identify and manipulate environmental variables that are important to plants than those important to animals, then plasticity-induced changes in phenotype would be larger in plants (Acasuso-Rivero et al. 2019) even if plastic responses were comparable to those in animals. Using the notation of Chevin et al. (2010) to make this point clearer: if we assume the reaction norm b and the environmental sensitivity of selection B are both linear functions of the environment E, it is hard to tell whether differences between plants and animals in their plasticity-induced response to contrasting environments (|b(E_A − E_B)|) are driven by differences in |b| or in |E_A − E_B|. Likewise, if environmental differences increase with geographic distance, it is hard to ascertain whether greater genetic differentiation between distant sites (Leimu & Fischer 2008) is due to low gene flow facilitating a response to divergent selection, or whether the strength of divergent selection on breeding values ((B − b)(E_A − E_B)) is itself greater. Since our PL metric is a relative measure of plasticity versus genetic differentiation, the magnitude of the underlying environmental difference (E_A − E_B) is largely controlled for when assessing the ability to be plastic.
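To see why the ratio controls for the magnitude of the environmental difference, consider the following sketch in this notation (our own illustration, under the simplifying assumption that each population expresses the locally optimal phenotype when at home):

\[
\Delta E = b\,(E_A - E_B), \qquad \Delta H = B\,(E_A - E_B),
\qquad \text{so} \qquad
PL = \frac{\Delta E}{\Delta H} = \frac{b}{B},
\]

in which the environmental difference (E_A − E_B) cancels, and PL directly compares the slope of the realized reaction norm with the environmental sensitivity of selection.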
Moderators
All plants (48 species, 118 traits) were classified as sessile, and animals (39 species, 82 traits) were classified as sessile (17 species) or motile (22 species) depending on whether their adult forms are anchored to a substrate. As proxies for the amount of gene flow, the distance between study populations was also recorded and plants were categorised into whether they were wind (10), water (15) or animal (23) pollinated. Traits were also classified as either being morphological (115), physiological (34), growth (25), timing (17) or behavioural (6) following Hansen et al. (2011). Three sex-allocation traits were left uncategorised. Morphological traits mainly include measures of organismal size, such as height, biomass and number of structures such as leaves or branches. Physiological traits refer to various traits related to metabolism, macromolecule content, and other biochemical processes. Growth traits refer to changes in a quantity (usually a morphological trait) over time. Timing traits (originally classified as life-history traits in Hansen et al. (2011)) include measures of phenology and the timing or duration of life-history stages.
Statistical analyses
If the sampling errors around the four assay means are independent and normally distributed (as would be predicted from large-sample theory), then the sampling distribution of the PL metric can be derived (Marsaglia 1965, 2006). In general the distribution is heavier-tailed than the normal, can be asymmetric and bimodal, and the median (the mean is undefined) may not coincide with the true value. We employ three strategies to overcome these issues, which are discussed at length in the Supplementary materials Data S1.

The normal model. If the sampling distribution of the estimated ΔH does not have much density close to zero (either because the in situ phenotypic divergence is large, or because it is precisely measured), the sampling distribution of the estimated PL can be approximated by a normal (Marsaglia 2006). We therefore employ meta-analysis using the delta method to obtain an approximate standard error for PL given the standard errors of the four means (using the msm package in R; Jackson 2011). Between-observation effects not due to measurement error (i.e. a random-effects meta-analysis) and species effects were fitted as random. This model was also refitted with log-distance between populations and trait type as moderators, together with one of plant/animal, motile/sessile or pollination mode (plants only). The models were fitted in MCMCglmm (Hadfield 2010) using default (flat) priors.
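As an illustration of the delta-method step, a sketch using the deltamethod function from msm; the means and standard errors are hypothetical, and the sampling errors of the four means are assumed independent, so the covariance matrix is diagonal.

library(msm)

m  <- c(12.0, 10.5, 11.4, 10.0)  # means: P_A E_A, P_A E_B, P_B E_A, P_B E_B
se <- c(0.30, 0.25, 0.40, 0.35)  # their standard errors (hypothetical)

# PL written as a function of the four means:
# PL = ((x1 - x2) + (x3 - x4)) / (2 * (x1 - x4))
PL    <- ((m[1] - m[2]) + (m[3] - m[4])) / (2 * (m[1] - m[4]))
se_PL <- deltamethod(~ ((x1 - x2) + (x3 - x4)) / (2 * (x1 - x4)),
                     mean = m, cov = diag(se^2))
c(PL = PL, se = se_PL)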
The normal model failed to capture the leptokurtosis in the estimated PL, so we also conducted a bivariate meta-analysis of ΔH and ΔA (the difference between the two populations' phenotypes when each is away in the other's environment: ΔA = P_B E_A − P_A E_B). From the joint distribution of ΔH and ΔA (after accounting for measurement error) we can obtain the distribution of PL using results in Marsaglia (2006), since PL can be written as (ΔH + ΔA)/(2ΔH). However, even after accounting for variation in measurement error, the distributions of ΔH and ΔA were far from normal. This is most likely because different traits are measured on different scales, and so we adopted two strategies.
The ratio-t model. First, we assumed independent Gaussian sampling errors around the true values of ΔH and ΔA, and pairs of these true values were assumed to come from a common (over pairs of values) bivariate normal distribution after being subject to a rescaling:

y_ij = s_i(μ_j + u_ij) + e_ij,

where i indexes pairs of values, j indexes ΔH or ΔA, the μ_j are fixed intercepts, and u and e are observation-level random effects, with the variance of e fixed at the statistic's sampling variance. The squared scales of measurement (s²) were assumed to come from an inverse-gamma distribution with the scale parameter equal to the shape parameter. This forces E[1/s²] to be one (since it is confounded with the variance of u) but leaves a free parameter determining the spread of scales. The resulting compound distribution is a bivariate generalised t-distribution.

The ratio-sd-scaled model. Second, we standardised ΔH and ΔA by the (weighted) average standard deviation of trait values (from P_A E_A and P_B E_B for ΔH, and from P_A E_B and P_B E_A for ΔA) and assumed the sampling errors come from a scaled non-central t-distribution (Hedges 1981; Camilli et al. 2010).
The true values of DH and DA were then assumed to come from a common bivariate normal distribution as in the ratio-t model. For the ratio-sd-scaled model 70 out of 200 observations had to be discarded because the standard deviations were not available.
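The displayed equation referred to in the description of the ratio-t model above appears to have been lost in extraction. One plausible form that is consistent with the verbal description, offered purely as an assumption (the exact specification should be taken from the original article and its Supplementary material), is:

\[
\hat{d}_{ij} = s_i\left(\mu_j + u_{ij}\right) + e_{ij}, \qquad
\begin{pmatrix} u_{i,DH} \\ u_{i,DA} \end{pmatrix} \sim \mathcal{N}\!\left(\mathbf{0}, \boldsymbol{\Sigma}_u\right), \qquad
e_{ij} \sim \mathcal{N}\!\left(0, \sigma^2_{ij}\right), \qquad
s_i^2 \sim \text{Inv-Gamma}(\nu, \nu),
\]

where \(\hat{d}_{ij}\) is the estimate of DH or DA for observation i, \(\sigma^2_{ij}\) is fixed at the statistic's sampling variance, and integrating over \(s_i^2\) would yield the bivariate generalised t-distribution described in the text.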
A difficulty with these two joint models of DH and DA is that the signs of the two variables are arbitrary (depending on whether population A is compared to B or vice versa), suggesting that a model in which the moderators and species effects determine their absolute values would make sense. However, the ratio of their signs is not arbitrary. This precludes taking absolute values, and in the absence of a solution to this problem we fitted a model without moderators and species effects. The ratio-t model was fitted in STAN using diffuse normal priors on the fixed effects, diffuse half-Cauchy priors on the standard deviations and the inverse-gamma shape parameter (Gelman 2006), and an LKJ prior (Lewandowski et al. 2009) with a shape parameter of one on the correlation matrix. The ratio-sd-scaled model was fitted in MCMCglmm with flat priors.
The numerator of the PL metric is the average plastic response of the two populations (DE_A and DE_B) and so only captures the main effects of plasticity and not any GxE interaction. While we expect GxE interaction to be a dominant source of variation for fitness traits (which we exclude), the contribution of GxE interaction to non-fitness traits is less clear (see Discussion). In order to test whether the slopes of the reaction norms differed between populations (i.e. GxE interaction) we also fitted an identical model to the ratio-sd-scaled model but for DE_A and DE_B (rather than DH and DA). We used the ratio-sd-scaled model rather than the ratio-t model because differences in scale across traits might cause DE_A and DE_B to be strongly correlated, and we felt that the ratio-sd-scaled model would suffer from this issue less.
RESULTS
Although the quantitative inferences varied over model types, and all models suffered inadequacies of some form in terms of model fit, the general conclusions were reasonably consistent. The posterior means (and 95% credible intervals) for the median value of PL were 0.703 [0.623-0.787] for the normal model (without moderators), 0.750 [0.694-0.870] for the ratio-t model and 0.677 [0.615-0.752] for the ratio-sd-scaled model, indicating that in co-gradient cases plasticity on average explains more than two-thirds of the between-population divergence. However, there was substantial variation around this expectation, and under all three models there was a non-negligible probability of a trait showing hyper-plasticity. Best estimates (using the posterior means of the relevant parameters) for the distribution of PL are plotted in Fig. 3. As can be seen, the distributions of PL under the three models have similar central tendencies, but the normal model differs substantially from the other models in terms of tail behaviour, and therefore the probabilities of hyper, negative or too-steep plasticity reported earlier. We believe the ratio models provide more robust estimates of these probabilities given they allow for thick-tailed distributions that are a characteristic of ratio distributions. Fig. 4 plots the joint distribution of PL and phenotypic divergence for each data point using either the raw data or the posterior mean estimates from the ratio-t and ratio-sd-scaled models.
For the normal model with moderator variables fitted, and contrary to expectation, the contribution of plasticity to population divergence was estimated to be smaller in plants than in animals by an amount of −0.113 [−0.300 to 0.051], but this was far from significant (pMCMC = 0.172). However, lumping sessile animals with plants and contrasting them with motile animals (Huey et al. 2002) gave a qualitatively similar result. The plastic responses of the two populations were strongly correlated, indicating similar reaction norms and therefore weak genotype-by-environment interaction.
(Figure 3. Inferred distribution for the ratio of plastic response to in-situ divergence (PL) using the posterior mode parameter values from the three models. Note the normal model assumes the distribution of PL is normal, but the ratio-t and ratio-sd-scaled models assume the component parts of PL are normal such that the distribution of PL is that given by Marsaglia (1965).)
DISCUSSION
To the best of our knowledge, the relative importance of plasticity versus genetic differentiation at explaining phenotypic divergence has not previously been assessed in a fully quantitative manner. Our results suggest that there is substantial between-species and between-trait variation in the degree to which plasticity causes phenotypic divergence, but on the whole plasticity is the dominant cause. For traits that exhibit co-gradient variation, where plasticity-induced differentiation and genetic differentiation have the same sign (Conover & Schultz 1995), plasticity is approximately twice as important as genetic differentiation. Plasticity itself was only weakly genetically differentiated between populations (i.e. little G-by-E interaction) but there was strong evidence that counter-gradient variation, where plasticity-induced differentiation and genetic differentiation have opposing signs (Conover & Schultz 1995), is moderately common.
In a similar literature-based study of plasticity in plants, Palacio-López et al. (2015) discretised a related metric to ours in order to make qualitative assessments. In particular, they assessed the relative frequency of adaptive to maladaptive plasticity (Ghalambor et al. 2007) and found that maladaptive plasticity existed in a third of cases. Our best estimates of the prevalence of maladaptive plasticity are considerably lower than this because we control for the sampling errors that tend to result in estimates that fall into extreme categories. In addition, our analyses also suggest that many cases of maladaptive plasticity are most likely associated with very low phenotypic divergence such that the absolute strength of maladaptation may be weak (Fig. 3). In an informal qualitative review of spatial and temporal differentiation in plants and animals, Conover et al. (2009) suggested that more than three-quarters of traits exhibit counter-gradient variation, where plastic and genetic differentiation between populations differ in sign (Conover & Schultz 1995). Here we show that spatial counter-gradient variation is considerably rarer than this and is mainly caused by hyper-plasticity whereby the plasticity-induced response is the same sign as phenotypic divergence but greater in magnitude. Again, our more conservative estimates are expected since informal literature reviews compound the problems of ignoring sampling errors with selection bias and the conflation of effect size with statistical significance (Palmer 2000). The temporal studies included in Conover et al. (2009) are a case in point; the three studies included in the review (Merilä et al. 2001; Garant et al. 2004; Wilson et al. 2007) were the only reports of putative temporal counter-gradient variation among examples dominated by co-gradient variation, and in all cases the exceptionally small effect sizes were overlooked in favour of statistical significance (that turned out to be erroneous, Hadfield et al. 2010).
(Figure 4. Joint distribution of PL and phenotypic divergence for each data point, with dashed lines separating (from top to bottom) wrong-sign plasticity, co-gradient variation and hyper-plasticity. The top figure represents the raw data (i.e. the estimated PL) and phenotypic divergence is scaled by the average phenotype across the two environments, (P_A E_A + P_B E_B)/2. The middle and bottom figures represent an MCMC draw from the meta-analytic estimates of PL and phenotypic divergence from the ratio-t and ratio-sd-scaled models respectively. In all plots, any points for which PL > 4 are plotted at PL = 4 and any points for which PL < −4 are plotted at PL = −4. In the three plots 14, 10, and 8 points have been subject to truncation, respectively.)
Although we believe the importance of counter-gradient variation has been over-stated, we do acknowledge that our analyses suggest that it should exist with moderate frequency. This is surprising since co-gradient variation is the expected outcome from most theoretical models of spatially varying selection (Gavrilets & Scheiner 1993;de Jong 1999;Tufto 2000). Verbal models of why counter-gradient variation arises often invoke adaptive evolutionary changes that are required to counteract sub-optimal plastic responses induced by novel environments (Conover & Schultz 1995). Confirming this, counter-gradient variation in gene expression has been shown to repeatedly evolve when populations are exposed to new experimental environments (Ghalambor et al. 2015;Huang & Agrawal 2016). Over longer time-scales however, such sub-optimal plastic responses are expected to disappear, and so under this view the moderate prevalence of counter-gradient variation that we find suggests that populations are often in novel environments in which plastic responses have yet to evolve to their optimal values. Two ideas may be put forward against this viewpoint. First, counter-gradient patterns can also arise at equilibrium through adaptive plastic responses that have evolved to cope with both temporal and spatial environmental variation (King & Hadfield 2019) and there is little empirical work to gauge whether this is likely. Second, in our data, counter-gradient variation, like maladaptive plasticity, is often associated with low phenotypic divergence, and while low phenotypic divergence could be driven by genetic and plastic responses that are large in magnitude but opposite in sign, it seems more likely that low phenotypic divergence is also associated with low genetic and plastic divergence. As phenotypic divergence approaches zero, our metric will tend to extreme values of hyper-or negative-plasticity, even if the absolute strength of plasticity is weak, and such patterns might simply be driven by drift as opposed to genetic compensation (Grether 2005) opposing strong maladaptive plasticity.
Our conclusion that plasticity plays a more important role than genetic differentiation in determining spatial divergence is in agreement with the qualitative conclusions drawn from studies that look at phenotypic differentiation in time (Gienapp et al. 2006; Merilä & Hendry 2014), but determining whether they are quantitatively similar will require a formal meta-analysis of temporal patterns. However, a number of methodological differences between quantifying spatial and temporal patterns would need to be considered. First, reciprocal transplant studies are a relatively clean way to separate plasticity from genetic differentiation, whereas the non-experimental model-based approaches for detecting temporal changes in genetic value can be biased by model assumptions (Hadfield et al. 2011). In contrast to this, the choice of time points to sample within a population is often made blindly with respect to environmental conditions, whereas the choice of populations is rarely random in reciprocal transplant studies. In particular, studies which choose to reciprocally transplant at small spatial scales seem to choose populations that are from contrasting environments (Galloway & Fenster 2000). In these cases plasticity is predicted to play a more dominant role compared to populations which had been chosen at random with respect to distance and/or environment (Hadfield 2016). This would inflate our estimates of the importance of plasticity, but the lack of relationship between the amount of plasticity and distance between populations [see below] suggests that the bias may not be large.
Different traits in the same species exhibited similar levels of plasticity, which at face value suggests there are species-level characteristics that promote plasticity over genetic differentiation. Our main predictor of whether a species should exhibit plasticity over genetic differentiation was whether the species was sessile or motile, with the expectation that sessile species should be more plastic because they cannot move to environments to which they are adapted (Bradshaw 1972). However, we found that, if anything, phenotypic divergence in sessile organisms has a reduced contribution from plasticity. This result is opposite to what we expect and appears to contradict a previous meta-analysis showing that plants have greater absolute levels of plasticity (Acasuso-Rivero et al. 2019). One explanation for this pattern is that absolute levels of plasticity are higher in sessile organisms, but the spatial scale of dispersal may be lower, resulting in an even greater increase in the absolute amount of genetic differentiation (Slatkin 1978; Hadfield 2016). We tried to test more generally whether low rates of gene flow facilitate genetic differentiation, but although the distance between reciprocal transplant populations was positively related to the amount of phenotypic divergence explained by genetic differentiation, the relationship was weak and far from significant. This result should be taken with caution, however, because different species are likely to have very different dispersal distances and the correlation between distance and gene flow might be quite weak. Compounding this problem, if researchers choose populations to reciprocally transplant based on the scale of dispersal for that species (for example, if populations at a distance of 1 km are chosen for a low-dispersal species, but populations at a distance of 100 km are chosen for a high-dispersal species), our proxy may then only be very weakly correlated with gene flow, and at the limit may be uninformative when researchers can perfectly calibrate the distance between populations with the scale of dispersal. Within-species studies should suffer from this issue less, and indeed several such studies have found evidence of local adaptation scaling with distance (e.g. Galloway & Fenster 2000; Joshi et al. 2001) despite a between-species meta-analysis failing to find such a pattern (Leimu & Fischer 2008). For plants we used pollination mode as an additional proxy for gene flow, with the expectation that because wind-pollinated plants have increased gene flow compared to animal-pollinated plants (Hamrick et al. 1979; but see Friedman & Barrett 2009), plasticity should play a more dominant role in any phenotypic divergence. As with distance, the point estimate was consistent with expectation but far from significant. Alternative proxies of gene flow using genetic marker information may prove more suitable (but see Bohonak 1999; Whitlock & McCauley 1999), but unfortunately only seven of the studies in our meta-analysis reported F_ST values for their populations.
Although our predictors explained very little of the substantial between-species variation, it should be borne in mind that in the vast majority of species measurements of multiple traits came from a single paper and therefore a single pair of populations. It is therefore possible that some unknown fraction of the between-species variation in our plasticity metric is due to the particular pair of populations within each species that were transplanted. Moreover, papers often focus on a non-random subset of traits and so it is possible that some of the observed species variation may also be due to variation across trait types in their propensity to be plastic. However, our broad categorisation of traits into morphological, behavioural, physiological, growth and timing traits failed to find substantial differences. Previous meta-analyses and syntheses have found life-history traits to have lower heritabilities than other trait types (Postma 2014; Mittell et al. 2015), and morphological traits (Mousseau & Roff 1987), and specifically size traits (Hansen et al. 2011), to have higher heritabilities. The fact that we do not see these patterns recapitulated at the between-population level casts doubt on the utility of substituting phenotypic measures of relative divergence (P_ST; Leinonen et al. 2006) for genetic measures (Q_ST; Wright 1951; Spitze 1993), since the validity of this substitution assumes genetic variation has the same proportional contribution to both within- and between-population variation (Brommer 2011). However, given there are relatively few non-morphological traits in our analyses (and life-history traits were omitted), we urge caution in accepting our null result without further investigation.
For the non-fitness traits we analysed, we found a strong correlation between the plastic responses of each paired population, and so very little evidence for strong GxE interactions. This is in direct contrast to meta-analyses of fitness traits that have found substantial local adaptation (Hereford 2009), since metrics of local adaptation, when averaged over comparisons, equal the difference in the plastic response of fitness among populations (i.e. DE_A − DE_B) and therefore represent GxE interaction (Blanquart et al. 2013). This suggests that the genetic determination of traits may be relatively insensitive to the environmental context, and that local adaptation is primarily driven by trait-fitness relationships that vary over environments. A similar pattern has been found in the sexual antagonism literature where the cross-sex genetic correlation is close to one for non-fitness traits, but reduced for fitness components (Poissant et al. 2010).
In conclusion, we show that plasticity plays a dominant role in explaining between-population phenotypic divergence, and that it usually acts in the same direction as genetic differentiation, consistent with it being adaptive. Nevertheless, substantial variation exists, and in a large minority of cases plasticity can act in opposition to genetic differentiation, and in a small minority of cases may even be maladaptive.
Adult mastocytosis: a review of the Santo António Hospital's experience and an evaluation of World Health Organization criteria for the diagnosis of systemic disease
BACKGROUND Mastocytosis is a clonal disorder characterized by the accumulation of abnormal mast cells in the skin and/or in extracutaneous organs. OBJECTIVES To present all cases of mastocytosis seen in the Porto Hospital Center and evaluate the performance of World Health Organization diagnostic criteria for systemic disease. METHODS The cases of twenty-four adult patients with mastocytosis were reviewed. Their clinical and laboratorial characteristics were assessed, and the properties of the criteria used to diagnose systemic mastocytosis were evaluated. RESULTS The age of disease onset ranged from 2 to 75 years. Twenty-three patients had cutaneous involvement and 75% were referred by dermatologists. Urticaria pigmentosa was the most common manifestation of the disease. One patient with severe systemic mast cell mediator-related symptoms showed the activating V560G KIT mutation. The bone marrow was examined in 79% of patients, and mast cell immunophenotyping was performed in 67% of the participants. Systemic disease was detected in 84% of cases, and 81% of the sample had elevated serum tryptase levels. All the diagnostic criteria for systemic mastocytosis had high specificity and positive predictive value. Bone marrow biopsy had the lowest sensitivity, negative predictive value and efficiency, while the highest such values were observed for mast cell immunophenotyping. Patients were treated with regimens including antihistamines, sodium cromoglycate, alpha-interferon, hydroxyurea and phototherapy. CONCLUSIONS Cutaneous involvement is often seen in adult mastocytosis patients, with most individuals presenting with indolent systemic disease. Although serum tryptase levels are a good indicator of mast cell burden, bone marrow biopsy should also be performed in patients with normal serum tryptase, with flow cytometry being the most adequate method to diagnose systemic disease.
INTRODUCTION
The term 'mastocytosis' designates a heterogeneous group of disorders characterized by the abnormal clonal proliferation and accumulation of mast cells (MC) in one or multiple organs and/or tissues including the skin, bone marrow (BM), liver, spleen, and lymph nodes. 1 Its clinical presentation is variable, ranging from skin-limited disease, especially in pediatric cases which spontaneously resolve over time, to a more aggressive condition involving extracutaneous sites and associated with multiple organ dysfunction/failure and shortened survival. 2,3,4 Diseases involving the pathologic proliferation of MC are classified based on their clinical presentation, pathologic findings, and prognosis. The 2008 World Health Organization (WHO) classification divided tumors into the following categories (Chart 1): 1) Cutaneous mastocytosis (limited to the skin); 2) Extracutaneous mastocytosis (unifocal MC tumor with low-grade cellular atypia and non-destructive features); 3) Mast cell sarcoma (unifocal mast cell tumor with destructive features and poorly differentiated MC); 4) Systemic mastocytosis (SM), which almost invariably involves the BM, frequently presents with skin lesions and is the most commonly diagnosed MC disorder in adults. 5,6,7 The diagnostic criteria for SM were also established by the same 2008 WHO document (Chart 2). 5,8 Patients are diagnosed with SM upon fulfilling one major and one minor or three minor criteria.
SM has been associated with somatic mutations in the v-kit Hardy-Zuckerman 4 feline sarcoma viral oncogene homolog (KIT), which codes for a transmembrane receptor with kinase activity (KIT receptor, CD117) whose ligand is the stem cell factor (SCF). 9 KIT mutations that induce ligand-independent phosphorylation of the SCF receptor and consequently lead to constitutive activation seem to play a critical role in the pathogenesis of SM by inducing autonomous MC growth. As such, these mutations may be potential diagnostic markers and therapeutic targets. Two activating point mutations leading to the amino acid substitutions Asp816→Val and Val560→Gly in the proto-oncogene C-KIT have been reported in the human mast cell leukemia cell line HMC-1, and also in adult-onset mastocytosis, although with very different frequencies. 10 The D816V mutation has also been found to be common in adult mastocytosis patients, and its frequency in adult individuals with SM is estimated to be higher than 80%, although its presence does not necessarily imply associated hematologic disease and is not a reliable prognostic indicator, as was initially suggested. 11,12,13 In contrast, the V560G mutation has been reported in only a small number of patients. 14,15 Patients with cutaneous mastocytosis (CM) have the best prognosis, followed by those with indolent systemic mastocytosis (ISM). Patients with SM associated with clonal hematologic non-mast cell lineage disease (SM-AHNMD), aggressive systemic mastocytosis (ASM), or mast cell leukemia (MCL) experience a more rapid and complex clinical course.
The purpose of the present study was to review all cases of mastocytosis seen in a university hospital, and to evaluate the use of WHO diagnostic criteria for systemic disease.
MATERIAL AND METHODS
Between January 2003 and March 2011, 24 adult patients with mastocytosis were assessed at the multidisciplinary center for cutaneous lymphoma in the Porto Hospital Center. Patient charts were retrospectively reviewed for information on their disease, treatment and outcome.
The initial evaluation included a physical examination, full blood count, biochemical survey, and the assessment of serum immunoglobulin, vitamin B12, folate, ferritin and total serum tryptase levels (reference range: 2-13 µg/L). Suspected cutaneous lesions were biopsied, and all patients underwent BM biopsy and aspirate in order to confirm the diagnosis and detect systemic disease. Biopsy specimens were routinely processed and stained with hematoxylin and eosin. When necessary, CD117 expression was assessed by immunohistochemical staining. BM aspirate smears were stained with Leishman's and toluidine blue stains. Additionally, BM MC were immunophenotyped by flow cytometry. MC were quantified and phenotyped by four-color staining using fluorochrome-conjugated monoclonal antibodies directed against CD2, CD25, CD45 and CD117. All patients with SM underwent bone mineral density testing and abdominal ultrasound at diagnosis.
The diagnostic properties of each WHO criterion were evaluated by calculating sensitivity, specificity, predictive values and efficiency, using the WHO requirements for the diagnosis of SM (1 major and 1 minor, or 3 minor criteria) as the gold standard. Spearman's rank order correlation was run to determine the relationship between serum tryptase levels and the percentage of MC in the BM. The Mann-Whitney test was applied to compare serum tryptase levels between patients with SM and individuals with CM.
RESULTS
Twenty-four adult patients with mastocytosis were studied, and 14 (58%) of these individuals were women. The median age of disease onset was 34 years, and ranged from 2 to 75 years, while the median age at first clinic visit was 45 years, and ranged from 18 to 75 years. Most patients (75%) were referred by Dermatology, followed by Hematology (12.5%), Internal Medicine (8.3%) and other services (4%). The median time of follow-up was 10 years (range: 1 to 27 years) from the first manifestation of the disease, 9 years (range: 6 months to 26 years) from the diagnosis of mastocytosis and 42 months (range: 3 to 108 months) from the first visit to the multidisciplinary clinic for cutaneous lymphomas.
Skin biopsies were performed in all patients with cutaneous manifestations (96%), and in all cases, skin involvement was confirmed. Pathological findings included multifocal aggregates of MC in the upper dermis and perivascular areas ( Figure 3). Whenever necessary, CD117 immunostaining was used to identify MC in biopsy specimens (skin and BM). Elevated total serum tryptase levels (range: 2 to 194 µg/L) were found in 14 cases (58%) at the first clinical assessment. BM examinations were performed in 19 (79%) patients, all of whom were biopsied. Aspirate smears were examined for 18 (75%) patients, and 16 (67%) individuals underwent MC immunophenotyping by flow cytometry. In order to evaluate the properties of each WHO criteria for the diagnosis of SM, their sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV) and efficiency were calculated (Table 2). BM biopsy confirmed systemic involvement in 11 (58%) of the 19 patients. Of the eight patients with no apparent BM abnormalities, 3 had CM and 5 had SM (false negatives), as later confirmed by other diagnostic methods (Table 2). Of the 18 patients who underwent BM aspirate examination, 12 (67%) showed systemic involvement. Features of neoplastic MC included the presence of cytoplasmic processes, atypical nuclei and spindled, degranulated and hypogranulated forms (Figure 4). Atypical MC morphology was not observed in the remaining six patients. Three of these patients were true negatives (patients with CM), while the other three were diagnosed with SM (Table 2). Twelve (92%) of the 13 SM patients whose BM MC were analyzed by flow cytometry were found to have BM involvement, as detected by the presence of phenotypically abnormal CD25+ and/or CD2+ MC ( Figure 5). In the remaining case, flow cytometry was inconclusive (unrepresentative BM aspirate). Flow cytometry analysis was able to detect an aberrant MC population as low as 0.004%. In 3 patients, the BM MC showed a normal phenotype, confirming that mastocytosis was limited to the skin (Table 2).
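As a worked illustration of how the Table 2 metrics follow from such counts (a sketch in R, not the study's own code; the helper function is hypothetical and not part of any named package), the BM biopsy figures just given — 11 true positives, 5 false negatives, 3 true negatives and no false positives among the 19 biopsied patients — can be converted into the diagnostic properties as follows.

```r
## Illustrative sketch: diagnostic properties of a single criterion against the
## WHO gold standard (1 major + 1 minor, or 3 minor criteria).
diagnostic_properties <- function(tp, fp, fn, tn) {
  c(
    sensitivity = tp / (tp + fn),                  # criterion positive among true SM
    specificity = tn / (tn + fp),                  # criterion negative among non-SM
    ppv         = tp / (tp + fp),                  # true SM among criterion positives
    npv         = tn / (tn + fn),                  # non-SM among criterion negatives
    efficiency  = (tp + tn) / (tp + fp + fn + tn)  # overall agreement with gold standard
  )
}

# BM biopsy counts reported above: 11 true positives, 5 false negatives,
# 3 true negatives, no false positives
diagnostic_properties(tp = 11, fp = 0, fn = 5, tn = 3)
```

Run on these counts, the sketch gives a sensitivity of approximately 69%, NPV of about 38% and efficiency of about 74%, consistent with the biopsy figures discussed later in the text.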
Systemic involvement was confirmed in 16 patients, of whom 15 were diagnosed with ISM and one with SM-AHNMD (myeloproliferative syndrome, MPS), and CM was diagnosed in 3 patients. In 5 cases, a definitive differential diagnosis between CM and SM could not be established, since BM involvement was not evaluated. Eleven (69%) of the 16 SM patients met the major criterion (multifocal dense infiltrates of MCs detected in the BM and/or in other extracutaneous organs). Associated diseases were present in five patients. Two of these individuals had autoimmune cytopenias (anemia and thrombocytopenia in one patient; thrombocytopenia in another), one had a BCR-ABL-negative myeloproliferative disorder, another had anetoderma, while the remaining participant was diagnosed with colon carcinoma. One patient evolved from ISM to ASM 13 years after disease onset. Years before any evidence of disease progression was detected, this patient was found to carry the activating D816V mutation of the C-KIT gene in both MC and myeloid non-mast BM cells, while the patient diagnosed with systemic MC mediator-related symptoms was found to carry the activating V560G mutation in BM MCs only. The other cases were not investigated for the presence of C-KIT mutations, since molecular analyses are not routinely performed in the hospital.
All patients were treated with antihistamines with some clinical response. Sodium cromoglycate (200 mg orally 5 times per day) was used by 22 (92%) patients with variable clinical benefit. One patient was treated with alpha-interferon (3MU, 3 times a week, subcutaneously) over 8 weeks without clinical response. The patient who progressed to ASM received corticosteroids and hydroxyurea for 6 months with limited clinical benefit and refused further treatment. A transient response was observed in patients with associated autoimmune cytopenias, all of whom were treated with oral corticosteroids and intravenous high-dose immunoglobulins.
DISCUSSION
Mastocytosis has been found to involve the skin in approximately 80% of cases. In agreement with other studies, the present results showed that UP was the most common skin manifestation of mastocytosis. 13 Only one patient was found to have atypical cutaneous alterations in the form of secondary anetoderma. Anetodermic lesions are an unusual clinical presentation of mastocytosis. 16,17 MCs play an important role in the development of anetodermic lesions, as they release different mediators which interfere with collagen synthesis and increase the fragmentation of elastic fibers. Moreover, the chemotactic substances released during mastocyte degranulation induce the accumulation of eosinophils, neutrophils and macrophages, which might promote elastase activity. 18 Cases of suspected CM should be investigated by skin biopsy. In the present study, cutaneous biopsies were performed in all patients with skin lesions. Typically, MC are more abundant in UP lesions than in normal skin. However, small increases in MC numbers have also been reported in patients with other conditions, such as unexplained flushing, chronic urticaria, and atopic dermatites. 19,20 The diagnostic approach to SM usually starts with BM analysis, since this site is almost universally involved in adult mastocytosis, and its examination allows for the detection of second hematologic neoplasms. 8,21,22 In the vast majority of cases (90%), SM can be diagnosed by BM examination alone (major criterion). However, the fact that multifocal and dense MC infiltrates (major criterion for SM according to the WHO classification) were only observed in 69% of our patients suggested that this criterion has a low NPV (38%) and low efficiency (74%) for diagnosing SM (Table 2). This can be explained in part by the intrinsic subjectivity of this analysis, and also by the fact that, in some cases, it can be especially difficult to identify the typical features of mastocytosis in the BM, especially when MC are significantly hypogranulated or when significant reticulin fibrosis is present. 8 Tryptase appears to be the most sensitive immunohistochemical marker of the disease. However, tryptase testing was not available in the hospital studied, so that in some cases, CD117 immunostaining was performed to identify MC in biopsy specimens.
Atypical MC morphology in BM smears was observed in 80% of patients. This criterion had higher sensitivity, NPV and efficiency in diagnosing SM than the presence of dense infiltrates of MC in BM specimens (Table 2).
MC immunophenotyping by flow cytometry proved to be an excellent method for screening for SM (sensitivity, NPV and efficiency of 92%, 75% and 94%, respectively; all values increased to 100% when only cases with representative BM aspirates were considered) (Table 2). Currently, CD117 is considered to be the best immunological marker of mature MC. 23 It can also be found in normal and pathological MC, as well as in several other cells such as CD34+ hematopoietic stem and progenitor cells, myeloid precursors, CD56 bright NK cells, and neoplastic cells from patients with various hematological and non-hematologic malignancies. 24 However, MC usually express much higher levels of CD117 than other CD45+ hematopoietic cells, and the combined analysis of CD117 expression and light scatter properties allows for the identification of MC by flow cytometry ( Figure 5). 1,23 Neoplastic MC are usually CD2 and/or CD25 positive, and the abnormal expression of at least one of these two antigens is considered a minor criterion according to the 2008 WHO classification. In the present study, CD25 positivity appeared to be the most sensitive and specific method to identify neoplastic MC, a finding which is in accordance with the literature 23 . All CM patients in the present sample had normal serum tryptase levels, and 13 (81%) of the 16 patients with confirmed SM showed elevated tryptase levels, leaving a total of three patients with systemic involvement but normal levels of serum tryptase. Although the present data confirm previous reports suggesting that the measurement of serum tryptase levels is a reliable noninvasive diagnostic approach to monitor MC burden in patients with mastocytosis, it is important to emphasize that this test has limited diagnostic utility for SM screening due to its relatively low sensitivity, NPV and efficiency (75%, 43% and 79%, respectively; these values increased to 80%, 50% and 83% when the patient with SM-AHNMD was excluded from the analysis) (Table 2). 25 Moreover, it is known that serum tryptase levels can be temporarily elevated during severe allergic reactions, and in a significant proportion of cases of acute myeloid leukemia (AML), myeloproliferative and myelodysplastic syndromes, which limits the diagnostic utility of this test in individuals with a second SM-associated myeloid neoplasm. 1,12,26 One study has reported that approximately 5% of AML patients have serum tryptase levels > 200 µg/L, possibly due to the presence of associated SM. 27 Myeloid disorders account for 80-90% of cases of SM-AHNMD. For these reasons, the WHO classification states that this criterion cannot be used reliably in patients with an associated myeloid disorder. Plasma cell myeloma is the most frequent lymphoid malignancy associated with SM. 28 It is not completely clear whether elevated serum tryptase levels are diagnostically effective in cases of SM associated with lymphoid neoplasms. 25 According to the literature, the proportion of ASM and SM-AHNMD patients who exhibit markedly elevated serum tryptase levels (> 200 µg/L) is significantly higher than that proportion among ISM patients. 4,8 In the present study, the patient with SM-AHNMD (MPD) had normal serum tryptase levels.
Despite their variable sensitivity and NPV, all WHO criteria proved to have high specificity and PPV for the diagnosis of SM (100% in our series) (Table 2). Despite the sample limitations, the present results suggested that patients who met these criteria had a significantly higher probability of systemic disease.
Of the 16 patients diagnosed with SM, 11 (69%) met both the major criterion and ≥2 minor criteria. The remaining 5 (31%) patients were diagnosed based only on the presence of ≥3 minor criteria. These findings are similar to other reports. A study of 53 patients with SM reported the presence of the major criterion in 68% of the sample. 25 Atypical MC morphology was observed in 100% of patients, an aberrant immunophenotype was identified in 96% of individuals, and elevated serum tryptase (>20 µg/L) levels were detected in 85% of the sample. 25 Spearman's rank order correlation was used to determine the relationship between serum tryptase levels and the percentage of MC in the BM. A positive correlation was found between the two variables (r = 0.340, P< 0.05). The Mann-Whitney test was applied to compare serum tryptase levels between patients with SM and those with CM, and it was found that patients with CM had significantly lower levels of serum tryptase than SM patients (P=0.014).
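For readers unfamiliar with these tests, the two analyses reported above would typically be run in R as shown in the following sketch; the data vectors here are simulated placeholders, not the patient data, so the output will not reproduce the reported values (r = 0.340; Mann-Whitney P = 0.014).

```r
## Illustrative sketch with simulated values (not the study data)
set.seed(1)
tryptase  <- rlnorm(24, meanlog = 3, sdlog = 1)    # hypothetical serum tryptase (ug/L)
bm_mc_pct <- 0.1 * tryptase + rlnorm(24, 0, 0.5)   # hypothetical % of MC in the BM
group     <- factor(c(rep("SM", 16), rep("CM", 3), rep("unclassified", 5)))

# Spearman's rank-order correlation between tryptase and BM mast cell percentage
cor.test(tryptase, bm_mc_pct, method = "spearman")

# Mann-Whitney (Wilcoxon rank-sum) test comparing tryptase in SM vs CM patients
keep <- group %in% c("SM", "CM")
wilcox.test(tryptase[keep] ~ droplevels(group[keep]))
```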
The conclusions of the present study regarding genetic aberrations are limited by the fact that the molecular analysis of MC is not routinely performed in our hospital, and only two cases were evaluated using this procedure. The patient who presented the D816V mutation, which is common in patients with adult-onset SM, showed disease progression from indolent to aggressive SM 13 years after the first disease manifestation. 29 This is in accordance with the fact that individuals who carry this mutation in BM MC, CD34+ hematopoietic precursors and mature non-mast myeloid cells tend to have a poorer prognosis. 13 In contrast, the patient whose BM MC displayed the Val560→Gly mutation, which is rare in both children and adult patients with mastocytosis, had an atypical presentation of SM, displayed associated CM and showed no evidence for disease progression for 19 years after the initial diagnosis. 13 It is unclear whether these mutations play a causal or permissive role, where additional factors are necessary to develop ISM and/or ASM, and whether they may influence the response to different tyrosine kinase inhibitors, as suggested by in vitro studies with mast cell leukemia cell lines. 30
CONCLUSION
When considered individually, the WHO criteria have different degrees of diagnostic utility for SM. Although all criteria have high specificity and PPV for the diagnosis, their sensitivity, NPV and efficiency are highly variable. The present results suggested that flow cytometry-based immunophenotypic analysis of BM MC was the most adequate test for SM screening due to its high sensitivity and NPV. Serum tryptase analysis also has relatively good sensitivity for diagnosing SM, and may provide an indirect measure of MC mass. However, normal serum tryptase levels cannot be used to exclude systemic involvement due to their low NPV. Thus, when serum tryptase levels are found to be normal, the BM should be evaluated and MC phenotypes should be analyzed by flow cytometry. Other conventional methods, such as BM smears and biopsy, in that order, could also be used for a correct evaluation of systemic disease. Further studies are necessary to explain the differences between ISM and ASM, disease progression and the association between SM and other hematologic disorders in some patients. To help with this process, new algorithms are being proposed for a better diagnostic definition and prognostic classification of these disorders. 31
Structural Investigation of the Synthesized Few-Layer Graphene from Coal under Microwave
This study focused on the structural investigation of few-layer graphene (FLG) synthesized from bituminous coal through a catalytic process under microwave (MW) heat treatment. The produced FLG was examined by Raman spectroscopy, XRD, TEM, and AFM. Coal was activated using the potassium hydroxide activation process. The FLG synthesis was much faster than conventional processing, requiring only 20 min under microwave radiation. To analyse the few-layer graphene samples, we considered the three bands of the Raman spectra, i.e., D, G, and 2D. At 1300 °C, the P10% Fe sample resulted in fewer defects than the samples with other catalyst percentages. The catalyst percentage affected the structural change of the FLG composite materials. In addition, the Raman mapping showed that the catalyst-loaded sample was homogeneously distributed and indicated few-layer graphene sheets. In addition, the AFM technique measured the FLG thickness at around 4.5 nm. Furthermore, the HRTEM images of the P10% Fe sample showed a unique morphology with 2-7 graphitic layers of graphene thin sheets. This research reports the structural evolution and the latent feasibility of FLG synthesis from bituminous coal on a large scale.
Introduction
In recent years, the focus on graphene in the research and commercial sectors has increased remarkably worldwide due to its novel properties, such as thermodynamic stability, transparency, and high mechanical strength, and for its potential applications in several fields, such as sensors [1], batteries [2], ultrafast photodetectors [3], transparent electrodes [4], and advanced nanocomposites [5]. Graphene is an sp²-bonded monolayer of carbon atoms arranged in a 2D honeycomb lattice. It displays unique electronic properties with high carrier mobility and transport [6,7]. Single-layer graphene (SLG) has a zero-bandgap structure, which limits its potential electronics applications and optical performance [8]. In contrast, FLG has attracted more commercial applications due to the potential to control electronic states using interlayer interactions [9]. Currently, the synthesis of SLG and FLG can be achieved using several methods, such as the liquid exfoliation of graphite [10], epitaxial growth in an ultrahigh vacuum [11], mechanical cleavage [12], chemical vapour deposition (CVD) [13], and chemical reduction [14]. Each of these techniques has drawbacks that make it unsuitable for the mass production of graphene [15]: they are expensive, and the production processes involve toxic chemicals and long processing times. Thus, there is a need to establish a cost-effective process that is fast, simple, and scalable [16]. Among the various methods, the CVD procedure is the most promising technique to make high-quality mono- or few-layer graphene with large coverage areas using catalytic substrates and hydrocarbon gases [17,18].
In addition, coal is an inexpensive and plentiful material worldwide, leading to alternative applications [19]. Coals and food waste have also been used as solid carbon sources instead of gaseous hydrocarbons to produce graphene [20]. Moreover, the graphite materials are used as a precursor to synthesize the graphene oxide, a time-consuming modified Hummers reduction process [21]. These days, MW heating plays a significant role in processing many materials. It has an interaction capacity with materials at a molecular level, and its polarization plays an important role in heating materials [22]. Moreover, the heating equality depends on the microwave extrinsic properties (MW frequency, cavity, and power amplitude) and sample intrinsic properties (size and shape). The sample shape also plays a crucial role in MW heating consistency and heating rate [23,24]. The MWassisted graphene synthesis provides extra benefits compared to the conventional synthesis methods. It has also been used to synthesise many graphene-derivative products, such as reduced graphene oxide (rGO) [25], graphene nanoribbons (GNRs) [26], and graphene oxide (GO) [27], which are frequently used in supercapacitors and battery applications for energy storage [28,29].
Carbon and its derivatives such as graphite, carbon nanotubes (CNTs), and graphene are also synthesized using MW radiation. Carbon materials have an excellent MW absorption capacity due to the delocalisation of pi (π) electrons from the sp 2 hybridized carbon atom [30]. However, the CVD method synthesises the multilayer graphene, which has a multiplicative effect with a better diffusion barrier than SLG [31]. The properties of FLGs, such as their electronic band structure, depend on the samples due to numerous crystallographic stacking orders. Therefore, the FLG is used widely in various sectors due to its potential capacity to control the electronic states through an interlayer interaction [32]. Raman spectroscopy has been extensively used to analyse graphite materials at the electronic band arrangements and vibrational ranges using the double-resonance Raman scattering mechanism [33]. The principal features of the Raman spectra such as the D, G, and 2D bands alter the location, which is related to the structural and electronic properties of composite materials [34].
With the increasing benefits of graphene-based nanotechnologies, there is a high demand to develop production techniques for high-quality FLG. However, the industrial sector also requires cost- and time-efficient graphene production methods with a way to control layer thickness. The present work synthesized FLG from bituminous coal using a catalytic graphitization process (iron(III) nitrate nonahydrate) with MW heating for 20 min at 1300 °C. Raman spectra and mapping were applied to determine numerous aspects of the graphene produced. The fabricated FLG samples were analysed using XRD, Raman spectroscopy, TEM, and AFM.
Materials and Sample Preparation
The Australian bituminous coal was activated using a potassium hydroxide (KOH) activation process to increase the porosity of the FLG composite materials. The raw coal was heated under a fixed N₂ flow (200 mL/min) for 30 min in an electric furnace (Carbolite-VST 12/300, UK) at 600 °C (10 °C/min) to remove the volatile matter and other lightweight compounds. The sample was then ground and sieved to <63 µm and was subsequently activated with potassium hydroxide (KOH) at a ratio of 1:4 (coal:KOH), followed by carbonization at a temperature of 900 °C [35]. According to the sample weight percentage, the Fe(NO₃)₃·9H₂O (≥99.95%) (Merck, Germany) catalyst was mixed overnight with the sample at 2%, 5%, 10%, and 20%, and then 0.1 M ammonia solution (25%) was added and the mixture stirred for 10 min. Finally, the slurry was washed with distilled water, filtered, and dried at 110 °C. The KOH-modified, iron-loaded porous samples were denoted as P2% Fe, P5% Fe, P10% Fe, and P20% Fe.
Synthesis of the FLG Materials
A custom-modified quartz reactor and a B-type thermocouple (up to 1600 °C) were used inside a MW oven (Tangshan microwave thermal instrument CO. Ltd.; Beijing, China) to synthesize the samples and measure their temperatures, as briefly explained in our previous study [36]. In addition, a 3 g sample for each experiment was treated continuously with N₂ gas flow (200 mL/min) through the quartz reactor for 30 min prior to the start of the experiment. The catalyst-loaded samples with different catalyst percentages, from 2% to 20%, were graphitized at a temperature of 1300 °C for 20 min with the same N₂ flow rate.
Structural Analysis of the FLG Materials
The crystallinity of the FLG composite materials was investigated with a Horiba XploRA PLUS Raman microscope (HORIBA, France) at a 532 nm wavelength, which showed the three different bands, D ≈ 1350 cm⁻¹, G ≈ 1580 cm⁻¹, and 2D ≈ 2700 cm⁻¹, that are used to quantify the degree of graphitization of the samples. The thicknesses and cross-sectional images of the FLG samples were measured using an Asylum Research Cypher AFM (Oxford Instruments, CA). In addition, the texture of the FLG samples was scrutinized by transmission electron microscopy (JEOL TEM 2100) at 200 kV. BET was used to calculate the surface area of the materials. X-ray diffraction (XRD) (Bruker, Berlin, Germany) was used to assess crystallinity over a wide range of 10° to 80°. Furthermore, the sample crystal sizes (La) and heights (Lc), together with the interlayer spacing (d002), were determined using the Scherrer and Bragg equations, and the g factor values were determined by the Maire and Méring rule [37]. The Raman mappings were detected by an electron-multiplying CCD camera (EMCCD). The confocal imaging was 0.5 µm. The laser was focussed on the fine powder samples. In addition, three samples were used for each material. The repeat scans and acquisition times were dependent on the signal-to-noise ratios of the samples. The Raman spectra were collected through a 100× microscope objective and were optimized at the highest spectral counts.
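For readers who wish to reproduce the type of calculation described above, the sketch below (in R) applies Bragg's law, the Scherrer equation and the Maire-Méring degree of graphitization to example peak parameters. All inputs are hypothetical, and the shape factors K = 0.89 (002 reflection) and K = 1.84 (100 reflection) as well as the Maire-Méring constants are common conventions assumed here, not values taken from this article.

```r
## Illustrative sketch of the XRD-derived parameters (example numbers only)
lambda  <- 0.15406                      # Cu K-alpha wavelength (nm), assumed
deg2rad <- function(x) x * pi / 180

# Bragg's law: d002 = lambda / (2 sin(theta)), theta from the 002 peak position
two_theta_002 <- 26.4                   # hypothetical 2-theta of 002 peak (degrees)
d002 <- lambda / (2 * sin(deg2rad(two_theta_002 / 2)))

# Scherrer equation: L = K * lambda / (beta * cos(theta)), beta = FWHM in radians
beta_002      <- deg2rad(2.0)           # hypothetical FWHM of 002 peak
beta_100      <- deg2rad(4.5)           # hypothetical FWHM of 100 peak
two_theta_100 <- 42.5
Lc <- 0.89 * lambda / (beta_002 * cos(deg2rad(two_theta_002 / 2)))  # crystallite height (nm)
La <- 1.84 * lambda / (beta_100 * cos(deg2rad(two_theta_100 / 2)))  # crystallite width  (nm)

# Maire-Mering degree of graphitization (d-spacings in nm)
g_factor <- (0.3440 - d002) / (0.3440 - 0.3354)

round(c(d002 = d002, Lc = Lc, La = La, g = g_factor), 4)
```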
Morphological Analysis of the FLG Materials
The XRD profiles of the Fe catalyst-loaded samples heated at 1300 °C are shown in Figure 1a. The two distinctive peaks were located at around 26° and 42.5°, which can be assigned to the 002 and 100 reflections of graphite, respectively, suggesting a typical graphitic structure [38]. The highest intensity peak was obtained for the P10% Fe sample. The intensity of the P20% Fe sample at 26° was lower than that of the P10% Fe sample because of its more amorphous carbon structure. The metallic Fe peak at around 45.2° was formed by the degradation of the iron oxide during the graphitization period; the metallic iron also acts as a nucleus for the formation of graphitic layers [39]. Table 1 shows the structural parameters obtained from the XRD data using the Scherrer, Bragg, and Maire and Méring equations, namely the particle size (La), thickness (Lc), interlayer distance (d002), and g factor [37]. The crystal sizes were 1.42 nm for P2% Fe, 1.50 nm for P5% Fe, 1.98 nm for P10% Fe, and 1.79 nm for P20% Fe, and the thicknesses were 3.07 nm, 3.24 nm, 4.27 nm, and 3.85 nm, respectively. The P10% Fe sample had the largest crystal size and thickness. The values for the P20% Fe sample were reduced due to the catalytic agglomeration effect and its more disordered structure. The values for the degree of graphitization (g factor) were 85.9%, 90.7%, 96.8%, and 92.2%, respectively (see Table 1). The highest graphitization was achieved by the P10% Fe sample, with 96.8%. Furthermore, it was found that the MW catalytic graphitization also assisted in reducing the interlayer spacing of the samples [40]. The interlayer distance (d002) values were very close for all samples, showing that the crystal structures changed from disordered to ordered. The d002 values for the P2% to P20% Fe samples were in the range of 0.3357-0.3366 nm, which suggests a turbostratic structure, since these values are larger than that of graphite (0.335 nm) [41]. At a heating temperature of 1300 °C, the P10% Fe sample had a d002 value of 0.3357 nm, which is close to that of graphitic carbon. Moreover, the highest graphitization value (g factor) was found for the P10% Fe sample (96.8%), which corresponded with the degradation of numerous aliphatic chains and functional groups [42]. Raman spectroscopy has broadly been applied to scrutinize carbon nanostructures because it is a nondestructive tool and is sensitive enough to determine molecular bonding and geometric structures [44]. The Raman spectra of the graphene and graphite samples were recorded over the 1000-3000 cm⁻¹ region and showed three prominent peaks, D, G, and 2D. The D peak, at 1350 cm⁻¹, was attributed to defects present in the samples, and its intensity also indicated the amount of disordered structure. The G peak (1580 cm⁻¹) corresponded to sp² hybridization, and its position and intensity are influential in determining the number of graphene layers [45]. The 2D peak (2700 cm⁻¹) arises from a double-resonance process involving two phonons with equal but opposite wave vectors and is the second most prominent graphite band. Ferrari and co-workers showed that changes in the electronic band structure of multilayer graphene are reflected in the 2D band position, intensity, and shape [46].
The Raman spectra of P2% to P20% are recorded in Figure 1b. The disordered and ordered structures are represented by the D and G bands in the Raman spectra, and the ID/IG ratio indicates the degree of graphitization. The ID/IG values were 0.89 for P2% Fe, 0.73 for P5% Fe, 0.35 for P10% Fe, and 0.862 for P20% Fe, with the P10% Fe sample having the lowest value (0.35) (Figure 2a). This showed that homogeneous and continuous graphene carbon nanostructures were formed in the P10% Fe sample [47]. However, the ID/IG value for the P20% Fe sample was higher than that of the P10% Fe sample due to catalytic aggregation and the higher amount of disordered structure present in the sample [48]. The adsorption-desorption isotherm measured the textural properties of the composite samples. The specific surface area of the P10% Fe sample was 315.45 m² g⁻¹, which was higher than the other percentages (see Table 1). It has been found that KOH-activated samples have higher surface areas than steam-activated single and dual catalyst loading samples (109.3 m² g⁻¹ and 175.61 m² g⁻¹) [36,43].
Another prominent 2D band was detected at 2700 cm⁻¹, which is frequently used to confirm the thickness of the graphene. The 2D peak is highly informative because the dual- or triple-resonance process produces a photoexcited electron-hole pair with a different energy. The layer numbers changed with the nature of the 2D peak, which indicated that the electronic band structure changed. Moreover, the 2D band positions shifted to higher wavenumbers as the percentage of catalyst was increased, which corresponded to the increase in the number of graphene layers [49] (see Table 2). Furthermore, the I2D/IG ratio determined the number of graphene layers.
The lowest ratio for the I 2D /I G was 0.44, which was observed for the P10% Fe sample, and it corresponded to the FLG structure (Figure 2b) [47]. Moreover, the I 2D /I D ratio signified the overall crystalline structures of all samples analysed, shown in Table 2. The P10% Fe sample obtained the highest I 2D /I D value of 1.24, which indicated a longer graphitic structure [50]. The full width half maximum (FWHM) of the 2D band measured the number of graphene layers, which is shown in Figure 2c [51]. The FWHM values for the different percentages of catalyst were 48 ± 0.15 cm −1 for P2% Fe, 64 ± 0.17 cm −1 for P5% Fe, 75 ± 0.26 cm −1 for P10% Fe, and 74 ± 0.24 cm −1 for P20% Fe, which suggest the formation of FLG, which is supported by the findings in other studies [47,52] (Table 2). The Raman mapping was applied to determine the thicknesses of graphene [53]. The results of the Raman intensity mapping for the P10% Fe sample on a quartz substrate are represented in Figures 3 and 4. The peak area locations for the three (3) bands were 1350 cm −1 , 1580 cm −1 , and 2700 cm −1 , which are displayed in red, green, and blue, respectively. The mapping indicated that the sheets were homogeneously spread on the quartz substrate (Figures 3 and 4). The brighter zones of the D band showed the sample's high intensity (Figure 3). In addition, the defects were larger in the perkier region, ascribed to edge defects. The degree of graphitization ratios (I D /I G ) for the points (1), (2), (3), (4), (5), and (6) were 0.15, 0.19, 0.28, 0.32, 0.34, and 0.37 respectively. The defect ratios for points (1)-(3) were 0.15-0.28. However, points (4)-(6) were 0.32-0.37, indicating that the lower height of the D band was correlated with the high quality of the FLG, and the other findings are supported. Furthermore, the D band arises from the red boundary. The histogram of I D /I G ratio of 600 data points was obtained using the Raman mapping. It is clear from the histogram graph that the I D /I G ratio of all samples remained the same value, and the low intensity of the D band represents the FLG obtained.
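To make the ratio analysis above concrete, the following is a minimal sketch (not part of the original study) of how I_D/I_G, I_2D/I_G, and the 2D FWHM can be extracted from a measured spectrum, assuming Lorentzian line shapes and illustrative peak windows; the actual fitting procedure and instrument software used in this work may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp):
    """Single Lorentzian peak; gamma is the full width at half maximum (FWHM)."""
    return amp * (0.5 * gamma) ** 2 / ((x - x0) ** 2 + (0.5 * gamma) ** 2)

def fit_peak(shift, intensity, window):
    """Fit one Raman band inside a wavenumber window; return (peak height, FWHM)."""
    mask = (shift > window[0]) & (shift < window[1])
    x, y = shift[mask], intensity[mask]
    popt, _ = curve_fit(lorentzian, x, y, p0=[x[np.argmax(y)], 40.0, y.max()])
    return popt[2], abs(popt[1])

def raman_summary(shift, intensity):
    """I_D/I_G, I_2D/I_G and 2D FWHM from windows around 1350, 1580 and 2700 cm^-1."""
    i_d, _ = fit_peak(shift, intensity, (1250, 1450))
    i_g, _ = fit_peak(shift, intensity, (1500, 1650))
    i_2d, w_2d = fit_peak(shift, intensity, (2600, 2800))
    return {"ID/IG": i_d / i_g, "I2D/IG": i_2d / i_g, "2D FWHM (cm^-1)": w_2d}

# Example with a synthetic spectrum (replace with measured data):
shift = np.linspace(1000.0, 3000.0, 2000)
intensity = (0.3 * lorentzian(shift, 1350, 40, 1.0)
             + lorentzian(shift, 1580, 30, 1.0)
             + 0.5 * lorentzian(shift, 2700, 70, 1.0)
             + 0.01 * np.random.default_rng(0).normal(size=shift.size))
print(raman_summary(shift, intensity))
```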
The Raman intensity maps at various points for the G and 2D bands are presented in Figure 4a,b. The G band and the second-order 2D band arise from the green and navy-blue boundaries, respectively, and both maps show brighter areas as well as defect zones. The I_2D/I_G intensity ratio obtained from the Raman mapping indicates the number of layers in the graphene sheets. The I_2D/I_G intensity ratios (Figure 4c) were 0.45 to 0.51 for points (1)-(4), whereas for points (5) and (6) the values were 0.68 and 0.77, respectively, which is remarkably high and indicates that FLG formed. The histogram results also correlated with the I_2D/I_G ratios shown in Figure 4 [54].

AFM and TEM Examination of the FLG Materials

Morphological changes of the P10% Fe sample were examined using AFM, and the results are displayed in Figure 5a. The thickness of the graphene was measured with the AFM technique by studying the top-view image and the cross-section of the composite material [55]. Figure 5b shows that the graphene sheets were well exfoliated and that the average sheet thickness was around 4.5 nm, indicating the presence of FLG sheets [56].

The structure and morphology were further investigated by TEM to confirm the FLG materials. The TEM micrographs of the P10% Fe sample showed FLG-like nanosheets at numerous magnifications on a lacey carbon grid (Figure 6a,b). The graphene plate was thin, and the plane with the wrinkles folded back and touched the edge because of the transfer method. A transition metal such as Fe can reduce the melting temperature of the Fe-carbon mixture owing to its d-electron configuration and ionization ability. The amorphous carbon melted over the catalyst at the supersaturation point of Fe-C, and the graphitic layer formed through a dissolution-precipitation mechanism, with the catalyst particles acting as nuclei [57,58]. In addition, the Fe-C eutectic point of 1148 °C was taken from the Fe-C phase diagram [59]. High-resolution HRTEM images of the P10% Fe sample in selected regions highlighted distorted nanosheets containing around 2-7 layers (Figure 6c,d). The HRTEM images also showed an interlayer distance of around 0.34 nm (Figure 6d), which matches the (002) plane of FLG and agrees with the XRD data mentioned above. Furthermore, the fast Fourier transform (FFT) image (inset of Figure 6d) showed a hexagonal spot configuration, confirming the six-fold symmetry of graphene and the crystalline nature of the material [60].
The KOH-activated sample achieved fewer graphene layers than the steam-activated sample [43].
Conclusions

Coal-based FLG was fabricated through potassium hydroxide modification combined with an MW graphitization technique. It was found that the catalyst loading, microwave temperature, and potassium hydroxide activation played significant roles in manufacturing the FLG composite materials. Synthesis of the P10% Fe-loaded sample at a heating temperature of 1300 °C created a unique morphology with 2-7 graphitic layers and lower defect levels, demonstrating that more regular and continuous thin sheets of graphene were formed. The catalyst loading percentage determined the structural change. The TEM analyses revealed the sheets of the synthesized graphene (P10% Fe), and the Raman mapping measurements showed that at 1300 °C the P10% Fe-loaded sample was homogeneously distributed. The average detected I_D/I_G ratio was around 0.35, and the highest I_2D/I_G values were 0.68-0.77, indicating FLG sheets. Moreover, the AFM measurements showed that the thickness of the FLG was around 4.5 nm. Few-layer graphene has several potential applications in fields such as energy storage (lithium-ion batteries and supercapacitors), targeted drug delivery in biomedicine, sensors, membranes, and electronics.
Conflicts of Interest:
The authors declare no conflict of interest.
Simulation-based inference methods for particle physics
Our predictions for particle physics processes are realized in a chain of complex simulators. They allow us to generate high-fidelity simulated data, but they are not well-suited for inference on the theory parameters with observed data. We explain why the likelihood function of high-dimensional LHC data cannot be explicitly evaluated, why this matters for data analysis, and reframe what the field has traditionally done to circumvent this problem. We then review new simulation-based inference methods that let us directly analyze high-dimensional data by combining machine learning techniques and information from the simulator. Initial studies indicate that these techniques have the potential to substantially improve the precision of LHC measurements. Finally, we discuss probabilistic programming, an emerging paradigm that lets us extend inference to the latent process of the simulator.
1. Particle physics measurements as a simulation-based inference problem
A fundamental problem for LHC measurements
Among the sciences, particle physics has the luxury of having a very well established theoretical basis. Quantum field theory provides a framework not only for the Standard Model, but also for theories of physics beyond the standard model (BSM). We almost take for granted the predictive power of our theories, but the way our field formulates searches for new physics in terms of hypothesis tests and confidence intervals is critically tied to the fact that we have predictive models to test in the first place.
Often we seem to equate the predictions of a theory with Feynman diagrams and the matrix element for a hard scattering process, which in turn can be used to predict a fully differential cross-section. Of course, that is not the full story, as one must include parton density functions and quarks and gluons give rise to a parton shower and subsequent hadronization process. Moreover, we observe electronic signatures tied to scintillation, ionization, etc. in our detectors, not the final-state particles directly. Therefore the predictive model for a theory must incorporate the response of the detector to the final state particles.
While all of these points are well known to an experimental particle physicist, it has not been customary to describe the full simulation chain explicitly as a probabilistic model for the data. Why is that? In part that is because we have no explicit closed-form equation to write down nor do we have a function that we can provide to Minuit [1] that describes the probability distribution for the raw data in terms of parameters that appear in the Lagrangian for a given theory. Nevertheless, we can produce synthetic data using Monte Carlo simulation tools like MadGraph [2], Sherpa [3], Pythia [4], Herwig [5], and GEANT4 [6].
The colloquial term or jargon for both the simulation tools and the synthetic data they produce is Monte Carlo. This term refers to methods that sample from probability distributions to compute an integral. Particle physics simulators use such methods to compute the cross section of a process by sampling a number of events following the probability distribution

p(x, z_d, z_s, z_p | θ) = p_x(x|z_d) p_d(z_d|z_s) p_s(z_s|z_p) p(z_p|θ) .   (1)

Here the vector z_p is the parton-level phase-space point of a simulated event (i. e. the parton four-momenta, helicities, and charges); the vector z_s summarizes the parton shower simulation, including the stable particles that emerge from it; z_d are the interactions in the detector. These three vectors collectively define the "Monte-Carlo truth record" of a simulated event and are the latent variables of the process: we cannot measure them, and in fact they are only well defined within a given simulator code. Finally, x is the vector of observables. While a real-life observation consists of tens of millions of sensor read-outs, one can consider the reconstruction of the event as part of the measurement process and take x as a vector of four-momenta and other properties of the reconstructed particles. In Tbl. 1 we provide a look-up table for these and other symbols that appear in this review and translate between particle physics and machine learning or statistics nomenclature.

Table 1. Symbols used in this review.
  p(x|θ): kinematic likelihood for a single event; implicit density (normalized fully differential cross section, Eq. (3))
  p_p(z_p|θ): parton-level distribution; tractable density
  p_s(z_s|z_p): parton-shower effects; implicit density
  p_d(z_d|z_s): detector effects; implicit density
  p_x(x|z_d): detector readout; implicit density
  r(x|θ): likelihood ratio function, see Eq. (4)
  r(x, z|θ): joint likelihood ratio, see Eq. (8); unbiased estimator of r(x|θ)
  t(x): score (locally optimal observable, Eq. (10))
  t(x, z|θ): joint score, see Eq. (9); unbiased estimator of the score
  θ̂: best fit for theory parameters; estimator for θ
  p̂(x|θ): parameterized estimator for the likelihood
  r̂(x|θ): parameterized estimator for the likelihood ratio
  ŝ(x|θ): parameterized classifier decision function
  t̂(x): estimator for the score
  p̂_tf(x|z_p): approximate shower and detector effects (transfer function)
There is an established chain of high-fidelity simulators that can sample events from the probability density in Eq. (1). However, statistical inference, quantifying the degree to which parameter values θ are in agreement with an observed set of events D = {x_i}_{i=1}^n, is surprisingly challenging. Why? The key quantity for both frequentist and Bayesian inference methods is the likelihood function p_full(D|θ), the probability density of an observed set of events D as a function of the parameters θ. The full likelihood function is given by

p_full(D|θ) = Pois(n | ε L σ(θ)) ∏_i p(x_i|θ) ,   (2)

where Pois(n | ε L σ(θ)) is the Poisson probability density for n observed events, with efficiency and acceptance factor ε, integrated luminosity L, and total cross section σ(θ), and where

p(x|θ) = ∫ dz_d ∫ dz_s ∫ dz_p  p_x(x|z_d) p_d(z_d|z_s) p_s(z_s|z_p) p(z_p|θ)   (3)

is the probability density for an individual event to have data x. This likelihood function involves integrals over the entire parton-level phase space, all possible shower histories, and all possible detector interactions compatible with the measurement x. The integral over this enormous space clearly cannot be computed in practice, so we cannot directly evaluate the likelihood of an observed event under different parameter values θ. This means that we cannot directly find the maximum-likelihood estimators that best fit a given observation, construct confidence limits based on a likelihood ratio test statistic, or compute the Bayesian posterior p(θ|x)! The task of performing statistical inference when the data-generating process does not have a tractable likelihood is known as simulation-based or likelihood-free inference. This case is not at all unique to particle physics. The formulation of this problem in a common, abstract language has led to statisticians, computer scientists, and domain scientists from various fields developing powerful methods for simulation-based inference together. While this review focuses on the particle physics case, the methods apply equally to a range of problems, for instance in neuroscience, cosmology, or epidemiology.
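To see concretely why sampling from Eq. (1) is easy while evaluating Eq. (3) is not, the following toy sketch (with made-up densities standing in for the parton-level, shower, and detector steps) mimics such a simulator chain; none of it corresponds to a real physics generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n):
    """Toy stand-in for the chain p(z_p|theta) p_s(z_s|z_p) p_x(x|z_s):
    trivial to sample, but p(x|theta) would require marginalizing the latents."""
    z_p = rng.exponential(scale=1.0 / (1.0 + theta ** 2), size=n)   # "parton level"
    z_s = z_p + rng.gamma(shape=2.0, scale=0.3, size=n)             # "shower"
    x = z_s + rng.normal(0.0, 0.2, size=n)                          # "detector smearing"
    return x

# Drawing synthetic events is trivial ...
events = simulate(theta=0.5, n=10_000)
# ... but there is no closed-form p(x|theta): evaluating it would mean integrating
# over every (z_p, z_s) configuration compatible with each observed x.
```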
Solving the problem with summary statistics
If the intractability of the likelihood function is such a problem, how have high-energy physicists successfully analyzed particle collisions for decades?
The reason that this problem is rarely acknowledged explicitly is that particle physicists have a track record of developing a good intuition about processes they study and finding powerful summary statistics for them. Summary statistics are individual variables that condense a high-dimensional observation. Typical examples are the reconstructed mass of a decaying unstable particle, decay angles between decay products, or other kinematic variables [8,9]. An ideal summary statistic vector v captures all of the information in the observed event x that is relevant to the parameter θ, while being of much lower dimensionality. Given one or two summary statistics, we can easily compute the likelihood function p(v|θ) with histograms, kernel density estimation, or other density estimation techniques and then find the maximum-likelihood estimator in the parameter space and construct confidence limits based on the (profile) likelihood ratio test statistic [10,11]. This approach has been the workhorse of statistical analysis in collider physics for decades.
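A minimal sketch of this histogram workflow on a toy one-dimensional summary statistic; the simulator, binning, and θ grid below are illustrative placeholders, not those of any real analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(0.0, 5.0, 21)

def simulate_summary(theta, n=200_000):
    """Toy summary statistic whose distribution depends on theta."""
    return rng.normal(loc=2.0 + theta, scale=1.0, size=n)

def histogram_log_likelihood(observed, theta):
    """Per-event density of the summary statistic estimated from simulated histograms."""
    counts, _ = np.histogram(simulate_summary(theta), bins=bins, density=True)
    idx = np.clip(np.digitize(observed, bins) - 1, 0, len(counts) - 1)
    return np.sum(np.log(np.clip(counts[idx], 1e-12, None)))

observed = rng.normal(loc=2.3, scale=1.0, size=500)            # pretend data
theta_grid = np.linspace(-1.0, 1.0, 41)
logL = np.array([histogram_log_likelihood(observed, t) for t in theta_grid])
theta_hat = theta_grid[np.argmax(logL)]                         # maximum-likelihood estimate
print(theta_hat)
```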
Note that most uses of machine learning in experimental particle physics take place within this approach. Experimental particle physicists have embraced the use of multivariate models (commonly boosted decision trees or fully connected neural networks) in the event selection. The statistical analysis of the events that pass this selection is then still based on histograms of kinematics-based summary statistics or the neural network output itself.
The reduction of data to summary statistics also enables Approximate Bayesian Computation (ABC) [12,13], a simulation-based inference method that is gaining popularity in cosmology and is widely used in many scientific fields outside of physics. It directly targets Bayesian inference, using repeated runs of the simulator together with an accept-reject criterion to draw parameter samples that approximately follow the posterior.
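A minimal rejection-ABC sketch with a toy simulator and a uniform prior; the tolerance and prior range are illustrative, and practical ABC variants are considerably more sophisticated than this accept-reject loop.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_summary(theta, n=1_000):
    """Toy simulator returning the mean of n simulated events as a summary statistic."""
    return rng.normal(loc=theta, scale=1.0, size=n).mean()

def rejection_abc(v_observed, n_draws=20_000, tolerance=0.05):
    """Keep prior draws whose simulated summary lands within `tolerance` of the data."""
    posterior_samples = []
    for _ in range(n_draws):
        theta = rng.uniform(-2.0, 2.0)                  # prior draw
        if abs(simulate_summary(theta) - v_observed) < tolerance:
            posterior_samples.append(theta)
    return np.array(posterior_samples)

samples = rejection_abc(v_observed=0.4)
print(f"approximate posterior: mean={samples.mean():.3f}, std={samples.std():.3f}")
```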
Both the histogram method popular in particle physics and ABC suffer from the curse of dimensionality: the number of simulations required scales exponentially with the dimensionality of x or v. This is why they only work with a low-dimensional statistic v and cannot be effectively applied to high-dimensional data x. However, finding suitable summary statistics is a difficult and task-dependent problem and almost any choice of summary statistics discards some information. As a result, data analysis based on summary statistics typically leads to reduced sensitivity and statistical power.
The frontier of simulation-based inference
In the next sections, we will describe modern simulation-based inference methods that allow us to analyze higher-dimensional data, improve the quality of inference, and improve the sample efficiency. Three developments are the key drivers behind these improvements [14]: (1) The revolution in machine learning provides us with powerful surrogate models for the likelihood, likelihood ratio, or posterior function, or for optimal summary statistics. We can thus tap into the ability of modern machine learning methods to learn useful representations directly from high-dimensional data. (2) Active learning methods iteratively use past results to steer the next simulations, leading to a better sample efficiency. (3) Integrating inference capabilities with the simulation code and augmenting the training data with additional information that can be extracted from the simulator can substantially improve sample efficiency and quality of inference.
Against the backdrop of these three broad trends, many different inference algorithms have been proposed in recent years, see Ref. [14] for an overview. Here we focus on a few methods that are particularly relevant for particle physics. In Sec. 2 we discuss techniques that aim to estimate the likelihood function or the likelihood ratio function, ranging from the Matrix-Element Method to machine learning-based methods to techniques that bring together matrix-element information and machine learning. Section 3 covers methods that aim to define powerful summary statistics, from parton-level Optimal Observables to neural network surrogates for the score function. We summarize and compare the main methods we discuss in Tbl. 2. In the following sections we will briefly discuss diagnostic tools and systematic uncertainties as well as software implementations of these ideas. In Sec. 5 we focus more on the latent process of the simulators and describe the paradigm of probabilistic programming. We discuss implementations of these methods in the HEP software stack in Sec. 6, before concluding with a summary in Sec. 7.
Inference with surrogates
Table 2. Simulation-based inference methods for particle physics. We classify methods by the key quantity that is estimated in the different approaches, by whether they rely on a manual choice of summary statistics, whether they are based on a transfer-function approximation ("TF"), whether their optimality depends on a local approximation ("local"), whether they use any other functional approximations such as a histogram binning or a neural network ("NN"), whether they leverage matrix-element information ("|M|^2"), and by the computational evaluation cost. Derived from a table in Ref. [15].

The first class of methods that we discuss tackles the problem head-on and constructs an estimator for the likelihood function p(x|θ) or for the closely related likelihood ratio function

r(x|θ) = p(x|θ) / p_ref(x) ,   (4)

where the denominator is some reference distribution, for instance using a reference value of the parameter points such as the Standard Model, a model average of multiple parameter points, or uniform phase space.
Once we have such an estimator, which we will denote p̂(x|θ) or r̂(x|θ), we can immediately use it in the established statistical pipeline: we can for instance find the maximum-likelihood estimator as

θ̂_MLE = arg max_θ Pois(n | L σ(θ)) ∏_i r̂(x_i|θ)   (5)

and similarly construct exclusion limits based on asymptotic properties of the (profile) likelihood ratio [16]. Additionally, we can use the resulting likelihood ratio test statistic together with toy Monte Carlo to guarantee coverage, as discussed in Sec. 4.
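A sketch of how Eq. (5) is used once a ratio estimator is available; here an exactly known toy ratio stands in for the learned r̂(x|θ), and the Poisson term is omitted for brevity. The 3.84 threshold corresponds to the asymptotic 95% CL for one parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

def r_hat(x, theta):
    """Stand-in for a learned ratio: unit-width Gaussian with mean theta vs. reference mean 0."""
    return np.exp(-0.5 * (x - theta) ** 2 + 0.5 * x ** 2)

observed = rng.normal(loc=0.3, scale=1.0, size=1_000)
theta_grid = np.linspace(-1.0, 1.0, 201)

# Eq. (5) without the Poisson factor: maximize the product of per-event ratios.
log_ratio = np.array([np.sum(np.log(r_hat(observed, t))) for t in theta_grid])
theta_mle = theta_grid[np.argmax(log_ratio)]

# Likelihood-ratio-style test statistic used for confidence intervals.
q = 2.0 * (log_ratio.max() - log_ratio)           # -2 ln [L(theta)/L(theta_hat)]
allowed = theta_grid[q < 3.84]                    # approximate 95% CL interval
print(theta_mle, allowed.min(), allowed.max())
```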
An approximation: the Matrix-Element Method
The Matrix-Element Method (MEM) [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32] approximates the likelihood in Eq. (3) by replacing the precise model of the effects of shower and detector with a simple, tractable transfer function p̂_tf(x|z_p). This simplifies the marginal distribution, which would involve integrating over a large number of microscopic interactions, to a convenient probability density such as a Gaussian. The MEM likelihood is given, schematically, by

p̂_MEM(x|θ) ∝ ∫ dz_p |M(z_p|θ)|^2 p̂_tf(x|z_p) ,   (6)

where |M(z_p|θ)|^2 is the squared matrix element evaluated at a phase-space point z_p and parameters θ, and for simplicity we have left out parton densities as well as phase-space and efficiency factors. Since the integrand is tractable and the integral is over a much lower-dimensional space than the one in Eq. (3), it is feasible, though expensive, to compute this approximate likelihood function. In some processes, particularly those involving only leptons and photons, the MEM can give a reliable estimate of the true likelihood. However, jets are less well modeled by transfer functions, and additional jet radiation is difficult to describe in this approach. Finally, the MEM still requires a computationally expensive numerical integration for every event that is evaluated, which can be prohibitive.
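A toy illustration of the MEM integral in Eq. (6), with a made-up one-dimensional "matrix element" and a Gaussian transfer function; normalization, parton densities, and efficiencies are ignored, so this is only a sketch of the numerical integration step.

```python
import numpy as np

def matrix_element_sq(z_p, theta):
    """Toy squared matrix element, peaked at a theta-dependent 'resonance mass'."""
    return 1.0 / ((z_p - theta) ** 2 + 0.1 ** 2)

def transfer_function(x, z_p, sigma=0.15):
    """Gaussian smearing standing in for shower and detector response."""
    return np.exp(-0.5 * ((x - z_p) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def mem_likelihood(x, theta, n_points=200_000, z_range=(0.0, 5.0)):
    """Monte Carlo estimate of  integral dz_p |M(z_p|theta)|^2 p_tf(x|z_p)  (up to normalization)."""
    rng = np.random.default_rng(4)
    z_p = rng.uniform(*z_range, size=n_points)
    volume = z_range[1] - z_range[0]
    return volume * np.mean(matrix_element_sq(z_p, theta) * transfer_function(x, z_p))

print(mem_likelihood(x=2.1, theta=2.0))
```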
Learning the likelihood
Rather than computing the integral in the likelihood for every event to be evaluated, we can fit a surrogate model to data from the simulator and then use that for inference. Such a surrogate model needs to be flexible enough to accurately represent a complicated and multimodal probability distribution, it has to be trainable from limited training data, and its likelihood function needs to be efficiently computable. Kernel density estimation has been used in this context [37], but it was limited to roughly five-dimensional data. Recently, several machine learning models have been developed for this task, which are effective for estimating distributions of high-dimensional data. In particular, neural density estimators such as normalizing flows [38,39] are flexible probabilistic models with a tractable likelihood function.
This lets us solve the problem of simulation-based inference in three phases [40]: (1) We run the usual simulator chain a number of times with different input parameters θ and save θ together with the simulated events x ∼ p(x|θ). (2) Next, a neural density estimator is trained to learn the conditional probability density p(x|θ). We use a single model for the whole parameter space (as opposed to individual models for a number of points along a grid in the parameter space); the parameter point θ to be evaluated is an additional input to the model. Such a parameterized model [41,42] can leverage the smooth dependence on the parameter space: the probability density at each parameter point can "borrow" information from nearby points. (3) After training, we can evaluate this model for arbitrary observations x and parameter points θ and efficiently get an estimator for the likelihood function p̂(x|θ). We can then use this to define best-fit points and exclusion limits with the usual statistical tools.
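A heavily simplified PyTorch sketch of these three phases; the "neural density estimator" here is just a conditional Gaussian rather than one of the normalizing flows discussed above, and the toy simulator and network sizes are placeholders.

```python
import torch
from torch import nn

class ConditionalGaussian(nn.Module):
    """Drastically simplified neural density estimator: p_hat(x|theta) is a Gaussian
    whose mean and log-width are produced by a small network taking theta as input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))

    def log_prob(self, x, theta):
        mean, log_std = self.net(theta).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp()).log_prob(x)

# Phase (1): simulate pairs (theta, x); a toy stand-in for the full simulator chain.
theta = torch.rand(20_000, 1) * 2 - 1
x = theta + 0.5 * torch.randn_like(theta)

# Phase (2): fit the surrogate by maximizing the log-likelihood of the simulated events.
model = ConditionalGaussian()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = -model.log_prob(x, theta).mean()
    loss.backward()
    opt.step()

# Phase (3): evaluate p_hat(x|theta) for arbitrary events and parameter points.
print(model.log_prob(torch.tensor([[0.2]]), torch.tensor([[0.1]])).item())
```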
Two aspects of this approach are noteworthy. First, we can use any state-of-the-art simulator in this approach, including shower and detector effects. Unlike in the MEM, there is no need for any approximations on the underlying physics. Second, the approach is amortized : after an upfront simulation and training phase, we can evaluate the approximate likelihood function very efficiently for a large number of events and parameter values.
Neural density estimators like normalizing flows have other useful properties. They are generative models, i. e. one can sample from the probability distributions they have learned. This is not only a convenient cross check, but can also be used for event generation, to unfold reconstruction-level variables to the parton level [43], and for anomaly detection [44].
Learning the likelihood ratio
Training a surrogate for the likelihood function actually solves a harder problem than necessary for inference. To find the maximum-likelihood parameter point and to construct exclusion limits we do not actually need to know the likelihood function itself: the likelihood ratio r(x|θ) defined in Eq. (4) is in fact just as useful! As it turns out, training a neural network to learn the likelihood ratio is often easier than learning the likelihood function.
The key idea is known as the likelihood ratio trick: a binary classifier trained to discriminate samples x ∼ p(x|θ) from samples x ∼ p_ref(x) will eventually converge to the output ŝ(x|θ) → p_ref(x) / [p(x|θ) + p_ref(x)], which is a monotonic function of the likelihood ratio r(x|θ). In other words, we can transform the output of a classifier ŝ(x|θ) into an estimator for the likelihood ratio function as

r̂(x|θ) = [1 − ŝ(x|θ)] / ŝ(x|θ) .   (7)
We can use this in an inference algorithm similar to the one discussed in the previous section [41,[45][46][47][48][49][50][51][52][53][54]: (1) We again start by running the simulator chain, generating one set of events from a reference distribution p ref (x) (e. g. the SM) and a second set of events from various parameter points θ. (2) Next, a neural classifier is trained to discriminate between these two sets, using the binary cross-entropy as a loss function. Like before, the classifier is parameterized: the parameters θ are used as explicit inputs into the classifier. (3) After training, we can transform the output of the classifier into an estimator for the likelihood ratio function with Eq. (7) and an optional calibration procedure. This surrogate model can then be used to find the best-fit point and exclusion contours using established statistical tools.
This method is known as CARL [41,48,55]. It again supports arbitrary simulators without requiring approximations on the underlying physics and is amortized, allowing for an efficient evaluation after an upfront simulation and training cost. Compared to learning the likelihood function with a neural density estimator, the CARL approach can be more sample efficient (saving computation time). While a surrogate model for the likelihood ratio does not allow us to generate samples, it can be used for reweighting. For the simulation-based inference problem, this can be useful as a diagnostic tool. In other contexts, this ability can be used to reweight events [47,54,55], tune shower and detector-simulation parameters to data [54], for unfolding [56], and for anomaly detection [57].
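A minimal sketch of the likelihood ratio trick for a single parameter point (a full CARL-style model would additionally take θ as a classifier input); the toy densities are Gaussians so that the learned ratio can be compared to the exact one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
theta = 0.8

# Samples from p(x|theta) (label 0) and from the reference p_ref(x) (label 1).
x_theta = rng.normal(loc=theta, scale=1.0, size=100_000)
x_ref = rng.normal(loc=0.0, scale=1.0, size=100_000)
X = np.concatenate([x_theta, x_ref]).reshape(-1, 1)
y = np.concatenate([np.zeros_like(x_theta), np.ones_like(x_ref)])

# Binary classifier trained with the cross-entropy loss.
clf = LogisticRegression().fit(X, y)

def r_hat(x):
    """Eq. (7): convert s_hat ~ p_ref/(p + p_ref) into a likelihood ratio estimate."""
    s = clf.predict_proba(np.atleast_2d(x).T)[:, 1]
    return (1.0 - s) / s

x_test = np.array([-1.0, 0.0, 1.0])
exact = np.exp(-0.5 * (x_test - theta) ** 2 + 0.5 * x_test ** 2)   # true p(x|theta)/p_ref(x)
print(np.round(r_hat(x_test), 3), np.round(exact, 3))
```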
Integration and augmentation
Both inference techniques described above-training a neural density estimator to learn the likelihood function and training a classifier to learn the likelihood ratio-treat the simulator chain as a black box that takes parameters θ as input and outputs samples x ∼ p(x|θ). In reality, though, we know more about the particle physics processes. They consist of the separate pieces of parton-level generator, parton shower, and detector simulation, as given by Eq. (3); typically only the parton-level step explicitly depends on the theory parameters of interest θ.
We can leverage this understanding of the simulated process to extract more information from the simulator and use it to augment the training data for the likelihood or likelihood ratio model. In particular, we can access the latent variables (or Monte-Carlo truth record) z = (z_p, z_s, z_d), while tools like MadGraph let us compute matrix elements for arbitrary theory parameters. For each simulated event we can thus compute two useful quantities: the joint likelihood ratio [7,58,60]

r(x, z|θ) = p(x, z|θ) / p_ref(x, z) = [ |M|^2(z_p|θ) / σ(θ) ] / [ |M|^2_ref(z_p) / σ_ref ]   (8)

and the joint score

t(x, z|θ) = ∇_θ log p(x, z|θ) = ∇_θ log [ |M|^2(z_p|θ) / σ(θ) ] .   (9)

Here |M|^2(z_p|θ) and |M|^2_ref(z_p) are the squared matrix elements for the parton-level phase-space point z_p for theory parameters θ and under the reference distribution, respectively, while σ(θ) and σ_ref are the total cross sections. The joint likelihood ratio and joint score quantify how the probability of one simulated event, fixing all of the latent variables in the simulation chain, changes if we change the theory parameters θ.
How are these two quantities useful, especially given that they depend on latent variables z that are only meaningful for simulated events, but not for real measurements? It turns out that the joint likelihood ratio r(x, z|θ) is an unbiased estimator of the likelihood ratio r(x|θ), and the joint score provides unbiased gradient information. (The extraction of the joint likelihood ratio and score is in fact more general and can be realized for many simulators [58,59]; the particular structure of particle physics processes, however, makes it easy to compute these quantities, which in this case are closely linked to the squared matrix element.) This means that we can augment our training data with these numbers and use them as labels in a supervised training setup.
Simulation
Machine Learning Inference x z < l a t e x i t s h a 1 _ b a s e 6 4 = " H j Z 6 R x R D d Z u 1 3 n j 0 L y k k C n s r x + N 9 v v P m j H / / k p z 9 7 6 + c 7 b 9 / 4 x S / f e f e 9 X 7 1 I 4 0 y 4 7 L k b B 7 H 4 3 q E p C 3 j E n k s u A / Z 9 I h g N n Y B 9 5 8 w / x / 7 v L p h I e R w d y 2 X C T k M 6 i / i U u 1 R C 0 / l 7 b / / N d t i M R 7 n k 8 8 u E u z I T r D g 5 3 S H E x p Z U L g O W u 3 E U M R c F i q O T 1 I + F Z B H 5 9 M h M p C F 9 7 s 4 N 4 g n 6 6 s g J q D u / + W B s k G H E U X p P S x n L 0 z q 5 p C z 7 z 5 V 5 D w B G b C P z s a Z M u T C C G 6 n O D V 2 e a J 8 X 5 8 Z l e Z n n I U 5 y a l j B r C n 9 5 p Q x s V + n L Q n 2 f 2 R 6 d z R h k Q n h o w K B G g X B 7 d l 6 S 4 Z M O j e e w C T Z m h F 2 o Z Z C z F 6 2 w X X X N 2 3 3 h q u 9 x u 2 + 2 6 v u q 3 S d X f c f t v q m j u + C 7 u Z w 3 X W f 5 s J 2 t k r I 7 a U t u u l a S 4 p n I k W d t K P X b a x y z 5 s 0 3 T A d n s E b U i h K g i 6 D D H 6 h l i d x e r W u 0 / m s I P e s e y E 7 5 3 t F 3 0 a + 8 Q n V f F J p 7 h S n / S q T 6 6 j v k 9 8 U h X v V q / E e / y V k H t k J a x c d 4 u 8 Y C 6 W S K p k g h U s e O v 0 I l / F F 8 z V t Y j + s T 9 R w M X d 4 s Q 9 J V i S 2 a o Y w 9 e Q 6 6 5 m I Y R O U T T 7 V 9 H A d X 8 r F 4 5 Q U U 2 u Q 6 W u k + 3 W l Y y i m 1 J c k 7 L l N n X e 6 j f y Q y D U f B 9 e a a H y n + J 7 u J U P L / v a P L g 7 w M t 9 Z e 4 D v D 3 E y 8 P t i p T Z W z 3 a 1 a U r T 9 y 7 4 c g v y v P S F 8 w N o K j C 5 m / K V p h L q H j g f 0 / v M f Q e 9 / Z 6 n M 6 K X F 1 7 E E 8 Z F o N P W V / / k 5 h D v a S u P Q g 3 h s o d L z 3 9 c P I o c r z 0 9 L M F l J t S v X 0 s 3 z s 4 T v 5 l 0 Q f / 7 G m B b y V 6 e n 2 3 y P 2 R b b i j b s R d f G k 4 C y l s r / B t G 3 i 3 D c i j F R D u e n Q K F m T o w q + f 9 y G C e A a n b w 6 2 r e / 6 l D 5 + 9 m W R 4 6 U P I E X I K F g l Q x V C x 3 x + q X 5 K q P y R U z 4 e P T h w w 2 L V v v p h K 7 d q j f r 3 r S b Y i Y W n 0 e o P A y r t i 9 T n U w n N i k a 3 B 2 l F q b X B l + 1 M b 4 m o o 9 2 F P 3 G 0 W 2 H I M 1 h H l Y 6 1 + a 1 G / S a 6 O i 7 s a I z r / N 1 d s / k j X P v m h T U y x y P z 2 / H u o 0 c D / X l r 8 P 7 g N 4 M 7 A 3 P w Y P B o 8 L v B k 8 H z g f v 2 / 2 7 c v H H 3 x k e 3 / 3 r 7 7 7 f / c f u f G v r m G 6 X M r w e 1 z + 1 / / R + 6 g F M 9 < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 U Z a 7 t G H E N D K 6 m M n q 3 w 8 y m W 1 b B 2 m 2 r D r a R l p 8 4 L P J I w K b y 2 B y V S 7 j b N C 1 m t s S s 0 W n N i N 9 C W S K p Z J V 1 V 9 k e T 9 W S G 6 g V B 4 + w 4 g j k h 6 F 1 q B P W B 7 b C n o 8 s F B P z F J 7 g Z J e 6 + Z 5 Z F H U Y 3 C e Q L l Q k F j n s n g K y c g L G O 1 C V z G 2 H z 9 I 5 T 2 p t U c w j 2 J 9 k g w n i k 1 / g X G i N P k x C 7 k u Z f H z v n u o a x m J 2 D 6 L 5 H h i h D Z I S B v 0 9 v / g Y z W q w 4 Q 1 b W Q 9 u m E E K p C K 3 0 + m U h j x Y 2 i l k u 0 R i y C u u Z q L S l B V H Y f a F V U n q a n h Y I L 3 0 R Z j z p g 0 e n 0 4 3 3 V 6 z m 8 2 K P G d D 2 5 g N i T q W 7 s E P j a + p Y o 1 s i Z V P 7 7 i O 3 3 1 7 q a U o p 4 n 4 w 5 d + W A 0 P D i E L F 7 i U M 6 + S H w a y T i E w i o L W G 7 i 6 b g q t A J D a k i r u n T l i X s 3 H P l F e V 7 6 g r k B F F X Y / E 3 Z C n M J F Q / 8 7 + k 9 g d 6 T 3 l 6 P 0 1 m R q 2 s P 4 i n D Y v A p 6 + t / E n O o l 9 S 1 B + H G U L n j p a c f T h 5 F j p e e f r a A c l O q t 4 / l e w f H y b 8 s + u C f P S 3 w r U R P r + 8 W u T + 0 D X f Y j b i L L w 1 n I Y X t F b 5 t A + + 2 A X m 0 A s J d j 0 7 B g g x d + P X z 
P k Q Q z + D 0 z c G 2 9 V 2 f 0 s f P v i x y v P Q B p A g Z B a t k q E L o h M 8 v 1 U 8 J l T 9 y y k f D B 4 d u W K z a V z 9 s 5 V a t U f + + 1 Q Q 7 s f A 0 W v 1 h Q K V 9 k f p 8 K q F Z 0 e j 2 I K 0 o t T b 4 s p 3 p L R F 1 t L v w J 4 5 2 K w x 5 B u u o 0 r E 2 v 9 W o 3 0 R X x 4 U d j X G d v 7 t n N n + E a 9 + 8 s I b m a G h + O 9 p 7 9 G h H f 9 7 e e X / n N z t 3 d s y d B z u P d n 6 3 8 2 T n + Y 7 7 z v 9 u 3 L x x 9 8 Z H t / 9 6 + + + 3 / 3 H 7 n x r 6 5 h u l z K 9 3 a p / b / / o / y + d T h w = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 U Z a 7 t G H E N D K 6 m M n q 3 w 8 y m W 1 b B 2 m 2 r D r a R l p 8 4 L P J I w K b y 2 B y V S 7 j b N C 1 m t s S s 0 W n N i N 9 C W S K p Z J V 1 V 9 k e T 9 W S G 6 g V B 4 + w 4 g j k h 6 F 1 q B P W B 7 b C n o 8 s F B P z F J 7 g Z J e 6 + Z 5 Z F H U Y 3 C e Q L l Q k F j n s n g K y c g L G O 1 C V z G 2 H z 9 I 5 T 2 p t U c w j 2 J 9 k g w n i k 1 / g X G i N P k x C 7 k u Z f H z v n u o a x m J 2 D 6 L 5 H h i h D Z I S B v 0 9 v / g Y z W q w 4 Q 1 b W Q 9 u m E E K p C K 3 0 + m U h j x Y 2 i l k u 0 R i y C u u Z q L S l B V H Y f a F V U n q a n h Y I L 3 0 R Z j z p g 0 e n 0 4 3 3 V 6 z m 8 2 K P G d D 2 5 g N i 7 8 0 + j h k j p x D H 2 v 3 s Z c Z v 6 A B j g 7 G w 0 P 2 U h n 6 R 4 i P K C Y h X T q s L r B k C S B h d c H G h s a g d 5 o p E g p F 1 2 e u W h Y t Z 0 I M e f G 0 n 0 M n a K C g q W w J w 9 S z f l H l b P V S A b b D 0 u X P V L D X W V 6 + f J l R g N r q m + i v o o V Z g 1 a o H t g G t w Z q J K h / 4 i 9 T 7 q a r G a g q t A J D a k i r u n T l i X s 3 H P l F e V 7 6 g r k B F F X Y / E 3 Z C n M J F Q / 8 7 + k 9 g d 6 T 3 l 6 P 0 1 m R q 2 s P 4 i n D Y v A p 6 + t / E n O o l 9 S 1 B + H G U L n j p a c f T h 5 F j p e e f r a A c l O q t 4 / l e w f H y b 8 s + u C f P S 3 w r U R P r + 8 W u T + 0 D X f Y j b i L L w 1 n I Y X t F b 5 t A + + 2 A X m 0 A s J d j 0 7 B g g x d + P X z P k Q Q z + D 0 z c G 2 9 V 2 f 0 s f P v i x y v P Q B p A g Z B a t k q E L o h M 8 v 1 U 8 J l T 9 y y k f D B 4 d u W K z a V z 9 s 5 V a t U f + + 1 Q Q 7 s f A 0 W v 1 h Q K V 9 k f p 8 K q F Z 0 e j 2 I K 0 o t T b 4 s p 3 p L R F 1 t L v w J 4 5 2 K w x 5 B u u o 0 r E 2 v 9 W o 3 0 R X x 4 U d j X G d v 7 t n N n + E a 9 + 8 s I b m a G h + O 9 p 7 9 G h H f 9 7 e e X / n N z t 3 d s y d B z u P d n 6 3 8 2 T n + Y 7 7 z v 9 u 3 L x x 9 8 Z H t / 9 6 + + + 3 / 3 H 7 n x r 6 5 h u l z K 9 3 a p / b / / o / y + d T h w = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " K G K Y m y K W j U V u p N z Z Y 8 D V U S h r v q U = " > A A A o E H i c p V r b c t z G E a W d m 0 M r i Z 0 8 + m U U W t b F 2 N U C X E q U X X Q 5 v p S T K i t S J E p 2 Q p C s A T C 7 m F r c N B h Q u 0 S Q j 0 j l Y / K W y m v + I N + S l 3 T P A L u 4 L l n y q o Q F Z k 6 f 7 u n p 6 e n B 0 k k C n s r J 5 L 9 v v f 2 j H / / k p z 9 7 5 + e 7 7 9 7 4 x S 9 / 9 d 7 7 v 3 6 Z x p l w 2 Q s 3 D m L x v U N T F v C I v Z B c B u z 7 R D A a O g H 7 z l l 8 i f 3 f X T C R 8 j g 6 l q u E n Y Z 0 H v E Z d 6 m E p v P 3 3 / 2 H 7 b A 5 j 3 L J F 5 c J d 2 U m W H F y u k u I j S 2 p X A U s d + M o Y i 4 K F E c n q R 8 L y S L y 2 Z G Z S E P 6 3 F 0 Y x B P 0 9 Z E T U H d x 8 + H E I K P 8 s 5 P U p Q E 7 M k 8 L g 0 S x x 4 g H g 6 G R y 4 7 s K A r o C m x i S U v P L I v 6 t c A I V q R S R V O f J F R K J q K j P I 7 I f i J J P J s R K 5 H F d k t a 6 i K W C a U M Z Q y X C z d g 5 Y B m P A i O X v t c M i P k E Q + z k K T 8 U t m O g 8 H 7 F t m S r O n 0 T d 
0 U g 2 w I W 3 L S Z 5 I O y H p U L E Z O k L F S f v 1 8 0 5 z c 7 C P z u e e B 0 7 Z Y 0 v Y 4 j 2 j w h p Y v S U A d F h T k i J x I t p T a P M G 8 f q N a 4 G F r + l j n g r G o 1 3 N 9 a H T R 6 S 6 g b x E F Q r l Z D M v C 9 Y k d 0 Z C R + 8 R e E R 6 R f G K M x 2 P D L B C C M 3 t S n 4 9 T c k c 9 j p T Q X U I l u T M x L D c k I x B 3 w 7 s k L z 5 V a p Z X q r A 2 K p Y b + m W T e m S 2 u B W 5 E y 9 Z i t I 4 L S c Y / 8 w z y u X Q j B N g H E 3 G D 4 z J e H q X j E Z A 2 X 6 w x g / 0 w 6 j 7 h M B P + / V U x K P 9 S q L n a a 1 o 1 P e I 2 H J A v n m l u 6 Y b d z W C G g z x z b r T a t n E I O b 4 g K A H y c a F p F R p / S C V V l 2 l d a + u t E e n V h n E c 7 z p h f W g r 7 P n k w 8 b C X W 4 R H p k b 6 W V T r L 7 d a 9 n 9 c T d P l C l z Q 6 r s A B c T 2 6 c y F 8 W H P U m k x t n I H + T j O t N a g a J E t h T Z 3 p j M q l P 9 M M P K D W r N J j W b D f P a K M D P 3 9 u b j C f q Q 7 o 3 Z n m z t 1 N + n p 6 / / 4 F r e 7 G b h S y S b k D T 9 M S c J P I 0 p 0 J y q G q L X T s D q n D W T P W i e r Q E r k + v X V m F R F R V g y L Q W e P C E t S H s T R F r k L K i p Q j / y y J V m u 6 G U f 9 r I f e 9 m H X f V j V 3 3 Y i 3 7 s R R 9 W 9 m N l r 7 1 M x P 3 w i Z 7 I x 0 z 6 s d e a Q 5 u 7 I a W P n 3 9 d 5 H g Z A k g R M g p W y V C F 0 D F f X K q f E m p / 5 J R P x g 8 P 3 L C o 2 q s f t n K r 0 a h / 3 2 q D n V h 4 G q 3 + M K D W v k x 9 P p P Q r G h 0 e 5 D W l F o b f N n O 9 J a I O r p d + B N H t x W G P I d 1 V O t Y m 9 9 p 1 G + i 6 + P C j t a 4 z t / b M 9 s / w n V v X l p j c z I 2 / z T Z + / z z 8 g e 6 d 3 Y + 2 P n t z p 0 d c + f h z u c 7 v 9 9 5 u v N i x 3 3 3 f z d u 3 r h 3 4 + P b f 7 / 9 z 9 v / u v 1 v D X 3 7 r V L m N z u N z + 3 / / B + t 8 V D E < / l a t e x i t > r(x, z|✓) < l a t e x i t s h a 1 _ b a s e 6 4 = " n 4 E C w T Y d C f u L w J X C Z l 6 + y b i K m 1 E = " > A A A c o X i c p V l t b 9 y 4 E d 6 7 v l 3 d t 1 z 7 p c D 5 A 6 + G 0 + S w 3 u y u 7 c S 5 I k C Q u + B 6 Q N K 4 j p N L a 9 k G J Y 0 k Y i V R J q n N 7 u r U X 9 B f 0 6 / t H y n Q H 9 M h p b X 1 u j H a N a y l O M 8 z H A 6 H o 6 H W T k I m 1 X j 8 7 4 8 + / s E P f / T j n 3 z y 0 6 2 f / f w X v / z V n U 9 / / V b y V D j w x u E h F + 9 s K i F k M b x R T I X w L h F A I z u E 7 + z Z V 1 r + 3 R y E Z D w + V c s E z i P q x 8 x j D l X Y d X n n r q V g o Y y e z K V i t i f A z T N x b z E k K / I 9 s V Q A i t 7 P L + / s j E d j 8 y H t x q R s 7 D z 9 7 e o / A / w c X 3 7 6 m W O 5 3 E k j i J U T U i n P J u N E n W d U K O a E k G 9 Z q Y S E O j P q w x k 2 Y x q B P M + M I T n Z x R 6 X e F z g f 6 y I k X H r o p x p n I 1 P h P G T p 8 t c m 2 O t a r q 6 u U o p Q y 3 y T 4 i t v Y a 5 B a 1 Q P 7 A Z 3 D S y Q O P x x s J T M k e s V r 5 N x A u u Y U o F D w + x l f m k C q G N v 6 I n X s K / y y w 5 Y 6 I s 6 7 M U G l c J t q r R C 8 N S 9 n Y k l m B + o + w 2 C L W 4 i 8 N l J U 1 2 U Y A z V 1 0 Z f b S 9 L 8 s v T i 2 K b Z R G T e m l a Z G i S n 3 + Q g 4 8 r e Z W b 7 w v L p b 4 P m A n x p g H D G g X D 7 f V l q U z f F a H x B h + C j R W B u d k G G b x t h e 1 a N G v L o r X s Z V v m r 2 X f t G V q L T t t y z y 7 E O F 3 c z v f i C 6 y v X a 2 S k p x 0 m b e i N Z M d M P X W H A J Z q c m y 5 P m f t P P 8 v x s W o m S P + X W 5 / h X R g q x I u a 6 I X x P d q b k O m y 0 W h C Y W R S b Y y p B X e Z Z k y q q u G j 4 P H F B F C N 4 W C o S k / c V w y y J E V u 9 m 7 b c 1 y T q H F m y i m a b 0 s m 5 m F Z Z G d I u W k y v R Y R Q 0 T W v a E 8 L h z 7 j 
o d u 9 4 b H H N W e C e q w T I y g Y W X F o a K 2 q R s R p 3 k 9 D Y Q c F E s l C H m / g z a l Y g z r 4 i w a z 3 N G L L u y q G 7 v q w i 6 7 s c s u 7 L w b O + / C q m 6 s 6 r Q e W p n 9 1 4 5 B f l e e k r c E I s q n T 3 q 7 I X 1 x I r H v z v k R 6 j 9 L h X 6 j L q 5 5 m 5 9 i C O Q B e D R 9 A n P + Q M 6 y V z 7 U E 4 H C t 3 f e m R 4 8 k j z / S l R w 4 L L D e V e f t Y v n e w 7 e x 5 3 g d / d p T r t x I 9 0 s D J s 2 B k D Z 1 R N + J z / d L Q j y g + X v H b G u r W O i C L V 0 B s 9 Y w p I E y 1 C 1 + c 9 C F C 7 u P p m 6 F t 1 6 2 + Q V + + f p 5 n + t I H U C I C i l a p y I T Q M Z t d m Z 8 S r D i O s T D U b y e z 8 W h v 1 4 n y V X 9 I l y A k J N m 0 1 m l D q D s b Y J s L t 0 C P R / X + h Q y Y p 7 D b q C n 6 Q 1 k Z d H q D L / u h e C T q M d o i / R N H u x e n 7 O M + q g i u z W 9 1 F m + i q / P S g s a 8 L u 5 s T p o / w r U b b 6 a j y X g 0 + X a 8 + X R / U H w + G n w 6 + P 3 g w W A y 2 B s 8 H f x p c D g 4 G T i D f w x + G P x z 8 K + 7 m 3 e / u X t 4 9 6 i A f v h B y f n t o P a 5 e / p f a B O x L A = = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " k Q L Q G J 4 X e + W R f s B c 6 T u 8 q U q o q p g = " > A A A c o X i c p V l t c 9 y 2 E b 6 k b 6 n 6 5 r Q f o w 9 t u L z f x A q w + G 6 u K F q n 0 w B + W f Z n V p E k F / Y f 2 n N Y J A I r r x b 0 9 5 1 Y 2 s J d 0 E 7 g r A 2 c d Q F 9 0 j a z y G E a 3 Y J f N b F X P U p b w C J 5 d N r a w i 4 s e 1 j V 3 6 J g M Q X Q t P t 8 p 9 v B P k 9 F C 7 v b j 1 2 2 s c s + b N N 0 x H Z 7 R N s g s S p g X Y Y M + 6 Z Y X c X q o 3 f H m u E T 9 P 7 U T u j n 5 z t 5 3 4 h 9 9 N 0 q f b e T b o Z P e o d P b j N 8 H 3 2 3 S u 8 e 3 t B 7 / J V Y D 6 w V 2 b j u r v U G X F 0 i m Z I J d 7 C g r d O L e s f n 4 B a 1 i A N 4 F s 0 S A 1 x 8 k Z + 6 Z 5 Y u y W x T j A G i r 0 X N Q k g 7 x a j Z e Z 8 a v O 6 s 1 a V n a F T t 3 k a V u e 6 u t 6 7 U K L p V i l u q b L n N n L f 6 j b y H C g t 9 9 9 5 r o f G f 0 X e w V p + + 7 B T m Y W t P X x 4 a c x / p 5 r 6 + H K w f y J i 9 1 h P / i 9 M T c i s j S g T p h C Q k i c X e l A r Q x 2 j v N F G l Z t h u C a 7 Z F y 5 k Y Q x 7 3 + 3 U U C R p V E K l a Z F x 6 6 K c a Z y N T 4 T x k 6 f J X J t j r W i 4 u L l K C U N t 8 W 8 V X 3 s J c g V a o H t g 1 7 g p Y I H H 4 w 3 A p q S t X K 1 4 n 4 w R W M a V C l 7 D s R X 5 u A q h j b + i J 1 7 A v 8 / M O G A t E H f Z 8 j U r h N V X a D H x 1 b 2 t i C x q E 6 r M G w R H X E f j k q K k u S j C G 6 m u j r 4 6 f J f n 5 8 V m x z b K I S r 0 0 L T I 0 y U / f y 8 H H l b z I z f e Z 7 Z E g A M y E e N O A Y Y 2 C 4 f b q v F S m 7 4 r Q e I 0 P w c a K w N x s g w z e t M J 2 J Z q 1 Z d F K 9 q I t C 1 a y Z 2 2 Z W s m O 2 z L f K U T 4 3 d z O 1 6 K z b L u d r Z J S n L S Z 1 6 I V E 9 3 w F R Z c g j q p y f J W c 7 / p Z 3 l + M q 1 E y Z 9 z + 1 P 8 K y P F s i P q e Q z + Z m 1 N r a u w 0 W p B Y G Z R d I M T c i s j S g T p h C Q k i c X e l A r Q x 2 j v N F G l Z t h u C a 7 Z F y 5 k Y Q x 7 3 + 3 U U C R p V E K l a Z F x 6 6 K c a Z y N T 4 T x k 6 f J X J t j r W i 4 u L l K C U N t 8 W 8 V X 3 s J c g V a o H t g 1 7 g p Y I H H 4 w 3 A p q S t X K 1 4 n 4 w R W M a V C l 7 D s R X 5 u A q h j b + i J 1 7 A v 8 / M O G A t E H f Z 8 j U r h N V X a D H x 1 b 2 t i C x q E 6 r M G w R H X E f j k q K k u S j C G 6 m u j r 4 6 f J f n 5 8 V m x z b K I S r 0 0 L T I 0 y U / f y 8 H H l b z I z f e Z 7 Z E g A M y E e N O A Y Y 2 C 4 f b q v F S m 7 4 r Q e I 0 P w c a K w N x s g w z e t M J 2 J Z q 1 Z d F K 9 q I t C 1 a y 
Z 2 2 Z W s m O 2 z L f K U T 4 3 d z O 1 6 K z b L u d r Z J S n L S Z 1 6 I V E 9 3 w F R Z c g j q p y f J W c 7 / p Z 3 l + M q 1 E y Z 9 z + 1 P 8 K y P F s i P q e Q z + Z m 1 N r a u w 0 W p B Y G Z R d I x a s Z c / b M r W W n b R l v l O I 8 L u 5 n W 9 E 5 9 l u O 1 s l p T h p M 2 9 E a y a 6 4 R s s u A R 1 U p P l r e Z + 0 8 / y / H R a i Z I / 5 / b n + F d G i m V H 1 P M Y / M 3 a n l r X Y a P V g s D M o u g C U w n q M s + a V B H F R c P n i Q e i G M H H U t E y e V 9 R z J I Y s d W 7 a c t 9 T a L O k S W r a L Y p n Z z z a Z W V I e 2 8 x f R b R G C K r H l F e 1 o 4 9 C l n X v e G x x 6 v O B L U Y t 0 y g o K x P j M 0 x 9 e I O M 3 7 a S j s o E A i K e P x B t 6 C i D W o g 7 9 s M M s d v e z C X n V j r 7 q w q 2 7 s q g u 7 6 M Y u u r C q G 6 s 6 7 Q X B u + H j Y i F f g g q 5 1 1 h D l 2 B R e V 3 E f K 3 v 7 L K 2 q g M F Z + I G e K z v u o E S i 8 X V D f K 1 u e 2 H 0 p j X w b q j G + 4 S i T u 2 a q 2 5 7 7 G 3 A T 5 u g M 3 D F F x 9 6 r U 4 B q n o j n E T v r r o L Z v r v K u 7 W g k p V v p 0 S P C Y X a 8 q r K y z k E o 2 a R C 3 0 S A 2 a Z C 3 0 S A 3 a V C 3 0 d A u a y o a r m 6 j 4 a 8 t D Y z j W k U c E 9 G t / L h e F E M r H q 2 4 v N / G C r D 4 b q 4 o W q f T A H 5 Z 9 u d W k S S X 9 h / b c 1 g m A i u v F v Q P n V j a w l 3 Q T u C 8 D Z x 3 A X 3 S N r P I Y R r d g l 8 1 s V c 9 S l v A I n l 0 2 t r C L i 1 c X v W V z l X d 1 V y s h x U q f D g k e u + t V h Z V 1 F l L J O g 3 i N h r E O g 3 y N h r k O g 3 q N h r a Z U 1 F w 9 V t N P y l p Y F x X K u I Y y K 6 l R 9 X i 2 J o x a M V l / e P s Q I s v p s r i t b p N I B f l v 2 V V S T J h f 3 7 9 h w W i c D K q w X 9 X S e W t n A X t B M 4 a w N n X U C f t M 0 s c p h G t + B X T e x V j 9 I W s E g e n b a 2 s A v L H l b 1 t y h Y T A E 0 7 T 7 f 7 X a w z 1 P R w u 7 1 Y 5 d t 7 L I P 2 z Q d s d 0 e 0 T Z I r A p Y l y H D v i l W V 7 H 6 6 N 2 1 Z v g E v T + 1 E / r 1 + W 7 e N 2 I f f a 9 K 3 + u k m + G T 3 u G T 2 w z f R 9 + r 0 r u H N / Q e f y X W Q 2 t F N q 7 b t t 6 C q 0 s k U z L h D h a 0 d X p R H / g c 3 K I W c Q D P o l l i g I s H + a l 7 Z u m S z D b F G C D 6 W t Q s h L R T j J r d j 6 n B 6 + 5 a X X q G R t X e b V S Z 6 9 5 l t c 9 y 2 E b 4 k f U n V N y f 9 G H 1 g q p H r e E 7 n u 5 N k S + l 4 J u P E k 3 b G r l V Z s p 2 K k g Y k l y T m Q I I C w P O d W P Z P 9 N f 0 a / s r + m + 6 A H k S X 8 + a 9 D T i g d j n W S w W i + W C 5 y S M S j U e / / e j j z / 5 y U 9 / 9 v N P f 7 H x y 1 / 9 + j e / v f f Z 5 2 8 k T 4 U L p y 5 n X L x z i A R G Y z h V V D F 4 l w g g k c P g r T P 7 V s v f z k F I y u M T t U z g P C J B T H 3 q E o V d l / d 2 t m 0 F C 2 U U Z R 4 R s x 2 H p Z B n t g p B k X z D J i K I a H w Z W C / O g v P L e 1 v j 0 d h 8 r H Z j U j a 2 B u X n 6 P K z L 1 z b 4 2 4 a Q a x c R q Q 8 m 4 w T d Z 4 R o a j L A P W n E h L i z k g A Z 9 i M S Q T y P D P m 5 N Y 2 9 n i W z w X + T v Y f H z y e P t n b n + Q d T D t M 4 Y Y 6 v q H e g e k L g L h O r X L G h 6 j D 6 M s 3 7 r X Z X N D Y X 4 0 8 K e i P 9 5 6 M 9 w 8 O p r v 7 0 8 O 9 y e S g Z L + H X M 5 4 r 0 R 3 r d 7 Q l X r p h 7 q t O A 9 l L W I y y d K Y q U W 9 0 x c 0 C Z j T 6 I 3 S U D H B 3 w 1 t z m e K 2 n K I l z S k Y j H 0 Q k 5 V P R T 1 o E 9 j L i I a S n Y N 5 5 l M b Y / 5 j d E x o A N w h 7 a g M 6 g r y D x Y x l G y Q 1 P F 6 5 o l F + q + w y P c i x I j P a b K Z j Z C j n B z v E r 0 1 p M n / K j U E i y T A G K Z Z 6 k I 8 6 q W x P U U L I Y B c / X + n s m h v h o / P 6 1 E x 5 A 4 T E G 1 u 1 j 6 I U 
F 9 1 W 4 d h c Y x E d 5 J n k B s F C r u P K V h e K 7 t A C H A q 8 8 N I g b q N B b N I g b 6 N ✓ j < l a t e x i t s h a 1 _ b a s e 6 4 = " I 4 3 q A b / N 9 + g E l e D Q 8 8 L U j U 6 g B U M = " > A A A c g X i c p V l t c 9 y 2 E b 4 k f U n V N y f 9 a H 1 g q r F r Z 0 7 n u 5 N k S 8 l 4 x u P E k 3 b G r l V Z c t y I k g Y k l y R 6 I E E B 4 P l O L P s j + r X 9 Y / 0 b / Q V d g D y J r 2 d N e h r x Q O z z L B a L x X L B c x J G p R q P / / P R x 5 / 8 5 K c / + / m n v 9 j 4 5 a 9 + / Z v f 3 v n s 8 7 e S p 8 K F E 5 c z L t 4 5 R A K j M Z w o q h i 8 S w S Q y G H w v T P 7 R s u / n 4 O Q l M f H a p n A W U S C m P r U J Q q 7 3 t k q B E U u / n Z x Z 2 s 8 G p u P 1 W 5 M y s b W s 7 v / f T 7 A z + H F Z 3 d d 2 + N u G k G s X E a k P J 2 M E 3 W W E a G o y y D f s F M J C X F n J I B T b M Y k A n m W G Y N z 6 x 7 2 e J b P B f 7 H y j K 9 V U Z G I i m X k Y P I i K h Q N m W 6 s 0 t 2 m i p / / y y j c Z I q i N 1 i I D 9 l l u K W n r 3 l U Q G u Y k t s E F d Q t N V y Q y K I q 9 n r Q z d q 3 K s u N W l D Q g u S T R A w k K A M 9 3 Y t l P 0 K e + t l + s X 6 P P f e g C 5 E n 8 e 9 a k p x E P x P 5 + i 8 V i s V z w n I R R q c b j f 3 / w 4 U c / + e n P P v 7 k 5 x u f / u K X v / r 1 r c 8 + f y N 5 K l x 4 7 X L G x V u H S G A 0 h t e K K g Z v E w E k c h j 8 4 M y + 1 f I f 5 i A k 5 f G x W i Z w G p E g p j 5 1 i c K u t 7 Y K Q Z H z v 5 z f 2 h q P x u Z j t R u T s r H 1 5 P Z / n n 7 8 6 d / / e 3 j + 2 W 3 X 9 r i b R h A r l x E p T y b j R J 1 m R C j q M s g 3 7 F R C Q t w Z C e A E m z G J Q J 5 m x u D c u o M 9 n u V z g f + x s k x v l Z G R S M p l 5 C A y I i q U T Z n u 7 J K d p M o / O M 1 o n K Q K Y r c Y y E + Z p b i l Z 2 9 5 V I C r 2 B I b x B U U b b X c k A j i K v R R b R R j 0 3 B R G L y x c c c y v p R o a y x x W X D a 1 j u q Q i t h X C H T A x 9 X w K A z j 4 j Z t g A v z 0 T g 5 N l 4 d D D U v t S X y c F 4 5 q e T r B w + M a M R F 8 A C j + Q E a U R i k F E 7 6 L Z 1 / r c 1 q a N M N W F m P b g g w B R K R 2 d L 3 S U T Z 0 p a Y 7 R K l Q 9 7 o a i a q Q m X F U T r 7 4 q 6 0 6 s P Q K N f q V S i i j D Z t 8 K j v X 4 u 9 p h i C P M t g Z A + D U f 6 3 h o x i 5 s g o y q A t g 4 u U z g n T s 8 P 5 0 A g u j K F / w v i I u R W R p Q N 1 w h I S R O L u S g V o Y 7 R 3 m i n S s m w 3 B N d s i 5 Y z M Y Y 8 7 v f r K B I 0 q i B S t c i 4 9 N B P N c 5 G p s J 5 y N L l r 0 y w 1 7 V c X F y k B K G 2 + b a K r 7 y F u Q K t U D 2 w a 9 w V s E D i 8 I f h U l J X r l a 8 T s Y J r G J K h S 5 h 2 Y v 8 3 A R Q x 9 7 Q E 6 9 h X + b n H T A W i D r s + R q V w m u q t B n 4 6 t 7 W x B Y 0 C N X 9 B s E R 1 x H 4 9 K i p L k o w h u p r o 6 + O n y X 5 + f F Z s c 2 y i E q 9 N C 0 y N M n P 3 s v B x 5 W 8 y M 3 3 m e 2 R I A D M h H j T g G G N g u H 2 6 r x U p u + K 0 H i N D 8 H G i s D c b I M M 3 r T C d i W a t W X R S v a i L Q t W s u / b M r W S H b d l v l O I 8 L u 5 n a 9 F Z 9 l 2 O 1 s l p T h p M 6 9 F K y a 6 4 T s s u A R 1 U p P l r e Z + 0 8 / y / G R a i Z I / 5 P a X + F d G i m V H 1 P M Y / N X a m l p X Y a P V g s D M o u g c U w n q M s + a V B H F R c P n i Q e i G M H H U t E y e V 9 R z J I Y s d W 7 a c t 9 T a L O k S W r a L Y p n Z y z a Z W V I e 2 s x f R b R G C K r H h F e 1 o 4 9 C l n X v e G x x 7 P n A n q s W 4 Z Q c H I i k N D a 1 U 1 I k 7 z f h o K O y i Q S M p 4 v I Y 3 J 2 I F 6 u A v G s x y R y + 6 s J f d 2 M s u 7 L I b u + z C z r u x 8 y 6 s 6 s a q T n t B 8 G 7 4 u F j I F 6 B C 7 j X W 0 C V Y V F 4 V M d / q O 7 u s r e p A w Z 
m 4 B h 7 p u 2 6 g x G J x e Y 1 8 Z W 7 7 o T T m d b D u 6 I a 7 R O K O r V p r 7 n v s b Y C P G m D z M A V X H 2 8 t j k E q u m P c h K 8 u e s v m K u / q r l Z C i p U + H R I 8 T 9 e r C i v r L K S S d R r E T T S I d R r k T T T I d R r U T T S 0 y 5 q K h s u b a P h z S w P j u F Y R x 0 R 0 I z + u F s X Q i k c r L u / v Y w V Y f D d X F K 3 T a Q C / L P t L q 0 i S C / u b 9 h w W i c D K q w X 9 b S e W t n D n t B M 4 a w N n X U C f t M 0 s c p h G t + C X T e x l j 9 I W s E g e n b a 2 s A v L H l b 1 t y h Y T A E 0 7 T 7 b 6 X a w z 1 P R w u 7 2 Y 5 d t 7 L I P 2 z Q d s d 0 e 0 T Z I r A p Y l y H D v i l W V 7 H 6 6 N 2 x Z v g E v T e 1 E 3 r / b C f v G 7 G P v l u l 7 3 b S z f B J 7 / D J T Y b v o + 9 W 6 d 3 D G 3 q P v x L r g b U i G 9 f d s d 6 A q 0 s k U z L h D h a 0 d X p R 7 / g c 3 K I W c Q D P o l l i g I u h B y X n i 0 H t s / n N / w D r a 6 S f < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " c a X f t i + Q m n I l g O / v w 7 R C Z 7 m P W r s = " > A A A c g X i c p V l f c 9 y 2 E b 8 k b Z M q / e M k j 9 Y D U 4 1 d O 3 M 6 3 5 0 k S 0 7 G M x 4 n n r Q z d q 3 K s u N W l D Q g u S T R A w k K A M 9 3 Y t l P 0 K e + t l + s X 6 P P f e g C 5 E n 8 e 9 a k p x E P x P 5 + i 8 V i s V z w n I R R q c b j f 3 / w 4 U c / + e n P P v 7 k 5 x u f / u K X v / r 1 r c 8 + f y N 5 K l x 4 7 X L G x V u H S G A 0 h t e K K g Z v E w E k c h j 8 4 M y + 1 f I f 5 i A k 5 f G x W i Z w G p E g p j 5 1 i c K u t 7 Y K Q Z H z v 5 z f 2 h q P x u Z j t R u T s r H 1 5 P Z / n n 7 8 6 d / / e 3 j + 2 W 3 X 9 r i b R h A r l x E p T y b j R J 1 m R C j q M s g 3 7 F R C Q t w Z C e A E m z G J Q J 5 m x u D c u o M 9 n u V z g f + x s k x v l Z G R S M p l 5 C A y I i q U T Z n u 7 J K d p M o / O M 1 o n K Q K Y r c Y y E + Z p b i l Z 2 9 5 V I C r 2 B I b x B U U b b X c k A j i K v R R b R R j 0 3 B R G L y x c c c y v p R o a y x x W X D a 1 j u q Q i t h X C H T A x 9 X w K A z j 4 j Z t g A v z 0 T g 5 N l 4 d D D U v t S X y c F 4 5 q e T r B w + M a M R F 8 A C j + Q E a U R i k F E 7 6 L Z 1 / r c 1 q a N M N W F m P b g g w B R K R 2 d L 3 S U T Z 0 p a Y 7 R K l Q 9 7 o a i a q Q m X F U T r 7 4 q 6 0 6 s P Q K N f q V S i i j D Z t 8 K j v X 4 u 9 p h i C P M t g Z A + D U f 6 3 h o x i 5 s g o y q A t g 4 u U z g n T s 8 P 5 0 A g u j K F / w v i I u R W R p Q N 1 w h I S R O L u S g V o Y 7 R 3 m i n S s m w 3 B N d s i 5 Y z M Y Y 8 7 v f r K B I 0 q i B S t c i 4 9 N B P N c 5 G p s J 5 y N L l r 0 y w 1 7 V c X F y k B K G 2 + b a K r 7 y F u Q K t U D 2 w a 9 w V s E D i 8 I f h U l J X r l a 8 T s Y J r G J K h S 5 h 2 Y v 8 3 A R Q x 9 7 Q E 6 9 h X + b n H T A W i D r s + R q V w m u q t B n 4 6 t 7 W x B Y 0 C N X 9 B s E R 1 x H 4 9 K i p L k o w h u p r o 6 + O n y X 5 + f F Z s c 2 y i E q 9 N C 0 y N M n P 3 s v B x 5 W 8 y M 3 3 m e 2 R I A D M h H j T g G G N g u H 2 6 r x U p u + K 0 H i N D 8 H G i s D c b I M M 3 r T C d i W a t W X R S v a i L Q t W s u / b M r W S H b d l v l O I 8 L u 5 n a 9 F Z 9 l 2 O 1 s l p T h p M 6 9 F K y a 6 4 T s s u A R 1 U p P l r e Z + 0 8 / y / G R a i Z I / 5 P a X + F d G i m V H 1 P M Y / N X a m l p X Y a P V g s D M o u g c U w n q M s + a V B H F R c P n i Q e i G M H H U t E y e V 9 R z J I Y s d W 7 a c t 9 T a L O k S W r a L Y p n Z y z a Z W V I e 2 s x f R b R G C K r H h F e 1 o 4 9 C l n X v e G x x 7 P n A n q s W 4 Z Q c H I i k N D a 1 U 1 I k 7 z f h o K O y i Q S M p 4 v I Y 3 J 2 I F 6 u A v G s x y R y + 6 s J f d 2 M s u 7 L I b u + z C z r u x 8 
y 6 s 6 s a q T n t B 8 G 7 4 u F j I F 6 B C 7 j X W 0 C V Y V F 4 V M d / q O 7 u s r e p A w Z m 4 B h 7 p u 2 6 g x G J x e Y 1 8 Z W 7 7 o T T m d b D u 6 I a 7 R O K O r V p r 7 n v s b Y C P G m D z M A V X H 2 8 t j k E q u m P c h K 8 u e s v m K u / q r l Z C i p U + H R I 8 T 9 e r C i v r L K S S d R r E T T S I d R r k T T T I d R r U T T S 0 y 5 q K h s u b a P h z S w P j u F Y R x 0 R 0 I z + u F s X Q i k c r L u / v Y w V Y f D d X F K 3 T a Q C / L P t L q 0 i S C / u b 9 h w W i c D K q w X 9 b S e W t n D n t B M 4 a w N n X U C f t M 0 s c p h G t + C X T e x l j 9 I W s E g e n b a 2 s A v L H l b 1 t y h Y T A E 0 7 T 7 b 6 X a w z 1 P R w u 7 2 Y 5 d t 7 L I P 2 z Q d s d 0 e 0 T Z I r A p Y l y H D v i l W V 7 H 6 6 N 2 x Z v g E v T e 1 E 3 r / b C f v G 7 G P v l u l 7 3 b S z f B J 7 / D J T Y b v o + 9 W 6 d 3 D G 3 q P v x L r g b U i G 9 f d s d 6 A q 0 s k U z L h D h a 0 d X p R 7 / g c 3 K I W c Q D P o l l i g I u g X i c p V l t c 9 y 2 E b 4 k f U n V N 6 f 9 G H 1 g q p F r Z 0 6 n u 5 N k y c l 4 J u P E k 3 b G r l V Z c t y K k g Y k l y R 6 I E E B 4 P l O L P s j + r X 9 Y / 0 3 X Y A 8 i a 9 n T X s a 8 U D s 8 y w W i 8 V y w X M S R q U a j / / z 0 c e f / O j H P / n p p z / b + P k v f v m r X z / 4 7 D d v J U + F C 2 c u Z 1 y 8 c 4 g E R m M 4 U 1 Q x e J c I I J H D 4 A d n 9 q 2 W / z A H I S m P T 9 U y g Y u I B D H 1 q U s U d r 2 z V Q i K X P 3 t 6 s H W e D Q 2 H 6 v d m J S N r U H 5 O b 7 6 7 H P X 9 r i b R h A r l x E p z y f j R F 1 k R C j q M s g 3 7 F R C Q t w Z C e A c m z G J Q F 5 k x u D c 2 s Y e z / K 5 w P 9 Y W a a 3 y s h I J O U y c h A Z E R X K p k x o u q h q A d 9 F n M R E S b p D V x k M n V 8 G j R G x 4 A O w R s 6 g s y g r i D z Y R l H y Q 5 J F a 9 r l l y o h y 6 P c P t J j P S Y K I c 6 C D n G z f E 6 0 b t N n v L j U k u 4 T E K I Z Z 6 l g u V V L Y n n K 1 g M Q + r p L T 2 T Q 3 0 1 f n 5 W i Y 6 h 5 V I F 1 e 5 i 6 Y c W 6 q t 2 6 y g 0 j o n w T v I E Y q N Q c f c Z Y e x C 2 w F C g F + f I 8 R p h P o j 3 D 9 m k 6 J I 7 B R L 7 l n o W w a q v l n Q 4 w 1 3 Z M R 1 c Y / I l Q o 7 D I m q c + j s p k 5 Z S G y B Z p i G t G i M 2 y 2 K S O w V w 2 k K o 7 g q Y o m J B t d c D i 0 P 3 S B M M p M j P U k a B 9 h b S E c R J j e z e b + H G A R h J g m g H o W o D T u G 9 6 X 6 z N Y x q G M j P 5 9 c 4 B 0 s l H S z r U m e 1 2 H Y T j B d m E j M M x v b d s w T N N 7 B p D u z H R r I G U 1 q f T G n s Y e u a G j C + K R z v R b F i C E u Q h Y q l X y 1 u 2 t E I y 6 C X Y z m X T S i M E g p n P Q 7 O v 9 K m 9 X Q p h u w s h 7 d E G A K J C K z p e + T i L K l L T H b J U q H v N H V T F S F y o q j d P b F X W n V h 6 F R r t W r U E Q Z b d r g U d + / E 3 t N M Q R 5 l s H I H g a j / B 8 N G c X M k V G U Q V s G 1 y m d E 6 Z n h / O h E V w b Q / + C 8 R F z K y J L B + q E J S S I x N 2 V C t D G a O 8 0 U 6 R l 2 W 4 I r t k W L W d i D H n c 7 9 d R J G h U Q a R q k X H p o Z 9 q n I 1 M h f O Q p c v f m G C v a 7 m + v k 4 J Q m 3 z b R V f e Q t z C 1 q h e m B 3 u F t g g c T h j 8 O l p K 5 c r X i d j B N Y x Z Q K X c K y V / m V C a C O v a E n X s O + z q 8 6 Y C w Q d d j L N S q F 1 1 R p M / D V o 6 2 J L W g Q q s c N g i P u I v D 5 S V N d l G A M 1 d d G X x 0 / S / K r 0 8 t i m 2 U R l X p p W m R o k l 9 8 k I O P K 3 m d m + 9 L 2 y N B A J g J 8 a Y B w x o F w + 3 N V a l M 3 x W h c Y Y P w c a K w N x s g w z e t s J 2 J Z q 1 Z d F K 9 q o t C 1 a y 7 9 s y t Z K d t m W + U 4 j w u 7 m d 7 0 S X 2 U 4 7 W y W l O G k z 7 0 Q r J r r h O y y 4 
We can then augment our training data with these numbers and use them as labels in a supervised training setup. This idea is realized in a few different algorithms, which mostly differ by the loss functions that they use: the SCANDAL method improves the training of neural surrogates for the likelihood function [58], while the RASCAL [7,58,60] and ALICES [61] techniques train neural surrogates of the likelihood ratio function more efficiently. After training the surrogates for the likelihood or likelihood ratio, we are again left with a neural network that can be evaluated for arbitrary events and parameter points and allows for amortized inference as before. The full workflow is schematically shown in Fig. 1.
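As an illustration of how such augmented labels enter the training, here is a minimal sketch of a combined loss in the spirit of RASCAL [7,60]: the network architecture, the relative weight alpha, and the exact form of the loss terms are assumptions made for illustration, and the published losses differ in their details.

import torch
import torch.nn as nn

# Sketch of a likelihood-ratio surrogate trained on augmented data: the joint likelihood
# ratio r(x,z|theta) and joint score t(x,z|theta) extracted from the simulator act as
# (noisy but unbiased) labels. The denominator hypothesis is a fixed reference point.
class LogRatioNet(nn.Module):
    def __init__(self, n_features, n_params):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + n_params, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x, theta):
        return self.net(torch.cat([x, theta], dim=-1)).squeeze(-1)  # log r-hat(x|theta)

def augmented_loss(model, x, theta, r_joint, t_joint, alpha=1.0):
    theta = theta.clone().requires_grad_(True)
    log_r_hat = model(x, theta)
    # For a fixed reference in the denominator, the gradient of log r-hat with respect to
    # theta is the surrogate score, which is matched to the joint score labels.
    (t_hat,) = torch.autograd.grad(log_r_hat.sum(), theta, create_graph=True)
    r_hat = torch.exp(log_r_hat)
    loss_ratio = ((r_hat - r_joint) ** 2).mean()
    loss_score = ((t_hat - t_joint) ** 2).sum(dim=-1).mean()
    return loss_ratio + alpha * loss_score

Minimizing such a loss over many simulated events pulls the network towards the true likelihood ratio function, which is what allows the dramatic reduction in the required number of simulations discussed next.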
Extracting the joint likelihood ratio and joint score during the simulation stage and augmenting the training data with it adds multiple orthogonal pieces of information to the training, as we illustrate in Fig. 2. In practice, this substantially reduces the number of simulated events that are necessary for a good performance, in some cases by multiple orders of magnitude [7,60]! Some particle physics measurements have even more additional structure, for instance when we are trying to constrain the Wilson coefficients of an effective field theory and the squared matrix element is a polynomial of these parameters. Incorporating this additional structure in the inference workflow can improve the efficiency even further [7,62,63].

Fig. 2. Top left: in the likelihood ratio trick and the CARL inference method, a classifier decision function (red) has to be learned from binary labels that are zero or one (green dots). Top right: the joint likelihood ratio provides noisy, but unbiased labels (green) for the likelihood ratio function to be learned (red). Bottom left: the joint score adds noisy, but unbiased gradient information (arrows). Bottom right: the RASCAL and ALICES methods combine three orthogonal pieces of information (dots with arrows), allowing a neural network to learn the likelihood ratio function (surface) more efficiently.

Active learning

Active learning here describes a sequential workflow that alternates simulation and inference stages. The theory parameters for which more events are generated are chosen such that they are expected to be most useful based on the observed data and the results of past iterations. Different algorithms have been proposed, some of which are based on neural surrogates for the likelihood [64,65], while others target the likelihood ratio [53,66]. While active learning is often phrased in a Bayesian framework, these methods can be applied equally well to frequentist inference [67][68][69]. Active learning maximizes sample efficiency for a particular observed data set. This is somewhat at odds with the goal of amortization, which aims to train a surrogate model that works well for multiple different data sets. While active learning can be very powerful in cases with few observed data points, it is less crucial in particle physics use cases with a large number of expected or observed events.
Inference with sufficient summary statistics
The methods discussed in the previous section tackle simulation-based inference by learning the likelihood or likelihood ratio function in the high-dimensional data space. While these methods are powerful, they require an analysis workflow that is substantially different from the high-energy physics standards. This makes it necessary to modify the usual software pipeline, perform careful cross checks, and change the way that systematic uncertainties are handled (more on this later).

A more incremental change to the current analysis workflow is to construct powerful summary statistics in a systematic way. After the high-dimensional data x for an event is compressed to one or a few of these summary statistics, it can be analyzed in the usual, histogram-centric way described in Sec. 1.2. The analysis workflow remains largely unchanged, except that instead of kinematic variables (like the transverse momentum of a jet) more complicated variables (like the output of a neural network) are analyzed. No essential modifications to the software pipeline or the treatment of systematic uncertainties are necessary in this approach.
So how do we find these optimal summary statistics? There are two broad strategies. The first is to try to learn a summary statistic as an intermediate step in the end-to-end analysis of the data, where the objective function is, for instance, an expected significance or expected limit as in INFERNO [70] or neos [71]. This is connected to the recent discussions around differentiable programming. Optimizing an experiment-level objective is computationally expensive, and not actually necessary since the data are independent and the likelihood factorizes as in Eq. (2) d . Alternatively, we can look for sufficient statistics that allow us to approximate the per-event likelihood, and there are many advantages to casting the learning problem in terms of individual events. While our exposition will focus on the parameters of interest, one can consider θ to also include nuisance parameters; profiling the nuisance parameters would then happen downstream in the statistical inference pipeline, after the amortized learning stage described below.

d If we knew the full likelihood p(D|θ, ν) in Eq. (2), where θ are parameters of interest and ν are nuisance parameters, the final test statistic we would target would be the profile likelihood ratio λ(θ) = p(D|θ, ν̂(θ)) / p(D|θ̂, ν̂), where θ̂ and ν̂ are the maximum likelihood estimators (MLE) and ν̂(θ), the value of ν that maximizes the likelihood for fixed θ, is the conditional maximum likelihood estimator (CMLE) [72]. The numerator and denominator of the likelihood ratio factorize across experiments, but the values for the MLE and CMLE couple all of the events in the dataset D. However, this coupling of events through the MLE and CMLE can be postponed and based on a learned surrogate for the per-event likelihood or likelihood ratio function, as discussed in the previous section.
The key to learning optimal observables is to consider a local approximation of the likelihood function in the parameter space. In other words, assume that we are studying parameters θ that are close to some chosen reference parameter point θ_ref (imagine this, for instance, to be the Standard Model). Then one can show [60,73,74] that the most powerful observable for measuring θ is the score

t(x) = ∇_θ log p(x|θ) |_θ=θ_ref .   (10)

This gradient vector contains one component per parameter of interest. In the neighborhood of θ_ref, the score components are the sufficient statistics: analyzing just t(x) will yield just as much information about θ as analyzing the high-dimensional data x. By using the score as summary statistics, we are therefore not throwing away any information, at least as long as we focus on parameters close to θ_ref. Further away from the reference point, the score components may no longer be sufficient and a histogram-based analysis will no longer be optimal. Unfortunately, like the likelihood function itself, the score is in general intractable. In the following we will present two methods that allow us to estimate it.
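To spell out the reasoning behind the sufficiency claim, the following first-order expansion (a standard Taylor argument, written here for completeness) shows why the likelihood close to θ_ref depends on the data only through the score:

log p(x|θ) ≈ log p(x|θ_ref) + (θ − θ_ref) · t(x) ,  so  p(x|θ) / p(x|θ_ref) ≈ exp[ (θ − θ_ref) · t(x) ] .

Since the right-hand side depends on x only through t(x), an analysis of t(x) extracts the same information about θ as the full high-dimensional data, to first order in θ − θ_ref.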
An approximation: parton-level Optimal Observables
Remember that the Matrix-Element Method approximated the likelihood function by summarizing the effect of shower and detector with a transfer function. Parton-level Optimal Observables (OO) [75][76][77] use the same approximation to compute the score:

t(x) ≈ ∇_θ log ∫ dz_p p_tf(x|z_p) p(z_p|θ) |_θ=θ_ref .

In practice, this method is usually applied to processes with easily identifiable final-state particles like leptons and photons. In that case, the reconstructed particle properties are simply identified with the parton-level four-momenta, p_tf(x|z_p) = ∏_i δ⁴(x_i − z_p,i). While this approach elegantly uses our knowledge of matrix elements, it requires substantial approximations to the underlying process, and taking into account shower or detector effects in the observable detection leads to a large computational cost for each analyzed event.
Learning the score
The SALLY method [7,58,60] trains a neural network to learn the (intractable) score function t(x) including the full detector simulation. As in the methods discussed in Sec. 2.4, the first step is running the simulator chain a number of times, now using the reference parameter point θ_ref as input. In addition to the observation x, the joint score t(x, z|θ_ref) defined in Eq. (9) is computed and stored for every simulated event. In a next step, a machine learning model like a neural network t̂(x) is trained to minimize the mean squared error |t̂(x) − t(x, z)|². It can be shown that the neural network will ultimately converge to the score function given in Eq. (10). After training, the neural network thus defines the locally most powerful observables for the measurement of θ and can be used in a standard analysis pipeline.
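In code, this training stage is little more than a mean-squared-error fit; the toy data, network size and optimizer settings below are made-up placeholders, not the settings used in the references.

import torch
import torch.nn as nn

# Toy sketch of SALLY-style score regression: learn t-hat(x) from joint scores t(x, z | theta_ref).
n_features, n_params = 10, 2
x = torch.randn(1000, n_features)        # observables of simulated events (placeholder data)
t_joint = torch.randn(1000, n_params)    # joint scores extracted from the simulator (placeholder data)

score_net = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_params),
)
optimizer = torch.optim.Adam(score_net.parameters(), lr=1e-3)
for epoch in range(200):
    loss = ((score_net(x) - t_joint) ** 2).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# With enough data, score_net(x) converges to the conditional mean of the joint score,
# i.e. the locally sufficient statistics t(x); its outputs can then be histogrammed like
# any other kinematic variable in a standard analysis pipeline.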
In addition to defining locally optimal observables, neural score estimators can also be used to compute the Fisher information, a versatile tool for sensitivity forecasting, cut optimization, and feature selection [78][79][80].
Diagnostics, calibration, and systematic uncertainties
The analysis methods described in the previous sections contain some parts, in particular neural networks, that are not always easy to interpret and can be harder to debug than a standard analysis based on histograms of traditional observables. It is important to make sure that we can trust the results and quantify any systematic uncertainties. This is very similar to basing the downstream statistical analysis on histograms of neural network outputs.
The main challenge is to correctly diagnose model misspecification. Inference is always performed within the context of a statistical model, but if that model is not correct for the task at hand, the inference results will be meaningless or misleading. In the simulation-based inference methods we discuss, two types of models appear, both of which are prone to misspecification: the simulator itself and the machine learning surrogates. This is similar to the distinction between the full simulation and the use of an analytic function (surrogate) to model a smooth m_γγ spectrum in a H → γγ analysis.
Misspecification of the simulator occurs when MadGraph, Pythia, Geant4 etc. do not model the physics of LHC collisions accurately enough. This problem also plagues classical histogram-based analyses, but may be easier to diagnose and calibrate when only a single variable is studied than in the multivariate analysis methods described here [81]. It is usually addressed by varying the parameters of the simulator, which introduces nuisance parameters α with unknown true values, and profiling over them in the statistical analysis. We can also use ideas from domain adaptation and algorithmic fairness to make the neural network less sensitive to variations in the nuisance parameters [70,82,83]. If possible, however, it is conceptually cleaner to explicitly include the effect of nuisance parameters in the likelihood model p̂(x|θ, α) or r̂(x|θ, α) and to use well-defined and established statistical procedures like profiling to take them into account in the downstream statistical analysis.
Misspecification of the surrogate model occurs when the neural network does not approximate the true likelihood or likelihood ratio perfectly. This is analogous to a falling exponential for the m_γγ spectrum not fitting the simulated data perfectly. Typical reasons are the limited number of training samples, insufficient network capacity, or an inefficient minimization of the loss function. A common issue is that the classifier ŝ(x|θ) will be roughly one-to-one with the true likelihood ratio, but not exactly. This can be fixed with the calibration procedure used in CARL and described in Ref. [41]. One can protect against more severe deficiencies by calibrating the inference results with toy simulations from the simulator: for every parameter point, we can run the simulator to construct the distribution of the likelihood or likelihood ratio. Ultimately this leads to confidence sets with a coverage guarantee (assuming the simulator is accurate) as in the Neyman construction, i.e. sets that will never be overly optimistic [7,41]. This toy Monte Carlo approach can require a large number of simulations, especially for high-dimensional parameter spaces. For an in-depth discussion of calibration and the Neyman construction, see Ref. [7].
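As a minimal sketch of the output-level calibration step, assume (as an illustrative convention, not a statement about Ref. [41]) that the classifier is trained to separate events drawn from the parameter point θ from events drawn from the reference point; the binning and clipping choices below are arbitrary.

import numpy as np

# Calibrated likelihood ratio from a (possibly imperfect) classifier output s-hat(x):
# r(x|theta) is estimated as the ratio of the densities of s-hat under the two hypotheses.
def calibrated_ratio(s_obs, s_theta, s_ref, n_bins=50):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    dens_theta, _ = np.histogram(s_theta, bins=edges, density=True)  # s-hat on events from theta
    dens_ref, _ = np.histogram(s_ref, bins=edges, density=True)      # s-hat on events from theta_ref
    idx = np.clip(np.digitize(s_obs, edges) - 1, 0, n_bins - 1)
    return dens_theta[idx] / np.clip(dens_ref[idx], 1e-12, None)

Because only the one-dimensional distribution of the classifier output needs to be modeled, this step is cheap compared with the toy-based Neyman construction mentioned above.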
There are other, less computationally expensive tools to diagnose misspecification of the surrogate model. These include off-the-shelf uncertainty quantification methods for neural networks such as ensemble methods and Bayesian neural networks. In addition, one can train classifiers to distinguish data from the surrogate model and the true simulator [41], check certain expectation values of estimators of the likelihood, likelihood ratio, or score against a known true value [7], vary unphysical reference distributions that should leave the inference result invariant [41], and compare the distribution of network outputs against known asymptotic properties [72,84,85]. Passing these closure tests does not guarantee that a model is correct, but failing them is an indication of an issue.

Probabilistic programming

We will now switch gears and review probabilistic programming, a set of methods that are related to, but different from, the simulation-based inference techniques discussed in the previous sections. Computer programs that involve random numbers and do not have deterministic input-output relationships can be thought of as specifying a probability distribution p(output|input). It is natural to think of simulators in this way, where the parameters of the simulator are identified with θ and the output of the simulator is identified with x. Furthermore, the values of the random variables and the other intermediate quantities inside the computer program can be thought of as latent variables z. The structure of the space of latent variables can also be complex. Consider the simple example in Fig. 3, where the list of latent variables is either (z1, z2t) or (z1, z2f, z3f) and depends on the control flow of the program.
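The following toy program (a made-up example mirroring the description of Fig. 3, not code from any of the cited simulators) shows how the set of latent variables that exists in a given run depends on which branch the program takes.

import random

def toy_simulator(theta):
    # The dictionary `trace` collects the latent variables z of this particular execution.
    trace = {}
    trace["z1"] = random.gauss(theta, 1.0)
    if trace["z1"] > 0.0:
        trace["z2t"] = random.expovariate(1.0)     # latent variables on one branch
        x = trace["z1"] + trace["z2t"]
    else:
        trace["z2f"] = random.gauss(0.0, 2.0)      # latent variables on the other branch
        trace["z3f"] = random.random()
        x = trace["z2f"] * trace["z3f"]
    return x, trace   # output x and the latent variables (z1, z2t) or (z1, z2f, z3f)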
It can be useful to think of the latent space of such a program as the space of its stack traces along with the values of all the variables. Take a moment to think about the complexity of the typical simulation chain going from matrix elements to parton shower and hadronization through the detector simulation. These programs have enormous, highly structured latent spaces. The probability that the program returns x corresponds to integrating over all the possible executions of the program that could return x; as we argued in the introduction of this review, this is intractable for moderately complicated programs. We saw in Sec. 2 how we can use machine learning surrogates to approximate the likelihood p(x|θ) or likelihood ratio r(x|θ), where the dependence on z has been marginalized or integrated out. One of the advantages of those approaches is that the surrogate models don't attempt to capture the complexity of the latent state or the joint distribution p(x, z|θ). But what if we also want to infer something about the latent variables that describe what is going on inside the simulator?
In HEP it is common to inspect the Monte Carlo truth record (i.e. z) for some set of events that satisfy some cuts to gain insight into why something happens. For instance, we might want to know what happened inside the simulation of pp → jj events that led to very large missing transverse energy, or why a jet faked a muon. To study this, we often generate a large set of events (simulated with a particular parameter setting θ), filter those events that satisfy the cuts, and then look at histograms of some particular Monte Carlo truth quantities f(z) (for example, to inspect if there was a semi-leptonic b-decay or punch-through in the calorimeter). That familiar procedure is approximating the posterior distribution of f(z) given that the event generated with parameter θ passes the cuts, which we can write symbolically as p(f |cuts(x) = True, θ). Similarly, the unfiltered sample can be thought of as samples from the prior p(f |θ).
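Written out as code, this familiar procedure is just a Monte Carlo estimate built from accepted events; the function names below are placeholders for illustration.

import numpy as np

def truth_posterior_sample(events, cuts, f):
    # events: list of (x, z) pairs generated at a fixed parameter setting theta
    # cuts:   function x -> bool implementing the selection
    # f:      function z -> float, a Monte Carlo truth quantity of interest
    accepted = np.array([f(z) for x, z in events if cuts(x)])
    # A histogram of `accepted` approximates p(f | cuts(x) = True, theta),
    # while a histogram of f(z) over all events approximates the prior p(f | theta).
    return accepted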
The problem with the traditional approach is that the filter efficiency can be very low, and very few of the prior samples may survive to estimate the posterior. This is similar to the inefficiency found in Approximate Bayesian Computation, which asks for the simulator to generate a simulated x close to the observed x_obs. This motivates an additional language construct that allows for conditioning on random variables, which characterizes probabilistic programming. Probabilistic programming languages (PPLs) extend general-purpose programming languages with constructs to do sampling and conditioning of random variables [86,87] e . The additional language constructs express the concept of sampling and conditioning, but they do not necessarily specify how that is implemented. It is best to decouple the model specification (the probabilistic program or simulator code) from the inference algorithm, much as we use a tool like HistFactory [88] to create a statistical model and then use RooStats [89] to provide generic statistical inference algorithms. Various inference engines have been developed implementing different inference strategies such as Importance Sampling [90] and specializations of Metropolis-Hastings [91] that are compatible with the complex latent space structure associated to stack traces. In general, the inference algorithms can be thought of as hijacking the random sampling inside of the simulator code to guide the simulator towards a certain output.

e Often it is assumed that the quantity being conditioned on is directly sampled from a distribution with a known likelihood (conditioned on the latent state of the simulator at that point in the execution). Sometimes this is reasonable, but sometimes this assumption is violated and we want to condition on some more complicated function of the random variables with an intractable density. In that setting, one typically needs to introduce some tolerance or kernel. In this way, probabilistic programming can be seen as a more sophisticated and computationally efficient way of implementing Approximate Bayesian Computation.
Early research in probabilistic programming required coding the simulator in special-purpose languages, which is not an attractive option for HEP as we have decades of work invested in our simulation code bases. Recently, however, the Etalumis project developed PPX, a cross-platform probabilistic execution protocol that allows an inference engine to control a simulator in a language-agnostic way [90,92]. The Etalumis team integrated PPX into the SHERPA simulator and a simplified calorimeter simulation to demonstrate probabilistic programming with a real-world simulator (see Fig. 4).
The bulk of the probabilistic programming literature is phrased in terms of Bayesian statistics. The posterior distribution p(z|x, θ) poses no conceptual problem for an ardent frequentist particle physicist, because while z may be latent, it is a random variable and the joint distribution p(x, z|θ) is perfectly well defined. However, if one wanted to use probabilistic programming to infer the parameters of the simulation θ, then one would need to include a prior p(θ) and sample from that distribution at the beginning of the program. The result would be a probabilistic program for the joint model p(x, z, θ) = p(x, z|θ)p(θ), and one would then condition on x to obtain samples from the posterior p(θ, z|x) or the marginal p(θ|x).
Software and computing
The methods described in this review are closely connected to the software and computing challenges of high-energy physics, particularly when we think about the high-luminosity LHC.
Initial results from phenomenological studies indicate that these new machine-learning based approaches provide substantial improvements in sensitivity over traditional approaches, but generating the training data is computationally expensive. However, with some additional work, the augmented data described in Sec. 2.4 can reduce the amount of simulated data needed by orders of magnitude. The Python library MadMiner [80] implements most of the machine learning-based algorithms discussed in Secs. 2 and 3. It wraps around MadGraph, Pythia, and Delphes and thus automates the entire pipeline for a typical phenomenological study f . The approach is compatible with full simulation like Geant4, as the necessary information can just be passed through the detector simulation similar to the weights used to assess uncertainty in the parton distribution functions. However, this still requires a modest investment in the experiments' simulation software.
The use of the learned likelihood ratio for reweighting event samples has the potential for a significant reduction in simulation costs as the reweighting factor can often be learned on parton-level or particle-level data without running the full simulation or reconstruction on large samples of simulated data with varied parameter settings. The CARL technique is being explored within ATLAS and integrated into the ATLAS software framework for this purpose [55].
Probabilistic programming also has the potential to address the computational resources needed for simulation at the high-luminosity LHC. Signs of new physics typically would hide in tails of background distributions, which are computationally expensive to populate with naive sampling approaches. HEP collaborations regularly use a form of importance sampling where the parton-level phase space is sliced (e.g. slices in the transverse momentum of outgoing partons to fill the high-p_T region in the process pp → jj). In this case, one merges several individual samples of simulated events weighted by σ_s/N_s, where N_s is the number of simulated events and σ_s is the total cross-section for that slice. However, this approach does not work for efficiently sampling regions of phase space that do not correspond to simple regions in the parton phase space. For example, if we want to populate the regions of phase space where standard QCD jets fake a boosted top tagger based on a deep neural network [94], the fake rate is roughly 10⁻³ and much of the relevant fluctuations happen in the parton shower and are not reflected in the parton-level phase space. Event generators instrumented with probabilistic programming constructs offer the potential to efficiently sample these complicated regions of phase space, which is being explored with a simplified parton shower known as Ginkgo [95].
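For concreteness, the bookkeeping behind slicing amounts to the following per-event weight assignment (the cross-sections and event counts are toy numbers; an overall luminosity factor would turn the weights into expected event yields).

# Merging sliced samples: each event of slice s carries a weight proportional to sigma_s / N_s,
# so that slices with very different cross-sections contribute correctly to one combined sample.
slices = {
    "low_pT":  {"sigma_pb": 2.0e4, "n_simulated": 1_000_000},
    "high_pT": {"sigma_pb": 3.0,   "n_simulated": 200_000},
}
weights = {name: s["sigma_pb"] / s["n_simulated"] for name, s in slices.items()}
print(weights)   # per-event weights, before any overall luminosity normalization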
In the long term, we should not treat the simulation chain as a black box, but open it up and begin to integrate automatic differentiation and probabilistic programming capabilities into it, as that will enable more powerful and sample-efficient inference algorithms [14].
Summary
Particle physicists have a suite of simulators at their disposal that can model essentially all aspects of particle collisions with impressive fidelity. These tools use Monte-Carlo methods to generate events, with the distribution of outputs depending on the parameters of the physics model. However, we cannot use these tools directly for inference because we cannot evaluate the probability for the simulator to generate a specific observed event. Because the likelihood is intractable, we cannot directly fit for the most likely parameter points or calculate exclusion limits from observed data. Historically, this challenge has been overcome by reducing the high-dimensional event data to one or two kinematic variables and using histograms or analytic functions to model the distribution of these observables. This makes inference possible, but often degrades the sensitivity of the analysis.
Here we reviewed simulation-based (or likelihood-free) inference methods that allow us to infer parameters based on high-dimensional event data. These methods are closely connected to other important tasks in HEP and provide the ability to reweight events [47,54,55], tune shower and detector-simulation parameters to data [54], unfold distributions [56], and detect anomalies [57]. An important driver of these algorithms is the rapidly increasing capability of machine learning, which lets us analyze high-dimensional data efficiently. In addition, extracting matrix-element information from the simulator and using it to augment training data can drastically reduce the number of simulations we need to run. We presented algorithms based on these two ideas in which a neural network is trained as a surrogate for the likelihood or the likelihood ratio function or defines optimal observables, which can then be used in a traditional histogram-based analysis.
In first phenomenological LHC studies, these algorithms have been applied to Higgs precision measurements in vector boson fusion [7], in WH production [96], and in ttH production [80], as well as to ZW measurements [63] and the search for massive resonances decaying into dijets [97]. The new machine learning-based techniques consistently led to more sensitive analyses than traditional histogram-based approaches such as simplified template cross-section measurements [96]. With a range of diagnostic tools and ideas for uncertainty quantification available, and software packages making the application of these methods easier, the application of these new simulation-based inference techniques to data collected at the LHC experiments seems imminent.

Markus Stoye. This work was supported by the U. S. National Science Foundation (NSF) under the awards ACI-1450310, OAC-1836650, and OAC-1841471. We are grateful for the support of the Moore-Sloan data science environment at NYU.
Hyperonic uncertainties in neutron stars, mergers and supernovae
In this work we delve into the temperature-dependent Equation of State (EoS) of baryonic matter within the framework of the FSU2H$^*$ hadronic model, which comprehensively incorporates hyperons and is suitable for relativistic simulations of neutron star mergers and supernovae. To assess the impact of the uncertainties in the hyperonic sector on astrophysical observables, we introduce two additional models, namely FSU2H$^*$L and FSU2H$^*$U. These models cover the entire spectrum of variability of hyperonic potentials, as derived from experimental data. Our investigations reveal that these uncertainties extend their influence not only to the relative abundances of various particle species but also to the EoS itself and, consequently, have an impact on the global properties of both cold and hot neutron stars. Notably, their effects become more pronounced at large temperatures, owing to the increased presence of hyperons. These findings have direct implications for the outcomes of relativistic simulations of neutron star mergers and supernovae, emphasizing the need of accounting for hyperonic uncertainties to ensure the accuracy and reliability of such simulations in astrophysical contexts.
INTRODUCTION
Neutron stars (NSs) are natural laboratories for testing different models of nuclear matter under extreme conditions. The lack of terrestrial experimental data for matter at densities larger than ∼ 2 − 3 n_0, with n_0 being the saturation density at the center of nuclei, has motivated the development of theoretical predictions for the Equation of State (EoS) of such extreme matter. Since the pioneering work of Ambartsumyan & Saakyan (1960), many of the theoretical models include exotic hadrons such as hyperons in the NS core, which can reach densities of the order of several times n_0 (see Chatterjee & Vidaña (2016); Burgio & Fantina (2018); Tolos & Fabbietti (2020); Logoteta (2021); Burgio et al. (2021); Kumar et al. (2023) and references therein).
In spite of the lack of experimental constraints at large densities, there exist several recent astrophysical constraints that have to be fulfilled. First, the maximum mass that the model predicts has to be larger than about 2 M_⊙ Demorest et al. (2010); Antoniadis et al. (2013); Fonseca et al. (2016); Cromartie et al. (2019); Romani et al. (2022). Second, the dimensionless tidal deformability for a system with chirp mass M = 1.186 M_⊙ has to be about Λ = 300^{+420}_{−230} Abbott et al. (2019), which is a tighter constraint than the upper bound Λ ≲ 800 established by the first analysis of the GW170817 event Abbott et al. (2017). And, third, the model should reproduce the NICER measurements of a radius of ∼ 12 − 14 km for a NS of ∼ 1.3 − 1.4 M_⊙ Riley et al. (2019); Miller et al. (2019) and, similarly, for ∼ 2 M_⊙ Miller et al. (2021); Riley et al. (2021). The aforementioned astrophysical constraints thus set important restrictions on the intermediate and high density regions of the EoS. In fact, they significantly reduce the number of applicable hyperonic models for the core of NSs, as the EoS is softened when hyperons appear, leading e.g. to a reduction of the maximum mass the star can sustain. Additionally, a very stiff nucleonic part of the EoS is ruled out by the radius measurements, which favor softer nucleonic interactions.
Furthermore, while a NS can be considered as a cold object, modelling the early stages in the evolution of a NS after a supernova explosion as well as the eventual merging of two NSs requires a finite temperature treatment of the EoS, for which the simple Γ-law approach, inspired by the thermal behavior of an ideal gas, is usually taken. Within this approach, the finite temperature effects enter through the so-called thermal index Γ_th = 1 + p_th/ε_th, where p_th = p(T) − p(T = 0) indicates the thermal pressure and ε_th = ε(T) − ε(T = 0) is the thermal energy density. A constant value of Γ_th ∼ 1.75 is usually considered, as it describes the finite-temperature behaviour of the nucleonic EoSs quite reasonably. However, the necessity of going beyond the usual Γ-law treatment in NS mergers has been shown Bauswein et al. (2010); Raithel et al. (2021), especially when hyperons become abundant in the hot and dense matter created Blacker et al. (2023). Additionally, constraints that are also sensitive to the finite-temperature EoS can be obtained from the cooling of NSs Prakash et al. (1992); Page et al. (2004); Yakovlev & Pethick (2004); Raduta et al. (2018); Negreiros et al. (2018); Fortin et al. (2021); Malik & Providência (2022); Fortin et al. (2021). Therefore, it is of high interest to extend the EoSs including hyperonic degrees of freedom to the appropriate finite temperatures found in NS mergers and protoneutron star (PNS) evolution.
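As a small illustration of how the thermal index is extracted in practice from tabulated EoS data (the array names and example values below are placeholders, not output of the FSU2H* tables):

import numpy as np

def thermal_index(p_T, eps_T, p_cold, eps_cold):
    # p_T, eps_T: pressure and energy density at temperature T;
    # p_cold, eps_cold: the same quantities at T = 0, on the same baryon-density grid.
    p_th = np.asarray(p_T) - np.asarray(p_cold)
    eps_th = np.asarray(eps_T) - np.asarray(eps_cold)
    return 1.0 + p_th / eps_th

# Example with made-up numbers (MeV fm^-3):
print(thermal_index(p_T=[12.0], eps_T=[160.0], p_cold=[10.0], eps_cold=[157.0]))  # ~1.67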
Some progress has been achieved over the last decade, with the development of some hyperonic EoS models that satisfy the astrophysical constraints for the mass, radius and tidal deformability of a NS as well as known experimental low-density constraints (see the reviews of Oertel et al. (2017); Burgio et al. (2021); Typel et al. (2022), the recent works of Raduta et al. (2021); Raduta (2022) and references therein). However, the uncertainties in the hyperonic effective interactions at high densities and isospin asymmetries are still large and not properly covered by the existing models. Therefore, it is of utmost importance to develop EoSs of homogeneous matter that span a wide range of temperatures (T = 0 − 100 MeV), charge fractions (Y_q = 0.01 − 0.6) and baryon densities (n_B = n_0/2 − 6 n_0), as expected to be achieved in NS mergers or supernovae Oertel et al. (2017). In Ref. Kochankovski et al. (2022) we developed the hyperonic FSU2H* model, as an extension of the FSU2H one Tolos et al. (2017b,a), which satisfies the aforementioned mass-radius constraints and at the same time is in agreement with the saturation properties of nuclear matter and finite nuclei, as well as with the constraints on the high-density nuclear pressure coming from heavy-ion collisions Boguta & Bodmer (1977); Jiang et al. (2007); Lattimer & Prakash (2007); Horowitz & Piekarewicz (2001). The model was then used to study β-equilibrated matter at finite temperature for neutrino-free and neutrino-trapped matter.
In the present work we go beyond this previous analysis. On the one hand, we impose weak equilibrium only among hadrons, without taking into account leptons. This is due to the fact that simulations for NS mergers and PNS evolution are performed with cells at a given n_B, T and Y_q, while implementing the leptons and the transport equations explicitly. On the other hand, we explore how the uncertainties in the hyperon-nucleon and hyperon-hyperon interactions may influence the properties of the EoS. The hyperon couplings are obtained by fitting the hyperonic potentials to the experimental data, which at present are scarce and subject to big uncertainties. Hence, it is important to determine how these uncertainties propagate to the EoS and, relatedly, their influence on the global properties of the star, such as the composition, masses, radii, tidal deformabilities and moments of inertia. In order to do so, we build two additional parametrizations for the EoS, named hereafter FSU2H*L and FSU2H*U, which cover, respectively, the L(ower) and U(pper) limits of the hyperon potential uncertainties, hence determining a somewhat softer and stiffer hyperonic EoS than the nominal FSU2H* one.
The paper is organized as follows. First, in Section 2 we briefly explain the theoretical framework used to construct the EoS at a given n_B, T and Y_q. Then, in Section 3, we focus on the properties of hyperonic matter, paying special attention to the effect of the hyperonic uncertainties on the composition of matter and the EoS. Keeping these uncertainties in mind, in Section 4 the results for the global properties of both NSs and PNSs are presented, whereas a brief summary of our findings is provided in Section 5.
THEORETICAL FRAMEWORK
We consider matter made of baryons (b = n, p, Λ, Σ, Ξ) at a given n_B, T and Y_q. Within the covariant density-functional models, the interaction between baryons is modeled through the exchange of different mesons Walecka (1974). Our model takes into account the exchange of the scalar mesons σ and σ*, as well as the vector mesons ω, ρ and φ. While the σ, ω and ρ mesons mediate the interaction between any type of baryons in the octet, the σ* and φ mesons mediate the interaction only between particles with non-zero strangeness. The Lagrangian that describes the system is split into baryonic (L_B) and mesonic (L_M) contributions, L = L_B + L_M.
The quantity m_b indicates the mass of particle b, Ψ_b is the baryon Dirac field, the mesonic strength tensors are built from the vector-meson fields in the usual way, and F^μν = ∂^μA^ν − ∂^νA^μ stands for the electromagnetic one. With τ⃗ we represent the isospin operator, while γ^μ are the Dirac matrices and g_mb labels the coupling of baryon b to meson m. We note that, although the contributions of the electromagnetic potential are included in L, they do not play a role in the present work, as we are considering charge-neutral objects in the absence of magnetic fields.
As a consequence of the timescale of the weak interaction and the fact that the baryons are in thermal equilibrium in matter, a weak interaction equilibrium can be assumed,

μ_B⁰ = μ_n ,  μ_B⁻ = 2μ_n − μ_p ,  μ_B⁺ = μ_p ,   (2)

where μ_B labels the chemical potential of the baryon species B, and B⁰, B⁻, B⁺ indicate neutral, negatively charged and positively charged baryons, respectively. As one can see from Eq. (2), only two chemical potentials are independent, μ_n and μ_p, which is a consequence of the fact that the strangeness-changing weak reactions are in equilibrium and, hence, the strangeness chemical potential is zero. Note that we have employed only baryonic chemical potentials, as our model only needs to deal with the baryonic EoS. The leptonic part is added explicitly in simulations of NS mergers and supernovae, as mentioned in the Introduction.
In order to obtain the composition and all thermodynamic properties, one needs to solve the Euler-Lagrange equations of motion. Within the relativistic mean-field (RMF) approximation, the meson field operators are replaced by their expectation values, which leads to the mesonic equations of motion (Eq. (3)) and to the baryonic one (Eq. (4)). We use m*_b to label the effective mass of each particle. Note that the scalar and vector densities also appear in the equations of motion. At finite temperature, those are defined as momentum integrals (Eq. (6)) in which 2 labels the spin degeneracy of baryons and the Fermi-Dirac distribution for a baryon (antibaryon) is evaluated with the effective mass m*_b and the effective chemical potential μ*_b. The equations of motion of Eqs. (3), (4) and the baryonic densities from Eq. (6), together with the relations between the different baryon chemical potentials due to weak equilibrium in Eq. (2), are coupled to the baryon number and charge conservation laws, defining a set of equations that fully determines the composition of matter. With q_b we label the charge of each particle. Once the composition is obtained, it is straightforward to obtain all thermodynamic quantities of interest, such as pressure, energy density, entropy and free energy density, from the energy-momentum tensor (for details see Kochankovski et al. (2022)).
Uncertainties in the hyperonic sector: FSU2H*L and FSU2H*U models
In the framework of the RMF models, the coupling constants of nucleons to mesons are fitted to reproduce finite nuclei properties and/or bulk nuclear matter properties. When hyperons are also considered in the core of the star, the hyperonic coupling constants are fixed using symmetry principles and constraints derived from hypernuclear data.

The choice of the nucleonic parameters in the FSU2H model was discussed at length in Ref. Tolos et al. (2017b,a). Given that the present work employs the same parameters for the nucleonic part of the model, we do not repeat the description of these parameters here and just display the values of the coupling constants for nucleons and the meson masses in Table 1.
While the nucleonic sector is constrained with good accuracy, this cannot be claimed for the hyperonic one, owing to the limited amount of hypernuclear data. For this reason, the hyperon couplings to the vector mesons ω, ρ and φ are obtained from flavor SU(3) symmetry, considering also the vector dominance model and ideal mixing for the physical ω and φ mesons, while we leave as free parameters the couplings of the hyperons to the σ meson (g_σΛ, g_σΞ, g_σΣ), and the coupling of the Λ hyperon to the σ* meson (g_σ*Λ). The hyperon-σ coupling constants are determined by reproducing the potential felt by the hyperon in symmetric nuclear matter, whereas the Λ-σ* one results from obtaining the ΛΛ bond energy extracted from double-Λ hypernuclei. We recall that the potential felt by a hyperon in matter made of a given particle species is obtained from the corresponding scalar and vector meson mean fields. In our previous work Kochankovski et al. (2022) the free parameters of the FSU2H* model were determined employing the central value of the most accepted determinations of the hypernuclear potentials. The Λ potential at saturation density is by far the best constrained value. According to Millener et al. (1988), the value that reproduces the bulk of Λ hypernuclei binding energies is U_Λ = −28 MeV. The potentials for the other hyperons in symmetric nuclear matter are not so well constrained. In Kochankovski et al. (2022) we considered the latest value obtained in Ref. Friedman & Gal (2021) for the Ξ potential, U_Ξ = −24 MeV, and we adopted U_Σ = 30 MeV Friedman & Gal (2007) for the Σ potential. Moreover, we considered a ΛΛ bond energy in Λ matter at a density n_B = n_0/5 of ΔB_ΛΛ(n_0/5) = −0.67 MeV Takahashi et al. (2001); Ahn et al. (2013) to derive the Λ-σ* coupling. However, in the present paper we want to explore the uncertainties in the hyperonic sector and how they propagate to the EoS and the global properties of compact stars. Therefore, as noted before, we construct two extreme models, FSU2H*U and FSU2H*L, that were obtained by taking the hyperonic potentials to be equal, respectively, to the upper and lower limits of their uncertainty bands. The band is pretty narrow for the Λ hyperon, U_Λ = (−30, −25) MeV, while the other hyperonic potentials have wider uncertainties, that is, U_Ξ = (−24, −10) MeV, U_Σ = (10, 50) MeV Gal et al. (2016), ΔB_ΛΛ = (−6, 0) MeV Gal et al. (2016); Guo et al. (2021); Friedman & Gal (2021). A more detailed justification for our choice can be found in Appendix A. In Table 2 we show the values of the hyperonic coupling constants to the mesons for the three models that we will use hereafter: FSU2H*, FSU2H*U and FSU2H*L. In the case of the σ, ω and ρ mesons, those hyperon couplings are given as a ratio to the corresponding couplings of the nucleon, namely R_mY = g_mY/g_mN with m = (σ, ω, ρ), whereas for the σ* and φ mesons we give R_σ*Y = g_σ*Y/g_σN and R_φY = g_φY/g_ωN, since their nucleonic couplings are zero due to the OZI rule.

Table 2. The ratios of the hyperon couplings to the σ, ω and ρ mesons with respect to the nucleon ones, as well as the ratios of the couplings of hyperons to σ* and φ with respect to the σ-nucleon and ω-nucleon ones, respectively, due to the OZI rule. Note that the couplings to the σ and σ* mesons depend on the model, so we list the original values of the FSU2H* model, together with those of the FSU2H*U and FSU2H*L models, the latter ones labeled with (U) and (L), respectively. The other ratios are the same for the different models.
COMPOSITION AND THE EQUATION OF STATE AT FINITE T
In the present section we aim at exploring the effect of the hyperonic uncertainties on the composition and the EoS of matter for the wide range of T, Y_q and n_B described in the Introduction, using the two newly constructed parametrizations. Thus, we consider matter at three different temperatures, low (T = 1 MeV), moderate (T = 20 MeV) and high (T = 80 MeV), and three different charge fractions, Y_q = 0.01, Y_q = 0.25 and Y_q = 0.5, and examine the composition and the corresponding EoSs as functions of n_B. However, we note that the composition patterns are sensitive to the interplay between the different hyperonic potentials and, consequently, the uncertainty region of each individual hyperonic species is only partially covered by the FSU2H*U and FSU2H*L models (see Appendix A for more details).
Composition
In Fig. 1 we show the different baryonic fractions as functions of the baryon density for T = 1 MeV (upper panels), T = 20 MeV (middle panels) and T = 80 MeV (lower panels), and for constant charge fractions Y_q = 0.01 (left column), Y_q = 0.25 (middle column) and Y_q = 0.5 (right column). We note that, when nucleons are the only baryons present, the composition is trivially determined by the charge fraction, as for any temperature the partial densities of the neutrons and protons are fixed to n_n = (1 − Y_q) n_B and n_p = Y_q n_B (hence the fractions are 1 − Y_q and Y_q). This is no longer the case when hyperons are included, and the abundance of the different particles is a function not only of the baryon density but also of the temperature.
From the figure, one can make the following observations. First, at low temperature, the baryon density at which a hyperon species appears strongly depends on the hyperonic potentials. This is especially important for the Σ hyperons, since the uncertainty in their potential is the largest. For the charge fraction Y_q = 0.01, due to the big difference in the chemical potentials of the neutron and the proton, the Σ⁻ hyperon can appear even before the Λ within the FSU2H*L model.
On the contrary, for the FSU2H*U model the Σ⁻ hyperons start appearing at densities greater than 0.6 fm⁻³ due to the highly repulsive potential. Also, at small charge fractions, the increased abundance of negative hyperons with density induces the fraction of protons to increase as well. However, this process can happen only at small charge fractions. If one analyses the composition of matter at low temperatures (T = 1 MeV) and larger charge fractions (Y_q = 0.25 and Y_q = 0.5), it can be noticed that the proton abundance does not change significantly and the appearance of negative hyperons is hindered.
Neutrons, on the other hand, can always be replaced with non-degenerate, low-energy Λ hyperons at sufficiently large densities, so in all of the cases their abundance is significantly reduced. However, the density at which this reduction begins depends on the charge fraction. When the matter is highly isospin asymmetric, this process starts before n_B = 0.3 fm⁻³, while when matter is isospin symmetric, it starts at densities greater than n_B = 0.4 fm⁻³.
The previous analysis at low temperature still holds for moderate temperatures (T = 20 MeV). However, there are two main differences. First, due to thermal effects, the hyperon fractions Y_i = n_i/n_B, where i represents any hyperon, start to be significant (Y_i > 10⁻³) at lower baryon density, and the evolution of their abundances with density is smoother than at low temperatures. Second, for a given density, more hyperonic species can be present, especially when the hyperon potentials are lower. Still, as at low temperatures, the Λ hyperon contribution does not change significantly with the EoS employed. The reason lies in the fact that the Λ potential is better constrained than that of the other hyperonic species.
The composition of baryonic matter significantly changes at high temperatures (T = 80 MeV). It is noticeable that all hyperons have an appreciable abundance at any density. Also, it is interesting to observe that, in all cases, the neutron abundance is no longer decreasing monotonically with increasing density, as it does for lower temperatures. The difference in the abundance of the Σ hyperons obtained with the FSU2H*U or FSU2H*L models observed at lower temperatures is especially manifest for the Σ⁺ at this high temperature of T = 80 MeV. Being the only positively charged hyperon, its appearance is key for the reduction of the proton contribution. This effect is clearly seen in the Y_q = 0.5 case, where the Σ⁺ abundance in the FSU2H*L model reaches that of the proton and the Λ at high densities.
Equation of State
In this subsection we present the EoS for the two models, FSU2H*U and FSU2H*L. In particular, we focus on the behavior of the pressure, the energy per particle and the entropy per particle as functions of the baryon density for the same T and Y_q cases discussed above. All plots are combined in Fig. 2, where the different colors stand for different temperatures and the solid (dashed) lines refer to the results of the FSU2H*U (FSU2H*L) model. It is clear that the effects of the hyperonic uncertainties are more visible in the pressure. At high density, this effect can even be more important than the temperature corrections to the pressure. To illustrate this point, we focus first on the pressure as a function of density at Y_q = 0.01. For densities larger than n_B = 0.6 fm⁻³, the T = 80 MeV FSU2H*L pressure (dashed red line) is lower than the T = 1 MeV FSU2H*U one (solid blue line). This is more evident as we move to more isospin symmetric matter, that is, to larger values of Y_q. As matter becomes proton richer, the abundance of hyperons decreases. This means that high density matter is more degenerate, leading to pressure-density curves for different temperatures that do not differ noticeably in the case of the stiffest model FSU2H*U. In contrast, as mentioned when describing the composition, the softest FSU2H*L model allows for a significant abundance of positively charged Σ⁺ hyperons, which replace highly energetic protons. This reduction of the matter degeneracy makes the FSU2H*L pressure-density curves at different temperatures distinguishable.
As for the density dependence of the energy per particle, it is clear from the middle panels of Fig. 2 that the differences between the two models are larger for high densities and high temperatures. When the charge fraction is low (Y_q = 0.01), more hyperons can appear, so the difference between the FSU2H*L and FSU2H*U models becomes more noticeable.
Finally, in the lower panels of Fig. 2 we show the entropy per particle as a function of the baryon density. We observe that it does not show a clearly visible dependence on the model. This means that the uncertainty of the hyperon couplings will not significantly influence the temperature profile of the early evolution of the star, as we will explicitly show in Section 4. However, we remind the reader that the temperature profiles computed for stars at constant entropy per baryon change drastically, acquiring a plateau-like behavior, as soon as hyperons appear in matter, as discussed in Raduta et al. (2020); Kochankovski et al. (2022). These are not, however, contradictory statements. When it is energetically possible, the most degenerate nucleons are converted into hyperons and, therefore, a new Fermi sea is being filled. Thus, the presence of hyperons influences strongly the entropy, which acquires sensibly larger values than in their absence, and develops the plateau-like structure as a function of density seen in Fig. 2. However, once hyperons are present, the effect of the hyperonic potential uncertainties on the entropy per particle turns out to be quite mild.
INFLUENCE OF THE HYPERONIC UNCERTAINTIES IN ASTROPHYSICAL OBSERVABLES
In this section we aim at showing how the hyperonic uncertainties propagate from the composition and EoS of NS matter to astrophysical observables, such as the mass, radius, tidal deformability and moment of inertia.
The masses (M) and radii (R) of PNSs and cold NSs are obtained from solving the TOV equations assuming β-stable conditions for baryons and leptons (electrons and muons), with trapped or untrapped neutrinos, respectively. In particular, the maximum mass that the model predicts is especially sensitive to the appearance of the hyperons, as the EoS becomes softer.
In recent years, due to the newest gravitational wave detectors Lasky (2015) and the planned ones Bailes et al. (2021), the tidal deformability of the star has become a particularly important quantity. It measures the induced quadrupole moment in one star in response to the tidal field of the companion and can be obtained from the tidal Love number k_2 as

λ = (2/3) k_2 R⁵ / G ,

where G is the gravitational constant and R is the radius of the star. The k_2 number is obtained from a first-order ordinary differential equation Hinderer (2008); Hinderer et al. (2010), solved self-consistently with the integration of the TOV equations. It is useful to define the dimensionless tidal deformability (Λ) as

Λ = (2/3) k_2 [c² R / (G M)]⁵ .

The tidal deformability constraints are usually given for the chirp mass of a binary NS system, which is extracted from the masses of the individual stars M_1 and M_2 through the relation

M_chirp = (M_1 M_2)^{3/5} / (M_1 + M_2)^{1/5} .

Another interesting quantity is the moment of inertia of the star, I, that can be computed by solving the structure of a rotating NS using the Hartle-Thorne approach Hartle (1967); Hartle & Thorne (1968). Similar to the tidal deformability, the moment of inertia also puts simultaneous constraints on the mass and the radius of the star. Still, there is no independent measurement of this quantity at present. The first measurements with an accuracy of 10% can be expected in the next decade Lattimer & Prakash (2016).
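A quick numerical illustration of these two relations (the values of k_2, the mass and the radius below are made up for illustration and are not results of the FSU2H* models):

import numpy as np

G = 6.674e-8        # gravitational constant in cm^3 g^-1 s^-2
c = 2.998e10        # speed of light in cm s^-1
Msun = 1.989e33     # solar mass in g

def dimensionless_tidal_deformability(k2, mass_g, radius_cm):
    compactness = G * mass_g / (radius_cm * c**2)
    return (2.0 / 3.0) * k2 / compactness**5

def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(dimensionless_tidal_deformability(0.09, 1.4 * Msun, 12.0e5))  # ~4e2 for R = 12 km
print(chirp_mass(1.4, 1.4))  # ~1.22 solar masses, close to the GW170817 value of 1.186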
Given that our goal is to study the effect of hyperons and their uncertainties, we also calculate the so-called strangeness number of the star, S. This number is related to the abundance of hyperons in the interior of the star, and it is computed by integrating the strangeness density over the star, where n_S,b = s_b n_b is the strangeness density for each baryon species b, with s_b standing for the corresponding strangeness quantum number. As the strangeness number is large, we will present results for the ratio S/N_B, which is known as the normalized strangeness number, where N_B is the total baryon number, given by the analogous integral of the baryon density over the star.
Results at zero temperature
Since most of the astrophysical constraints that are available are obtained for already evolved cold NSs, we dedicate this subsection to present our T = 0 results, while the next subsection will be devoted to the discussion of the finite temperature results, which are especially important for the relativistic simulations of NS mergers and supernovae.
The masses, radii, tidal deformabilities and moments of inertia for neutrinoless β-stable NSs at T = 0 are computed with our three models, that is, FSU2H*, FSU2H*L and FSU2H*U. Our core EoSs have to be matched with an EoS for the inner crust and an EoS for the outer crust. In Ref. Providência et al. (2019) the EoS for the inner crust with the FSU2H interaction was computed, allowing us to have a unified EoS description of the core and inner crust of the star. For the outer crust, we adopt the widely used EoS of Baym-Pethick-Sutherland (BPS) Baym et al. (1971), which is well constrained by nuclear physics data.
In Fig. 3 we show the mass-radius (M-R) relation for the three models, i.e., FSU2H* (green line), FSU2H*L (blue line) and FSU2H*U (red line), together with the constraints for masses coming from the observations mentioned in the Introduction. We note that, as other hyperonic EoSs, the developed EoSs are in tension with the measurement of the heavy pulsar PSR J0952-0607 Romani et al. (2022).
We focus on the most massive NSs, with M > 1.5 M_⊙, as the effect of hyperons is more pronounced. As previously discussed, the original FSU2H* can explain all the other aforementioned mass-radius constraints. However, the FSU2H*L falls short in describing NSs with 2 M_⊙ masses. The reduction of the hyperonic potentials in this model softens the EoS significantly and the maximum mass the model can support is 1.92 M_⊙. On the contrary, the FSU2H*U model, due to the more repulsive hyperonic potentials, gives rise to a higher maximum mass of 2.06 M_⊙. Although the hyperonic uncertainties have a significant impact on the value of the maximum NS mass, the corresponding radii turn out to be quite similar. For example, the FSU2H*L model predicts its maximum-mass star to have a radius of around R = 11.9 km, while the FSU2H*U model predicts a radius of around R = 12.1 km for its maximum-mass configuration. However, the hyperonic uncertainties have a significant impact on the radius of a star of fixed mass in the range 1.6-1.9 M_⊙. For instance, the difference between the radius of the FSU2H*L maximum-mass star and that of a star with the same mass in the FSU2H*U model is larger than 1.2 km, which corresponds to an uncertainty of around 10%.

Fig. 4. Λ(M) relation obtained with the FSU2H*, FSU2H*L and FSU2H*U models along with the observational tidal constraint using the universal relations (EoS insensitive) and the spectral EoS obtained from the GW170817 event. The filled boundaries represent the values at 95% confidence level, while the hatched ones represent the values at 68% confidence level. For details, see Ref. Kumar et al. (2023).
In Fig. 4 we show the dimensionless tidal deformability Λ of the star as a function of the mass of the star (note the logarithmic scale) along with the constraints presented in Kumar et al. (2023). Even if at the upper edge, our EoSs are in agreement with the constraints, which so far cover the region of low-medium sized NS masses, where hyperons are not yet present. The trend of the tidal deformability over most of the mass range is exponentially decreasing. However, from a certain mass onwards the exponential trend breaks down and the tidal deformability starts decreasing at a faster rate. This corresponds to the mass in the M-R diagram from which a small increase produces a significant decrease in the radius, namely around 1.7 M_⊙ in the FSU2H*L model and 1.9 M_⊙ in the FSU2H* and FSU2H*U ones. As a consequence, we find the effect of the hyperon uncertainties on the tidal deformability to be significant for the most massive stars, M ≳ 1.8 M_⊙. More quantitatively, we observe that for a star with a mass M = 1.92 M_⊙, the FSU2H*L model gives Λ ≈ 34, while the FSU2H*U predicts Λ ≈ 95. This large discrepancy in the tidal deformability is correlated to the different values of the radii for a given mass Postnikov et al. (2010). We thus conclude that hyperonic uncertainties have a significant impact on the tidal deformability for the most massive stars.
In Fig. 5 we show the moment of inertia as a function of the mass of the star computed with our models, together with the two constraints for the moment of inertia given in Ref. Landry & Kumar (2018), which do not correspond to direct measurements, but are derived from universal relations among NS observables employing the more restrictive and less restrictive ranges of values for the tidal deformability reported in Abbott et al. (2019) and Abbott et al. (2017), respectively. We note that our EoSs do not satisfy the tighter constraint. Within error bars, our results are almost compatible with the recent model-dependent Bayesian analysis of Ref. Miao et al. (2022). A direct, independent measurement of the moment of inertia, which might become available in the next years from radio observations of the double pulsar PSR J0737-3039, would pose a more trustworthy constraint on the EoS models (Bejger et al. (2005)). We also note that the moment of inertia increases linearly with the mass of the star in the region between 1.0 M_⊙ and 1.6 M_⊙. For larger masses this trend is broken, with the stiffer FSU2H*U favouring this linear trend even for masses up to 1.8 M_⊙. Whereas the qualitative trend of the three curves is similar, their quantitative predictions of the moment of inertia for the most massive stable stars can differ by around 10%.
We finish this subsection by showing the normalized strangeness number as a function of the compactness of the star in Fig. 6. As expected, the strangeness number increases monotonically with the compactness for all parametrizations. This is due to the fact that a higher compactness results in both the average density and the central density of the star being larger and, hence, strange particles can be produced more easily. It is also interesting to note that, for a given value of compactness, the softer the model is, the larger the strangeness number becomes. This is due to the fact that the central density for soft models is significantly larger than for the stiff ones.
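As a point of reference, the compactness can be read off directly from the maximum-mass configurations quoted earlier; the numbers below are a simple back-of-the-envelope evaluation using $GM_\odot/c^2 \simeq 1.48$ km, not values reported in the paper:

$$C = \frac{G M}{R c^{2}}: \qquad C_{\rm FSU2H^*L} \simeq \frac{1.92 \times 1.48\ \mathrm{km}}{11.9\ \mathrm{km}} \approx 0.24, \qquad C_{\rm FSU2H^*U} \simeq \frac{2.06 \times 1.48\ \mathrm{km}}{12.1\ \mathrm{km}} \approx 0.25.$$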
Results at finite temperature
Here we present results for stars in β-equilibrium at finite temperature. The focus will be on cases relevant for the evolution of PNSs. Here, the temperature and electron/lepton fraction profiles along the star are essential inputs to obtain its properties. For the early evolution of the star, at the so-called deleptonization phase (approximately one second after the supernova explosion), we adopt an approach similar to that of Marques et al. (2017), where the temperature profiles are inspired by results of full relativistic simulations that also account for neutrino transport. However, instead of assuming a constant lepton fraction throughout the star, we also use lepton profiles inspired by simulations, so as to make the calculations more realistic. In particular, we take a profile inspired by the results of Raduta et al. (2021) (t = 1 s, Pascal) and a profile inspired by results obtained with the FSU2H EoS, with the code and microphysics as described in Fischer et al. (2010, 2012); Fischer (2016); Fischer et al. (2020a,b) (t = 1 s, Fischer). We also perform calculations at fixed entropy per baryon (S/B = 1) and constant lepton fraction (Y_L = 0.08) throughout the star (t = 5.1 s, Pascal). This profile accounts for conditions in the star around five seconds after the explosion, and is motivated by the study of Pascal et al. (2022), which found that convective motions in the star enable the isentropic and isoleptonic profiles to be achieved much faster. Finally, for the later stage of the evolution of the star (approximately tens of seconds after the explosion), we focus on cases where the neutrinos have already diffused out of the star and the entropy of the star slowly decreases, as it was recently shown in da Silva Schneider et al. (2020) that these stars are relevant for black hole formation in failed core-collapse supernovae.
In the present work, we cover this later evolution by considering neutrino-free matter at three constant entropy values, S/B = 1, 2 and 3. Before showing the results, a word of caution is necessary. Similarly to the T = 0 case, for temperatures lower than T ≈ 15 MeV and densities below n₀/2¹, cluster structures are present in the star. The theoretical framework suitable under these conditions is the extended Nuclear Statistical Equilibrium (NSE) (Furusawa et al. 2011, 2013, 2017a,b; Shen et al. 2010a,b, 2011a,b; Hempel & Schaffner-Bielich 2010; Typel et al. 2010; Gulminelli & Raduta 2015; Raduta & Gulminelli 2019). We use the EoS described in Hempel & Schaffner-Bielich (2010), which is publicly available at the CompOSE database (Typel et al. 2015; Oertel et al. 2017; Typel et al. 2022), smoothly matching it to the EoS of the homogeneous core. The small inaccuracies of this treatment can affect the radius of the star (especially of the lightest ones). Secondly, while at T = 0 the radius is obtained at the point where the pressure in the star is equal to zero, this procedure is ill defined at finite temperature and another criterion needs to be set. Following the approach described in Raduta et al. (2020), we first identify the low-density domain of the star where the logarithms of the relevant thermodynamic quantities can be considered linear functions of the logarithm of the density, and then we employ this behavior to extrapolate the EoS to lower densities. This approach guarantees that, for low temperatures and S/B ≲ 3, the radii of the stars are computed with reasonable accuracy (< 20%). For a more elaborate discussion on these topics, see Fortin et al. (2016); Raduta et al. (2020). Furthermore, we note that the results shown here are by no means complete simulations of PN star evolution. Instead, they serve as an indication of how hyperon uncertainties propagate at finite temperature and how these uncertainties affect the global properties of hot stars. In this sense, although the conditions are inspired by the ones met in PN stars, some of the conclusions can be applied to any phenomenon where the matter conditions are similar to the ones described here.
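A minimal sketch of the low-density extrapolation step described above is given below in Python; the function name, fitting window and the power-law form are illustrative assumptions and do not reproduce the actual implementation of Raduta et al. (2020).

```python
import numpy as np

def extrapolate_eos_low_density(n_b, pressure, n_new, n_fit_points=20):
    """Extend a finite-temperature EoS table below its lowest tabulated density.

    n_b, pressure : baryon density and pressure of the available table,
                    sorted with the lowest densities first.
    n_new         : densities (below n_b[0]) at which the EoS is needed.
    n_fit_points  : number of low-density points used for the linear fit.

    In the low-density domain, log(P) is taken to be a linear function of
    log(n_b); the fitted line is then used to extrapolate the table downwards.
    """
    logn = np.log(n_b[:n_fit_points])
    logp = np.log(pressure[:n_fit_points])

    # linear fit in log-log space: log(P) = slope * log(n_b) + intercept
    slope, intercept = np.polyfit(logn, logp, deg=1)

    # power-law extrapolation P proportional to n_b**slope below the table
    return np.exp(intercept + slope * np.log(n_new))
```

The radius of a hot star is then read off at the point where the extrapolated pressure becomes negligible, mimicking the T = 0 surface condition.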
In Fig. 7 we present the temperature and electron fraction profiles employed in the present work. We consider two early-time profiles, t = 1 s, Fischer (black solid lines) and t = 1 s, Pascal (purple solid lines), and one intermediate-time profile, t = 5.1 s, Pascal, which assumes a constant entropy S/B = 1 and a constant lepton fraction Y_L = 0.08 (red lines). In these scenarios the appearance of muons is suppressed, because the muonic lepton number is fixed to zero. We also consider three profiles of neutrino-free matter at constant entropy values S/B = 1 (blue lines), 2 (green lines) and 3 (yellow lines), representing later evolution stages of the PN star. The electron fractions in these cases are determined by solving the neutrinoless β-equilibrium conditions. Firstly, we discuss the neutrino-free profiles at constant S/B. We can see that the electron fraction in the right panel of Fig. 7 is correlated with the hyperonic potentials. Softer hyperonic models favour the appearance of hyperons, which in turn lowers the number of leptons, making the deleptonization process more efficient. For larger values of S/B, this process starts at lower densities, as the hyperon fraction can then be significantly larger due to thermal effects. The differences seen in the lepton fraction profiles are also reflected in the temperature profiles in the left panel of Fig. 7. We observe that a given S/B value can be achieved with lower temperatures in the case of softer models, due to the earlier appearance of hyperon species and the corresponding loss of degeneracy.
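For reference, the neutrinoless β-equilibrium conditions mentioned above reduce to the standard chemical-potential relations shown below for the npeμ sector, with charge neutrality closing the system; this is the textbook form, quoted as a reminder rather than as an equation taken from this paper:

$$\mu_n = \mu_p + \mu_e, \qquad \mu_\mu = \mu_e, \qquad n_p = n_e + n_\mu,$$

with analogous relations fixing the hyperon chemical potentials, e.g. $\mu_\Lambda = \mu_n$ and $\mu_{\Sigma^-} = \mu_{\Xi^-} = \mu_n + \mu_e$, once those species appear.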
The gravitational mass is shown in Fig. 8 as a function of the radius of the star. We can see that the profiles of the star's early evolution, namely (t = 1 s, Pascal) and (t = 1 s, Fischer), favour a smaller compactness, as the stars end up having larger radii and the corresponding M(R) curves are the rightmost ones in the figure. One can argue that this is a consequence of the stiffening of the EoS produced by the higher temperatures achieved in the low-density part of the core at early evolution times. We can better illustrate this point by comparing the M(R) curves obtained with the (t = 1 s, Pascal) profile (purple curves) and the S/B = 3 one (yellow curves) in Fig. 8. Although their temperature profiles are similar only up to saturation density, the corresponding M(R) curves are very much alike, regardless of the mass of the star and the stiffness of the hyperonic model that is chosen. This is also strengthened by the observation that, in spite of the temperature profiles of the S/B = 1 and (t = 1 s, Pascal) cases being very similar for densities higher than 0.3 fm⁻³, their M(R) curves show a completely different behaviour, which is again an indication that it is the EoS at lower densities that governs the compactness of the star. It is obvious that the influence of the hyperonic uncertainties can only be seen when hyperons are significantly present in matter, namely when the cores can achieve high densities and high temperatures, so essentially for massive stars (M > 1.7 M⊙). The radius of stable, massive (∼ 2 M⊙) and hot (T ≳ 50 MeV) stars predicted by the different hyperonic models can differ by up to 20%, as can be seen especially upon comparing the different purple or yellow lines in Fig. 8. In similarly cooler scenarios, such as those of the neutrinoless deleptonized S/B = 1 profile and the constant lepton fraction (t = 5.1 s, Pascal) profile, we find that the M(R) curves of the deleptonized, hyperon-richer case (blue lines) lie below those of the constant lepton fraction case (red lines), particularly when hyperons feel less repulsion, as in the FSU2H*L model. We can conclude that, as in the T = 0 case, the uncertainties of the hyperon couplings affect the mass-radius relation only for the most massive stars, where the central density is the highest and hence the impact of the hyperons is the biggest. These effects are further magnified by temperature, which accentuates the relative differences between the radii of the most massive stars calculated using the FSU2H*L and FSU2H*U models, leading to deviations of up to a few tens of percent.
Next, the tidal deformability of hot stars is shown in Fig. 9. We observe that the behavior of the tidal deformability is also sensitive to the low-density behavior of the temperature profile: higher temperatures at low densities produce higher values of the tidal deformability. As for the effect of the hyperonic uncertainties, we observe that softer (stiffer) EoSs, computed with the FSU2H*L (FSU2H*U) model, tend to lower (increase) the tidal deformability, an effect that is more evident for higher-mass stars (M > 1.7 M⊙). In the region of the most massive stars (M ≳ 2 M⊙), the difference between the predictions of the models for the tidal deformability can be of an order of magnitude.
The dimensionless moment of inertia, I/(MR²), is shown in Fig. 10 as a function of the mass of the star. One can observe that colder stars tend to have a higher dimensionless moment of inertia. This is a direct consequence of their much smaller radii for a given mass. It is noticeable that the dimensionless moment of inertia is not very sensitive to the hyperonic uncertainties, as only small differences can be observed between the predictions of the models. We finalize by presenting the normalized strangeness number as a function of the compactness of the star for the different temperature profiles explored in this section. The results clearly show that the normalized strangeness number, signalling the presence of hyperons in the star, is very sensitive to the temperature profile in the higher-density region. As we can see in the figure, the S/B = 3 profile produces the largest strangeness values for a given compactness. On the contrary, stars with lower temperatures need a higher compactness to produce hyperonic matter. We also note that the profiles with conserved and higher lepton fractions tend to have noticeably lower strangeness values, as the high lepton fraction hinders the appearance of hyperons. Obviously, the normalized strangeness number is also sensitive to the hyperonic uncertainties, being larger (smaller) for the soft FSU2H*L (stiff FSU2H*U) version of the hyperon potentials.
CONCLUSIONS
The main goal of this work is to study the uncertainties of the hyperon potentials on the finite-temperature hyperonic matter EoS in a systematic way. Starting from the FSU2H* model, we develop two additional models, FSU2H*L and FSU2H*U, obtained by spanning the uncertainty range of the hyperon potentials. These uncertainties have an effect on the composition and the EoS for different conditions of density, temperature and baryon charge fraction, being especially manifest for ultra-dense matter (above roughly 0.7 fm⁻³), when hyperons already reach a sizeable abundance. This effect is most noticeable for the pressure, as the differences arising from the hyperonic uncertainties start to be more important than the temperature corrections to the EoS. Since most of the constraints that we have from astrophysical measurements are obtained from cold NSs, we have first determined the global properties of NSs employing our three hyperonic models in a T = 0 framework. We have seen that the main effect of the uncertainties on the EoS directly translates into the value of the maximum star mass predicted by the models. Quantitatively speaking, the uncertainty of the maximum mass is around 7%, while the radii of the maximum-mass stars are less affected. However, for a given massive star, the predicted radius differs greatly between the models. As for the tidal deformability and moment of inertia results, we again find the hyperon uncertainties to start playing a role in stars with masses higher than 1.5 M⊙, with lower (higher) values of the tidal deformability and moment of inertia for the softer (stiffer) FSU2H*L (FSU2H*U) model. The differences are sensibly enhanced in the case of very massive stars (M > 1.7 M⊙). We note that the information from the gravitational waves on the tidal deformability at this moment constrains the intermediate-density region, which is not sensitive to the uncertainties in the hyperonic sector.
Secondly, we have used the FSU2H*L, FSU2H* and FSU2H*U models to obtain the global properties of stars at finite temperature. While the densities, temperatures and charge fractions explored in the present work are inspired by the typical values found in the evolution of PN stars, we note that the effects discussed here are much more general, and can be applied whenever similar conditions for the EoS are encountered. We find that, due to the increased hyperon abundances at high temperature, the uncertainties in this case play an even bigger role than at zero temperature. For example, the hyperonic uncertainties may produce differences in the radius of the most massive evolved stars of up to 20%, while the tidal deformability may differ by almost an order of magnitude.
We note that all our results are quite general and not restricted to the specific model used. Thus, similar outcomes are expected from other approaches that take into account hyperonic degrees of freedom. Our findings have direct implications for relativistic simulations of NS mergers and supernovae, thus emphasizing the need for these simulations to consider hyperons and their uncertainties to ensure the accuracy and reliability of their results.
APPENDIX A: PROPAGATION OF THE HYPERONIC UNCERTAINTIES TO NEUTRON STAR OBSERVABLES
In the main part of the paper we have discussed the results obtained with two extreme model versions, FSU2H*L and FSU2H*U. They were built by making all hyperonic potentials simultaneously take either the upper or the lower limit of their corresponding uncertainty bands. In this appendix we give a proper justification for this choice.
It is clear that the hyperonic uncertainty space is not one-dimensional. This means that different combinations of the softness or hardness of the three hyperonic potentials can in principle affect both the thermodynamical quantities and the observables of the star in a non-linear way. However, as most of the relativistic simulations of neutron star mergers and supernovae are computationally expensive, it would be practical to cover a wide range of the uncertainties with the smallest possible number of models. In order to check whether the two models considered in this work, FSU2H*L and FSU2H*U, can serve this purpose, we have compared their outcome to that of eighteen additional parametrizations. These new models are designed to favor/disfavor the appearance of a particular hyperonic species by taking the smallest/largest value of its corresponding hyperonic potential, while fixing the potentials of the other hyperonic species to their largest/smallest values. This is done for three different values of the ΛΛ bond energy, namely 0, -0.67 and -6.0 MeV, which controls the strength of the hyperon-hyperon interactions. With these eighteen parametrizations we have computed the equation of state, the composition, and both the M(R) and Λ(M) relations of cold stars and of stars with constant entropy S/B = 2. All stars are assumed to be in β-equilibrium.

Figure A1. Compositions obtained both with the FSU2H*L (dashed lines) and FSU2H*U (dash-dotted lines) parametrizations and the additional ones. For each species, the upper/lower abundance curve signals the density dependence of the maximum/minimum abundance obtained within all the models. Correspondingly, the shaded regions represent the composition uncertainty interval for each species.
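To make the counting of the eighteen additional parametrizations explicit, the sketch below enumerates them in Python: for each of the three hyperonic species, its potential is pushed to one edge of its uncertainty band while the other two are pushed to the opposite edge, and this is repeated for the three ΛΛ bond energies (3 species x 2 directions x 3 bond energies = 18). The potential names and numerical ranges are placeholders for illustration only; they are not the values used in the paper.

```python
from itertools import product

# hypothetical uncertainty bands for the single-particle potentials (MeV)
POTENTIAL_RANGE = {"U_Lambda": (-40.0, -24.0),
                   "U_Sigma":  (10.0, 50.0),
                   "U_Xi":     (-24.0, 0.0)}
LAMBDA_LAMBDA_BOND_ENERGIES = [0.0, -0.67, -6.0]  # MeV

def build_additional_parametrizations():
    """Enumerate the 18 'one species against the others' parametrizations."""
    parametrizations = []
    for species, direction in product(POTENTIAL_RANGE, ("low", "high")):
        potentials = {}
        for name, (lo, hi) in POTENTIAL_RANGE.items():
            if name == species:
                # push the selected species to one edge of its band ...
                potentials[name] = lo if direction == "low" else hi
            else:
                # ... and all the other species to the opposite edge
                potentials[name] = hi if direction == "low" else lo
        for bond_energy in LAMBDA_LAMBDA_BOND_ENERGIES:
            parametrizations.append({**potentials, "B_LL": bond_energy})
    return parametrizations

assert len(build_additional_parametrizations()) == 18
```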
The results for the composition of the star are shown in Fig. (A1). The solid lines, limiting the corresponding color-shaded regions, stand for the maximum and minimum abundances of each species attained within all parametrizations as functions of the density. Therefore, the shaded regions for each particle can be interpreted as an uncertainty in its abundance. For comparison, the results predicted by the FSU2H*L and FSU2H*U models are represented with dashed and dash-dotted lines, respectively. We can see that the abundance of a particular hyperonic species strongly depends on the interplay between the hyperon uncertainties. The extreme models only partially cover the allowed region. This is especially important for the Σ⁻ and Ξ⁻ hyperons since, as we discussed in the main part, the uncertainties in their potentials are significant. However, if one focuses on the non-hyperonic species, one observes that the FSU2H*L and FSU2H*U parametrizations do describe their uncertainties very well, both at zero and finite temperature, as the abundance curves predicted by these models lie at the edge of the uncertainty regions. As the total hyperonic content is tied to the non-hyperonic one, one can conclude that the uncertainty of the total number of hyperons in matter is well described with the two extreme models.
The conclusion in the main part of the paper, that the FSU2H*L and FSU2H*U models describe well the range of uncertainties in the global properties of the star, is further confirmed by examining the results of the new models for the thermodynamical quantities and the star observables. In Fig. (A2) we show the pressure of β-stable matter as a function of the baryonic density. The dashed lines represent the results of the additional parametrizations, while the solid lines indicate the predictions of the two original models. It is clear that the uncertainty in the pressure of matter is well described by the FSU2H*U (red solid line) and FSU2H*L (blue solid line) extreme models. The conclusion is valid both at zero and finite temperature. This behavior is also found for the star observables, as can be seen in Figs. (A3) and (A4), which show, respectively, the results for the M(R) relation and the Λ(M) relation of β-stable stars obtained with the different parametrizations. All in all, we conclude that the FSU2H*L and FSU2H*U models can be used to explore the uncertainties in the thermodynamical quantities of neutron star matter and the neutron star observables, both at zero and finite temperature. While the uncertainty in the abundance of a particular hyperonic species is only partially described by these two extreme models, they do cover the uncertainty range of the total hyperonic content. Therefore, unless one finds an observable that depends on the specific hyperonic abundances, the FSU2H*L and FSU2H*U models can be safely used in relativistic simulations of neutron star mergers and supernovae to assess the effect of the hyperonic uncertainties in their outcomes.
Figure 1. Composition of baryonic matter for different charge fractions (columns) and temperatures (rows). Solid lines correspond to calculations with the FSU2H*U model, while dashed lines correspond to those with the FSU2H*L one.
Figure 2. EoS of hypernuclear matter for different charge fractions and temperatures. Solid lines correspond to calculations with the FSU2H*U model, while dashed lines correspond to those with the FSU2H*L one. Blue lines are calculations done at T = 1 MeV, green lines at T = 20 MeV, and red ones at T = 80 MeV. The pressure at low densities, up to 0.4 fm⁻³, is shown in inset plots in the upper panels.
Figure 6. Normalized strangeness number as a function of compactness for the FSU2H*, FSU2H*L and FSU2H*U models. Each line ends with a dot that indicates the maximum-mass star configuration that is stable.
Figure 7. Temperature (left plot) and electron fraction (right plot) profiles of the stars used in our calculations. Solid lines represent the results with the FSU2H* model, dashed lines the FSU2H*L ones, and dash-dotted lines the results with the FSU2H*U model. We note that the profiles inspired by simulations at t = 1 s are the same for all three models.
¹ The exact transition point depends on both the temperature and the charge fraction, as well as the model that is used (see Hempel & Schaffner-Bielich 2010).
Figure 8. M(R) relations for the profiles described in the text. Solid lines represent the results with the FSU2H* model, dashed lines the FSU2H*L ones, and dash-dotted lines the results with the FSU2H*U model.
Figure A4. Λ(M) relation obtained with the original parametrizations, FSU2H*L (blue solid curve) and FSU2H*U (red solid curve), as well as with the additional eighteen parametrizations (dashed lines).
Table 1. Parameters of the FSU2H* model for the nucleon coupling constants and meson masses. The nucleon mass is m_N = 939 MeV.
Figure 9. Λ(M) relations for the profiles described in the text. Solid lines represent the results with the FSU2H* model, dashed lines the FSU2H*L ones, and dash-dotted lines the results with the FSU2H*U model.
Figure 10. Dimensionless moment of inertia I/(MR²) as a function of M for the profiles described in the text. Solid lines represent the results with the FSU2H* model, dashed lines the FSU2H*L ones, and dash-dotted lines the results with the FSU2H*U model.
Normalized strangeness number for the profiles described in the text. Solid lines represent the results with the FSU2H* model, dashed lines indicate the FSU2H*L ones, and dash-dotted lines the results with the FSU2H*U model. Each line ends with a dot that indicates the maximum-mass star configuration that is stable.
House of Risk (HOR) Approach to Manage Risk involving Multi-stakeholders: The Case of Automotive Industry Cluster of Multifunctional Rural Mechanized Tool (MRMT)
The agricultural sector is one of the potential sectors for the economic development of the Indonesian nation and the improvement of rural community welfare. Therefore, it needs to be well managed, including through the provision of adequate supporting facilities such as transportation equipment suited to the characteristics of the agricultural sector. PT KMWI is one of the companies that produces specialized transportation equipment designed to support agricultural activities in rural areas, known as the Multifunctional Rural Mechanized Tool (MRMT). Its production involves several businesses and industries that are organized within an MRMT automotive industry cluster. Effective cooperation and collaboration among the stakeholders in the industry cluster significantly determine the efficiency and effectiveness of the products produced. Therefore, the cluster needs to be well managed, and risk management needs to be implemented, using the House of Risk (HOR) method, to maintain its functional stability. From the application of the multistakeholder HOR 1, the Combined Aggregate Risk Potential (CARP) of each risk cause was obtained, 6 priority risk potentials were selected based on a Pareto chart, and 13 mitigation actions were then determined. The risk potential with the highest CARP value among the priority risks is risk factor (A4), inaccurate demand forecasting. The 13 mitigation actions were then assessed using the multistakeholder HOR 2 to obtain the Effectiveness to Difficulty (ETD) value of each mitigation action for every stakeholder.
INTRODUCTION
The agricultural sector is one of the potential sectors for the economic development of the Indonesian nation and the improvement of rural community welfare. Therefore, it needs to be well-managed, including through the provision of adequate supporting facilities such as transportation equipment that is suitable for the characteristics of the agricultural sector. PT KMWI is one of the companies that produces specialized transportation equipment designed to support agricultural activities in rural areas, known as the Multifunctional Rural Mechanized Tool (MRMT).
The MRMT project is not the first automotive project initiated by the government. There have been many automotive projects by the government that were previously planned up to the testing phase but were stopped before entering mass production. This was influenced by several factors such as weak competitiveness with products that already exist in the market, government regulations that were less supportive, and others. However, compared to previous projects, MRMT has a greater chance of success up to the after-market stage. To strengthen competitiveness, an MRMT automotive industry cluster was formed.
With the increased competitiveness, local products will have a greater opportunity to have a wider market. Therefore, local products can be optimally utilized in the production of MRMT by the Ministry of Industry. The success in producing MRMT is also influenced by several factors such as managing uncertainty or potential risks that may become obstacles.
Risk is the uncertainty that may occur in the future (Verwire & Berghe, 2004). According to Monahan (2004), risk is the loss caused by an event or multiple events that hinder the achievement of a company's goal. Risk is also the possibility of an event occurring that will impact the achievement of a goal and can be measured by likelihood and consequences (AS/NZS 4360, 2004). In an industrial cluster, one of the processes that determines the effectiveness of the cluster is the quality of its supply chain management. Therefore, it is necessary to analyze and mitigate the risks that may occur.
Risk analysis in the supply chain needs to be conducted to develop a framework that can identify, assess, and mitigate supply chain risks not only within the company but comprehensively within a supply chain (Parenreng et al., 2016). Supply chain risk management is the process of identifying and managing risks across the supply chain, both internally and externally, using a coordinated approach among supply chain stakeholders to reduce overall supply chain liabilities.
According to Pujawan and Geraldin (2009), the HOR is a method built on the idea that proactive supply chain risk management should pay attention to preventive measures by reducing the chances of a risk occurring. The HOR method is a combination of the FMEA (Failure Modes and Effect Analysis) method and the HOQ (House of Quality) model. The FMEA component of the HOR model is used in the stage of analyzing the level of risk, obtained from the calculation of the Risk Priority Number (RPN), which is the product of the chance of a risk occurring (occurrence), the impact of the risk (severity), and the chance of detecting the risk (detection).
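As a hypothetical illustration of the RPN calculation (the scale values below are invented for the example and are not taken from this study): a risk cause rated severity 7, occurrence 5 and detection 4 would give RPN = 7 x 5 x 4 = 140, and risk causes would then be ranked by this score before prevention effort is allocated.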
The House of Risk method is divided into two phases, namely HOR 1 and HOR 2. In the HOR 1 phase, the causes of risk that need to be prevented are prioritized, while in HOR 2 the actions considered most effective in terms of cost and general resources are prioritized (Pujawan & Geraldin, 2009).
The automotive industry has a complex flow of information and materials, making it quite susceptible to errors in the process. An automotive industry cluster with stakeholders who have different interests has a higher operational complexity and therefore carries higher risks. Therefore, the MRMT industry cluster needs to perform risk management on each activity that will be mapped based on the stakeholders involved in MRMT. This will determine how risk mitigation strategies will be applied to the stakeholders involved to prevent them from affecting other stakeholders and disrupting the performance of the industry cluster. This study aims to identify and map potential risks and formulate risk mitigation strategies for the MRMT automotive industry cluster.
LITERATURE REVIEW
Risks in the context of supply chains may emerge from both globalization and the rapid development of technology (Lin et al., 2006). Risk in a supply chain can be the result of a severe macroeconomic situation, problems in social systems, and political issues. In a micro context, risk can also be due to problems with suppliers and internal processes, as well as problems on the demand side. Juttner et al. (2003) suggest that risk sources fall into one of three categories: 1) environmental risk sources, 2) network-related risk sources, or 3) organizational risk sources. Ivanov and Dolgui (2021) distinguish between three levels of disruption propagation in the context of a supply chain, namely: network, process and control. Risk in supply chains, however, can also be the result of poor design of the supply chain itself (Wagner & Bode, 2006). This requires supply chain managers to always include risk factors when making supply chain decisions, be they strategic, tactical, or operational. In response, researchers are now revisiting the concept of supply chain vulnerability (Juttner, 2005; Papadakis, 2006; Wagner & Neshat, 2012).
A supply chain is a complex network involving multiple stakeholders operating within an organizational environment. It encompasses various parties, such as suppliers, manufacturing companies, logistics companies, distribution and sales agents, as well as other stakeholders like infrastructure operators, regulators, banks, and insurance companies. Risks and uncertainties are present at every stage of the activities involved in acquiring goods and services and delivering the final output to the customer (Harland et al., 2003), including support activities. Gheorghe & Mock (1999) propose that stakeholder analysis is an effective approach to studying risk management. The stakeholder approach recognizes diverse risk perceptions, which can impact how supply chain risks are managed. It is important to note that risks occurring within a specific supply chain member can be a consequence of problems in other members of the supply chain. For instance, a delay in material supply at a manufacturing company may be the result of production issues within the supply chain. This delay could also be triggered by a problem on the road, which falls under the responsibility of the government, as one of the stakeholders in the supply chain. Unfortunately, there is limited research addressing how risks are managed concerning different stakeholders within a supply chain system.
RESEARCH METHODOLOGY
Brainstorming with the MRMT industrial cluster stakeholders was conducted to determine the existing condition. Subsequently, identification of stakeholders in the MRMT automotive industry cluster, identification of value chain activities, classification of stakeholders, identification of potential risk events and risk agents, and mapping of the relationship between risk events and risk agents were carried out.
The risk analysis with HOR 1 Multistakeholder begins with assessing the impact of each risk event (severity) on each stakeholder and evaluating the likelihood of each risk event (occurrence) for each risk agent, as well as the relationship between the risk agent and the risk event.
The risk evaluation phase is conducted to determine which risk agents need to be mitigated first. Prioritization is based on the Combined Aggregate Risk Potential (CARP) score of each risk agent. The risk agents that receive priority are those with high CARP scores, and this is determined using a Pareto chart analysis.
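A minimal sketch of the multistakeholder HOR 1 aggregation is given below in Python. It follows the standard HOR formulation, ARP_j = O_j x sum_i(S_i x R_ij), computed separately per stakeholder and summed into the CARP; the array names, scales and the Pareto cut-off value are assumptions used here for illustration, not the exact worksheet used in this study.

```python
import numpy as np

def hor1_multistakeholder(severity, occurrence, relation, pareto_cutoff=0.75):
    """Rank risk agents with the multistakeholder House of Risk phase 1.

    severity   : dict {stakeholder: array of severities S_i per risk event}
    occurrence : array of occurrence scores O_j per risk agent
    relation   : matrix R_ij (risk events x risk agents) of correlation
                 scores (0, 1, 3, 9)
    Returns the CARP per agent and the indices of the priority agents that
    cover `pareto_cutoff` of the cumulative CARP.
    """
    occurrence = np.asarray(occurrence, dtype=float)
    relation = np.asarray(relation, dtype=float)

    # ARP_j = O_j * sum_i S_i * R_ij, evaluated for each stakeholder separately
    arp_per_stakeholder = {
        name: occurrence * (np.asarray(s, dtype=float) @ relation)
        for name, s in severity.items()
    }
    # CARP_j = sum of the stakeholders' ARP_j values
    carp = np.sum(list(arp_per_stakeholder.values()), axis=0)

    # Pareto selection: agents covering the first `pareto_cutoff` of total CARP
    order = np.argsort(carp)[::-1]
    cumulative = np.cumsum(carp[order]) / carp.sum()
    n_priority = int(np.searchsorted(cumulative, pareto_cutoff) + 1)
    return carp, order[:n_priority]
```

With three stakeholders, as in this study, the dictionary holds three severity vectors and the function produces one ARP vector per stakeholder plus the summed CARP described in the text.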
The determination of risk mitigation actions for each risk agent classified as a priority is done using the multistakeholder HOR 2 method, in which the relationship value between each action and each risk agent, and the difficulty level of performing the mitigation, are determined for each stakeholder. The prioritization of the mitigation actions is based on the effectiveness-to-difficulty ratio of each mitigation action for each stakeholder (ETD), ranked from the highest to the lowest value (Asrol, 2017; Djunaedi, 2005; Gillbert, 2007).
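For completeness, a sketch of the HOR 2 ranking step is shown below; it uses the usual HOR 2 quantities (total effectiveness TE_k = sum_j ARP_j x E_jk and ETD_k = TE_k / D_k) with illustrative variable names, and is not the actual worksheet used for the MRMT cluster.

```python
import numpy as np

def hor2_rank_actions(arp_priority, effectiveness, difficulty):
    """Rank mitigation actions with House of Risk phase 2.

    arp_priority  : ARP (or CARP) values of the priority risk agents
    effectiveness : matrix E_jk (priority agents x actions) of correlation
                    scores between agents and mitigation actions (0, 1, 3, 9)
    difficulty    : degree-of-difficulty score D_k of each action
    Returns the ETD_k values and the action indices sorted from the highest
    ETD (implemented first) to the lowest.
    """
    arp_priority = np.asarray(arp_priority, dtype=float)
    effectiveness = np.asarray(effectiveness, dtype=float)
    difficulty = np.asarray(difficulty, dtype=float)

    total_effectiveness = arp_priority @ effectiveness   # TE_k
    etd = total_effectiveness / difficulty                # ETD_k = TE_k / D_k
    return etd, np.argsort(etd)[::-1]
```

In the multistakeholder variant used here, the same calculation is simply repeated with each stakeholder's own effectiveness and difficulty assessments, yielding one ETD ranking per stakeholder.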
Stakeholders of the MRMT Automotive Industry Cluster
The stakeholders of the MRMT Automotive Industry Cluster were identified using the general stakeholder model by Partiwi and Hanoum (2009). The next stakeholders identified for this research are the Ministry of Industry, IOI, and representatives from universities or academics. The selection of these stakeholders is based on the results of brainstorming and the assessment of a stakeholder attribute matrix, namely the level of interest and the power of influence.
Value Chain of the MRMT Automotive Industry Cluster
In the process of mapping the value chain, there are several core processes, including input provision, the production process, ordering and delivery, and consumption. The value chain of the MRMT Automotive Industry Cluster is shown in Figure 1.
Flow Process of the MRMT
The flow process for producing the MRMT starts with input management through inbound logistics, where most of the MRMT inputs are local products, especially from small and medium industries (IKM). Meanwhile, machines and technology are obtained through PT KMD, a subsidiary of Astra Otoparts. The core activities then include part sequencing, which is done to ensure that the required parts arrive at the production line correctly, and scheduling, which is used to align the production volume with the demand forecasting schedule. The MRMT is delivered by PT KMD, as the distributor, to end consumers. An overview of the MRMT flow process is given in Figure 2.
Identifying Risks in MRMT Industry Cluster
In the initial stage of risk identification, several potential risks were collected from previous research references as the initial input for respondents to determine potential risks in the supply chain of the MRMT automotive industry cluster. After the initial potential risks were collected, they were grouped into risk agents and risk events. Lists of the risk agents and the risk events are shown in Table 1 and Table 2, respectively.
After grouping the risk agents and risk events, an initial mapping is conducted to connect each risk agent with a risk event. Figure 3 illustrates the relations between risk agents and risk events. The initial mapping of relationships was then confirmed through discussions with stakeholders.
Risk Assessment Using HOR 1 Multistakeholder
The severity and occurrence assessments are carried out using rating scales based on Anityasari and Wessiani (2011). The rating scales for severity and occurrence are given in Table 3 and Table 4, respectively.
In the multistakeholder HOR 1, there are three severity values, one obtained from each stakeholder; therefore, three ARP (Aggregate Risk Potential) values are obtained, and a CARP (Combined Aggregate Risk Potential) value is obtained by summing the ARP values. The CARP value is used to indicate which risk factors should be prioritized for mitigation action because they have the potential to disrupt the performance of the MRMT automotive industry cluster. Figure 4 shows the CARP (Combined Aggregate Risk Potential) values from the three stakeholders, namely the Ministry of Industry, Academia, and PT KMWI. In this study, 6 priority risk factors were selected based on the Pareto diagram, including (A4) inaccurate demand forecasting, (A15) customer complaints about the product, and (A14) delayed delivery of products to customers.
Table 1. Risk agents
A1 The material specifications provided by the supplier do not meet the standards
A2 Delayed delivery from the supplier
A3 The distributor is experiencing delays in picking up the finished goods
A4 The demand forecast is not accurate enough
A5 There is a buildup of inventory in the form of finished goods
A6 There is a disturbance in the transportation of products at the distributor
A7 There is damage to the production machine
A8 There is a communication error in interpreting information
A9 There is a labor strike that has resulted in the cessation of production
A10 There is a shortage of skilled labor
A11 Human error
A12 There is a less-supportive regulation
A13 There are limitations of credit service companies for consumers
A14 There is a delay in delivering the products to the customers
A15 Complaints from customers regarding the product
A16 Media factor
A17 There is an increase in inflation
A18 There is a fire incident
A19 There is a natural disaster

Table 2. Risk events
The production process is hindered
E3 There is a buildup of inventory in the form of finished goods
E4 There has been a sudden change in production demand
E5 A decrease in customer satisfaction
E6 Additional cost for calling the distributor
E7 The distributor is experiencing delays in picking up or delivering products to customers
E8 Additional costs have emerged
E9 There is a difference in the standard interpretation between the core actor and the supplier
E10 A contractual violation against an institution has occurred
E11 Workplace accident
E12 The impression regarding the existence of hazards in marketed products
E13 Uncertainty in production costs
E14 Sudden price increases (materials, transportation, etc.)
E15 The factory is unable to operate or is undergoing a forced shutdown
E16 There is an overstock in the storage warehouse
E17 Errors in production planning
E18 Complaints regarding the addition of the MRMT application
Selection of Risk Mitigation using HOR 2 Multistakeholder
Based on the identified priority risk causes, the next step is to determine the mitigation actions or preventive actions for the selected risk causes. The determination of risk mitigation actions is obtained from the results of brainstorming sessions with the stakeholders who will execute those mitigation actions. The risk mitigation actions referring to the risk agents are presented in Table 7.

Table 4. Occurrence rating scale (excerpt): Likely, 50%-75%, score 5; Almost certain, >75%.

Figure 3. Diagram of Risk Agents and Risk Events Relationships

Table 5. Assessment Scale of Correlation between Cause and Event: Level 0, no correlation; Level 1, low correlation; Level 3, moderate correlation; Level 9, high correlation.

To facilitate the assessment of mitigation actions, a bar chart is needed for each stakeholder. The implementation of risk mitigation actions starts with the highest ETD value and proceeds to the lowest, because mitigation actions with high ETD values are easier to implement than those with low ETD values. Based on Figure 5, mitigation can begin with (PA9) imposing punishment on suppliers, which has the highest ETD value.
RESULTS AND DISCUSSION
The selection of stakeholders can be determined by assessing them using a stakeholder attribute matrix. The attributes used are the level of interest and the power of influence. The level of interest refers to how strongly the stakeholder is interested in the activities and other stakeholders within the industry cluster, while the power of influence reflects the stakeholder's strength in influencing the activities of the industry cluster (Fujita and Thisse, 1996; Djamhari, 2006; Ho et al., 2015).
Mismatches in the material specifications sent by suppliers, inaccurate demand forecasts, and production machine breakdowns are the highest-ranked risks that cause production quality not to meet expectations and/or delays in completion. The different interests of the MRMT cluster stakeholders need to be synchronized so that effective risk mitigation can be developed. Based on the risk impact analysis generated through the multistakeholder HOR 1, the CARP value is obtained, which is an aggregate of the ARP values of each stakeholder and is used for further risk mitigation.
Accuracy in forecasting the need or demand for products determines the planning and procurement of the materials that form them; therefore, forecasting must be done using the right method with accurate data. If the actual demand is greater than the forecast, the company cannot meet the existing demand; conversely, if the actual demand is lower than the forecast, there will be a buildup of finished goods in the warehouse. With the decline in customer satisfaction or the buildup of finished goods in the warehouse, the government will have to help solve the problem with various policies.
The causes of the risk of material specifications sent by suppliers not meeting standards include the fact that most local suppliers (70%) are SMEs (small and medium industries), which generally have weak quality control systems. The material specifications in question may fail to meet the standards in terms of quantity or quality. This can cause the production process to be hampered and cause losses for the company. Therefore, the development of these SMEs needs to be a focus of the development programs carried out by the government in collaboration with universities and related institutions, as well as by PT KMWI, so that quality and consistency can continue to be improved and guaranteed in the future.
PT KMWI is the MRMT manufacturer selected by the Ministry of Industry (MoI), with the important function of carrying out the production process and a first-hand understanding of the conditions of MRMT manufacturing. The success of the MRMT production process is largely determined by PT KMWI. The next stakeholder is the Ministry of Industry, a government agency stakeholder. The MoI has high interest and power as the business owner and has significant authority in decision making. The Ministry of Industry is the government agency that initiated the planning of the MRMT, supported by several other associations. From product planning to mass production and after-sales of the MRMT, everything is done under the supervision of the MoI. The last stakeholders with high interest and power are the universities, or academia, which play a role in the development and research behind the MRMT. The role of universities is important in the research and testing process, where the feasibility of producing the MRMT is determined, in part, by universities.
CONCLUSIONS
This paper is an attempt to analyse supply chain risks in the context of an enterprise that connects with other stakeholders, in particular the government and academics. We identified 36 risk potentials, which were grouped into 19 risk causes and 18 risk events. From the assessment of severity and occurrence, as well as the correlation between risk causes and risk events, 6 priority risk causes were identified with a total cumulative CARP of 75%. Mitigation actions for the priority risks were determined through a brainstorming process with stakeholders in the automotive industry cluster. This paper enriches the literature on the involvement of other stakeholders in managing supply chain risks. Future research is expected to involve different stakeholders more specifically in handling risks within a supply chain network.
Strengthening communities’ disaster resilience during COVID-19 time: A case of Muhammadiyah in Indonesia
Since early March 2020, the COVID-19 pandemic that broke out in Indonesia has had a significant impact on various aspects of life. The pandemic has not only pushed the government to take strategic decisions but has also forced communities to adapt to this condition immediately. Muhammadiyah, one of Indonesia's largest religious organizations, has played its role in overcoming the COVID-19 pandemic alongside the government. This study aims to determine the contribution of Muhammadiyah to tackling the pandemic by strengthening community disaster resilience. The study uses a qualitative approach and a descriptive method. The primary data were obtained through in-depth interviews with Muhammadiyah organization leaders and several working teams handling the pandemic, while a literature study was conducted to obtain secondary data. To tackle the pandemic by strengthening communities' disaster resilience, Muhammadiyah carried out several programs, including (1) strengthening da'wah networks massively to enhance community awareness of the COVID-19 pandemic, (2) establishing several working teams handling the pandemic to enhance community disaster resilience, including the Muhammadiyah COVID-19 Command Center (MCCC), the Muhammadiyah Disaster Management Center (MDMC), and the philanthropic body of Muhammadiyah (LazisMu), and (3) engaging with stakeholders and development agencies, such as collaborating with the Ministry of Health, BNPB, DFAT and USAID on COVID-19 preventive and curative action.
Introduction
Since the end of 2019, countries across the world have had to fight the pandemic caused by the novel coronavirus known as COVID-19. The virus was first discovered in the Chinese city of Wuhan in December 2019, and the Huanan traditional market in Wuhan was claimed to be the site of the initial spread of the coronavirus (Aida, 2020b). One of the suspected initial causes of COVID-19 is transmission through bats consumed by humans. The virus spread to several countries, with cases confirmed positive, recovered, and deceased, because it has a high human-to-human transmission rate. After many cases were found, the Chinese government immediately took emergency steps as preventive measures to stop the virus from spreading further. The initial step taken by the Chinese government was to quarantine the city of Wuhan and several other cities in Hubei province. The government closed all transportation access and, for the first week, stopped 50 international flights to 20 countries (Aida, 2020b). On January 30, 2020, the World Health Organization (WHO) declared the coronavirus outbreak a Public Health Emergency of International Concern (PHEIC), the WHO's highest warning level, and the WHO Director-General, Tedros Adhanom, reconvened the IHR Emergency Committee (EC). At that point there were 98 cases in 18 countries outside China, with no deaths. On March 11, 2020, WHO officially declared COVID-19 a global pandemic. In the months after the first case appeared in China, COVID-19 continued spreading around the world, reaching nearly 39 million confirmed cases in 189 countries. The United States, Brazil, Russia, Spain, Italy, Britain, and India were the countries with the highest numbers of cases. To prevent further transmission, governments in various countries limited public mobility and quarantined their citizens. Some countries that set quarantine policies include China, Italy, Spain, France, Malaysia, the Philippines, and New Zealand (Aida, 2020).
In Indonesia, the government confirmed the first two COVID-19 cases on March 2, 2020; the third and fourth cases were confirmed on March 6, 2020. Presidential Decree of the Republic of Indonesia Number 7 of 2020, concerning the formation of a Rapid-Response Team led by the Head of the National Disaster Management Agency (BNPB), was issued on March 13, 2020, when 69 people had been confirmed positive for COVID-19 in Indonesia. The Head of BNPB then declared COVID-19 a non-natural emergency on the same day that the Minister of Transportation, Budi Karya, was announced to have been infected with COVID-19, March 14, 2020, when 96 people had been confirmed positive. The next day, the President and the entire Cabinet underwent testing, by which time the number of positive cases had increased to 117 (Vermonte & Wicaksono, 2020).
To overcome the pandemic, cross-sector collaboration is needed. Besides government agencies, community organizations, NGOs, and civil society also have a big role. Muhammadiyah, the second-largest Islamic organization in Indonesia, has an enormous network and a large mass base. Therefore, the role of Muhammadiyah is very important in assisting the government in handling the COVID-19 pandemic.
The initial response of the Muhammadiyah Central Board after the government confirmed the first case of COVID-19 was to form a special agency for handling the pandemic, namely the Muhammadiyah COVID-19 Command Center (MCCC). The Muhammadiyah Central Board, through the General Health Supervisory Council (MPKU) and the Muhammadiyah Disaster Management Center (MDMC), jointly formed the MCCC. The purpose of the MCCC is to coordinate the implementation of programs and actions to deal with COVID-19. The MCCC also issues specific guidelines related to preventive behavior, worship, new life habits, and how to stop the spread of COVID-19 (Dayasos, 2020).
Muhammadiyah has become an example of building awareness in Indonesia to face the COVID-19 pandemic, with a clear strategic role in building and generating awareness of this non-natural disaster. Rahmawati Husen from the MCCC said Muhammadiyah could take advantage of all donations aimed at overcoming the spread of COVID-19, with all Muhammadiyah communities collaborating to provide assistance in tackling the pandemic (Subarkah, 2020). From the background described above, the focus of this research is to identify each step or program implemented by Muhammadiyah in helping the government overcome the pandemic, how Muhammadiyah strengthens its communities' da'wah network, and how Muhammadiyah builds community awareness that is responsive and disaster resilient. The crucial question is what this social movement by Muhammadiyah means from a sociological perspective.
Method
The present research used a descriptive method. The descriptive method is the search for the correct interpretation of facts used to study problems in society, as well as the procedures that apply in society and in certain situations, including the relations between activities, attitudes, views, and processes that take place and the effects of a phenomenon (Mohammad, 1998). According to Cassell and Symon, qualitative methods are social science research methods that accurately describe and interpret the meaning of phenomena that occur in a social context. Qualitative methods emphasize the importance of extracting data through written or spoken sources (Cassell & Symon, 2004).
By using qualitative methods, it was expected that this study could provide complex textual descriptions of how people experience a research issue. Such methods provide information about the "human" side of an issue, that is, the often contradictory behaviors, beliefs, opinions, emotions, and relationships of individuals. Qualitative methods are also effective in identifying intangible factors, which can help us interpret and better understand complex realities. The qualitative method has been used to analyze how Muhammadiyah overcomes the pandemic by strengthening da'wah and building a disaster-resilient community.
The data collection in this research includes primary and secondary data, namely a combination of in-depth interviews and a literature review. The first category focuses on digging out the disaster-resilience work under each person in charge's responsibilities. In-depth interviews are optimal for collecting data on individuals' personal histories, perspectives, and experiences, particularly when sensitive topics are being explored. The researchers selected the informants through purposive sampling, selecting the person in charge of each division. In this research, in-depth interviews were conducted with three informants: from the Muhammadiyah COVID-19 Command Center (MCCC) at the Central Board, and from the Muhammadiyah Disaster Management Center (MDMC) and LazisMu in Kudus regency. Meanwhile, the second category addresses the research through relevant literature, examining related previous studies. The researchers conducted the literature review by collecting data from journals, books, news, officially issued documents, and the official websites of the MCCC, Muhammadiyah, and its councils. The combination of data collection techniques aims to obtain valid and precise data, which can be confirmed by the informants.
To check the validity of the findings, the researchers used triangulation methods. Triangulation, according to Sugiyono (2011), is defined as a technique that combines various data collection techniques and existing data sources (Sugiyono, 2014). Triangulation is used to track dissimilarities between the data obtained from one informant and other informants. In this study, the triangulation carried out was 1) triangulation of sources, namely by comparing what was said by the subject with what was said by other informants, so that the data obtained can be trusted because it is obtained not only from one source (the research subject) but also from other sources, such as several working teams of Muhammadiyah; and 2) using reference materials. Reference material is a supporting tool for proving the data found by the researcher; interview results, for example, need to be supported by recordings. In this study, the researchers used a voice recorder to record the interviews with the informants.

Muhammadiyah as an Islamic movement and its contribution to social development

Shepard (2004) categorizes Muhammadiyah as an Islamic modernist movement that focuses its attention on the da'wah movement in society rather than on state affairs. This means that this modernist, non-political, cultural Islamic organization concentrates more on education and social welfare. Muhammadiyah differs from Islamic revivalism, which is engaged in the political sphere, but it also did not become like modernist movements with secular and radical views. Wasathiyyah is used by Muhammadiyah as a moderate ideology that presents a richer view and becomes an alternative point of view. It cannot be interpreted as an unclear or fickle understanding, because Muhammadiyah, in practicing wasathiyyah (middle wing community), believes in, understands, and implements Islamic values so that everything carried out is always actual and the religion remains relevant for civilization throughout the ages (Nashir, 2019).
Muhammadiyah, which was born amid colonialism and founded by KH. Ahmad Dahlan, held the view that the condition of society needed to develop in line with the practice of Islamic values in all areas of life (aqidah, worship, morality, and mu'amalah). In addition, the issues raised by KH. Ahmad Dahlan included the development of life, reflected in the establishment of organizations and the institutionalization of education and health facilities. Because of these two concerns, the founder of Muhammadiyah built this multi-sector organization to embody both orthodoxy and orthopraxy. Muhammadiyah, in particular, forms councils or bodies adapted to its needs in order to address the problems of the ummah. This decision is inseparable from the contextualization of the problems faced by the community.
According to Baidhawy (2016: 90), the concept of the ummah in the Qur'an comprises (1) nation, (2) group/society, (3) religion, (4) a certain period, (5) human beings, and (6) religious communities. These six meanings underline that the ummah has a value factor, so it can be interpreted that there is an attachment to certain values that are believed in. Besides the value factor, there is a time factor, referring to our attachment to life in a certain period. Finally, there is the spatial factor, which refers to the scope in which the process occurs, namely at the smallest level of religious groups, then communities, nations, and the ummah (Baidhawy, 2016).
Because its movement relies on offering solutions to the problems of the ummah, Muhammadiyah, according to Baidhawy (2016: 99), is also classified as a civil society organization that can play a role in the space of social interaction between politics and the economy. In this social space, Muhammadiyah moves from the family (usrah) and the community (qaryah) through society to the scope of the state (baldah). This role is not directly within the vortex of political stakeholders and economic power holders, but appears more in alternative activities in the life of democratic associations and in the cultural public sphere (Baidhawy, 2016).
As an Islamic organization and a civil society actor, Muhammadiyah develops and contextualizes its ideology according to its period. This is reflected in several agreements reached by its highest deliberative bodies, one of which is the Preamble of the Muhammadiyah Articles of Association (Muqaddimah Anggaran Dasar Muhammadiyah), drafted because of the importance of guarding the spirit of the organization and strengthening its ideology against external influences. The first formulation of the organizational ideal was the "True Islamic Society". The correlation between the concept of the ummah, Muhammadiyah's ideology, and its targets in civil society can be seen in Table 1 (Baidhawy, 2016).

In the economic sphere, Muhammadiyah contributes to independence, upholding justice, and driving the economy in society. For this function, Muhammadiyah has a trusted philanthropic institution to manage zakat, infaq, and sadaqah in the practice of interpreting Al-Ma'un. Muhammadiyah also defends the rights of the mustadh'afin (the oppressed). With good institutionalization, Muhammadiyah can build business institutions that distance themselves from the pursuit of capital. In the cultural sphere, Muhammadiyah can play a role in the intellectual and moral space and strengthen the ideology underlying its alignments. It does so by bringing renewal (tajdid) in thought and movement while remaining transcendent in facing emerging moral problems. In addition, Muhammadiyah can monitor government policies: it can be a partner when the government does what it should and an alternative movement or critic when the state fails to carry out its duties.
In the political sphere, Muhammadiyah acts in the public arena as a force of the public voice that continuously deliberates the public interest. Its intention is to absorb public opinion and organize it into control over government institutions, and it thereby indirectly influences the policies implemented by the state.
To realize these ideals, Muhammadiyah has built several councils and bodies, each with a specific purpose: the Public Health Advisory Council (Majelis Pembina Kesehatan Umum) and the Social Services Council (Majelis Pelayanan Sosial), which develop and expand the strength of the movement's base so that it provides broad, integrated general health and social services; the Tarjih and Tajdid Council, which keeps tarjih, tajdid, and Islamic thought in Muhammadiyah alive as a dynamic-critical renewal movement in people's lives, proactive in addressing the problems and challenges of socio-cultural development; the Tabligh Council, which increases the quantity and quality of Muhammadiyah's role as a social da'wah movement with a direct influence on creating an Islamic society; the Higher Education, Research and Development Council (Majelis Diktilitbang); the Community Empowerment Council (Majelis Pemberdayaan Masyarakat), which provides a solid foundation for pioneering and developing empowerment activities and encourages processes of social transformation in society; the Muhammadiyah Disaster Management Center (MDMC); and the Islamic philanthropic funds management body (LazisMu) (P. P. Muhammadiyah, n.d.).
The contribution of Muhammadiyah to building a disaster-resilient community during the pandemic era
The United Nations Office for Disaster Risk Reduction (UNDRR) defines resilience as the ability of a system, community, or society to face a disaster by resisting, absorbing, accommodating, adapting to, transforming, and recovering from its effects and hazards in a timely and efficient manner, including by preserving and restoring essential basic structures and functions through risk management (Community Engagement for Disaster Resilience, 2000). Meanwhile, society is defined as a system of habits, procedures, authority, and cooperation between various groups, together with the classification and control of human behavior and habits; society means living together for a period long enough to produce customs. Selo Soemarjan defines society as people who live together, produce culture, share a common area and identity, and have habits, attitudes, traditions, and a feeling of unity bound by their similarities (Soekanto, 2006).
Community resilience can be understood as: a) the capacity to absorb crushing pressures or forces, through resistance or adaptation; b) the capacity to manage, or maintain, certain basic functions and structures during future catastrophic events; and c) the capacity to recuperate or 'bounce back' after a disaster (Twigg, 2004). The terms 'resilience' and 'vulnerability' are essentially relative, so it is necessary to examine which individuals, societies, and systems are vulnerable or resistant to disaster. A 'disaster-resilient society' is the more desirable condition, yet no society is completely safe from disasters or from hazards associated with human activities. A disaster-resilient society can thus be imagined as one that, with the highest achievable level of security, designs and builds in an environment that contains disaster risk, minimizing vulnerability by maximizing the implementation of disaster risk reduction measures (Indiyanto & Kuswanjono, 2012). To improve community disaster resilience, five forms of capital are needed, as described in Table 2 (Mayunga, 2007).

Putnam (1995) defines social capital as the characteristics of social organizations, such as networks, norms, and social trust, that facilitate coordination and cooperation to achieve mutual benefits. In community resilience, social capital is reflected in the quality and quantity of social cooperation. Social networks are beneficial because they allow individuals to use the resources in their social communities and increase the likelihood that those communities will address their collective problems (Mayunga, 2007). During the recovery phase of a disaster, social capital serves as a resource frequently employed by local, regional, and national governments. More populated communities often provide greater opportunities for bridging social capital, which can contribute to the formation of new ideas and innovations. Community engagement and the power of social networks may help identify objectives and solutions that are more suitable, longer-lasting, and supported by influential communities (Jewett et al., 2021).
Social capital of Muhammadiyah to build community disaster resilience
Because of the rapid spread of COVID-19, Muhammadiyah assists and protects communities in efforts to break the chain of transmission and deal with the pandemic. Soon after the virus emerged in Indonesia, Muhammadiyah's immediate response was to form a special body, the Muhammadiyah COVID-19 Command Center (MCCC). The MCCC has managed the crisis from the pandemic warning period, through the initial rise in case numbers, up to the time of writing in July, with diverse programs directed at all areas; by then Indonesia had recorded more than 2 million confirmed cases, with around 41,000 new cases per day. Budi Santoso, Coordinator of the Information Dissemination and Communication Division of the MCCC Central Board, stated in an interview: "MCCC urges Muhammadiyah internal and interfaith public to take promotive, preventive, curative and rehabilitative steps. Promotional examples are education through educational car services, etc. Preventive by socializing the prevention of the spread of COVID-19 as posters and explanations on 65 radio networks, lastly the provision of shelters. Curative by serving COVID-19 patients at Muhammadiyah-'Aisyiyah Hospital ... There is a vaccination program, collaboration with 28 PTMA (Muhammadiyah 'Aisyiyah Colleges) and doctor support for handling, public health for epidemiology, ulama for ease of worship during a pandemic and educational support while studying at home. For rehabilitation, there are psychosocial services for religious services, psychological and health support." The COVID-19 pandemic forces people to adapt to new habits of life based on restrictive government policies, including in carrying out worship and religious activities. These restrictions aim to prevent the transmission of the virus and minimize the death rate, but they also cause unrest in the community. Some Indonesians still believe that COVID-19 is just a conspiracy and therefore refuse to implement health protocols and new habits. This issue is a major concern for Muhammadiyah, which uses da'wah to build an understanding among the jama'ah and the general public of the importance of recognizing COVID-19 as a non-natural disaster that must be faced together. In response to the pandemic, the Muhammadiyah Central Board relies on its internal network, which helps it reach the wider community, down to the grassroots, in an efficient time; its systematized communication also supports the effectiveness of prevention and treatment programs from the national to the district level. To improve its performance in overcoming the pandemic, Muhammadiyah has also expanded cooperation with external parties, including the government, civil society, NGOs, the media, development aid agencies, and others. Collaborations include working with the Ministry of Health to establish COVID-19 emergency hospitals, with BNPB and DFAT to build call centers and hold several webinars on COVID-19 prevention, and with USAID to distribute medical devices (P. P. Muhammadiyah). The partnerships between Muhammadiyah and other institutions in handling the COVID-19 pandemic can be seen in Figure 1 (MCCC, 2020b).
The second aspect of social capital is norms. During the pandemic, Muhammadiyah set norms to regulate the activities of its members and of society. Muhammadiyah publishes rulings issued by the Tarjih Council on conduct during the pandemic, including various policies adjusting how certain acts of worship can be carried out. These rulings remind Muhammadiyah members and the community to follow the established rules in order to slow the growth in cases and minimize the hazards that may develop. The norms are socialized massively through various media, one of which is a special virtual sermon series held during the pandemic, named "Muhammadiyah with You". The sermons cover a variety of themes, including health (every Wednesday and Saturday), psychology (every Tuesday and Friday), religion (every Monday and Thursday), and stories from COVID-19 survivors (every Sunday). Besides the online sermons, the norms are also socialized through Muhammadiyah social media, printed media, television, radio, posters, and other channels (Mediamu, 2021).
The third aspect of social capital is social trust. Trust in this context refers to how community members trust one another (including community leaders), how the public trusts the government, and how the community trusts other organizations involved in disaster response (Fraser & Aldrich, 2021). Social trust in Muhammadiyah is strengthened by involving the ulama to increase public awareness. The ulama actively guide the community in carrying out activities during the pandemic by continually actualizing religious values and benefiting others; they play a crucial role in strengthening the ummah throughout the epidemic and in establishing a resilient community. As the person in charge at the MCCC Central Board, Budi Santoso, stated in an interview: "Muhammadiyah can be a best practice for other civil society because the application of Islamic values is the basis for comprehensive handling during the pandemic."
The economic capital of Muhammadiyah to build community disaster resilience
Economic capital refers to the financial resources that people use to secure their livelihoods. The contribution of economic capital to community resilience lies in increasing the ability and capacity of individuals, groups, and communities to absorb the impact of disasters and to undergo the recovery process (Mayunga, 2007). To that end, Muhammadiyah and its philanthropic funds manager, LazisMu, have disbursed Rp 347,801,832,234.00 to a total of 32,052,238 beneficiaries since the first COVID-19 case was announced in Indonesia. These funds exclude the cost of treating COVID-19 patients in Muhammadiyah-'Aisyiyah hospitals and volunteers' operational costs.
One economic effect of the pandemic is that vulnerable people were unable to access adequate food. This happened because of decreasing family incomes, layoffs, the absence of financial aid from the government or private institutions, or large family sizes. To help the community access an adequate diet, Muhammadiyah runs a family food security movement called GETAPAK (Gerakan Ketahanan Pangan Keluarga), a collaboration of the Muhammadiyah Community Empowerment Council, MDMC, and the Australian Government (DFAT) that had reached 4,381 people in 4 regencies and 15 different cities as of 3 July 2021 (P. P. Muhammadiyah). The Higher Education, Research and Development Council also had 40 universities taking part in the COVID-19 recovery process (P. P. Muhammadiyah). The council provided tuition fee cuts and internet quotas for its students; the tuition fee cuts reached Rp 57,975,000,000.00, while the internet quota had been used by 187,667 students at Muhammadiyah universities and other higher education institutions in Indonesia. The purpose of this program is to enable students to learn optimally from their own homes without direct physical interaction, so that the learning process can continue successfully during the pandemic. Alongside the food security and education programs, Muhammadiyah also disbursed Rp 7,839,666,500.00 in cash assistance with continued mentoring so that recipients can manage the funds optimally (MCCC and Its Commitment to Flatten The Curve, 2020).
In addition, several regions provided financial aid for people in self-isolation. As the person in charge at LazisMu Kudus, Latif Muhtadin, stated in an interview: "The food security for self-isolation is held recently, 2021. Because it is urgent until the Central Board asked. For the recent disbursement, it costs 1,2-1,4 million depends on the assessment from the number of family members in each house... MCCC informed the assessment form for self-isolation, then it is shared with Muhammadiyah members and jama'ah to estimate the funds (by LazisMu)." 'Aisyiyah, a semi-autonomous organization of Muhammadiyah, in collaboration with MDMC ran a family resilience program, Katavid, in the first year of COVID-19 in Indonesia (Aisyiyah Jateng, 2020). The orientation of this program is to ease the financial burden by relying on the nearest neighborhood. The financial-support activities comprise (1) charity fundraising, covering not only money but also groceries and vegetable seeds; (2) home gardening, planting easy-to-grow vegetables in the backyard; and (3) strengthening family economic resilience by managing financial matters and buying from the nearest neighborhood. As COVID-19 cases worsened, the Katavid program was replaced by the funeral ceremony service run by the Kamboja team. As the person in charge of MDMC Kudus, Satriyo Yudo, stated in an interview: "As the first year (of COVID-19) we focused on Katavid, then saw the situation and condition (the confirmed patients died in several hospitals), we focused more on the funeral ceremony process because it cannot be carelessly (based on Islamic values)."
The physical capital of Muhammadiyah to build community disaster resilience
Physical capital refers to the built environment, such as physical infrastructure, transportation systems, and shelters. It also includes critical infrastructure such as hospitals. Physical capital is one of the most important resources in building community capacity to deal with disasters, as it can support the community during an emergency (Mayunga, 2007).
Health care services have always been a priority on Muhammadiyah's agenda: they are seen as the best way to propagate Islamic values, empower the Islamic community, and enhance Muslims' social conditions (Fuad, 2002). Since its establishment in 1912, Muhammadiyah has had a strong orientation toward community health care, so soon after the first Indonesian COVID-19 case was announced, the Central Board issued an information letter directing its health enterprises to align with the Indonesian Ministry of Health protocol for tackling COVID-19 (Aisyiyah Jateng, 2020). Of the 114 Muhammadiyah and 'Aisyiyah hospitals in total (Data Rumah Sakit Muhammadiyah Aisyiyah, n.d.), 86 are available to provide COVID-19 patient treatment in Indonesia (P. P. Muhammadiyah), most of them in Central Java and East Java. The number is projected to increase, as 29 more hospitals are preparing to open their services. A centralized data system on the MCCC website is used to ensure that every member receives information on prevention programs and related actions to tackle COVID-19. As of 3 July 2021, COVID-19 patients treated by Muhammadiyah health enterprises comprised 3,773 ODPs (People Under Monitoring), 3,366 PDPs (Patients Under Supervision), and 22,080 persons confirmed positive for COVID-19, along with 3,119 probable cases and 15,900 suspected cases, bringing the total number of patients to 43,488 people.
When hospitals across the country were collapsing under an immediate surge in bed occupancy and a sharp increase in severe cases, Muhammadiyah prepared 317 rooms in several parts of Jakarta, Central Java, East Java, and the Yogyakarta region to serve as shelters for people infected with COVID-19 who had moderate illness. The shelter, called Pesantren COVID-19, has standby health workers, basic health care equipment, medicines for COVID-19 symptoms, and programs to speed up recovery. A total of 867 patients have been treated in the shelters. In Kudus, the MCCC did not cover all self-isolation costs; those treated were asked to pay what they could afford. As the person in charge at LazisMu Kudus, Latif Muhtadin, stated in an interview: "(The shelter) was built in the first June. It is still being operated on the first process is screening in 'Aisyiyah hospital. In the shelter, the cost of isolation is not purely borne (by MCCC), the family (of the patient/s) will be charged as they can afford. So, if the isolation is 10 days, per person, nearly 170,000 per day." Along with the health care centers, Muhammadiyah provides an ambulance service that transports patients between hospital and home and also delivers funerals out of town. Drivers are equipped with personal protective equipment for the safety of both parties. For prevention, Muhammadiyah has fitted public places with decontamination chambers and portable sink aids; the chambers have been used by 211,025 persons, while 26,500 people have used the portable sinks (P. P. Muhammadiyah).
The human capital of Muhammadiyah to build community disaster resilience
Human capital is often associated with education, including the knowledge and skills accumulated through education, training, and experience. In disaster resilience, human capital also includes individuals' knowledge and skills regarding hazards and hazard history (Mayunga, 2007).
In this pandemic era, to increase the knowledge of Muhammadiyah members and the public, Muhammadiyah has compiled various educational programs about the COVID-19 pandemic. The wider the community's knowledge, the higher its level of awareness. The educational programs implemented include mobile education cars, webinars, the distribution of posters and leaflets, the preparation of COVID-19 guidebooks, Covid-Talk programs, and radio broadcasts. The educational themes socialized to the public include how to carry out self-isolation properly; how to protect families from COVID-19 transmission; how to manage and prepare COVID-19 corpses for burial; what the benefits of vaccines are; and many others.
This educational program, initiated by the MCCC, has run consistently since the beginning of the pandemic. Through this consistency, community understanding has become increasingly well formed, although many challenges remain, especially at the grassroots level. Grassroots communities are heterogeneous in terms of education, work, and sociocultural background. Many people at this level do not believe in the COVID-19 pandemic, or assume that their bodies will not be attacked by COVID-19 because they are used to hard work and are therefore always strong. Dealing with a society of these characteristics requires a different approach, one that works through the habits and patterns they already understand.
Another aspect of human capital is skills. In the COVID-19 pandemic, Muhammadiyah's effort to improve community disaster resilience includes equipping the community with skills for dealing with the pandemic; besides sound knowledge of COVID-19, such skills are also needed. One program initiated by the MCCC is the COVID-19 Resilient Family (Katavid) program, run through the synergy of the MCCC with 'Aisyiyah, an autonomous Muhammadiyah organization. A Katavid family is one that possesses resilience in the form of awareness, knowledge, and skills that are continuously developed to reduce the impact of COVID-19 on families.
Katavid is also expected to optimize family functions, namely the religious, social and cultural, love, protection, reproduction, education and socialization, economic, and environmental development functions. Its programs comprise several types of activities: charity fundraising, making live stalls, socializing and educating about clean and healthy lifestyles, making fabric masks, strengthening family economic resilience, and rapid COVID-19 response movements. To guide the running of Katavid, 'Aisyiyah's leaders have compiled technical and implementation guidelines, which are distributed to various regions in the province (MCCC, 2020a).
Besides the Katavid program, Muhammadiyah also recruits volunteers from its various autonomous organizations to work under the coordination of MDMC and MCCC. As Satriyo, the MDMC and MCCC Coordinator of Kudus Regency, stated: "We recruited volunteers from various autonomous organizations, some from the Muhammadiyah Student Association, Hizbul Wathan, Pemuda Muhammadiyah, Nasyiatul Aisyiyah, Tapak Suci, and Aisyiyah. We have previously provided these volunteers with training, even before that there were volunteers for disasters in Indonesia, such as the earthquake in Palu. Most of the volunteers are students. Their enthusiasm is very high. We involve these volunteers in activities such as evacuating COVID-19 patients, curing COVID bodies, funerals for COVID-19 bodies, and others."
The natural capital of Muhammadiyah to build community disaster resilience
Natural capital refers to the natural resources that are important in supporting the balance between human life and nature (Mayunga, 2007); the environment or ecology, with its water quality and natural resources, can supply the materials people need to survive (Belle et al., 2017). In the COVID-19 pandemic, which is categorized as a non-natural disaster, the natural capital aspect concerns how plants and fish can be cultivated to support food security independently. One such form of cultivation is budikdamber (fish farming in buckets), an aquaponics-style system that cultivates fish and vegetables in a single bucket (poly-culture of fish and vegetables). This growing technique is considered more efficient than conventional aquaponics, which requires power, a large plot of land, and expensive and intricate equipment. According to Satriyo, the regional MCCC coordinator, in putting natural resources to use during the current pandemic, each region was notified by the center that every family should carry out budikdamber activities to meet its own food needs and thereby further minimize activities outside the house. The MCCC reported distributing a total of 20,000 vegetable seeds.
The social movements and humanitarian mission of Muhammadiyah during the pandemic era
Society is always moving, developing, and changing. The dynamics of society can arise from internal factors inherent in the community itself, as well as from external environmental factors (Goa, n.d.). Meanwhile, the COVID-19 pandemic, which has struck since the end of 2019, has brought major social changes in people's lives. The community is required to adapt quickly to pandemic conditions, and almost all aspects of life are affected. In this process of social change, the community needs support from various parties, both governmental and non-governmental.
Muhammadiyah, as an organization with a large mass base, has made a major contribution to bringing about social change in society, especially in terms of community resilience in facing the pandemic. Muhammadiyah maximizes various forms of capital to build resilient communities: social, economic, human, physical, and natural capital.
Historically, Muhammadiyah's attention to humanitarian missions began with the idea of HM Syuja', conveyed at the "Friday Night Recitation/Association" in 1917. The humanitarian mission was then institutionalized as the General Affairs Assistance, now known by the popular name PKU Hospital. At that time, however, the humanitarian mission carried out by the PKU was still limited to serving the poor in a centralized way at the PKO. The humanitarian mission carried out by Muhammadiyah does not take into account differences of religion, ethnicity, or other backgrounds, which can otherwise become obstacles to providing assistance (Falahuddin, 2020).
Muhammadiyah's view is that all people affected by disasters, whoever they are and whatever their background, must be assisted and empowered so that they can live decently. According to Budi Setiawan, Chairman of the Muhammadiyah Disaster Management Center (MDMC), Muhammadiyah's concern for humanitarian missions for disaster victims began when Mount Kelud erupted in 1919, claiming 5,000 lives. Then, in 1963, Mount Agung erupted, coinciding with the Muhammadiyah Tanwir; at that Tanwir session it was agreed that a task force be formed to deal with disaster victims. Since then, Muhammadiyah has been actively involved in humanitarian missions, not only at the local level but also at the global level (Falahuddin, 2020).
From a sociological perspective, Muhammadiyah's efforts to realize community disaster resilience have brought about social change, including by encouraging people to adapt to new habits during the pandemic. This social change is accompanied by a humanitarian mission realized through assistance in various fields, such as education, health, and the economy, for people affected by the pandemic, not only internal Muhammadiyah members but all members of the community.
Conclusion
Muhammadiyah has played a role in handling the COVID-19 pandemic alongside the government by strengthening community disaster resilience in Indonesia. This is shown in the five forms of capital for disaster resilience managed by Muhammadiyah's specific working teams. To enhance community awareness of the COVID-19 pandemic, Muhammadiyah uses da'wah as social and human capital to scale up the prevention of COVID-19 transmission massively. In terms of economic and natural capital, Muhammadiyah and its various collaborations increased the capability of individuals, families, groups, and communities to absorb the impact of COVID-19 and to support the recovery process. Muhammadiyah has long had a strong orientation toward community health care; as soon as the first Indonesian COVID-19 case was announced, it built facilities to support physical capital through several partnerships.
One of the previous studies on Muhammadiyah's efforts to overcome the pandemic was written by Falahuddin under the title "Respons Muhammadiyah Menghadapi Covid-19". The approach taken in that research was the humanitarian mission carried out by Muhammadiyah, whose responses to the pandemic included: 1) making social distancing more effective; 2) establishing the MCCC; and 3) synergizing with the government and all parties. The novelty of the present study lies in its use of the communities' disaster-resilience approach: the researchers identified a conceptual framework for the relationship between capital domains and communities' disaster resilience, comprising social capital, economic capital, human capital, physical capital, and natural capital.
Declaration of Ownership
This article is our original work.
Conflict of Interest
There is no conflict of interest to declare in this article.
Ethical Clearance
This study was approved by the institution.
Mutual communication between radiosensitive and radioresistant esophageal cancer cells modulates their radiosensitivity
Radiotherapy is an important treatment modality for patients with esophageal cancer; however, the response to radiation varies among different tumor subpopulations due to tumor heterogeneity. Cancer cells that survive radiotherapy (i.e., radioresistant) may proliferate, ultimately resulting in cancer relapse. However, the interaction between radiosensitive and radioresistant cancer cells remains to be elucidated. In this study, we found that the mutual communication between radiosensitive and radioresistant esophageal cancer cells modulated their radiosensitivity. Radiosensitive cells secreted more exosomal let-7a and less interleukin-6 (IL-6) than radioresistant cells. Exosomal let-7a secreted by radiosensitive cells increased the radiosensitivity of radioresistant cells, whereas IL-6 secreted by radioresistant cells decreased the radiosensitivity of radiosensitive cells. Although the serum levels of let-7a and IL-6 before radiotherapy did not vary significantly between patients with radioresistant and radiosensitive diseases, radiotherapy induced a more pronounced decrease in serum let-7a levels and a greater increase in serum IL-6 levels in patients with radioresistant cancer compared to those with radiosensitive cancer. The percentage decrease in serum let-7a and the percentage increase in serum IL-6 levels at the early stage of radiotherapy were inversely associated with tumor regression after radiotherapy. Our findings suggest that early changes in serum let-7a and IL-6 levels may be used as a biomarker to predict the response to radiotherapy in patients with esophageal cancer and provide new insights into subsequent treatments.
INTRODUCTION
Esophageal cancer is the seventh most common cancer and the sixth most common cause of cancer-related death worldwide [1]. Radiotherapy has become an important treatment option for patients with this disease; however, resistance leads to treatment failure and cancer relapse [2]. DNA damage is the central mediator of the effects elicited by radiotherapy. In particular, ionizing radiation induces DNA damage by causing direct high-energy damage to the sugar backbone of the DNA molecule and by generating free radicals in cells [3]. Moreover, inhibition of DNA repair, as mediated by DNA damage response inhibitors, increases the sensitivity of cancer cells to radiotherapy, whereas an increase in DNA repair activity causes radioresistance [2,4].
Bulk tumors consist of cancer cells harboring distinct genetic and epigenetic molecular signatures with different levels of sensitivity to anticancer therapies. This heterogeneity results in either a non-uniform distribution of genetically distinct tumor cell subpopulations across, and within, disease sites (i.e., spatial heterogeneity) or temporal variation in the molecular composition of cancer cells (i.e., temporal heterogeneity) [5]. Tumor heterogeneity induces resistance to anticancer therapy. In the simplest scenario, pre-existing resistant cancer cells that survive anticancer treatment may develop and ultimately give rise to resistance and cancer relapse [5].
Cell-cell communication coordinates organismal development, homeostasis, and single-cell functions. In addition to contact-dependent cell-cell communication, the transfer of extracellular vesicles or other secretory factors between cells has been consistently shown to mediate functional communication [6-8]. Cell-cell communication plays pivotal roles in cancer development. For example, at the initial stage of carcinogenesis, normal epithelial cells can recognize neighboring transformed cells and actively eliminate them from epithelial tissues [9]. Moreover, cancer cells can secrete extracellular vesicles to reprogram stromal cells to support pre-metastatic niche formation and subsequent metastasis [10]. Cell-cell communication is also involved in mediating the effects of radiotherapy. For example, the abscopal effect occurs when radiotherapy at one site leads to the regression of metastatic cancer at distant sites [11]. Evidence accumulated during the last decade has revealed that radiation-induced exosomes contribute to the non-targeted abscopal effect [12]. In addition, radiation-induced exosomes can cause radioresistance [13].
Although our understanding of the role of cell-cell communication in radiosensitivity has progressed, the communication between radiosensitive and radioresistant cancer cells remains to be elucidated. Thus, in this study, we investigated whether radioresistant esophageal cancer cells can decrease the radiosensitivity of radiosensitive cancer cells. Additionally, we sought to determine whether radiosensitive esophageal cancer cells can increase the radiosensitivity of radioresistant cancer cells.
Cell-cell communication between radiosensitive and radioresistant esophageal cancer cells mutually modulates their radiosensitivity
The radiosensitivity of different esophageal cancer cell lines, including KYSE-150R, COLO680N, TE15, KYSE30, OE21, and KYSE-150 cells, was determined using a clonogenic assay for cell survival and a comet assay for DNA damage. We found that KYSE-150R, COLO680N, and TE15 cells were more resistant to radiation than KYSE30, OE21, and KYSE-150 cells (Fig. 1A, B). To investigate the potential interaction between radiosensitive and radioresistant esophageal cancer cells, radiosensitive and radioresistant cells were co-cultured using a contact-independent co-culture system. We found that co-culture increased radiosensitivity in radioresistant cells and decreased radiosensitivity in radiosensitive cells (Fig. 1C and S1A).
To examine the interaction between radiosensitive and radioresistant cells in vivo, we established three types of tumor xenograft mouse models. For the radiosensitive xenograft mouse model, both the left and right flanks of the nude mouse were subcutaneously inoculated with radiosensitive cells (either KYSE-150 or OE21 cells). For the radioresistant xenograft mouse model, both the left and right flanks of the nude mouse were subcutaneously inoculated with radioresistant cells (either KYSE-150R or COLO680N cells). For the radioresistant/radiosensitive xenograft mouse model, the left and right flanks of the nude mouse were subcutaneously inoculated with radioresistant cells and radiosensitive cells, respectively (Fig. 1D).
Let-7a expression is downregulated in radioresistant esophageal cancer cells
To investigate whether the interaction between radioresistant and radiosensitive esophageal cancer cells is mediated by exosomal miRNAs, we quantified the levels of the top 20 miRNAs highly expressed in esophageal cancer tissues (Table S1) in the culture media of KYSE-150 and KYSE-150R cells. The culture medium of the KYSE-150R cells displayed decreased levels of miR-21 and let-7a and increased levels of miR-22 when compared with the culture medium of the KYSE-150 cells (Fig. S2A).
Moreover, the intracellular levels of miR-21 and let-7a in KYSE-150R cells decreased, while those of miR-22 increased compared with KYSE-150 cells (Fig. S2B). Previous studies demonstrated that the downregulation of miR-21 expression and the upregulation of miR-22 expression could sensitize cancer cells to radiotherapy [14-17], excluding the possibility that the downregulation of miR-21 and the upregulation of miR-22 can cause radioresistance in KYSE-150R cells. Therefore, in the subsequent analyses, we focused on investigating the role of let-7a in regulating the radiosensitivity of esophageal cancer cells.
Downregulation of let-7a induces radioresistance by increasing Dicer expression
Consistent with a previous study [18], dual-luciferase assays confirmed that let-7a downregulated the expression of luciferase reporters bearing different fragments of the Dicer 3′-UTR in KYSE-150R cells (Fig. S2C, D). Transfection of a let-7a mimic decreased the Dicer protein levels but not the mRNA levels (Fig. S2E-G). Conversely, transfection of a let-7a inhibitor increased the Dicer protein levels but not the mRNA levels (Fig. S2H-J). Given that Dicer is required for DNA damage repair and that Dicer expression levels in cancer tissues are associated with chemosensitivity in patients with colon cancer [19-21], we speculated that let-7a downregulation could induce radioresistance by increasing Dicer expression. To test this hypothesis, we first investigated the regulatory effects of Dicer on the radiosensitivity of human esophageal squamous cancer cells. Dicer expression levels were higher in radioresistant cell lines (KYSE-150R, COLO680N, and TE15) than in radiosensitive cell lines (KYSE30, OE21, and KYSE-150) (Fig. 2A). Furthermore, radiation-induced Dicer expression was observed in human esophageal cancer cells in a dose-dependent manner (Fig. S3A). Dicer knockdown in radioresistant KYSE-150R cells increased radiosensitivity and decreased cell proliferation (Fig. 2B-E), whereas its overexpression in radiosensitive KYSE-150 cells reduced radiosensitivity and decreased cell proliferation (Fig. S3B-E). Dicer overexpression also reduced the radiosensitivity of the KYSE-150 xenografts (Fig. S3F), whereas its knockdown increased the radiosensitivity of the KYSE-150R xenografts (Fig. 2F).
Analysis of Dicer expression in 45 esophageal cancer tissues further revealed that Dicer protein levels in radioresistant esophageal squamous cancer tissues were higher than those in radiosensitive cancer tissues (Fig. 2G-I). Furthermore, the Dicer expression levels were inversely correlated with tumor regression, as assessed by changes in the longest tumor diameter after 2 months of radiotherapy (Fig. 2J). Taken together, these findings indicate that Dicer regulates the radiosensitivity of human esophageal squamous cancer cells.
Collectively, these findings indicate that let-7a downregulation induces radioresistance and promotes tumor growth.
IL-6 secreted by radioresistant cells decreases the radiosensitivity of radiosensitive cells
Radiotherapy induces IL-6 expression, which in turn promotes DNA repair, thereby causing radioresistance [25]. IL-6 is a target of let-7a, which can repress IL-6 expression, either directly or indirectly [26,27]. However, other studies have reported that let-7a promotes IL-6 expression via different mechanisms [28,29]. We found that let-7a inhibited IL-6 expression in different human esophageal squamous cancer cell lines (Fig. 5A). Moreover, radioresistant cells (KYSE-150R, COLO680N, and TE15) expressed higher levels of IL-6 compared with radiosensitive cells (KYSE30, OE21, and KYSE-150) (Fig. 5B). Therefore, we investigated whether radioresistant cells could reduce the radiosensitivity of radiosensitive cells via the secretion of IL-6. Incubation with the conditioned medium from the KYSE-150R cells reduced the radiosensitivity of the KYSE-150 cells, but this activity was blocked following the addition of an anti-IL-6 antibody to the culture medium (Fig. 5C, D). Moreover, the addition of an anti-IL-6 antibody to the culture medium prevented the reduction in the radiosensitivity of KYSE-150 cells when co-cultured with radioresistant KYSE-150R cells (Fig. 5E, F). Finally, intravenous injection of serum collected from KYSE-150R xenograft mice decreased the radiosensitivity of KYSE-150 xenografts, whereas co-injecting an anti-IL-6 antibody blocked this effect (Fig. 5G-I). Taken together, these results indicate that radioresistant KYSE-150R cells can decrease the radiosensitivity of radiosensitive KYSE-150 cells via the secretion of IL-6.
Changes in serum let-7a and IL-6 levels are associated with radiosensitivity in esophageal squamous cancer patients

To determine whether the levels of let-7a and IL-6 correlate with clinical parameters in patients with esophageal squamous cancer, we analyzed data from 184 patients in the TCGA database. Increased levels of let-7a in esophageal squamous cancer tissues were associated with increased median survival (Fig. S6A), and increased levels of IL-6 mRNA were associated with decreased median survival (Fig. S6B).
To determine whether the levels of let-7a and IL-6 were correlated with radiosensitivity in patients with esophageal squamous cancer, we compared the levels of let-7a and IL-6 mRNA in the cancer tissues of 31 patients exhibiting a complete response and 13 patients presenting with progressive disease. As shown in Fig. S6C and S6D, the levels of let-7a and IL-6 mRNA in the esophageal cancer tissues of patients with a complete response did not differ significantly from those of patients with progressive disease.
We then examined whether the serum levels of let-7a and IL-6 correlated with radiosensitivity in a cohort of 70 patients with esophageal squamous cancer, of whom 28 had a partial response and 42 had stable disease. Pre-radiotherapy serum samples were collected from all 70 patients, whereas post-radiotherapy (after 60 Gy radiotherapy) serum samples were collected from 28 patients with a partial response and 37 patients with stable disease. Although the serum let-7a and IL-6 levels before radiotherapy did not differ significantly between these patients, the levels of post-radiotherapy serum let-7a were higher in patients with a partial response than in those with stable disease. Meanwhile, the levels of post-radiotherapy serum IL-6 were lower in patients with a partial response than in those with stable disease (Fig. 6A, B). Interestingly, the effect of radiotherapy-induced alterations in serum let-7a and IL-6 levels was more profound in patients with radioresistant disease than in those with radiosensitive disease (Fig. 6C, D). Moreover, radiation led to a greater decrease in let-7a levels and a greater increase in IL-6 levels. Furthermore, the percentage decrease in serum let-7a levels after 60 Gy radiotherapy was inversely correlated with tumor regression, as measured by changes in the longest tumor diameter 2 months after radiotherapy (Fig. 6E), whereas the percentage increase in serum IL-6 levels after 60 Gy radiotherapy was inversely correlated with tumor regression (Fig. 6F).

Fig. 1 Cell-cell communication between radiosensitive and radioresistant esophageal cancer cells. A Survival analysis of KYSE-150R, COLO680N, TE15, KYSE30, OE21, and KYSE-150 cells based on clonogenic assays after 4 Gy of irradiation. B DNA breaks in different esophageal squamous cancer cell lines were measured using comet assays 1 h after 4 Gy of irradiation. C The radiosensitive KYSE-150 cells alone, the radioresistant KYSE-150R cells alone, the KYSE-150 cells co-cultured with KYSE-150R cells for 48 h, and the KYSE-150R cells co-cultured with KYSE-150 cells for 48 h were exposed to different doses of radiation. Subsequent cell survival was determined using clonogenic assays.
To determine whether the changes in serum let-7a and IL-6 levels in the early stage of radiotherapy were associated with tumor regression after radiotherapy, we collected serum samples from 27 out of the 70 patients on the 7th day after radiotherapy initiation, when patients had received 10 Gy of radiotherapy. The percentage decrease in serum let-7a levels and the percentage increase in serum IL-6 were inversely correlated with tumor regression (Fig. 6G, H).
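To illustrate the kind of calculation underlying this correlation analysis, the sketch below computes per-patient percentage changes in a serum marker and relates them to tumor regression. It is only a schematic: the numbers are made up for illustration, and Pearson correlation via NumPy is assumed here since the specific correlation method is not restated in this passage.

```python
import numpy as np

def percent_change(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Per-patient percentage change relative to the pre-radiotherapy value."""
    return (post - pre) / pre * 100.0

# Illustrative (made-up) values for five hypothetical patients:
let7a_pre = np.array([1.00, 0.90, 1.10, 0.95, 1.05])    # relative serum let-7a before radiotherapy
let7a_post = np.array([0.55, 0.80, 0.60, 0.90, 0.70])   # after 10 Gy (day 7)
tumor_regression = np.array([20.0, 45.0, 25.0, 55.0, 35.0])  # % decrease in longest diameter

let7a_drop = -percent_change(let7a_pre, let7a_post)     # % decrease in serum let-7a
r = np.corrcoef(let7a_drop, tumor_regression)[0, 1]     # Pearson correlation coefficient
print(f"Correlation between let-7a decrease and regression: r = {r:.2f}")  # negative, i.e., inverse
```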
DISCUSSION
Tumor heterogeneity drives resistance to radiotherapy, since pre-existing resistant cancer cells survive and further develop, ultimately causing cancer relapse [5]. The present study revealed a more complex picture of the contribution of tumor heterogeneity to radiosensitivity. We found that mutual communication between radiosensitive and radioresistant esophageal cancer cells modulates their radiosensitivity. Specifically, radiosensitive cells increase the radiosensitivity of radioresistant cells, while radioresistant cells decrease the radiosensitivity of radiosensitive cells. Mechanistically, radiosensitive cells secrete more exosomal let-7a and less IL-6 than radioresistant cells. The exosomal let-7a secreted by radiosensitive cells could be taken up by radioresistant cells to increase their radiosensitivity. In contrast, IL-6 secreted by radioresistant cells decreased the radiosensitivity of radiosensitive cells. These findings have implications for cancer therapies. That is, the surgical removal of radioresistant tumor cells might increase the radiosensitivity of residual tumor cells, while the surgical removal of radiosensitive tumor cells might reduce the radiosensitivity of residual tumor cells. Moreover, the administration of a let-7a mimic and an anti-IL-6 antibody has the potential to modulate the communication between radioresistant and radiosensitive cells and enhance the efficacy of radiotherapy for esophageal cancer.
The identification of biomarkers that can be used to predict the response to radiotherapy and the prognosis of patients with cancer might facilitate the development of a treatment plan. Although low expression of let-7a in esophageal cancer tissues is associated with a poor prognosis, neither the levels of let-7a in esophageal cancer tissues nor those in serum before radiotherapy differed significantly between patients with radiosensitive and radioresistant disease. However, radiotherapy resulted in a more prominent decline in serum let-7a levels in patients with radioresistant esophageal cancer than in those with radiosensitive disease. The percentage decrease in serum let-7a levels at the early stage of radiotherapy was inversely associated with tumor regression after radiotherapy. Accumulating evidence has indicated that IL-6, a target of let-7a, confers radioresistance to different types of cancers and that serum IL-6 levels can be used to predict treatment responses and outcomes for patients with esophageal squamous cell carcinoma [25-27,30-32]. Although neither IL-6 mRNA levels in tumor tissues nor basal serum IL-6 levels correlated with sensitivity to radiotherapy in esophageal cancer patients, the percentage increase in serum IL-6 levels after radiotherapy was inversely correlated with tumor regression.
These findings suggest that changes in the serum levels of let-7a and IL-6 during the early stages of radiotherapy can be used as a predictive biomarker for the response to radiotherapy.
Dicer is required for DNA repair, and decreased Dicer expression increases the susceptibility of cancer cells to DNA-damaging treatment, while increased Dicer expression leads to increased resistance to such treatment [19-21,35]. Here, we found that radiation induced less DNA damage in cells expressing higher levels of Dicer than in cells expressing lower levels of Dicer. These observations reveal that radiation-induced Dicer upregulation causes radioresistance by increasing DNA repair. Moreover, a substantial number of miRNAs are reported to participate positively or negatively in DNA repair [36]. Therefore, Dicer upregulation may affect radiosensitivity by regulating the expression of these miRNAs. Furthermore, radiotherapy induces metabolic reprogramming, which affects DNA repair and radiosensitivity [37,38]. Dicer upregulation may also affect radiosensitivity by reprogramming the metabolic status of cancer cells via different miRNAs [39-41].
In summary, we demonstrated that let-7a, and its target IL-6, mediate mutual communication between radiosensitive and radioresistant esophageal cancer cells, and this cell-cell communication modulates the radiosensitivity of both cell types. Moreover, we found that radiotherapy downregulates let-7a expression in esophageal cancer cells and that decreased let-7a expression leads to increased Dicer and IL-6 expression, thereby reducing the sensitivity of esophageal cancer cells to radiotherapy. Interestingly, the percentage decrease in serum let-7a levels and the percentage increase in serum IL-6 levels during the early stage of radiotherapy were inversely associated with tumor regression after radiotherapy, suggesting that the treatment plan for esophageal cancer can be adjusted according to alterations in serum let-7a and IL-6 levels during the early stage of radiotherapy.
This study has certain limitations. First, the small sample size utilized may have caused bias. Second, the serum and tissue samples were not from the same cohort of patients with esophageal cancer. In future studies, we intend to determine the expression levels of let-7a and IL-6 in serum and tumor tissues from the same patient cohort. We will also collect serum samples from additional esophageal cancer patients on day 7 after radiotherapy initiation and develop models to predict the radiotherapy response based on early changes in the circulating let-7a and IL-6 levels observed during radiotherapy.

Fig. 4 Let-7a secreted by KYSE-150 cells increases the sensitivity of KYSE-150R cells to radiotherapy. A, B Real-time RT-PCR quantification of let-7a levels in the exosomes (A) and cell culture medium (B) of different esophageal squamous cancer cell lines. C, D KYSE-150R cells were incubated with either basal medium, KYSE-150-conditioned medium (culture medium collected from KYSE-150 cells), or basal medium supplemented with KYSE-150 exosomes for 48 h and exposed to different doses of radiation. Cell survival was determined using clonogenic assays (C), and DNA breaks were measured using comet assays (D). E, F KYSE-150R cells were incubated for 48 h with either basal medium, KYSE-150-conditioned medium, or basal medium supplemented with KYSE-150 exosomes. Intracellular levels of let-7a (E), Dicer, and IL-6 (F) were determined. G Real-time RT-PCR quantification of intracellular let-7a levels in KYSE-150R cells, KYSE-150 cells, and KYSE-150R cells co-cultured with KYSE-150 cells for 48 h. H-J KYSE-150R cells were transfected with either let-7a inhibitors or control inhibitors, incubated with either basal medium or basal medium supplemented with KYSE-150 exosomes for 48 h, and exposed to different doses of radiation. Intracellular let-7a levels were quantified using real-time RT-PCR (H), cell survival was determined using clonogenic assays (I), and DNA breaks were measured using comet assays (J). Data (A-E, G-J) are expressed as the mean ± SD of the values from three independent experiments. **P < 0.01 (two-sided Student's t test). K-M Subcutaneous xenografts of KYSE-150R cells were intratumorally injected with either KYSE-150 or KYSE-150R exosomes and exposed to 10 Gy of radiation, followed by an intratumoral injection of antagomir-let-7a or antagomir-Con. Xenograft tumors were photographed (K), the average volume was determined (L), and the volumes of irradiated xenografts relative to those of unirradiated xenografts were measured (M). Data (L, M) are expressed as the mean ± SD of the values obtained from five xenografts. *P < 0.05, **P < 0.01 (two-sided Student's t test).
Cell culture
The following human esophageal squamous cancer cell lines were procured: KYSE-150 from the Japanese Collection of Research Bioresources (Osaka, Japan); COLO680N from the Deutsche Sammlung von Mikroorganismen und Zellkulturen (Braunschweig, Germany); TE15 from the Cell Resource Center for Biomedical Research (Tohoku University, Sendai, Japan); OE21 from the European Collection of Cell Cultures (Salisbury, UK); and KYSE30 from the Cell Bank, Pasteur Institute of Iran (Tehran, Iran). The radioresistant cell line KYSE-150R was established from KYSE-150 cells through fractionated irradiation [42]. All cell lines were grown in RPMI 1640 medium (Biological Industries, Kibbutz Beit Haemek, Israel) supplemented with 10% fetal bovine serum, 100 U/mL penicillin, and 100 mg/mL streptomycin. For co-culture, radiosensitive and radioresistant cells were seeded into two separate compartments of an Ibidi culture insert (Ibidi, Martinsried, Germany) and incubated in the same medium for 48 h. The cells were cultured at 37 °C in a humidified incubator with 5% CO2. All cell lines were mycoplasma-free and were authenticated based on polymorphic short-tandem-repeat loci after passage in our laboratory for > 6 months.
Mouse xenograft tumor model
Female BALB/c athymic nude mice (6-8 weeks old; Vital River Experimental Animal Center, Beijing, China) were randomly allocated to different groups (n = 5/group) and subcutaneously injected with 3 × 10⁶ cells to generate tumor xenografts. The resultant tumors were measured using calipers, and the tumor volume was calculated as follows: V = L × W² × 0.5236 (L, long axis; W, short axis). When tumor masses became visible (~100 mm³), the mice received a 10 Gy dose (2 Gy/day for five consecutive days) of radiotherapy at a dose rate of 1 Gy/min and were sacrificed 21 days later. To determine the effect of let-7a on radiosensitivity, let-7a agomir or antagomir (RiboBio, Guangzhou, China) was injected intratumorally (0.5 mM/kg every three days for a total of five injections). To determine the effect of exosomes on radiosensitivity, KYSE-150 exosomes were intratumorally injected into the KYSE-150R xenografts (1 × 10¹¹ particles/mouse every three days for a total of five injections). To investigate the effect of IL-6 on radiosensitivity, an anti-IL-6 antibody (Affinity Bioscience, Cincinnati, OH, USA) was injected intraperitoneally (10 mg/kg/week for a total of three injections). All animal procedures were approved by the Animal Care Committee of Wenzhou Medical University.
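As a minimal illustration of the volume formula above (an ellipsoid approximation from caliper measurements), the following sketch computes a xenograft volume; the function name and the example measurements are illustrative and not taken from the study.

```python
def tumor_volume(long_axis_mm: float, short_axis_mm: float) -> float:
    """Caliper-based ellipsoid approximation used above: V = L * W^2 * 0.5236."""
    return long_axis_mm * short_axis_mm ** 2 * 0.5236

# Example: a xenograft measuring 10 mm x 8 mm is about 335 mm^3,
# i.e., well past the ~100 mm^3 threshold at which irradiation started.
print(tumor_volume(10.0, 8.0))  # 335.104
```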
Patient specimens and immunohistochemical analysis
To determine the association between Dicer expression and radiosensitivity, esophageal squamous cancer tissue samples were collected from 45 patients who had received a 60 Gy dose (2 Gy/day, 5 days/week) of radiotherapy at the First Affiliated Hospital of Wenzhou Medical University (Wenzhou, China). The level of Dicer protein in tumor tissues was detected by immunohistochemical analysis and scored as described previously [21]. To investigate the correlation between serum levels of let-7a or IL-6 and radiosensitivity, we recruited another cohort of 70 patients with esophageal squamous cancer who had received a 60 Gy dose of radiotherapy at the First Affiliated Hospital of Wenzhou Medical University. Pre-radiotherapy serum samples were collected from all 70 patients, whereas post-radiotherapy serum samples were collected from 65 patients. Additionally, we collected serum samples from 27 of the 70 patients on day 7 after the start of radiotherapy. The response to radiotherapy was evaluated as described previously [43]. The inclusion criteria were as follows: (a) pathologically diagnosed esophageal cancer, (b) no prior systemic treatment or malignant tumor resection, and (c) willingness to undergo radiotherapy. Informed consent was obtained from all patients for the collection and use of clinical samples, and the study was approved by the Scientific Ethics Committee of the First Affiliated Hospital of Wenzhou Medical University.
Radiation-associated clonogenic cell-survival assay
Cells displaying exponential growth were seeded into 6-well plates and irradiated with different doses of radiation (0, 2, 4, 6, 8, or 10 Gy) at an average rate of 100 cGy/min, followed by culture for up to 14 days. The surviving cells were stained with 0.1% crystal violet, and colonies containing > 50 cells were scored. The plating efficiency was calculated as follows: number of colonies/number of seeded unirradiated cells. The survival fraction was calculated as follows: number of colonies/(number of cells plated × plating efficiency).
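As a minimal illustration of the clonogenic-assay bookkeeping described above, the following Python sketch computes the plating efficiency and dose-wise survival fractions; all colony counts and seeding densities are hypothetical.

# Plating efficiency from the unirradiated wells, then survival fraction per dose
# normalised to that efficiency.

def plating_efficiency(colonies_unirradiated, cells_seeded):
    return colonies_unirradiated / cells_seeded

def survival_fraction(colonies, cells_seeded, pe):
    return colonies / (cells_seeded * pe)

pe = plating_efficiency(colonies_unirradiated=140, cells_seeded=200)

doses_gy = [2, 4, 6, 8, 10]
cells_seeded = [200, 400, 1000, 4000, 10000]   # more cells seeded at higher doses
colonies = [95, 110, 120, 160, 90]             # invented colony counts

for dose, seeded, n in zip(doses_gy, cells_seeded, colonies):
    sf = survival_fraction(n, seeded, pe)
    print(f"{dose} Gy: survival fraction = {sf:.3f}")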
Enzyme-linked immunosorbent assay (ELISA)
IL-6 levels in serum samples and culture medium were quantified using a human/mouse IL-6 ELISA kit (MultiSciences, Hangzhou, China) according to the manufacturer's instructions.
Plasmids, miRNAs, and transfection
Cells were transfected with miRNAs or plasmids using Lipofectamine 2000 (Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Intracellular let-7a levels were quantified 48 h after transfection with miRNA mimics or inhibitors. miRNA mimics and inhibitors were purchased from RiboBio. Human Dicer-knockdown and control short-hairpin RNA plasmids have been described previously [44]. The human Dicer-overexpression plasmid pDESTmycDICER was obtained from Addgene (Cambridge, MA, USA) [35].
Comet assay
The cells were exposed to 4 Gy radiation and harvested 1 h later.Comet assays were performed as described previously [20], and comet images were visualized using a fluorescence microscope (Leica DMI3000 B; Leica, Wetzlar, Germany) and the Comet Assay Software Project (CASP, Wrocław, Poland).
Cell proliferation
Cells were seeded into 96-well plates, and cell proliferation was assessed using the Cell Counting Kit-8 (CCK-8; MedChemExpress, Monmouth Junction, NJ, USA) according to the manufacturer's instructions.
Dual-luciferase assays
Cells were co-transfected with 0.4 μg firefly luciferase reporter plasmids (including Luc-Dicer-3′-UTR1 and Luc-Dicer-3′-UTR3) and 0.02 μg pRL-CMV.

Fig. 5 IL-6 secreted by KYSE-150R cells decreases the sensitivity of KYSE-150 cells to radiotherapy. A IL-6 protein levels in let-7a mimic-transfected human esophageal squamous cancer cell lines (KYSE-150R, COLO680N, TE15, OE21, KYSE30, and KYSE-150) were determined using western blotting. B ELISA-based quantification of IL-6 protein levels in the cell culture medium of different esophageal squamous cancer cell lines. C, D KYSE-150 cells were incubated in either basal medium or KYSE-150R-conditioned medium (culture medium collected from KYSE-150R cells) along with either an anti-IL-6 or IgG control antibody (10 ng/mL) and exposed to different doses of radiation. Cell survival was determined using clonogenic assays (C), and DNA breaks were measured using comet assays (D). E, F KYSE-150 cells were co-cultured with KYSE-150R cells in the presence of either the anti-IL-6 or IgG control (10 ng/mL) antibody and exposed to different doses of radiation. KYSE-150 cell survival was determined using clonogenic assays (E), and DNA breaks were measured using comet assays (F). Data (B–F) are expressed as the mean ± SD of the values from three independent experiments. *P < 0.05, **P < 0.01 (two-sided Student's t test). G–I KYSE-150/KYSE-150 model mice were intravenously co-injected with either KYSE-150R serum (serum from mice bearing KYSE-150R xenografts) or KYSE-150 serum (serum from mice bearing KYSE-150 xenografts) and either the anti-IL-6 or IgG control antibody, as indicated, and exposed to 10 Gy of radiation. Xenograft tumors were photographed (G), the average volume was determined (H), and the volumes of irradiated xenografts relative to those of unirradiated xenografts were calculated (I). Data (H, I) are expressed as the mean ± SD of the values obtained from five xenografts. *P < 0.05, **P < 0.01 (two-sided Student's t test).
The Cancer Genome Atlas (TCGA) dataset and statistical analyses

TCGA dataset analysis was performed as described previously [46]. All experimental data are presented as the mean ± standard deviation (SD) of at least three independent experiments. Continuous variables were expressed as the mean ± SD, and categorical values were expressed as absolute and relative frequencies. Differences in continuous variables were analyzed using the Student's t test or the Mann-Whitney U test, and differences in categorical variables were analyzed using the chi-square test. The correlation between two variables was assessed using Spearman's correlation analysis. Statistical analyses were performed using SPSS (v. 22.0; IBM Corp., Armonk, NY, USA). All statistical tests were two-sided, and statistical significance was set at P < 0.05.
Statistical analysis
Data are presented as the mean ± standard deviation (SD) of at least three independent experiments. Statistical analyses were performed using Student's t test for two groups and two-way ANOVA with Bonferroni's post hoc test for multiple groups. Normal distribution was confirmed using the Shapiro-Wilk normality test, and homogeneity of variance was tested using Levene's test. In all experiments, the significance level was set at α = 0.05, and P < 0.05 indicates significant intergroup differences. Statistical analysis was performed using GraphPad Prism 8 (GraphPad Software Inc., San Diego, CA, USA).
Fig. 6 Changes in serum let-7a and IL-6 levels are associated with radiosensitivity in esophageal squamous cancer. A Pre- and post-radiotherapy serum let-7a levels were determined using real-time RT-PCR. B Pre- and post-radiotherapy serum IL-6 levels were quantified using ELISA. C The percentage decrease in serum let-7a levels was determined 7 days after the initiation of radiotherapy and after the completion of radiotherapy. The percentage decrease in serum let-7a levels was calculated as follows: (pre-radiotherapy serum let-7a level − serum let-7a level at 7 days after the initiation of radiotherapy or after the completion of radiotherapy)/pre-radiotherapy serum let-7a level × 100%. D The percentage increase in serum IL-6 levels was determined 7 days after the initiation of radiotherapy and after the completion of radiotherapy. The percentage increase in serum IL-6 levels was calculated as follows: (serum IL-6 level at 7 days after the initiation of radiotherapy or after the completion of radiotherapy − pre-radiotherapy serum IL-6 level)/pre-radiotherapy serum IL-6 level × 100%. *P < 0.05, **P < 0.01, ns, not significant (two-sided Student's t test). E The percentage decrease in serum let-7a levels induced by 60 Gy of radiotherapy was inversely correlated with tumor regression. The percentage decrease in serum let-7a levels induced by 60 Gy of radiotherapy was calculated as follows: (pre-radiotherapy serum let-7a level − post-radiotherapy serum let-7a level)/pre-radiotherapy serum let-7a level × 100%. The percentage decrease in the tumor diameter 2 months post radiotherapy was calculated as follows: (the baseline longest diameter of the tumor − the longest diameter of the tumor at 2 months post radiotherapy)/the baseline longest diameter of the tumor × 100%. F The percentage increase in serum IL-6 levels induced by 60 Gy of radiotherapy was inversely correlated with tumor regression. The percentage increase in serum IL-6 levels induced by 60 Gy of radiotherapy was calculated as follows: (post-radiotherapy serum IL-6 level − pre-radiotherapy serum IL-6 level)/pre-radiotherapy serum IL-6 level × 100%. G The percentage decrease in serum let-7a levels induced by 10 Gy of radiotherapy was inversely correlated with tumor regression. The percentage decrease in serum let-7a levels induced by 10 Gy of radiotherapy was calculated as follows: (pre-radiotherapy serum let-7a level − serum let-7a level after 10 Gy of radiotherapy)/pre-radiotherapy serum let-7a level × 100%. H The percentage increase in serum IL-6 levels induced by 10 Gy of radiotherapy was inversely correlated with tumor regression. The percentage increase in serum IL-6 levels induced by 10 Gy of radiotherapy was calculated as follows: (serum IL-6 level after 10 Gy of radiotherapy − pre-radiotherapy serum IL-6 level)/pre-radiotherapy serum IL-6 level × 100%.
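The percentage-change formulas in this legend, together with the Spearman correlation used for association analyses, can be expressed compactly; the Python sketch below uses invented patient values purely to show the arithmetic, not to reproduce the reported correlations.

# Percentage change in serum markers after radiotherapy and its correlation with
# tumour regression, following the formulas given in the figure legend.
from scipy.stats import spearmanr

def pct_decrease(pre, post):
    return (pre - post) / pre * 100.0          # e.g. serum let-7a

def pct_increase(pre, post):
    return (post - pre) / pre * 100.0          # e.g. serum IL-6

def tumor_regression(baseline_diam, diam_2_months):
    return (baseline_diam - diam_2_months) / baseline_diam * 100.0

# Hypothetical paired values for a handful of patients.
let7a_drop = [pct_decrease(pre, post) for pre, post in [(1.0, 0.4), (0.8, 0.5), (1.2, 0.9), (0.9, 0.3)]]
regression = [tumor_regression(b, d) for b, d in [(50, 18), (46, 25), (52, 35), (48, 15)]]

rho, p = spearmanr(let7a_drop, regression)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")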
and Fig S1B). As expected, the tumors in the radiosensitive xenograft mouse model displayed a higher sensitivity to irradiation than the tumors in the radioresistant xenograft mouse model (Fig S1C). Simultaneous inoculation of radiosensitive and radioresistant cells in the same mouse reduced the radiosensitivity of radiosensitive cells and increased the radiosensitivity of radioresistant cells. Specifically, the radiosensitive xenografts in the radiosensitive/radioresistant xenograft mouse model displayed decreased sensitivity to irradiation compared to those in the radiosensitive xenograft mouse model, whereas the radioresistant xenografts in the radiosensitive/radioresistant xenograft mouse model displayed higher sensitivity to irradiation compared to those in the radioresistant xenograft mouse model (Fig. 1E, F and Fig S1D, E). Taken together, these results indicate that radiosensitive cells can increase the radiosensitivity of radioresistant cells, while radioresistant cells can decrease the radiosensitivity of radiosensitive cells.
Fig.1Cell-cell communication between radiosensitive and radioresistant esophageal cancer cells.A Survival analysis of KYSE-150R, COLO680N, TE15, KYSE30, OE21, and KYSE-150 cells based on clonogenic assays after 4 Gy of irradiation.B DNA breaks in different esophageal squamous cancer cell lines were measured using comet assays 1 h after 4 Gy of irradiation.C The radiosensitive KYSE-150 cells alone, the radioresistant KYSE-150R cells alone, the KYSE-150 cells co-cultured with KYSE-150R cells for 48 h, and the KYSE-150R cells co-cultured with KYSE-150 cells for 48 h were exposed to different doses of radiation.Subsequent cell survival was determined using clonogenic assays.Data (A-C) are expressed as the mean ± SD of the values from three independent experiments.D Schematic displaying the establishment of the radioresistant (KYSE-150R/KYSE-150R) tumor xenograft mouse model (left), the radioresistant/radiosensitive (KYSE-150R/KYSE-150) xenograft mouse model (middle), and the radiosensitive (KYSE-150/KYSE-150) xenograft mouse model (right).E KYSE-150 xenografts on the right flank of the KYSE-150/KYSE-150 mouse model and the KYSE-150R/KYSE-150 mouse model were either exposed to 10 Gy of irradiation or left unirradiated.Xenograft tumors were photographed (left), the average volume was determined (middle), and the volumes of irradiated xenografts relative to those of unirradiated xenografts were calculated (right).F KYSE-150R xenografts on the left flank of the KYSE-150R/KYSE-150R mouse model and the KYSE-150R/KYSE-150 mouse model were either exposed to 10 Gy of radiation or left unirradiated.Xenograft tumors were photographed (left), the average volume was determined (middle), and the volumes of irradiated xenografts relative to those of unirradiated xenografts were calculated (right).Data (E and F) are expressed as the mean ± SD of the values obtained from five xenografts.**P < 0.01, *P < 0.05 (two-sided Student's t test).
Fig. 2 Dicer knockdown promotes esophageal cancer cell sensitivity to radiotherapy.A Representative western blotting images of Dicer expression in KYSE-150R, COLO680N, TE15, KYSE30, OE21, and KYSE-150 cells.B Representative western blotting images of Dicer expression in the control and Dicer-knockdown KYSE-150R cells.C Control and Dicer-knockdown KYSE-150R cells were subjected to different doses of irradiation, and cell survival was determined using clonogenic assays.D Control and Dicer-knockdown KYSE-150R cells were subjected to 4 Gy of irradiation, and DNA breaks were determined using comet assays 1 h post-irradiation.E The proliferation of control and Dicer-knockdown KYSE-150R cells was determined using CCK-8 assays.Data (C-E) are expressed as the mean ± SD of values from three biological replicates.**P < 0.01, *P < 0.05 (two-sided Student's t test).F Subcutaneous xenografts of either Dicer-knockdown or control KYSE-150R cells were established in athymic nude mice and then subjected to 10 Gy of irradiation.Xenograft tumors were photographed (left), the average volume was determined (middle), and the volumes of irradiated tumors relative to those of unirradiated tumors were calculated (right).Data are expressed as mean ± SD of the values obtained from five xenografts, **P < 0.01 (two-sided Student's t test).G Analysis of dicer expression using tumor tissues from patients with esophageal squamous cancer based on immunohistochemistry. Representative images of Dicer expression in cancer tissues from patients with a complete response, a partial response, stable disease, and progressive disease.H, I Semi-quantitative evaluation of Dicer expression in esophageal cancer tissues.**P < 0.01, *P < 0.05 (two-sided Student's t test).J Correlation between Dicer expression and tumor regression (i.e., a decrease in the tumor diameter 2 months after radiotherapy), which was calculated as follows: (the baseline longest diameter of the tumor − the longest diameter of the tumor at 2 months post radiotherapy)/the baseline longest diameter of the tumor × 100%.
Fig.3Let-7a increases the sensitivity of esophageal cancer cells to radiotherapy by regulating Dicer expression.A Real-time RT-PCR quantification of let-7a levels in different esophageal squamous cancer cell lines.B, C KYSE-150R cells were co-transfected with a miR-Con/let-7a mimic and pcDNA3.1/pDicer,as indicated, and treated with different doses of radiation 48 h post transfection.Cell survival was determined using clonogenic assays (B), and DNA breaks were measured using comet assays (C).D, E KYSE-150 cells were co-transfected with an inhibitor-Con/let-7a-inhibitor and shCon/shDicer, as indicated, and treated with different doses of radiation 48 h post transfection.Cell survival was determined using clonogenic assays (D), and DNA breaks were measured using comet assays (E).Data (A-E) are expressed as the mean ± SD of the values from three independent experiments.**P < 0.01 (two-sided Student's t test).F Subcutaneous xenografts of KYSE-150R cells were irradiated, followed by intratumoral injection of either agomir-let-7a or agomir-Con.Xenograft tumors were photographed (left), the average volume was determined (middle), and the volumes of irradiated tumors relative to those of unirradiated tumors were calculated (right).G Subcutaneous xenografts of KYSE-150 cells were irradiated, followed by intratumoral injection of antagomir-let-7a.Xenograft tumors were photographed (left), the average volume was determined (middle), and the volumes of irradiated tumors relative to those of unirradiated tumors were calculated (right).Data (F, G) are expressed as the mean ± SD of five xenografts.**P < 0.01 (two-sided Student's t test).
Purchase Intention of Second-Hand: A Case Study of Generation Z
Second-hand goods are often sold at a lower price than new products. The growth in second-hand purchases cannot be separated from Generation Z consumers, who have been buying second-hand products for about three years. Generation Z buyers, generally women, like purchasing second-hand goods, especially clothes, and mostly buy them online. In terms of Generation Z's purchase intentions, orientation toward low prices, the desire to appear unique, nostalgia, and trust are the determining factors for second-hand purchases. Meanwhile, the bargaining usually involved in buying and selling is not a factor for Generation Z when buying used goods.
Introduction
Second-hand goods are products that the first user no longer uses; over time these goods are thrown away, resold, or given to others. Second-hand items can be found at any point in the purchase cycle [18]. Second-hand goods are also known as thrift goods, which have lately been in great demand by the public. Many variations of second-hand goods are sold in the second-hand market, both directly by the first user and through intermediaries. The types of second-hand goods commonly sold in the market include clothing, electronics (TVs, mobile phones, laptops, computers, and others), cars, motorcycles, and furniture that can still be used by second, third, and subsequent users. Second-hand goods are not only items that have been used but can also be new items that will not be reused. For example, second-hand clothes are sometimes imported clothes that have never been worn, still have labels, and resemble clothing models that are rarely found elsewhere [18].
The rise of second-hand sellers encourages researchers to investigate the factors that affect the purchase intention of second-hand goods, specifically among Generation Z. Generation Z is the generation born between 1995 and 2010 [24]. Generation Z prefers convenience, identified with the ease of comparing products and prices and the ease of obtaining the right product [14].
Second-hand Purchases
Second-hand products are often synonymous with vintage products [2]. People choose to use second-hand goods rather than throw away items that can still be used [11]. With increasing consumer concern and the development of e-commerce, online commerce has turned second-hand products into a business [23].
Consumers purchase second-hand goods in response to other consumers selling products they no longer want [22]. People concerned about the environment show a positive attitude towards purchasing second-hand products at thrift stores [19]. The purchase of second-hand goods provides many benefits, such as protection of environmental sustainability, unique and original products, the economic advantage of buying at a low price, the pleasure of hunting for shopping items, and even ethical gains [2]. Segments of second-hand buyers have varied motivations depending on their preferred mode of shopping [6]. Consumers are reducing spending at traditional fashion retailers and buying more at thrift stores [12]. The purchase of branded second-hand products will affect the next purchase in the new goods market [1]. The desire to make a second-hand purchase can be a good alternative to fast-fashion consumption [22]. Consumers who buy second-hand for fashion motivations but shop at thrift stores infrequently are called rare Fashionistas; the Fashionable Hedonist is a character who pays attention to prices when shopping; and consumers who make frequent and critical purchases, paying attention to economy, pleasure, and fashion with the highest price awareness, are called Treasure Hunting Influencers [6].
Factors of Purchasing Second-hand
Various studies have identified factors affecting the purchase of second-hand goods. Price reductions, the power of suppliers, the greater value of branded goods, nostalgic pleasure, uniqueness, and convenience draw clear distinctions between the new goods market and the second-hand market [15]. Personal reasons for purchasing second-hand goods, such as the ability to bargain or a feeling of nostalgia, motivate consumers in the first step of purchasing used goods [12]. Second-hand spending is becoming a consumer habit, mainly for financial reasons [12]. Economic motivation underlies the consumption of second-hand users [3].
Quality and durability are closely related to the economy and critical dimensions of carrying out second-hand spending [12]. The motivation for the desire to spend on second-hand products is economical, hedonistic, recreational, and critical [22]. Items that motivate to buy second-hand online based on price reductions, increased bargaining power, availability of goods at low prices (economic motivation), purchase of goods anywhere with less effort and time (motivation of convenience), and purchase of goods antique to evoke memories of the past, fulfilments of uniqueness, comfort, and guarantee (Ideological motivation) [15]. Buying used clothes is based on affordability, product quality, and brands [13].
The motivation for purchasing second-hand is hedonistic and recreational, including pleasure in nostalgia, the need for uniqueness, social contact, and treasure hunting [22]. Price and Product quality simultaneously positively and significantly influence the purchase decision of imported used clothing [17]. The relationship between critical and economic results between reason and financial concepts, quality, endurance, and critical and ethical consumption in purchasing second-hand [12]. The purchase of secondhand is related to the desire for uniqueness and critical consumption [12]. The motivation for spending on second-hand online comes from items, comfort, and ideological motivation [15].
Pricing Orientation
People with simple living habits choose to reuse resources and spend money [22]. An affordable price will provide a perception in the selection and use of goods or services [18]. Consumers wanting to get quality goods at low prices is a financial reason to make purchases in stores selling used clothing [12]. Affordable prices can make consumers interested in buying new goods according to their expectations [18]. Most of the motivation to get second-hand at a low price is to buy them in the used clothing market [11]. From an economic point of view, consumers realize that they can buy higher quality and long-lasting products at lower prices in used clothing stores than buying them in traditional markets [12] In this study, researchers will evaluate low price factors that affect the purchase intention of second-hand in Generation Z. Evaluation of low and affordable prices include prices that are cheaper than new goods. Purchasing second-hand is cheap because, according to income, second-hand tend to be cheaper than other goods. [15] H1. Pricing Orientation has a positive effect on the purchase intention of Second-hand
Bargaining Ability
Consumer decision-making starts from consumer knowledge about the product to be purchased and the comparison of alternatives between two or more products with competitive advantages, so that consumers can determine which product is suitable and decide to purchase [13]. The quality of the second-hand product is weighed against the price during the bidding process [12]. The offer of a lower or more affordable price for something is part of the well-being that results from the purchase and strengthens the frugal attitude of the buyer of used goods [12]. The purchase of second-hand goods is directly and indirectly influenced by the mediation of bargaining-power hunting [3]. In this study, researchers evaluate the bargaining ability factor affecting the purchase intention of second-hand goods in Generation Z. The bargaining ability examined includes the opportunity to bargain, the ease of bidding, and obtaining the desired price through the bargaining process [15].
H2. The ability to bargain positively affects the intention to purchase second-hand goods.
Uniqueness
The Uniqueness Variable is a variable that looks at how consumers show their real personality through what they use to highlight their friends and others [8]. Unique item search is how consumers use it to establish their identity through that item that can provide an identity for those who use it [12]. Uniqueness and style are important for those who make purchases of used clothing [11]. Consumer behavior "requires uniqueness," which means affirmations from individuals to highlight different qualities [8]. The ambition to find unique items makes consumers shop in thrift stores [22]. The consumer's motivation becomes unique and very different when purchasing second-hand [3].
In this study, researchers will evaluate the uniqueness factor affecting the purchase intention of second-hand in Generation Z. Uniqueness factors include the ability to express themselves, differences with other people, personality, communication, and new creations. [15] H3. Uniqueness positively affects the intention of purchasing second-hand.
Nostalgia
The feeling of nostalgia arises particularly during purchasing vintage items [22]. Second-hand purchases are motivated by the desire to use the goods and the concern for forensic goods [10]. Nostalgic relationships greatly impact luxury second-hand search activities [10]. Nostalgia affects directly and indirectly through the hunt for second-hand and the intention to purchase vintage goods [3].
In this study, researchers evaluate the nostalgia factor influencing the purchase intention of second-hand goods in Generation Z. The nostalgia factors in second-hand items examined include an interest in old items, memories of the past, and vintage nuances [15].
H4. Nostalgia positively affects the purchase intention of second-hand goods.
Trust
Quality is something that consumers prioritize when purchasing second [13]. Consumers realize that buying used clothes is the right way to get a quality product over a long period [12]. Trust has a relationship with purchase intention followed by commitment, quality of service, and shopping satisfaction [9].
In this study, researchers evaluate the trust factor influencing the purchase intention of second-hand goods in Generation Z. The trust factors in second-hand goods examined include appropriate function, expectations, reliability, quality, and comfort [15].
H5. Trust positively affects the purchase intention of second-hand goods.
Research Methods
This study aims to determine the factors that motivate Generation Z to make online second-hand purchases. The population in this study is Generation Z consumers aged 18-26 years who have purchased second-hand goods. The total sample that filled out the questionnaire was 105 respondents.
The source of data in this study is primary data derived from the object of study. The data collection technique was the distribution of questionnaires to respondents. The variables examined were made into a questionnaire in the form of a Google Form with a 5-point Likert scale ranging from 5 (strongly agree) to 1 (strongly disagree).
This research focuses on the variables of price orientation, bargaining ability, uniqueness, nostalgia, and trust as independent variables. Meanwhile, purchase intention is the dependent variable influenced by the independent variables.
Data were analysed using multiple linear regression analysis with SPSS software.
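Although the study used SPSS, an equivalent multiple linear regression can be sketched in Python; the file name and column names below are assumptions for illustration, not part of the original dataset.

# Multiple linear regression of purchase intention on the five predictors,
# analogous to the SPSS analysis described above (column names are assumed).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("questionnaire_scores.csv")   # hypothetical file of Likert-scale scores
X = df[["price_orientation", "bargaining", "uniqueness", "nostalgia", "trust"]]
X = sm.add_constant(X)                         # adds the intercept term
y = df["purchase_intention"]

model = sm.OLS(y, X).fit()
print(model.summary())                         # coefficients, t tests, adjusted R^2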
Demographics of Generation Z Consumers of Second-Hand
Demographic Results in table 1 of respondents taken at 18-26 years old show that most second-hand buyers are women. The distribution of expenses from Generation Z respondents tends to be evenly distributed from less than 1 million to more than 5 million. Generation Z started buying second-hand three years earlier, meaning that second-hand is not new to Generation Z. The location where Generation Z purchases second-hand, most of them buy online.
Data Validity Test
Appendix 1 shows the Correlated Item Total Correlation values for the variables Price orientation, Bargaining ability, uniqueness, nostalgia, trust and purchase intent greater than the table r value of 0.1918. This shows that all questions are valid.
Reliability Test
Based on Appendix 2, reliability testing obtained Cronbach's alpha values > 0.7. It can therefore be concluded that all items in the variables price orientation, bargaining ability, uniqueness, nostalgia, trust, and purchase intention are reliable. Based on Figure 1, the points of the normality test graph spread around the diagonal line and follow its direction, showing that the data are normally distributed and meet the normality assumption.
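For readers who wish to reproduce the validity and reliability checks outside SPSS, the sketch below computes corrected item-total correlations and Cronbach's alpha; the input file and item layout are assumed for illustration.

# Reliability checks analogous to those reported above: corrected item-total
# correlations and Cronbach's alpha for one scale (file and column names are assumed).
import pandas as pd

def cronbach_alpha(items):
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def corrected_item_total(items):
    # Each item correlated with the sum of the remaining items.
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )

scale = pd.read_csv("price_orientation_items.csv")        # hypothetical item-level data
print(corrected_item_total(scale))                         # compare with the r table value of 0.1918
print(f"Cronbach's alpha = {cronbach_alpha(scale):.3f}")   # acceptable if > 0.7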
Heteroscedasticity Test
Based on the scatterplot results in figure 2, the dots are spreading out and not forming a specific, clear pattern. So, it can be concluded that there is no problem with heteroskedasticity. Table 2 shows that all independent variables have a tolerance value of > 0.10 and a VIF value of < 10.0. It can be concluded that all independent variables consisting of orientation price, bargaining, uniqueness, nostalgia, and trust do not occur multicollinearity.
Coefficient of Determination Test (R2)
The regression calculation results in table 3 show that the coefficient of determination (Adjusted R2) obtained is 0.415. This means that 41.5% of the variation of the Purchase Intention variable can be explained by the variables of price, offer, uniqueness, nostalgia, and trust.
The remaining 58.5% is explained by other variables not examined in this study. The regression constant indicates that if price orientation, bargaining ability, uniqueness, nostalgia, and trust are zero, then purchase intention has a value of 2.298. The price coefficient is positive, meaning there is a positive relationship between price orientation and purchase intention: the stronger the orientation towards low prices, the more purchase intention increases, by 0.257 for each point of the price orientation variable.
The negative bargaining coefficient means that there is a negative relationship between bargaining ability and purchase intention: as bargaining ability increases, purchase intention decreases by 0.006 for each point of the bargaining ability variable. The positive uniqueness coefficient means that there is a positive relationship between uniqueness and purchase intention: as uniqueness increases, purchase intention increases by 0.176 for each point of the uniqueness variable.
The positive nostalgia coefficient means that there is a positive relationship between nostalgia and purchase intention: as nostalgia increases, purchase intention increases by 0.144 for each point of the nostalgia variable.
The positive trust coefficient means that there is a positive relationship between trust and purchase intention: as trust increases, purchase intention increases by 0.217 for each point of the trust variable.
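Taken together, the reported constant and coefficients define the fitted regression equation; the sketch below plugs in a hypothetical respondent's scores to show how a predicted purchase-intention value would be obtained (the respondent values, and the assumption that predictors enter as mean item scores, are illustrative only).

# Predicted purchase intention from the reported regression constant and coefficients.
coefficients = {
    "constant": 2.298,
    "price_orientation": 0.257,
    "bargaining": -0.006,
    "uniqueness": 0.176,
    "nostalgia": 0.144,
    "trust": 0.217,
}

respondent = {  # hypothetical mean item scores on the 1-5 Likert scale
    "price_orientation": 4.2,
    "bargaining": 2.5,
    "uniqueness": 3.8,
    "nostalgia": 3.1,
    "trust": 4.0,
}

predicted = coefficients["constant"] + sum(
    coefficients[name] * score for name, score in respondent.items()
)
print(f"predicted purchase intention = {predicted:.2f}")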
Discussion
The results of research from second-hand buyers for Generation Z in Manado city and its surroundings produced based on demographics are that Generation Z who purchase second-hand are generally women. Women generally own the ability and pleasure of shopping. Women prefer to use second-hand clothing over men making women more experienced in the second-hand market [7]. The purchase of second-hand by Generation Z started three years ago, meaning that the purchase of second-hand for Generation Z is not new. The most purchased type of second-hand by Generation Z is clothing. Thrift and clothing stores are easy to find [21]. This is because clothing is a basic need for society, and Generation Z feels that one way to collect clothes is by purchasing second-hand clothing. Collecting second-hand clothes is an act that has inadvertently become a habit for a long time [20]. The location purchasing second-hand, the Generation Z uses online platforms to make second-hand purchases because it is considered easier and can be done anytime. Generation Z is active social media users who have a lot of contacts and generally live daily online [4]. They prefer to search for everything on the internet [4]. Generation Z was born in a digital world with computers, mobile phones, and the internet [4]. Online platforms carry infrastructure for sellers or replacement of second-hand, search support, and contact and settlement phases [22]. Generation Z prefers online shopping because of lower time consumption, low prices, and convenience [14].
Based on the results of the variable test, it is produced that price orientation has a positive and significant effect on the purchase intention of secondhand for Generation Z. Second-hand associated with Generation Z are goods that can still be used and obtained at a lower price either compared to the same new product or similar if purchased in stores. The clothes in used clothing stores are the same as those in fast fashion that has only been used and are cheap [21]. Compared to conventional stores where the price of the product is higher than the production value (due to the brand value as an example), the price of the second-hand product is often based on its quality value [22].
The bargaining ability variable has a negative and non-significant effect on the intention to purchase second-hand goods. The bargaining ability factor is not significant for Generation Z when purchasing second-hand goods because Generation Z is a generation that does not like prolonged processes; they tend to prefer everything instant and fast. Thus, Generation Z does not care about the bargaining process when purchasing second-hand goods. Generation Z is more concerned about ease and speed in daily life [14].
Uniqueness or wanting to appear unique has a significant and positive effect on the intention of purchasing second-hand. One of the factors for Generation Z looking for second-hand is because second-hand can display uniqueness that they can show as their identity. They have a preferred brand that is extremely important to define themselves [4]. The clothes' model can affect the consumer's appearance, so the model can influence the desire to purchase mostly unique second-hand [18]. The need for uniqueness can be combined with the need for individual uniqueness, where the first characteristic of consumer perception is from there, and the second is the perception of what they use [8].
Nostalgia has a significant and positive effect on the purchase intention of second-hand by Generation Z. When Generation Z makes a second-hand purchase, they tend to see second-hand as items that can bring out memories of childhood and nostalgia at the time they were produced. Consumers can continuously become second-hand users with a feeling of nostalgia as a child or with their family and friends [12]. Vintage products return memories of when they were created and produced [22].
Trust in second-hand has a significant and positive influence on purchase intentions by Generation Z. Generation Z believes that even if it is second-hand, they can provide what consumers need. The durability of used clothing items is the reason that determines the use of second-hand [21]. The Quality Factor influences consumers to make purchases of used clothing, and buyers show that good quality and quality can affect consumers' desire to make purchases online [18].
Conclusion
Based on the research results, the purchase of second-hand goods is no longer a new thing for Generation Z, who mostly buy second-hand clothes online. This is an input for second-hand sellers: consumers can also come from Generation Z. Second-hand clothing sellers should make this part of their marketing strategy and carry out marketing activities, especially online, to attract Generation Z consumers. Ways to promote second-hand sales domestically and more broadly through Facebook and Instagram include updating new items, providing discounts, and providing buyer complaint services [18]. Online stores should start providing better service through their networks to consumers, such as ensuring product durability and positive assessments of sellers and products, and setting return durations to reduce defective products, which online thrift companies often overlook, in order to attract second-hand consumers [15]. The factors found here are an input for marketers: in purchasing second-hand goods, Generation Z is looking for a low price; the second-hand goods sold should highlight uniqueness that can express Generation Z's style; vintage second-hand goods are loved by Generation Z; and second-hand goods should nevertheless have good quality and be trustworthy. So second-hand sellers can improve quality while highlighting affordable prices and unique and attractive models or styles of second-hand goods. Bargaining is not what encourages Generation Z to buy second-hand goods, so second-hand sellers are advised not to carry out a bargaining process with Generation Z and to sell directly at the stated price. Convenience, speed of shopping, product selection, and product information can affect Generation Z's future spending [16].
Are signs of burnout and stress in palliative care workers different from other clinic workers?
Objectives: Palliative care workers are in continuous contact with patients, which exposes them to the emotionally draining effects of pain, suffering, death, grief and mourning. Burnout syndrome is common among palliative care workers, as these individuals accompany patients on the way to death. Methods: A total of 47 individuals working in palliative care units or in internal disease and neurology clinics participated in the study. The participants were divided into two groups: palliative care workers (Group P) and workers in internal disease and neurology clinics (Group A). All of the participants filled out the Maslach Burnout Inventory, the BECK anxiety-depression scale and the Stress Appraisal Measure. Results: A total of 47 healthcare workers agreed to complete the scales. Emotional burnout and desensitization scores were found to be elevated, as were personal success scores, in both groups. The BECK anxiety-depression scale revealed findings of moderate anxiety in both groups, while cognitive-sensorial, physiological and pain complaints, as well as signs of stress, were more pronounced in Group P. Conclusion: Burnout is a significant problem among healthcare workers, and signs of stress and cognitive-sensorial, physiological and pain complaints are particularly common among healthcare workers in palliative care units. Structural arrangements aimed at addressing the causes of burnout will positively affect the well-being of healthcare workers.
Introduction
The continuous face-to-face interactions between healthcare workers and patients require additional emotional and physical capabilities, and this may have various challenging effects on healthcare workers. This is particularly important among healthcare providers working in palliative care units, where the quality of life of patients is a primary concern. The main aim of palliative care is to build close relations with the patients and their relatives, and to ease their physical and emotional symptoms. [12] The holistic nature of palliative care (along with the psychosocial, physical and spiritual dimensions) requires gaining knowledge and abilities to deal with such factors as death and grief. [22] As the number of patients requiring palliative care increases day by day, the challenges and stress experienced by healthcare workers grow even larger, and this leads to physical, psychological and emotional challenges among healthcare workers that, if left unaddressed, may result in burnout. [16] Burnout syndrome was first described by Herbert Freudenberger, [8] and Maslach and Leiter later extended the description to define three dimensions of burnout: emotional burnout, desensitization and lack of personal success in the workplace. [13] In addition to the negative effects on the professional, family and social lives of workers, burnout syndrome can also have negative consequences for physical health, and in this regard it represents a social and economic problem associated with high costs. This syndrome is particularly common among healthcare professionals working in clinics with a high rate of mortality. [19] Negative effects have been reported in nurses who provide care to moribund patients, who have reported feeling unprepared, particularly due to a lack of knowledge. [6] Burnout is one of the most common problems among healthcare providers working in oncology clinics, and managers of oncology units should be aware of the potential for the development of burnout syndrome among their workers. [4] The Maslach Burnout Inventory evaluates burnout in three dimensions: emotional burnout (EB), reflecting personal overload; desensitization (DS), defined as an insensitive and careless approach to those in care; and personal success (PS), related to sufficiency and success in problem solving. [15] The Maslach Burnout Inventory consists of 22 questions, with answers reported on a scale of 0-4. [21] The validity and reliability of the Turkish scale have been demonstrated previously. [5] The BECK anxiety depression scale is used to determine the frequency of anxiety signs. There are 21 categories of signs, each consisting of four options, and each item is scored on a scale of 0-3. [2] The Turkish adaptation of the scale has been assessed and validated. [10] The Stress Appraisal Measure (SAM) is a four-point Likert-type measurement tool consisting of 38 questions (1 = Never, 2 = Occasionally, 3 = Frequently, 4 = Always). Based on a factor analysis performed by Hovardaoglu, the scale assesses three factors: cognitive-sensorial, physiological and pain complaints. [11] The present study aimed to reveal the differences in the signs of burnout and stress between palliative care workers and those working in neurology and internal disease clinics.
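As an illustration of how the three instruments yield the scores analysed below, the following Python sketch totals item responses into subscales; the item-to-dimension keys shown are a commonly cited MBI grouping and should be checked against the exact versions used, and all responses are invented.

# Summing Likert-type item responses into subscale and total scores.
def subscale_score(responses, items):
    return sum(responses[i] for i in items)

# Maslach Burnout Inventory: 22 items rated 0-4, grouped into three dimensions
# (illustrative item keys; verify against the scale version actually administered).
MBI_EB = [1, 2, 3, 6, 8, 13, 14, 16, 20]   # emotional burnout
MBI_DS = [5, 10, 11, 15, 22]               # desensitization
MBI_PS = [4, 7, 9, 12, 17, 18, 19, 21]     # personal success

answers = {item: 2 for item in range(1, 23)}   # hypothetical flat responses
print("Emotional burnout (EB):", subscale_score(answers, MBI_EB))
print("Desensitization (DS):", subscale_score(answers, MBI_DS))
print("Personal success (PS):", subscale_score(answers, MBI_PS))

# BECK anxiety items (21 items, 0-3) and SAM items (38 items, 1-4) are totalled the same way.
beck = {item: 1 for item in range(1, 22)}
sam = {item: 2 for item in range(1, 39)}
print("BECK total:", sum(beck.values()))
print("SAM total:", sum(sam.values()))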
Materials and Methods
The study was initiated after approval was obtained from the local ethics committee (registration no: 2017/103), and was conducted as a single-center study including individuals aged between 18 and 60 years who were employees in the palliative care units, internal disease clinics or neurology clinics of Balikesir Ataturk City Hospital. All subjects participating in the study provided written informed consent. Those who refused to participate in the study, those who had worked in the relevant clinics for less than two months, and physicians were excluded from the study. The study included nurses, midwives, housekeeping personnel and healthcare personnel. Physicians responsible for palliative care were excluded from the study, as they were involved in the planning of the study. Data regarding the age, gender, profession, years at work and regular medication use of the participants were recorded.
The subjects participating in the study were divided into two groups: Group P included palliative care workers, and Group A consisted of workers in the internal disease and neurology clinics. The subjects who agreed to participate completed the Maslach Burnout Inventory, the BECK anxiety depression scale and the SAM. Group P consisted of 25 personnel and Group A of 22 personnel.
The statistical analysis was performed using SPSS for Windows Release 22.0 software. Qualitative data were compared with a chi-square test, and parametric data were checked for normal distribution with a Kolmogorov-Smirnov test. Normally distributed data were compared with a Student t-test, and a Mann-Whitney U test was used to compare non-normally distributed data. A repeated measures variance analysis or Friedman test was used to compare measurements recorded continuously from the baseline. Parametric data are presented as means and standard deviations, and non-parametric data are presented as percentages (%). P values of <0.05 were considered statistically significant.
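A minimal sketch of the analysis flow described above, assuming hypothetical SAM totals for the two groups: the data are first checked for normality, and the appropriate two-group test is then applied.

# Normality check followed by choice of Student's t test or Mann-Whitney U test.
from scipy import stats

def compare_groups(group_p, group_a, alpha=0.05):
    normal_p = stats.kstest(stats.zscore(group_p), "norm").pvalue > alpha
    normal_a = stats.kstest(stats.zscore(group_a), "norm").pvalue > alpha
    if normal_p and normal_a:
        stat, p = stats.ttest_ind(group_p, group_a)
        return "Student t", stat, p
    stat, p = stats.mannwhitneyu(group_p, group_a)
    return "Mann-Whitney U", stat, p

# Hypothetical SAM total scores for the two groups.
sam_p = [74, 70, 81, 66, 79, 72, 77, 68]
sam_a = [62, 58, 65, 60, 67, 59, 63, 61]
print(compare_groups(sam_p, sam_a))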
Results
In total, 47 healthcare workers were included in the study. In Group P, five of the 30 workers refused to participate in the study, while all workers in Group A agreed to participate. The participants were given no direction, no interventions were made, and they were allowed to complete the questionnaires on their own.
The mean age, gender distribution, duration of employment in relevant clinics and medication use were not significantly different between the two groups ( Table 1). In total, three individuals in Group P and two individuals in Group A were taking antidepressants. Of the 25 subjects in Group P, four were housekeeping personnel, six were midwives, eight were nurses and seven were caregivers, while Group A included four housekeeping personnel, 10 midwives and eight nurses.
The mean EB, DS and PS scores measured with the Maslach Burnout Inventory were 30±9, 11±4 and 30±5, respectively, in Group P, and 33±8, 10±3 and 30±4, respectively, in Group A. There were no statistically significant differences between the two groups.
The mean scores from the BECK anxiety depression scale in Groups P and A were 19±12 and 17±11, respectively; the difference was not statistically significant.
Among the factors measured by SAM, the cognitivesensorial complaint scores were 30±8 and 25±6, the physiologic complaint scores were 18±4 and 15±3, and the pain complaint scores were 17±4 and 14±3, in Groups P and A, respectively. The total scores of SAM were calculated as 74±18 and 62±13 in Groups P and A, respectively. The cognitive-sensorial, physiological and pain complaint scores and the overall SAM score were significantly higher in Group P than in Group A (Fig. 2).
Discussion
EB scores were elevated and DS scores were at a moderate level in both groups of workers, and the differences were not statistically significant. PS scores, on the other hand, were found to be increased in both groups. The scores of the BECK anxiety depression scale indicated a moderate level of anxiety in both groups, while cognitive-sensorial, physiologic and pain complaints, as well as signs of stress, were more pronounced among the palliative care workers.
Burnout is generally considered to be a syndrome that presents with emotional burnout, desensitization and a decrease in the sense of personal success. [13] Workers suffering from burnout and a lack of compassion may have a negative influence in units providing care to patients who require compassionate care, and this may result in symptoms of emotional burnout and decreases in job performance. [12] Signs of burnout, which can manifest as interpersonal sensitivity, depression, hostility and paranoid thoughts, are common among healthcare professionals. Global Severity Index and Positive Symptom Total index scores among healthcare workers were found to be elevated. [7] Burnout syndrome may lead to such physical symptoms as chronic exhaustion and gastrointestinal complaints; emotional signs like apathy, desensitization and irritability; social effects, such as isolation and loss of interest in previously enjoyed activities; an absence of spiritual awareness; reluctance to provide care to certain patients; a reduced willingness to come to work; and low performance. [3]
Görgülü et al. examined the burnout levels of all hospital staff (doctors, nurses, contract staff, cleaning, cooking, and maintenance personnel) and calculated mean scores of 25.79 for EB, 11.40 for DS, and 24.30 for PS. [9] The scores in our study were higher, indicating more severe burnout. A correlation has been identified between emotional overload and the BECK depression scale. [23] In the present study, signs of burnout were noted in both groups, although the scores of cognitive-sensorial complaints, physiologic complaints, pain complaints and the total scores obtained from the SAM were found to be increased among palliative care workers. Moreover, according to the BECK Anxiety-Depression Scale, signs of moderate anxiety were present in both palliative care workers and the workers in internal disease and neurology clinics.
A study comparing the level of burnout among intensive care unit and palliative care unit workers found higher levels of burnout among intensive care unit personnel. Additionally, the risk of burnout among palliative care personnel was found to be lower than among intensive care unit personnel. [20] While the reduced level of desensitization was a rather positive finding of this study, the same cannot be said for emotional burnout. The difference in these findings may be attributable to differences in cultural values and judgments, as well as the personal lives of the participants or their working conditions. The relatively high scores in personal success, on the other hand, may be related to the willingness of the workers to carry out their jobs and their self-perception as being successful.
Moreover, the fact that palliative care workers provide care to patients with various long-term and difficult-to-treat diseases in the fields of internal diseases and neurology may have led to the similar Maslach and BECK results, and these outcomes may also be affected by the continuous on-the-job training and experience-sharing meetings undertaken by palliative care workers.
The causes and outcomes of burnout should be evaluated on an individual basis, and personnel who experience burnout should be supported both institutionally and personally. Patient-specific treatment should be planned that may include stress management, medical treatment or psychotherapy.
In addition to these options, regular training and medical treatments are applied in our hospital, whereas Cognitive Behavioral Therapy (CBT) and EMDR (Eye Movement Desensitization and Reprocessing) have not been tried to date. For the following steps, studies can be planned to reduce burnout and stress through EMDR.
Despite the negative findings, it was satisfying to note that personal success scores were high in this study, and this may be attributable to the fact that the study participants performed their jobs with compassion and considered themselves successful.
This study has a number of limitations. First of all, the number of subjects included in this study was limited, and they may have provided biased answers when filling out the questionnaires. That said, this may have had little influence on the findings, as the questionnaires were not applied during face-to-face interviews. Additionally, this study did not categorize the participants based on their occupations, meaning that the burnout levels of nurses and other healthcare workers were not evaluated separately. Moreover, different results would likely be obtained if palliative care workers were compared with workers in other clinics, or with staff who provide care to patients in different clinics.
Conclusion
Signs of burnout were found to be high both in workers in the internal disease and neurology departments and in palliative care workers. Signs of stress were more common among palliative care workers than among workers in the other departments. The elevated level of stress among palliative care workers highlights a need to provide stress management for this personnel group.
It is obvious that employment in palliative care units can have a negative effect on the personnel working in this field, as well as on the patients. Palliative care workers should be part of a structure in which emotional and cognitive awareness are increased, where they can express their feelings and recognize that they have control. Symptoms of burnout can be reduced by supporting teamwork, ensuring commitment within teams, building a regularly working system for conflict management, recognizing the importance of personal development and rest, and providing a sufficient level of high-quality communication. In this regard, it is obvious that regular training and experience-sharing meetings can provide significant benefits to such workers.
Conflict-of-interest issues regarding the authorship or article: None declared.
Financial Disclosure: The authors declared that this study has received no financial support.
The Missing Link: Teacher Learning for Diversity in an Area-based Initiative in Portugal
• As an attempt to promote educational success in socio-economically disadvantaged contexts, area-based initiatives are often launched in Europe, such as the programme Territórios Educativos de Intervenção Prioritária in Portugal. Given the importance of teaching quality to enhance student learning and considerable student diversity in the schools included in this initiative, this article explores opportunities for teacher learning for diversity within the programme. Documentary analysis was conducted on 188 school documents from 95 school clusters in the programme. The following research questions were asked: Does the programme promote teacher learning? In what ways is teacher learning promoted? Does teacher learning address diversity? The findings suggest that some interventions provide the possibility of teacher learning, such as the processes of learning together, pedagogical supervision, reflection, and attending professional development courses. However, diversity seemed to be largely missing from these initiatives. Furthermore, two cornerstones of teacher learning for diversity were absent: teachers’ critical reflection on students’ inclusion/exclusion; and learning from/with students, families, and communities. Additionally, most professional development opportunities were organised around and measured by students’ academic results, thus positioning teacher learning as instrumental in raising school success, rather than a core of transforming education for diversity. These results call for policies within the Territórios Educativos de Intervenção Prioritária programme to include teacher learning that engages with and fosters critical thinking around diversity and to involve communities’ and students’ voices in order to truly tackle social exclusion. The findings can contribute to the debate on the approach to diversity in area-based initiatives in Europe.
Introduction
Education is a fundamental right for all; however, schools are generally not geared to respond to the growing student diversity in a way that guarantees equity. Student diversity has become one of the most significant challenges for schools in the 21 st century (Ainscow, 2016). This challenge seems to be more present in socio-economically disadvantaged areas where area-based initiatives (ABIs) (Kerr & Dyson, 2017) are often launched (Dyson, Raffo, & Rochex, 2014). Student diversity is sometimes overlooked in ABIs, where 'areas' are treated as administrative units rather than human ecologies (Lupton, 2010). On reviewing the results of ABIs, Dyson et al. (2014) indicate the need to collect data that allows controlling for the impact of gender, ethnicity and social class (Dyson, Raffo, & Rochex, 2014) and not only focus on the impact on students' school results in general. Teachers are a crucial component in students' success (Darling-Hammond, 2017;Hattie, 2003). However, teachers working in such policy contexts that define success narrowly in terms of school results, often experience tensions between the claimed values of inclusion and diversity and the priorities with which they must comply (Dyson, Gallanaugh, & Millward, 2003). Little is known about how teacher learning for diversity is promoted in TEIP, an ABI in Portugal.
TEIP, an area-based initiative in Portugal
Area-Based Initiatives (ABIs) are often initiated in order to tackle underachievement, disadvantage, and social exclusion in Europe. Zones d'éducation prioritaires in France, Excellence in Cities in England, or the Territórios Educativos de Intervenção Prioritária in Portugal are examples of ABIs. Dyson et al. (2014) suggest that educational priority policies can target individuals (e.g., students with special educational needs (SEN)), groups (e.g., Portuguese as non-mother-tongue language (PNML)), schools (e.g., establishing better leadership), and geographical areas (ABIs, e.g., TEIP). ABIs may combine these three targets.
The TEIP policy was first introduced in Portuguese educational policy in 1996. It was interrupted for a number of years and then was restarted in 2008 as TEIP2 and in 2012 as TEIP3 (Ferraz et al., 2014). Presently, TEIP3 includes 137 school clusters and aims at reducing school dropout, truancy, and indiscipline, and promoting educational success. The TEIP programme covers diverse contexts (Abrantes et al., 2013): socially excluded urban areas, diffuse peripheral zones, heterogeneous urban zones with social inequalities and conflicts, and poor rural areas. Abrantes, Mauritti, and Roldão (2011) identified that TEIP documents were mostly concerned with preventing school failure, indiscipline, and improving school results and family-school relations. Their research also reviewed the areas of TEIP actions: organisational interventions, pedagogical practices, integrating and monitoring students, as well as extracurricular and community activities. The detected pedagogical strategies predominantly provided more individualised teaching and learning in or outside the classroom, facilitated by a co-teacher or support staff, broadly aiming at improving teaching, reinforcing the subjects Mathematics and Portuguese, experimental Science teaching, and valuing achievement. However, they concluded that measuring TEIP outcomes through the students' school results at national exams might 'reduce the conceptions and practices that guide these actions, limiting the potential of pedagogical and organisational innovation' (p. 28). Although previous studies on TEIP acknowledge the decrease in dropout and indiscipline, the improvement of academic results and educational management, and that some responses have been given to socio-cultural diversity, several authors still question its overall success in transforming education (Abrantes et al., 2011, 2013; Dias, 2013; Rolo, Prata, & Dias, 2014; Sampaio & Leite, 2015; Silva, da Silva, & Araújo, 2017).
Additionally, Canário (2004) shed light on the TEIP schools' deficit approach to the ABIs' communities and students' families. The author problematises the perspective that longs for homogeneity, undervalues students' experiences, and considers pupils and their families as the sources of problems in schooling. Given the importance of teacher learning to student learning, teachers' learning for diversity could be a crucial driving force in TEIP. However, despite the number of studies conducted on TEIP, less is known about how teacher learning for diversity is framed by the TEIP intervention.
Teacher learning
Teacher effectiveness has rapidly risen to the top of the education policy agenda, as many nations have become convinced that teaching is one of the most important school-related factors in student achievement (OECD). And teacher preparation and development are key building blocks in developing effective teachers. (Darling-Hammond, 2017, p. 291) Several authors regard teacher learning as conducive to student learning (Guerriero, 2017; Hattie, 2009, 2015; Vermunt, 2014). Teacher learning can be an overarching concept 'that sees teachers as lifelong learners and includes teachers' formal learning in initial teacher education, induction and continuing professional development, as well as informal learning such as professional collaborations or networking' (Révai & Guerriero, 2017, p. 65). Teacher learning is conceptualised as ongoing, social, situated, distributed, and actively constructed (Borko, 2004; Putnam & Borko, 2000; Webster-Wright, 2009). According to Vermunt and Endedijk (2011), teacher learning is a dynamic interaction between learning and regulating activities, teachers' knowledge, beliefs and learning motivation, and is affected by several contextual and personal characteristics. In this article, we focus on teacher learning in terms of in-service professional development for teachers in TEIP schools.
Changes to the school population and educational reforms require considerable changes in classroom practices. According to Borko (2004), these changes can only happen through supported and guided teacher learning (p. 3). However, Darling-Hammond (2017) states that professional development remains inadequate and 'in-service seminars and other forms of professional development are fragmented, intellectually superficial' (p. 465). In contrast, teachers also learn in their school context by doing, experimenting, reflecting, and interacting with others (Bakkenes & Vermunt, 2010;Meirink, Meijer, & Verloop, 2009). Consequently, teachers should be given opportunities for embedded forms of professional learning and sharing their experiences and expertise in various ways in order to continue developing, learning and being enthusiastic about the teaching profession. These opportunities can include teachers' work with curriculum development through collaborative planning, lesson study, and action research of various kinds (Darling-Hammond, 2017, p. 303). Caena (2011) outlined seven forms of effective professional learning: analysing school culture, peer observations, classroom studies about students' assignments, analysing student data, forming study groups, being involved in development and improvement processes, and studying students' classroom behaviour.
However, often teachers still struggle or resist learning new knowledge and changing practices (Bakkenes & Vermunt, 2010), especially in the field of diversity (Gay, 2013). In order to enable in-service teacher learning and professional development, a variety of learning experiences and activities explicitly aiming at developing in-service teachers' knowledge, skills, practices, abilities, and values (Cordingley & Buckler, 2014; Day & Sachs, 2004; Guskey, 2000; Timperley, 2008) must be provided. Professional development programmes are most adequate when competency- and reflection-based approaches are integrated in a dynamic model, including the acquisition of specific skills connected to teachers' practice, content responsive to teachers' needs and contexts, teachers' active participation, collaboration, formative evaluation, and sufficient timeframes for development (Creemers, Kyriakides, & Antoniou, 2013). This integrated vision of professional development is important in order not to simply acquire a set of prescribed contents and skills, but to develop moral responsibility to contribute to socially just schooling (Creemers et al., 2013).
Professional development can occur at a variety of sites, such as in school, in networks of schools or in partnerships with other institutions (Day & Sachs, 2004). Current understandings of professional learning point to the need for professional development to be continuous, related to teachers' actual needs and practice, collaborative, based on research evidence, engaging and empowering for teachers, and aimed at enhancing student learning (European Commission, 2013;Gilbert, 2011;Gimmert, 2014;Menter, 2010). To meet these demands, the school itself is an essential platform for professional development. Situated in the boundary of school contexts, teachers' co-learning and collaboration have been found to be an effective form of teacher learning (Avalos, 2011;Burbank & Kauchak, 2003;Vescio et al., 2008). According to Vangrieken et al. (2015), teachers' collaboration can occur in teacher teams, learning groups, communities of practice, professional learning communities, and other forms, such as critical friends or networks. Additionally, effective forms of learning together can also be, among others, lesson study (Murata, 2011), mentoring (Kemmis et al., 2014), and peer coaching (Cordingley & Buckler, 2014). Specifically, in Portugal, 'pedagogical supervision' has been promoted as an effective form of professional development (Carlos et al., 2017).
Regarding teachers in Portugal, Flores (2005) found that teachers undervalued formal learning opportunities provided by teacher education institutions; instead, they preferred school-based activities, such as reflecting on their own practices, analysing students' reactions, and trying out new strategies on a trial-and-error basis. Professional development in Portugal is regulated by the Decree-Law 22/2014, stating that improving the quality of education is one of the key challenges and, with a view to reaching this target, professional development is considered a priority. It furthermore requires professional development to take into account the contextual and individual needs of teachers, and to be based on a needs assessment, followed by a 'Professional Development Plan' . DL22/2014 portrays professional development as a way to support teachers in developing educational and curricular projects, improving their performance, quality, and efficacy and thereby adding to the quality of education and of student results at large.
Teacher learning for diversity
Diversity, a traditionally somewhat overlooked area of teacher learning, has been gaining more attention in European policy guidelines (European Agency for Development in Special Needs Education, 2012; European Council, 2009; European Commission, 2015; OECD, 2010) over the last decade. In this article, the concept of diversity is understood as multifaceted, socially constructed, and dependent on context. This holistic view on diversity (Cardona Moltó et al., 2010; Essomba, 2010; Timperley & Alton-Lee, 2008; Waitoller & Artiles, 2013) covers several aspects such as culture, ethnicity, language, disability, social and economic status, religion, gender, sexual orientation, and so on. These dimensions do not stand in isolation, but their possible intersections as manifested in student populations (Waitoller & Artiles, 2013) and in the demographics of ABIs are taken into account. Consequently, teacher learning for diversity in this article refers to professional development for in-service teachers addressing a wide scope of student diversity, thus tackling all types of marginalisation and exclusion (Unesco, 2005).
Although diversity has become a regular component in initial teacher education, teachers still report unpreparedness in responding to diversity across Europe (Arnesen et al., 2008; Burns & Shadoian-Gersing, 2010; European Commission, 2015) and in Portugal (Flores & Ferreira, 2016). A few frameworks have been developed for professional learning for diversity. Timperley and Alton-Lee (2008, p. 342) outlined a model that embeds professional development in the socio-cultural and learning environment and regards contents for diversity, teachers' learning activities and learning processes as an interactive system, leading to responding to diversity, and eventually, impacting diversity. Florian (2012) argues that professional development for diversity has to cover three main themes: understanding learning where difference is taken into account, understanding social justice, and becoming active professionals in developing new ways of working together. Waitoller and Artiles' (2013) review on professional development for inclusion described that the main forms of professional development were formal courses (university, on-site or online) and action research projects or collaborations with universities, researchers, and specialists. These projects involved teacher inquiry, as well as several forms of observation, coaching, and collaboration by and with external experts. However, more studies are necessary in order to understand professional development for diversity as it is organised by schools. Some investigations support the idea of lesson studies (Messiou et al., 2016; Simon, Echeita, & Sandoval, 2018), professional learning communities (Read et al., 2015; Torrico et al., 2016), and coaching (Teemant, 2015); with a specific focus on listening to or engaging with students' voice in these approaches (Messiou et al., 2016; Schultz, 2003; Simon et al., 2018). It also has been found fruitful for teachers to work with and learn from communities in diverse contexts (Coffey, 2010; Lees, 2016). Ainscow (2005), when referring to teachers' 'levers for change', considers 'policy documents, conferences and in-service courses' as low leverage activities, since they do not necessarily create 'interruptions that help to 'make the familiar unfamiliar' in ways that stimulate self-questioning, creativity and action' (p. 116). Ainscow (2005) also states that deeply rooted assumptions about diversity might undermine pedagogical innovations when teachers believe that students are 'disadvantaged and in need of fixing, or, worse, as deficient and, therefore, beyond fixing' (p. 117).
Consequently, drawing on these theoretical perspectives, professional development for diversity includes 1) developing subject content and pedagogical content knowledge in which diversity is included; 2) learning from/with students, families and communities; 3) reflecting critically on one's own beliefs and assumptions about diversity, as well as on how teaching practice contributes to socially just schooling. As previous research has shown, it is advisable that these learning processes occur in collaborative environments. There are several possible factors influencing teacher learning due to its situated nature (Avalos, 2011;Borko, 2004;Hoban, 2002;Putnam & Borko, 2000;Vermunt & Endedijk, 2011). In this article, teacher learning is approached as represented in school documents in the TEIP policy context.
This study aimed at describing how teacher learning is promoted in the proposed TEIP actions. Furthermore, it was explored how diversity was situated within those teacher learning possibilities. The following research questions were asked: 1. Does the programme promote teacher learning? 2. In what ways is teacher learning promoted? 3. Does teacher learning within TEIP address student diversity?
Method
To study teacher learning for diversity, contextualised in TEIP, a microlevel policy analysis was conducted including publicly available documents produced by TEIP schools. Documents were regarded as sources revealing distinct aspects of social realities, in this case, how teacher learning for diversity was shaped in the local TEIP interventions. Following the vision of Atkinson and Coffey (2011) that documents are not '[…] transparent representations of organisational routines, decision-making processes or professional practice' (p. 79), it was assumed that documents create a particular 'documentary reality' . Three types of documents were identified as insightful in answering the research questions: the schools' TEIP Improvement Plan, Educational Plan, and Professional Development Plan. These documents were obtained through a manual online search on the websites of the 137 TEIP school clusters. School clusters that did not display TEIP Improvement Plans were excluded from the search (42 school clusters). A typology was developed for delineating the documentation types of the school clusters 1) schools with only TEIP Improvement Plan (17 school clusters); 2) schools with Improvement Plan and Educational Plan (59 school clusters); 3) schools with Improvement Plan and Professional Development Plan (4 school clusters); 4) schools having all three documents (15 school clusters). A total of 188 documents from 95 TEIP schools were analysed.
The analysis was guided by the principles of contextual policy analysis (Ritchie & Spencer, 1994) that identifies the forms and nature of a certain existing phenomenon in policies. The analysis followed the procedures of inductive content analysis (Mayring, 2014) using NVivo version 11 software. Mayring's (2014, p. 80) steps of inductive category development includes 1) focusing on theories and research questions; 2) defining selection criteria; 3) initial category formulation; 4) revision and definition of categories half-way through the material; 5) coding the rest of the material; 6) building main categories; 7) intra/ inter-code agreement; 8) final results, interpretation. Following these steps, an initial selection protocol was developed, guided by theoretical perspectives and research questions. The material was divided between the two researchers and was analysed independently. After an analysis of around 50% of the whole data, the initial codes and categories were reviewed, discussed, and agreed upon between the researchers, and the rest of the data was analysed independently. The researchers then reviewed their individual analysis for refinements and main category building. The final step involved an audit of each other's analysis and inter-coding agreement. The first level analysis investigated the forms of teacher learning and identified four main themes that were used as the structure of the Findings section. The second level analysis explored the contents of each form of teacher learning, specifically focusing on if and how diversity was situated within those forms of teacher learning.
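For illustration only, the inter-coder agreement mentioned in step 7 can be quantified with Cohen's kappa. The sketch below is a generic example; the category labels and the codings of ten excerpts are hypothetical and are not drawn from the study's own data.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category assignments over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement if both coders assigned categories independently
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten document excerpts into the four teacher-learning themes
a = ["learning_together", "reflection", "courses", "supervision", "courses",
     "reflection", "learning_together", "courses", "supervision", "reflection"]
b = ["learning_together", "reflection", "courses", "supervision", "reflection",
     "reflection", "learning_together", "courses", "supervision", "courses"]
print(round(cohens_kappa(a, b), 2))  # 0.73 for this hypothetical example
```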
Results
The first level analysis revealed four main potential forms of teacher learning 1) learning together; 2) pedagogical supervision; 3) reflection; 4) professional development courses. These themes will be described, coupled with the second level analysis of how diversity was situated within each theme.
Learning together
A strong theme emerging was 'teachers working together' to reach the TEIP goals. However, this initiative took a variety of forms and was signalled by mixed terminology, often using 'working together' , 'cooperation' , and 'collaboration' interchangeably or simply as a list of descriptive words without theoretical distinction. Therefore, we will refer to this category as 'learning together' , while acknowledging its many levels and modes existing in the documents. Despite the variability, there are some trends based on the programmes 'Mais Sucesso Escolar/ TurmaMais' and 'Fenix' , which mainly target literacy-and numeracy-related 'school failure' . For example, a set of interventions recruited an extra support teacher. The work of the two teachers included: splitting classes into two groups and teaching students separately with or without prior teacher cooperation; the support teacher helping students within the classroom, individually or in small groups, following the main teacher's curriculum; and coteaching with two teachers planning and conducting lessons collaboratively. The TEIP documents are consistent in arguing that such actions were necessary to provide differentiated, individualised teaching and learning as a means of responding to students' needs.
Other forms of 'learning together' consisted of establishing working groups between teachers, departments and disciplines; teamwork (including interdisciplinary teams), working in partnerships/networks with external experts, and the idea of a learning community was also mentioned. These forms usually aimed at teachers developing together assessment instruments, lesson plans, teaching strategies and materials, and projects. Another aim of these forms was to share good practices, experiences, instruments and resources among the teaching team. Almost no reference was made on how these forms of collaboration address diversity, and only a few documents mentioned that teachers should work together with a Special Education teacher.
The actors of 'learning together' were teachers' horizontal (subject) and vertical pairs and groups, teachers with support staff, leadership and external experts. Even though TEIP aims to strengthen school-community relations, which could easily imply teachers learning with/from students and families, community-related actions were mostly associated with support staff (social workers, psychologists, and animators) rather than teachers.
Pedagogical supervision
Even though pedagogical supervision could be clustered under the 'learning together' theme, as it implies teachers observing each other's lessons and reflecting on them, it emerged in the data as a standalone activity that leads to modifying practices. 'Pedagogical supervision' , also referred to as 'intervision' in some schools, meant either middle leadership or peers to observe lessons. Pedagogical supervision was regarded as an effective form of professional development that contributed to the improvement of teaching practice, and eventually, student learning. Lesson study, critical friends, and coaching were also mentioned a few times, meaning a similar type of action. The aims and contents of these dynamics were kept on the level of 'improving practices' or 'modifying strategies' without specifically targeting diversity.
Reflection
Reflection was often mentioned in school policy documents, with schools stating that reflecting was a significant cornerstone to the refinement and adaptation of teaching strategies and TEIP actions, ultimately raising school results. Thus, reflection meant both 'reflecting on practices' and 'reflecting on results' mostly related to TEIP targets: academic results, the frequency of indiscipline, absenteeism, and dropout. In other words, reflection was often associated with the schools' self-assessment and monitoring of the development and impact of TEIP actions, which was performed by teams of teachers but was not centred on teachers' self-development. Critical reflection was mentioned only a few times, and there was a lack of specifying reflection on practices in terms of responding to diversity, or in relation to mechanisms of exclusion and inclusion in society and in school. Apart from what was mentioned regarding 'pedagogical supervision' , there was no clear sign of encouraging teachers to reflect on themselves, their own assumptions, beliefs, and attitudes towards students, families, and communities.
In-service courses
The majority of the TEIP school policies analysed provided a list of courses available for teachers. These courses or workshops differed in contents, lengths, and ways of being organised. The courses were mostly provided through the centres for continuing professional development connected to the schools. The teachers were allowed to visit other schools and institutions for professional development, and the TEIP 'external expert' also seemed to provide in-school workshops. Regarding content, courses were mostly provided around three main areas: subject-specific contents, transversal knowledge and classroom management, and knowledge and skills related to the specific TEIP actions.
Regarding subject-specific courses, contents were mainly related to Portuguese, Mathematics, and experimental teaching of Sciences. Transversal knowledge and skills involved a variety of contents, such as differentiated curriculum and pedagogy, using ICT, active teaching, assessment methods, and classroom management. Classroom management was generally related to student behaviour and to the TEIP target of 'Preventing Indiscipline' . Courses related to TEIP actions, such as specific teaching strategies (TurmaMais, Fénix, tutoring), monitoring and self-assessment, and pedagogical supervision, were also offered.
Contents related to diversity appeared in the transversal category, mostly around special education and inclusion. To a smaller extent, 'inter/multiculturality' , 'integration' , and 'PNML' were also present. The titles and terminologies of these courses differed across the schools and tended to use overlapping concepts of inclusion, integration, and inter/multiculturality. For example, 'inclusion' was mostly used in referring to 'SEN' , or to diversity in general. 'Integration' was often mentioned broadly or in terms of 'Roma ethnicity' , 'SEN' or 'PNML' students. 'Inter/multiculturality' was usually a broad topic without specificities. However, in a few cases, the course titles presented problematic perspectives of diversity, for example, 'Integrating students of Roma ethnicity to school: a problem or an opportunity? Managing cultural diversity in school' .
Discussion and Conclusions
This article aimed to explore if and how TEIP promoted teacher learning in general, and specifically learning for diversity, through its initiatives for in-service teachers' professional development.
Starting with whether TEIP promoted teacher learning, it was clear that school policies mentioned several opportunities for teacher learning, such as working and learning together, pedagogical supervision, reflection and professional development courses. Improving teaching was regarded as a crucial component of TEIP, and it was consistently assumed to lead to improving student learning, and ultimately, school results. Aligned with previous literature (Vangrieken et al., 2015; Vescio et al., 2008), the schools have shown awareness of the need for building collaborative working cultures, and encouraging teacher learning through a variety of initiatives to work and learn together. However, the monitoring of the impact of TEIP actions was often reduced to measuring three TEIP targets: student grades, levels of indiscipline, and school dropout. Despite the clear commitment in TEIP school policies to support teachers' professional development in collaborative environments, it is ambiguous whether these initiatives can have a strong impact on actual teaching practices when constrained by such narrowly defined indicators of assessment. Learning together activities mostly involved teachers, school leadership, support staff, and external experts, but students, families, and communities seemed to be completely missing from these initiatives. As previously pointed out (Abrantes et al., 2013; Ferraz et al., 2014), TEIP actions for strengthening school, family, and community relationships remain underdeveloped. Our analysis found a variety of activities targeting the involvement of parents and communities in school life, but they did not seem to be opportunities for teacher learning, rather the other way around, the actions promoted 'parents learning' by learning from support staff and school. Therefore, it seemed to be assumed in TEIP-related documents that students, families and communities are not legitimate sources of teacher learning, in other words, are not regarded as equal partners in the teaching-learning process. Despite the fact that engaging with communities and student voice (Coffey, 2010; Lees, 2016; Messiou et al., 2016; Schultz, 2003; Simon et al., 2018) is crucial in transformation for diversity and equity, TEIP actions seemed to reinforce an image of school and teachers that might hinder the development of inclusive schools.
Diversity appeared in the TEIP documents when describing the schools' contexts, students, and families, mostly pointing to the dimensions of SEN, ethnicity, nationality, language, socio-economic status, and educational success as measured by grades and grade repetition. In most cases, diversity was simultaneously presented as an opportunity in the school values sections, and as a challenge or problem, in the schools' contextualisation and specific TEIP actions. However, the lack of diversity focus was evident in the initiatives for teachers' professional development.
There was a clear awareness of the need to differentiate teaching, and for teachers to learn about differentiation; however, the rationale for this was not diversity but the improvement of students' grades, especially in Portuguese and Maths. Reflection, another strong component of teacher learning in TEIP, was consistently identified in the documents as a crucial process for improving practice. Teachers were encouraged to reflect mainly on the students' results and on teaching practice in general, and to modify practices, leading to improved school results. Similarly to teachers' 'learning together' activities, the aims and contents of these reflections either remained on a global level, without an emphasis on diversity or focused on reflection about grades, indiscipline and school dropout. Consistently with what was presented by Canário (2004), the teacher reflections may lead to explanations of 'problems' as being within students and their families, which can create further obstacles in developing a critical reflection on how TEIP practices contribute to or hinder tackling social inequalities. Without an understanding of diversity and social justice practices, these initiatives might eventually lead to the opposite effect, such as labelling and tracking students in fixed ability groups, contributing to certain groups of students to remaining in low-achieving paths of academic and professional life. Critical reflection on the self, one's beliefs, and assumptions seemed to be completely absent from the reflection processes. Thus, if teachers hold low expectations and negative views about their students (Ainscow, 2005), transformation is unlikely to happen. Therefore, the teachers' ability to act and think critically is crucial in counteracting such ambiguities (Dyson et al., 2003), for example by understanding and modifying group compositions and arrangements from an inclusive point of view, keeping high expectations for all learners and examining ones' biases.
Regarding professional development courses, largely promoted topics of teacher learning indirectly connected to diversity were 'differentiated pedagogy' and 'curricular differentiation' , but these seemed to remain on a superficial or technical level, which does not necessarily engage with how diversity is targeted through these actions. In contrast, the more specific courses, focused on 'special education' , 'inclusion' , 'integration' , 'inter/multiculturality' , and 'PNML' , will depend greatly on how these issues will be approached and by whom, risking complying with superficial approaches that do not engage teachers' in critical thinking and questioning the status quo of schooling. Additionally, some of the courses seemed to apply confusing principles related to ethnicity, language status, integration, and inclusion. Furthermore, a variety of crucial topics were seemingly missing from these courses, including (among others) equity and social justice, multilingual pedagogies, cultural responsiveness, discrimination, and racism. Some dimensions of diversity remained overlooked, such as gender, sexuality, and religion. Approaches that view student diversity through inclusive, intersectional lenses also seemed to be absent. However, the documents only provided the titles of the courses, and this analysis cannot offer a conclusion about their actual contents.
Ultimately, the success of the TEIP actions being assessed through their impact on students' academic achievement or indiscipline creates a strong barrier to developing inclusive and equitable schools and transforming education in TEIP (Abrantes et al., 2011, 2013; Dyson et al., 2003). Our analyses found that these standards of measurement, in fact, disregard both diversity and teachers' professional learning for diversity. Consequently, despite the fact that professional development is essential to improve student learning (Hattie, 2003; Vermunt, 2014), teacher learning within TEIP documents seems to remain instrumental to raising school success, rather than being at the core of transforming education and schools into an asset for a more equitable and socially just society.
These results call for TEIP policies to support teacher learning for diversity, in other words, continuing professional development that fosters critical thinking and engages with communities' and students' voices in order to respond to diversity aiming to create equitable practices and tackling exclusion. These findings might serve as starting points to renew TEIP, as well as other ABIs in Europe.
Biographical note
Nikolett Szelei is an early stage researcher in teacher education at the Institute of Education, University of Lisbon. Her main areas of research include cultural and linguistic diversity in schools, focusing on critical multiculturalism, social justice and students' voices; and teachers' context-based experiences and professional development on the same fields. Other areas of interest include preschool and primary school education, teacher collaboration, mentoring, and music education.
Ines Alves, PhD, is a Lecturer in Inclusive Education at the University of Glasgow. Her research interests are inclusive education, social justice and equity, human rights, and disability. She is interested in the conceptualisation of difference, and in the schools' responses to pupil diversity, namely through the use of inclusive pedagogy and Universal Design for Learning.
|
2018-12-11T04:13:59.137Z
|
2018-09-28T00:00:00.000
|
{
"year": 2018,
"sha1": "7f1570ebae1f0fe33de8e62dd52c6ed55cc3ecb0",
"oa_license": "CCBY",
"oa_url": "https://ojs.cepsj.si/index.php/cepsj/article/download/513/298",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "026fc9b8304c94d4e0a33f4cb9fcca30fb9fe3d0",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
51801833
|
pes2o/s2orc
|
v3-fos-license
|
Atomic force microscopy study and qualitative analysis of martensite relief in zirconia
A recent report [S. Deville et al., J. Am. Ceram. Soc., 86(12), 2225 (2003)] has shown the new possibilities offered by atomic force microscopy (AFM) to investigate martensitic-transformation-induced relief in zirconia. In this paper, we studied qualitatively by AFM the surface relief resulting from the martensitic tetragonal to monoclinic phase transformation in yttria- and ceria-doped zirconia. AFM appears as a very powerful tool to investigate martensite relief with great precision. The phenomenological theory of martensitic crystallography could be successfully applied to explain all the observed features. The formation conditions of the martensite are discussed, as well as ways of accommodating locally the transformation strain, i.e. self-accommodating variant pairs and microcracking. Variant growth sequences are observed. These observations bring new insights into the transformation initiation and propagation sequences.
I. INTRODUCTION
The martensite transformation model, developed by Bain 1 , has now been the object of almost a century of investigations. Its relevance to different types of materials, ranging from metals to ceramics, has drawn a lot of attention and studies. Although the characteristics and macroscopic features of the transformation are now well predicted and understood, the number of quantitative reports of surface relief changes resulting from martensitic phase transformation is still limited, which is not surprising considering the scale at which the transformation occurs. Martensite relief has been investigated mainly by optical methods and scanning electron microscopy, though both methods provide a limited spatial resolution, and 3D quantitative information is not accessible. Quite fortunately, the recent development of scanning tunneling microscopy (STM) and atomic force microscopy (AFM) 2 provides the scientific community with powerful tools to investigate phenomena characterized by relief variations at a nanometer scale. Considerable progress has been made in the last few years, and a few reports may be found on steel-based materials 3,4,5 .
The absence of specific sample preparation and the possibility of observing bulk non-conductive materials make it very attractive to study martensitic transformation in ceramics in particular.
The martensitic transformation of zirconia corresponds to the tetragonal to monoclinic phase transformation 6,7 . Zirconia, when doped with yttria (Y 2 O 3 ) or ceria (CeO 2 ), is indeed retained in its metastable tetragonal structure after sintering. Upon the action of mechanical stresses or hydrothermal solicitations 8-10 (i.e. water vapor at 140°C), zirconia might transform to its stable monoclinic structure. This transformation, at the origin of the transformation toughening effect 11,12 , has been the object of extensive studies over the last twenty years, and its martensitic nature is now widely recognized 13 . Evidence of martensitic features was provided mainly by transmission electron microscopy and optical interferometry 14 . However, very few quantitative reports [15][16][17] of the martensitic features can be found in the literature, due to experimental difficulties and the more limited interest attached to these materials so far. Recent progress 18 in imaging resolution has shown that AFM can provide very precise measurements of martensitic surface relief on zirconia samples. In this paper, AFM has been used to characterize the surface relief resulting from martensitic transformation in ceria- and yttria-doped zirconia, at a scale that was never reached before. These qualitative observations bring new insights into the transformation initiation and propagation mechanisms.
II. EXPERIMENTAL METHODS
Ceria stabilized zirconia (Ce-TZP) materials were processed by classical powder mixing processing route, using Zirconia Sales Ltd powders, with uniaxial pressing and sintering at 1550°C for two hours. Yttria stabilized zirconia (Y-TZP) samples were processed using (3 mol.% Y 2 O 3 )-TZP powders (Tosoh, Japan), and also uniaxial pressing and sintering for two hours at 1500°C.
Residual porosity was negligible. Samples were polished with standard diamond-based products. Some samples were thermally etched for 12 min at 1350°C, in order to form grain boundary thermal grooves. The effect of a slight thermal etching on the ageing behaviour has been investigated, and it was shown that it modified neither the transformation mechanism nor its kinetics. Thermal etching was performed to study the location of the transformation with respect to grain boundaries.
Experiments were performed using both thermally etched and unetched samples.
AFM experiments were carried out with a D3100 nanoscope from Digital Instruments Inc., using oxide-sharpened silicon nitride probes in contact mode, with an average scanning speed of 5 µm.s -1 . Since the t-m phase transformation is accompanied by a large strain (4 % vol. and 16 % shear), the surface relief is modified by the formation of monoclinic phase. The vertical resolution of AFM (down to a few tenths of nanometers) allows the transformation to be followed very precisely.
Two types of images have been obtained from AFM experiments. The first one is the height image (see Fig. 1, 2 or 10), where the height of every point of the scanned surface is measured. This allows 3D imaging of the relief, making image analysis and interpretation easy. The second type is the so-called derivated (derivative) image (see Fig. 6, 11 or 12), where the contrast originates from the rate of relief variation, i.e. all surfaces with the same orientation relative to the probe scanning path will appear with the same contrast. This type of image is very convenient for discerning planes with a constant angle, such as those forming the sides of a self-accommodating variant pair of martensite.
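As an illustration of how a derivative image relates to a height image, the sketch below computes the slope along the probe's fast-scan direction from a height map with NumPy. This is a generic post-processing example, not the instrument's own processing; the array and pixel size are hypothetical.

```python
import numpy as np

def derivative_image(height, pixel_size_nm, scan_axis=1):
    """Rate of height variation along the fast-scan axis (nm per nm).

    Surfaces with the same slope relative to the scan direction get the
    same value, so planes of constant inclination share one contrast level.
    """
    return np.gradient(height, pixel_size_nm, axis=scan_axis)

# Hypothetical 512 x 512 height map (nm) with 2 nm pixels
height = np.random.default_rng(0).normal(scale=0.5, size=(512, 512)).cumsum(axis=1)
slope = derivative_image(height, pixel_size_nm=2.0)
print(slope.shape, float(slope.mean()))
```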
Both types of images are presented here. The vertical and lateral scales of the AFM images are always different, so as to exaggerate the relief and show it more clearly.
Ageing treatments were conducted in autoclave at 140°C, in water vapor atmosphere, with a 2 bar pressure, in order to induce phase transformation at the surface of the samples with time.
A. Martensite fundamental features
According to the phenomenological theory of martensitic crystallography (PTMC) 19 , single martensite plates might appear, leading to an N-like shape of surface relief. If two plates grow back to back or close enough, their habit planes might join, and the overall surface adopts a triangular shape. It was thought 13 that the formation of self-accommodating martensite variant pairs (SAMVP) would occur only in cases where the transforming region was isolated and surrounded by untransformable material, e.g. zirconia grains in an alumina matrix, or tetragonal precipitates in MgO-partially stabilized zirconia. However, strain considerations must also be taken into account in the formation of the SAMVP. When two variants are growing back to back, their shape strain directions are opposite.
Considering the very large shear strain (16%) and volume increase (4%) accompanying the t-m phase transformation, very large stresses appear in the surrounding zones of transformed material.
These stresses might concentrate and build up to eventually stop the transformation, or they might trigger the transformation of another neighboring system, provided certain crystallographic relationships are respected. It is nevertheless worth noticing that the formation of SAMVP results in a very large reduction of the long-range overall shear strain, since the shears in the two variants of a pair are equal and opposite. This configuration is therefore very favorable from an energetic point of view.
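A schematic way to see why a back-to-back pair is favorable, as an idealized first-order illustration rather than a result from the observations: if the two variants carry shape strains with the same dilatational part but opposite shears,

\varepsilon^{(1)} = \varepsilon_{\mathrm{dil}} + \gamma\,, \qquad
\varepsilon^{(2)} = \varepsilon_{\mathrm{dil}} - \gamma\,, \qquad
\bar{\varepsilon}_{\mathrm{pair}} = \tfrac{1}{2}\bigl(\varepsilon^{(1)} + \varepsilon^{(2)}\bigr) = \varepsilon_{\mathrm{dil}}\,,

so the pair-averaged strain retains the roughly 4% volume change while the roughly 16% shear cancels at long range.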
In the case of zirconia, SAMVP are present all over the surface (Fig. 1). The orientation of the pairs will be discussed later on, but it is already worth noticing that this mechanism seems to be the most favorable way to accommodate the stresses induced by the transformation. The different ways of accommodating stresses are discussed in the next section.
B. Martensite formation
It is worth noticing that the first transformed zones all present the same relief after transformation, suggesting that their orientation relationships are very similar. The junction planes of these variants are all perpendicular to the surface (see Fig. 3, 4 or 5), so that the overall long-range lateral stress is almost totally suppressed. Height variations are not restricted by the surface, so that these systems are the easiest to transform. When the transformation propagates to the surrounding zones of the surface, different crystallographic systems may be activated and transform, owing to the higher stresses in the zones surrounding the transformed regions. The relief change is consequently modified. Figure 6 provides an example of a very different orientation relationship to the surface, with two possible explanations. The first could be related to a junction plane almost parallel to the surface, leading to a rippled surface with much smaller height variations.
However, higher residual stresses are expected in the surrounding areas in this case. The other possibility is the accommodation of the deformation strain by slip rather than twinning. It is not possible to conclude on this particular point without further local crystallographic information.
The spatial distribution of the SAMVP at the surface is more complex. Several situations might be observed indeed. Either the SAMVP might run through the entire grain, as shown in Fig. 3, or they might also stop at the middle of the grain, and a system with a different orientation is activated, as shown in Fig. 4. Some more complex structures between these two situations might be found.
As far as the location of the variants is concerned, AFM allows the observation of very interesting features. The transformation was never initiated away from the grain boundaries, i.e. in the middle of a grain, as this would be energetically too unfavorable. SAMVP almost always appear first at grain triple junctions. It can be seen (Fig. 7) that the transformation was indeed initiated at the triple junction before propagating to the rest of the grain. This might be interpreted by taking residual stress effects into account. It was shown 21 that residual stresses resulting from material processing concentrate at grain triple junctions. As compared to other regions, these sites will therefore act as preferential nucleation sites.
The top shape of SAMVP is of prime interest. Differences are observed between Ce-TZP and Y-TZP. In the case of Y-TZP, the junction of the variant parts of a pair is always very sharp (Fig. 3 and 4). In the case of Ce-TZP, some large flat untransformed zones might be observed at the junction (Fig. 2, 5 and 9). Since the surface of these zones seems to be unmodified, it is reasonable to suggest that they are indeed not transformed, inasmuch as this effect may be explained by the PTMC. In fact, it was shown that the formation of SAMVP is a sequential process. The variants did not form all at once, and even the formation of a single variant is a sequential process. Even if the two variants of the pair grow back to back, with a relief symmetric with respect to the surface, a remaining part of tetragonal phase of triangular shape is left in between, as schematically shown in Fig. 8. When the variants are growing, stresses might add up in these untransformed zones (depending on the crystallographic relationships) until everything is transformed (Fig. 3 and 4). However, it is also possible that these stresses, if present, become so high that the transformation cannot proceed anymore [22][23] . The observed differences between Ce-TZP and Y-TZP could be explained by the differences in grain size and in crystallographic parameters.
The sequential growth of a SAMVP is illustrated in Fig. 10. The same zone of the grain was observed at two different stages of the transformation, and it can be seen that the variant pair did indeed grow in height (by about 10 nm) and in length. The two triangles indicate the end of the junction plane, and the distance between them shows an increase of about 50 nm in length. This is further clear evidence that even a single pair is formed via a sequential progression.
Once some variants are formed, lots of stresses are accumulated, and the system will try to reduce its overall energy. Several strategies are possible for this. The first one is to trigger the transformation of a neighboring system, as previously mentioned. This is indeed the main mechanism observed for stress relaxation and accommodation in zirconia. However, the two systems must satisfy certain crystallographic relationships for the transformation to proceed. In the more favorable case, some coherency is found between two adjacent grains and SAMVP running from one grain to the other one might be observed, as shown in Fig. 11. Though the grain boundary thermal groove is disturbing the surface homogeneity of the pairs, the relationship between the SAMVP of the two adjacent grains is obvious. This is however a very rare occurrence in these materials, and very few transgranular martensite laths might be observed.
In a more general manner, if no specific crystallographic correspondences are found, the grain will have to accommodate all the strain when the transformation is propagating, by modifying the spatial arrangement of the SAMVP; this is particularly obvious (Fig. 5 and 12) in Ce-TZP. A system of large SAMVP is usually formed in the grain, occupying almost its entire surface. To transform the remaining parts of the grain and accommodate the strain at the same time, some smaller systems of SAMVP could be formed around, so that the resulting stresses and strains are much lower than those of larger SAMVP. A lot of these small pairs might be observed along the grain boundaries.
Finally, if no correspondence is found between two grains or two parts of a single grain, the combination of the very limited plasticity of zirconia and the appearance of very large shear strains and stresses might lead to microcracking at the ends of SAMVP. This is illustrated in Fig. 13. Thermal etching was not performed on this sample, so there is no risk of the observed microcrack being mistaken for a grain boundary thermal groove. The S-shaped microcrack runs across the entire micrograph. Its shape also eliminates the possibility of this feature being a residual polishing scratch. In all the phenomenological models developed 24,25 to explain the t-m phase transformation of zirconia, the formation of microcracks and macrocracks as a consequence of the transformation plays a major role in the propagation of the transformation.
However, these cracks had never been clearly observed in the zones surrounding transformed regions. The observations reported here therefore provide strong evidence supporting these models.
C. Relationships with the system crystallography
One of the great improvements of AFM over conventional observation methods is that it can provide 3D measurements at the nanometer scale. The lateral resolution (as low as 0.1 nm) and vertical resolution (0.01 nm) provide very reliable quantitative measurements of surface relief characteristics. For example, a precision better than 0.2° might be reached when measuring angles between planes, provided the image was acquired under good conditions, i.e. principally with a probe whose radius of curvature 26 is as low as possible, typically 10 nm for the best probes used here. 3D information is nevertheless not necessary to measure angle relationships between SAMVP junction planes. In all of the observations, junction planes were always found to be either parallel or perpendicular. With regard to the original crystallographic structure (tetragonal), this is in excellent agreement with the theory, since two planes of the tetragonal cell are crystallographically equivalent, so that they might equally transform.
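To illustrate the kind of angle measurement mentioned above, the sketch below computes the in-plane angle between two junction-plane traces from endpoint coordinates picked on an image; the coordinates are hypothetical and the snippet is not part of the original analysis.

```python
import numpy as np

def trace_angle_deg(p1, p2, q1, q2):
    """Angle (degrees, 0-90) between two line traces given by endpoint pairs."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical endpoints (nm) of two SAMVP junction-plane traces
print(round(trace_angle_deg((0, 0), (850, 12), (120, 40), (135, 900)), 1))  # ~88.2 deg
```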
IV. CONCLUSIONS
AFM has been used here for the first time to investigate precisely and qualitatively the surface characteristics of the martensitic transformation in zirconia. It was shown that all the features observed here could be explained by the PTMC. The formation of SAMVP was observed. The different ways of accommodating the strain locally, i.e. SAMVP formation, formation of systems of small SAMVP, and microcracking, were observed and discussed. As far as SAMVP formation conditions are concerned, it was shown that grain triple junctions appear as preferential nucleation sites.
Differences in martensitic surface relief between Ce-TZP and Y-TZP could be explained by differences in grain size.
With its unique lateral and vertical resolution, the possibility of observing bulk samples (as compared to thin foils used for transmission electron microscopy) and ease of image interpretation due to the absence of a specific interaction between the probe and the surface of zirconia, AFM appears as a unique and extremely powerful tool to investigate martensitic transformation.
However, it is worth noticing that no information on the local crystallography (i.e. the crystallographic orientation of the surface) can be obtained by AFM; it must therefore be combined with other techniques. The lack of quantitative reports should diminish quite rapidly in the next few years as further AFM experiments are carried out.
|
2018-07-20T00:06:00.379Z
|
2005-05-01T00:00:00.000
|
{
"year": 2017,
"sha1": "e54717f3b501dca6b0f03ebd7fdad1e02f6500c1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1710.04443",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a3b0259b14ff81b5dbf717c315e07ca5c4157770",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
}
|
15199333
|
pes2o/s2orc
|
v3-fos-license
|
The one-loop renormalization of the gauge sector in the \theta-expanded noncommutative standard model
In this paper we construct a version of the standard model gauge sector on noncommutative space-time which is one-loop renormalizable to first order in the expansion in the noncommutativity parameter $\theta$.
Introduction
The interest in formulating a consistent quantum field theory on noncommutative space comes, besides string theory, also from mathematics [1] and from phenomenology. There are two main approaches to defining gauge theories on the canonical noncommutative space. One possibility, extensively analyzed in the literature [2,3], is to replace the ordinary product in the Lagrangian by the Moyal-Weyl ⋆-product; it is well defined owing to associativity and the trace property of the ⋆-product. Using this prescription, however, only U(N) gauge theories can be consistently defined and the group representations are restricted to the fundamental and the adjoint. This implies in particular the quantization of the electric charge, which takes values in {±1, 0}. In perturbative quantization, the interaction vertices acquire additional phase factors in comparison with the commutative theory, and this leads to the well-known UV/IR mixing.
A slightly different and nonequivalent representation is the so-called θ-expanded approach. A consequence of the requirement that the gauge algebra closes on noncommutative fields is that the fields are enveloping algebra-valued. Using the Seiberg-Witten map, which is also an expansion in the noncommutativity parameter θ, noncommutative fields are expressed in terms of their commutative counterparts [4,5]. The major advantage of this approach is that models with any gauge group and any particle content can be constructed.
There are a number of versions of the noncommutative standard model in the θ-expanded approach [6,7,8,9]. The argument of renormalizability was previously not included in the construction because it was believed that field theories on noncommutative Minkowski space were not renormalizable in general [10,11]. However, a recent result on the one-loop renormalizability of the θ-expanded noncommutative SU(N) gauge theory opens different perspectives [13]. Of course, renormalizability in linear order does not mean renormalizability of the complete theory, but one can expect that the additional Ward identities, which correspond to the full noncommutative symmetry and relate different orders, might help. In this paper we will follow paper [12]. We show that it is possible to construct a version of the NCSM gauge sector which is one-loop renormalizable to first order in θ.
General considerations
The noncommutative space which we consider is the flat Minkowski space, generated by four hermitian coordinates x µ which satisfy the commutation rule The algebra of the functions φ( x), χ( x) on this space can be represented by the algebra of the functions φ(x), χ(x) on the commutative R 4 with the Moyal-Weyl multiplication: It is possible to represent the action of an arbitrary Lie group G (with the generators denoted by T a ) on noncommutative space. In analogy to the ordinary case, one introduces the gauge parameter Λ(x) and the vector potential V µ (x). The main difference is that the noncommutative Λ and V µ cannot take values in the Lie algebra G of the group G: they are enveloping algebra-valued. The noncommutative gauge field strength F µν is defined in the usual way There is, however, a relation between the noncommutative gauge symmetry and the commutative one: it is given by the Seiberg-Witten (SW) mapping [4]. The expansions of the NC vector potential and of the field strength, up to first order in θ, read Taking the action of the noncommutative gauge theory and expanding the fields via SW map, * product we obtain the expression The discussion given above was a general one, without any specification of the gauge group G or of its representations. In (2..6) we have a factor Tr {T a , T b }T c ∼ d abc . One could perhaps assume that, as the field strength transforms according to the adjoint representation, the symmetric coefficients d abc are given in that representation. However, when the matter fields are included, other representations of G are present too, and therefore the expression (2..6) is ambiguous.
To start the discussion of the gauge field action-dependence on the gauge group and/or on its representation, we use the most general form of the action, [7]: The sum is, in principle, taken over all irreducible representations R of G with arbitrary weights C R . Of course, for the gauge group G we take The previous action may be generalized by adding x 2 −depending term The above action has very interesting renormalization property and phenomenological consequences. The constant a will be fixed by renormalizability property. For a = 1 we obtain (2..7), so-called minimal model. To relate the action (2..6) to the usual action of the commutative standard model, we make the decompositions , R(T a S ) denote the representations of the group generators Y , T i L and T a S of U(1) Y , SU(2) L and SU(3) C , respectively; the group indices run as i, j = 1, . . . 3 and a, b = 1, . . . 8. According to [7], we take that C R are nonzero only for the particle representations which are present in the standard model. Then from (2..7) we obtain the expression for the θ-independent part of the Lagrangian where d(R) denotes the dimension of the representation R. The noncommutative correction, that is the θ-linear part of the Lagrangian, reads .9) where the c.p. in (2..9) denotes the addition of the terms obtained by a cyclic permutation of fields without changing the positions of indices. The couplings in (2..9) are defined as follows: Let us discuss the dependence of κ 1 , . . . , κ 5 on the representations of matter fields. For the first generation of the standard model there are six such representations ; they produce six independent constants C R 1 . These constants are constrained by the three relations which defined g ′ , g, g S . One can immediately verify that κ ijk 4 = 0. We shall in addition take that κ abc 5 = 0. The argument for this assumption is related to the invariance of the colour sector of the SM under charge conjugation. Although apparently one has only the fundamental representation 3 of SU(3) C , there are in fact both 3 and3 representations with the same weights, C 3 = C3. Since the symmetric coefficients for the 3 and3 representations satisfy d abc (2..10) We are left only with three non vanishing couplings, κ 1 , κ 2 and κ 3 , depending on six constants C 1 , . . . , C 6 . Our classical noncommutative action reads [12] S cl = S SM + S θ , The first term in (2..12) is one-loop renormalizable to linear order in θ [13] since the one-loop correction is of 1 We assume that CR > 0; therefore the six CR's were denoted by 1 g 2 i , i = 1, ..., 6, in [6,8].
We need to investigate only the renormalizability of the remaining parts of the action (2.12).
One-loop renormalizability
We compute the divergences in the one-loop effective action using the background-field method. Here we give only the main results; the details can be found in [12]. For the action (2.6), we start from the classical Lagrangian. After a long and straightforward calculation [12] we obtain the divergent part of the one-loop effective action, (3.14). The divergent contribution due to U(1)_Y alone vanishes, both the commutative and the noncommutative one. It is clear from (3.14) that the divergences in the noncommutative sector vanish for the choice a = 3. Therefore the noncommutative gauge-sector interaction is not only renormalizable but finite. The renormalization is performed by adding counterterms to the Lagrangian, with the bare quantities expressed in terms of the renormalized ones. Finally, an important point is that the noncommutativity parameter θ need not be renormalized.
Discussion and conclusion
We have constructed a version of the standard model on the noncommutative Minkowski space which is one-loop renormalizable and finite in the gauge sector and in first order in the θ parameter. The renormalizability in the model was obtained by choosing six particle representations of the matter fields for the first generation of the SM, and by fixing the parameter a = 3.
The one-loop renormalizability of the NCSM gauge sector is certainly a very encouraging result from both theoretical and experimental perspectives. So far fermions have not been successfully included: the results on the renormalizability of noncommutative gauge theories with Dirac fermions are negative [10,11] as a 4ψ-divergence always appears. In the case of SU(N) or SU(3)⊗SU(2)⊗U(1) the unexpanded gauge theory cannot be consistently defined. Furthermore, our results show that the requirement of renormalizability fixes the parameter a to a = 1 or a = 3 [16]. We hope that a similar procedure could be applicable to the fermionic sector of the theory.
|
2007-11-18T14:28:25.000Z
|
2006-09-11T00:00:00.000
|
{
"year": 2007,
"sha1": "e9af504d55645a2d1554bd12cc12d06c0ee46104",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0609073",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e9af504d55645a2d1554bd12cc12d06c0ee46104",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
44263259
|
pes2o/s2orc
|
v3-fos-license
|
Response Time Analysis of Messages in Controller Area Network: A Review
This paper reviews the research work done on the response time analysis of messages in controller area network (CAN) from the time the CAN specification was submitted for standardization (1990) and became a standard (1993) up to the present (2012). Such research includes worst-case response time analysis, which is deterministic, and probabilistic response time analysis, which is stochastic. A detailed view of both types of analyses is presented here. In addition to these analyses, there has been research on statistical analysis of controller area network message response times.
Introduction
The arbitration mechanism employed by CAN means that messages are sent as if all the nodes on the network shared a single global priority-based queue. In effect, messages are sent on the bus according to fixed priority non-preemptive scheduling [1]. In the early 1990s, a common misconception was that although the protocol was very good at transmitting the highest priority messages with low latency, it was not possible to guarantee that the less urgent signals carried in lower priority messages would meet their deadlines [1]. In 1994, Tindell et al. [2][3][4][5] showed how research into fixed priority preemptive scheduling for single processor systems could be applied to the scheduling of messages on CAN. This analysis provided a method of calculating the worst-case response times of all CAN messages. Using this analysis it became possible to engineer CAN-based systems for timing correctness, providing guarantees that all messages and the signals that they carry would meet their deadlines. In 2007, Davis et al. [1] refuted this analysis and showed that multiple instances of CAN messages within a busy period (a period that begins with a critical instant) need to be considered in order to guarantee that the messages and the signals that they carry meet their deadlines, since CAN effectively implements fixed priority non-preemptive scheduling of messages.
Real-time researchers have extended schedulability analysis to a mature technique which for non-trivial systems can be used to determine whether a set of tasks executing on a single CPU or in a distributed system will meet their deadlines or not [1,2,4,5].e essence of this analysis is to investigate if deadlines are met in a worst-case scenario.Whether this worst case actually will occur during execution, or if it is likely to occur, is not normally considered [6].
In contrast with schedulability analysis, reliability modelling involves the study of fault models, the characterization of distribution functions of faults, and the development of methods and tools for composing these distributions and models in estimating an overall reliability �gure for the system [6].
is separation of deterministic (0/1) schedulability analysis and stochastic reliability analysis is a natural simpli�cation of the total analysis.is is because the deterministic schedulability analysis is quite pessimistic, since it assumes that a missed deadline in the worst case is equivalent to always missing the deadline, whereas the stochastic analysis extends the knowledge of the system by computing how oen a deadline is violated [7].
ere are many other sources of pessimism in the analysis, including considering worst-case execution times and worst-case phasings of executions, as well as the usage of pessimistic fault models.In a related work [8], a model for calculating worst-case latencies of controller area network (CAN) frames (messages) under error assumptions is proposed.is model is pessimistic, in the sense that there are systems that the analysis determines to be unschedulable, even though deadlines will be missed only in extremely rare situations with pathological combinations of errors.
In [9,10] the level of pessimism is reduced by introducing a better fault model, and in [9] variable phasings between message queuing are also considered, in order to make the model more realistic.In [11] the pessimism introduced by the worst-case analysis of CAN message response times is reduced by using bit-stuffing distributions in the place of the traditional worst-case frame sizes which are referred to in [6,7].
e organization of the paper is as follows: in Section 2, the review of the research on Worst Case Response Time Analysis of CAN messages is presented, and in Section 3, the review of the research on Probabilistic Response Time Analysis of CAN messages is presented.In both sections, the method of bit stuffing is reviewed.
Worst-Case Response Time Analysis of CAN Messages
In automotive applications, the messages sent on CAN carry data, referred to as signals, between electronic control units (ECUs). Many of these signals have real-time constraints associated with them. For example, an ECU reads the position of a switch attached to the brake pedal. This ECU must send a signal, carrying the information that the brakes have been applied, over the CAN network so that the ECU responsible for the rear light clusters can recognise the change in the value of the signal and switch the brake lights on. All this must happen within a few tens of milliseconds of the brake pedal being pressed. Engine, transmission, and stability control systems typically place even tighter time constraints on signals, which may need to be sent as frequently as once every 5 milliseconds to meet their time constraints [1]. Hence it is essential that CAN messages meet their deadlines.
Related Work.
CAN is a serial data bus that supports priority-based message arbitration and non-pre-emptive message transmission.e schedulability analysis for CAN builds on previous research into �xed priority scheduling of tasks on single processor systems [12].
In 1990, Lehoczky [13] introduced the concept of a busy period and showed that if tasks have deadlines greater than their periods (referred to as arbitrary deadlines) then it is necessary to examine the response times of all invocations of a task falling within a busy period in order to determine the worst-case response time.In 1991, Harbour et al. [14] showed that if deadlines are less than or equal to periods, but priorities vary during execution, then again multiple invocations must be inspected to determine the worst-case response time.We note that non-pre-emptive scheduling is effectively a special case of pre-emptive scheduling with varying execution priority-as soon as a task starts to execute, its priority is raised to the highest level.In 1994, Tindell et al. [12] improved upon the work of Lehoczky [13], providing a formulation for arbitrary deadline analysis based on a recurrence relation.
Building upon these earlier results, comprehensive schedulability analysis of non-pre-emptive �xed priority scheduling for single processor systems was given by George et al. in 1996 [15].In 2006, Bril [16] refuted the analysis of �xed priority systems with deferred pre-emption given by Burns in [17], showing that this analysis may result in computed worst-case response times that are optimistic.e schedulability analysis for CAN given by Tindell et al. in [2][3][4][5] builds upon [17] and suffers from essentially the same �aw.A similar issue with work on pre-emption thresholds [18] was �rst identi�ed and corrected by Regehr [19] in 2002.A technical report [20] and a workshop paper [21] highlight the problem for CAN but do not provide a speci�c in-depth solution.
e revised schedulability analysis presented in [1] aims to provide an evolutionary improvement upon the analysis of CAN given by Tindell et al. in [2][3][4][5].To do so, it draws upon the analysis of Tindell et al. [12] for �xed priority pre-emptive scheduling of systems with arbitrary deadlines, and the analysis of George et al. [15] for �xed priority non-pre-emptive systems, and also presents a sufficient but not necessary schedulability tests, to overcome the complexities involved in calculating the response times of multiple instances of CAN messages within the busy period.
Bit Stuffing in CAN Messages
CAN was designed as a robust and reliable form of communication for short messages. Each data frame carries between 0 and 8 bytes of payload data and has a 15-bit Cyclic Redundancy Check (CRC). The CRC is used by receiving nodes to check for errors in the transmitted message. If a node detects an error in the transmitted message, which may be a bit-stuffing error, a CRC error, a form error in the fixed part of the message, or an acknowledgement error, then it transmits an error flag [22]. The error flag consists of 6 bits of the same polarity: "000000" if the node is in the error active state and "111111" if it is error passive. Transmission of an error flag typically causes other nodes to also detect an error, leading to the transmission of further error flags.

Figure 1 illustrates CAN error frames, reproduced from [1]. The length of an error frame is between 17 and 31 bits. Hence each message transmission that is signalled as an error can lead to a maximum of 31 additional bits of error recovery overhead plus retransmission of the message itself [22].

One characteristic of the Non-Return-to-Zero code adopted on the CAN bus is that the signal provides no edges that can be used for resynchronization when a large number of consecutive bits of the same polarity is transmitted. Therefore bit stuffing is used to ensure synchronization of all bus nodes. This means that during the transmission of a message, a maximum of five consecutive bits may have the same polarity. The bit-stuffing area in a CAN bus frame includes the SOF, Arbitration field, Control field, Data field, and CRC field. Since bit stuffing is used, six consecutive bits of the same type (111111 or 000000) are considered an error.

As the bit patterns "000000" and "111111" are used to signal errors, it is essential that these bit patterns are avoided in the variable part of a transmitted message (refer to Figure 3). The CAN protocol therefore requires that a bit of the opposite polarity is inserted by the transmitter whenever 5 bits of the same polarity are transmitted. This process, referred to as bit stuffing, is reversed by the receiver. The worst-case scenario for bit stuffing is shown in Figure 2 [1]. Note that each stuff bit begins a sequence of 5 bits that is itself subject to bit stuffing. [Figure 2: worst-case bit stuffing, before and after stuffing; 34 control bits plus 0-8 bytes of data (34-98 bits) are exposed to the bit-stuffing mechanism.]
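As an illustration of the stuffing rule just described, the following minimal Python sketch inserts a complementary bit after every run of five equal bits and reproduces the worst-case pattern of Figure 2. The function name and the example bit pattern are illustrative and not taken from the original paper.

```python
def stuff_bits(bits):
    """Insert a complementary (stuff) bit after every run of 5 identical bits."""
    out = []
    run_bit, run_len = None, 0
    for b in bits:
        out.append(b)
        if b == run_bit:
            run_len += 1
        else:
            run_bit, run_len = b, 1
        if run_len == 5:
            stuffed = 1 - b              # opposite polarity
            out.append(stuffed)
            run_bit, run_len = stuffed, 1  # the stuff bit starts a new run
    return out

# Worst-case pattern (cf. Figure 2): after an initial run of five, every stuff
# bit itself begins a new run of five: 1111 1 0000 1111 0000 ...
worst = [1] * 5 + ([0] * 4 + [1] * 4) * 6
print(len(worst), len(stuff_bits(worst)))  # roughly one extra bit per 4 payload bits
```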
Stuff bits increase the maximum transmission time of CAN messages. After including stuff bits and the interframe space, the maximum transmission time C_m of a CAN message containing s_m data bytes is given by C_m = (g + 8s_m + 13 + ⌊(g + 8s_m − 1)/4⌋) τ_bit, (1) where g is 34 for standard format (11-bit identifiers) or 54 for extended format (29-bit identifiers), ⌊a/b⌋ is notation for the floor function, which returns the largest integer less than or equal to a/b, and τ_bit is the transmission time for a single bit. The formula given in (1) simplifies to C_m = (55 + 10s_m) τ_bit for 11-bit identifiers and C_m = (80 + 10s_m) τ_bit for 29-bit identifiers.
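A small Python sketch of formula (1) as reconstructed above; the bus speed and function name are assumptions made for illustration only.

```python
from math import floor

TAU_BIT = 1e6 / 500e3   # assumed example: bit time in microseconds on a 500 kbit/s bus

def c_m(data_bytes: int, extended_id: bool = False, tau_bit: float = TAU_BIT) -> float:
    """Worst-case transmission time of a CAN frame with 0..8 payload bytes,
    including worst-case stuff bits and the 3-bit interframe space."""
    g = 54 if extended_id else 34          # bits exposed to bit stuffing
    s = data_bytes
    return (g + 8 * s + 13 + floor((g + 8 * s - 1) / 4)) * tau_bit

# Cross-check against the closed forms quoted in the text.
for s in range(9):
    assert abs(c_m(s) - (55 + 10 * s) * TAU_BIT) < 1e-9
    assert abs(c_m(s, extended_id=True) - (80 + 10 * s) * TAU_BIT) < 1e-9
```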
2.3. Scheduling Model. The system is assumed to comprise a number of nodes (microprocessors) connected via CAN. Each node is assumed to be capable of ensuring that, at any given time when arbitration starts, the highest priority message queued at that node is entered into arbitration [1]. The system is assumed to contain a static set of hard real-time messages, each statically assigned to a node on the network. Each message m has a fixed identifier and hence a unique priority. As the priority uniquely identifies each message, in the remainder of this paper we will overload m to mean either the message or its priority, as appropriate. Each message has a maximum number of data bytes s_m and a maximum transmission time C_m, given by (1).

Each message is assumed to be queued by a software task, process or interrupt handler executing on the host microprocessor. This task is either invoked by, or polls for, the initiating event and takes a bounded amount of time between 0 and J_m to queue the message ready for transmission. J_m is referred to as the queuing jitter of the message and is inherited from the overall response time of the task, including any polling delay.

The event that triggers queuing of the message is assumed to occur with a minimum interarrival time T_m, referred to as the message period. This model supports events that occur strictly periodically with a period of T_m, events that occur sporadically with a minimum separation of T_m, and events that occur only once before the system is reset, in which case T_m is infinite.

Each message has a hard deadline D_m, corresponding to the maximum permitted time from the occurrence of the initiating event to the end of successful transmission of the message, at which time the message data is assumed to be available on the receiving nodes that require it. Tasks on the receiving nodes may place different timing requirements on the data; however, in such cases we assume that D_m is the tightest of such time constraints. The worst-case response time R_m of a message is defined as the longest time from the initiating event occurring to the message being received by the nodes that require it.
A message is said to be schedulable if and only if its worst-case response time is less than or equal to its deadline (R_m ≤ D_m). The system is schedulable if and only if all of the messages in the system are schedulable [1].
Response Time Analysis.
Response time analysis for CAN aims to provide a method of calculating the worst-case response time of each message. These values can then be compared to the message deadlines to determine whether the system is schedulable.
For systems complying with the scheduling model given in Section 2.3, CAN effectively implements fixed priority non-pre-emptive scheduling of messages. Following the analysis in [2][3][4][5], the worst-case response time of a message can be viewed as being made up of three elements: (i) the queuing jitter J_m, corresponding to the longest time between the initiating event and the message being queued, ready for transmission on the bus; (ii) the queuing delay w_m, corresponding to the longest time that the message can remain in the CAN controller slot or device driver queue before commencing successful transmission on the bus; (iii) the transmission time C_m, corresponding to the longest time that the message can take to be transmitted.

The worst-case response time of message m is given by R_m = J_m + w_m + C_m. The queuing delay comprises blocking B_m, due to lower priority messages which may be in the process of being transmitted when message m is queued, and interference due to higher priority messages which may win arbitration and be transmitted in preference to message m.
The maximum amount of blocking occurs when a lower priority message starts transmission immediately before message m is queued. Message m must wait until the bus is idle before it can be entered into arbitration. The maximum blocking time is given by B_m = max_{k∈lp(m)} C_k, where lp(m) is the set of messages with lower priority than m. The concept of a busy period, introduced by Lehoczky [13], is fundamental in analysing worst-case response times. Modifying the definition of a busy period given in [14] to apply to CAN messages, a priority level-m busy period is defined as follows:

(i) It starts at some time t_s when a message of priority m or higher is queued ready for transmission, and there are no messages of priority m or higher waiting to be transmitted that were queued strictly before time t_s.

(ii) It is a contiguous interval of time during which any message of priority lower than m is unable to start transmission and win arbitration.

(iii) It ends at the earliest time t_e when the bus becomes idle, ready for the next round of transmission and arbitration, yet there are no messages of priority m or higher waiting to be transmitted that were queued strictly before time t_e.

The key characteristic of a busy period is that all messages of priority m or higher queued strictly before the end of the busy period are transmitted during the busy period. These messages cannot therefore cause any interference on a subsequent instance of message m queued at or after the end of the busy period.

In mathematical terminology, busy periods can be viewed as right half-open intervals [t_s, t_e), where t_s is the start of the busy period and t_e the end. Thus the end of one busy period may correspond to the start of another, separate busy period. This is in contrast to the simpler definition given in [13], which unifies two adjacent busy periods as we have defined them, and therefore sometimes results in the analysis of more message instances than is strictly necessary. For example, in the extreme case of 100% utilisation, the busy period defined in [13] never ends, and an infinite number of message instances would need to be considered. The worst-case queuing delay for message m occurs for some instance of message m queued within a priority level-m busy period that starts immediately after the longest lower priority message begins transmission. This maximal busy period begins with a so-called critical instant [1], where message m is queued simultaneously with all higher priority messages, and then each of these messages is subsequently queued again after the shortest possible time intervals. In the remainder of this paper, a busy period means this maximum-length busy period.
If more than one instance of message m is transmitted during a priority level-m busy period, then it is necessary to determine the response time of each instance in order to find the overall worst-case response time of the message.

In [2][3][4][5], Tindell gives the following equation for the worst-case queuing delay:

w_m = B_m + Σ_{k∈hp(m)} ⌈(w_m + J_k + τ_bit)/T_k⌉ C_k, (6)

where hp(m) is the set of messages with priorities higher than m and ⌈x⌉ is notation for the ceiling function, which returns the smallest integer greater than or equal to x.

Although w_m appears on both sides of (6), as the right-hand side is a monotonic non-decreasing function of w_m, the equation may be solved using the following recurrence relation:

w_m^{n+1} = B_m + Σ_{k∈hp(m)} ⌈(w_m^n + J_k + τ_bit)/T_k⌉ C_k. (7)

A suitable starting value is w_m^0 = B_m. The relation iterates until either J_m + w_m^{n+1} + C_m > D_m, in which case the message is not schedulable, or w_m^{n+1} = w_m^n, in which case the worst-case response time of the first instance of the message in the busy period is given by J_m + w_m^{n+1} + C_m. The flaw in the previous analysis is that, given the constraint D_m ≤ T_m, it implicitly assumes that if message m is schedulable, then the priority level-m busy period will end at or before T_m. We observe that with fixed priority pre-emptive scheduling this would always be the case, as on completion of transmission of message m no higher priority message could be awaiting transmission. However, with fixed priority non-pre-emptive scheduling, a higher priority message can be awaiting transmission when message m completes transmission, and thus the busy period can extend beyond T_m [1]. The length t_m of the priority level-m busy period is given by the following recurrence relation, starting with a suitable initial value and finishing when t_m^{n+1} = t_m^n:

t_m^{n+1} = B_m + Σ_{k∈hp(m)∪{m}} ⌈(t_m^n + J_k)/T_k⌉ C_k, (8)

where hp(m) ∪ {m} is the set of messages with priority m or higher. As the right-hand side is a monotonic non-decreasing function of t_m^n, the recurrence relation is guaranteed to converge provided that the bus utilisation for messages of priority m and higher, Σ_{k∈hp(m)∪{m}} C_k/T_k, is less than 1. If t_m ≤ T_m − J_m, then the busy period ends at or before the time at which the second instance of message m is queued. This means that only the first instance of the message is transmitted during the busy period. The existing analysis calculates the worst-case queuing time for this instance via (7) and hence provides the correct worst-case response time in this case.

If t_m > T_m − J_m, then the existing analysis may give an optimistic worst-case response time, depending upon whether the first or some subsequent instance of message m in the busy period has the longest response time.

The analysis presented in Appendix A.2 of [15] suggests that t_m is the smallest value that is a solution to (8); however, this is not strictly correct [1]. For the lowest priority message, B_m = 0 and so t_m = 0 is trivially the smallest solution. This problem can be avoided by using an initial value of t_m^0 = C_m [1].
The number of instances Q_m of message m that become ready for transmission before the end of the busy period is given by Q_m = ⌈(t_m + J_m)/T_m⌉. (10) To determine the worst-case response time of message m, it is necessary to calculate the response time of each of the Q_m instances and then take the maximum of these values.

In the following analysis, the index variable q is used to represent an instance of message m. The first instance in the busy period corresponds to q = 0 and the final instance to q = Q_m − 1. The longest time from the start of the busy period to instance q beginning successful transmission is given by the recurrence relation

w_m^{n+1}(q) = B_m + qC_m + Σ_{k∈hp(m)} ⌈(w_m^n(q) + J_k + τ_bit)/T_k⌉ C_k. (11)

The recurrence relation starts with a value of w_m^0(q) = B_m + qC_m and ends when w_m^{n+1}(q) = w_m^n(q), or when J_m + w_m^{n+1}(q) − qT_m + C_m > D_m, in which case the message is unschedulable. For values of q > 0, an efficient starting value is given by w_m^0(q) = w_m(q − 1) + C_m. The event initiating instance q of the message occurs at time qT_m − J_m relative to the start of the busy period, so the response time of instance q is given by R_m(q) = J_m + w_m(q) − qT_m + C_m. (12) The worst-case response time of message m is therefore R_m = max_{q=0,…,Q_m−1} R_m(q). (13)

The analysis presented previously is also applicable when messages have deadlines that are greater than their periods, so-called arbitrary deadlines [1]. However, if such timing characteristics are specified, then the software device drivers or CAN controller hardware may need to be capable of buffering more than one instance of a message. The number of instances of each message that need to be buffered is bounded by Q_m. The analysis presented in [15] effectively uses a floor function (plus one) rather than the ceiling function in (10). This yields a value which is one too large when the length of the busy period plus jitter is an integer multiple of the message period. Although this does not give rise to problems, the more efficient formulation given by (10) is preferred [1].
The analysis given in this section, as per Davis et al. [1], corrects a significant flaw in the previous schedulability analysis for CAN given by Tindell et al. [2][3][4][5]. However, the schedulability test presented here is more complex, potentially requiring the computation of multiple response times.
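The corrected analysis can be summarised in a few dozen lines of code. The sketch below follows the recurrences discussed above under the stated assumptions (static message set, known priority order, no transmission errors); the message set at the end is made up purely for illustration.

```python
from math import ceil
from dataclasses import dataclass

@dataclass
class Msg:
    name: str
    C: float   # worst-case transmission time, e.g. from Eq. (1)
    T: float   # period / minimum inter-arrival time
    J: float   # queuing jitter
    D: float   # deadline

def wcrt(msgs, tau_bit):
    """Worst-case response times under fixed-priority non-pre-emptive scheduling,
    examining every instance in the priority-level-m busy period.
    `msgs` must be ordered highest priority first."""
    results = {}
    for i, m in enumerate(msgs):
        hp, lp = msgs[:i], msgs[i + 1:]
        if sum(k.C / k.T for k in hp + [m]) >= 1.0:
            results[m.name] = float("inf")          # busy period never ends
            continue
        B = max((k.C for k in lp), default=0.0)      # blocking by lower priority
        t = m.C                                      # busy-period length, Eq. (8)
        while True:
            t_new = B + sum(ceil((t + k.J) / k.T) * k.C for k in hp + [m])
            if t_new == t:
                break
            t = t_new
        Q = ceil((t + m.J) / m.T)                    # instances in busy period, Eq. (10)
        R = 0.0
        for q in range(Q):
            w = B + q * m.C                          # queuing delay of instance q, Eq. (11)
            while True:
                w_new = (B + q * m.C +
                         sum(ceil((w + k.J + tau_bit) / k.T) * k.C for k in hp))
                if w_new == w:
                    break
                w = w_new
            R = max(R, m.J + w - q * m.T + m.C)      # Eqs. (12), (13)
        results[m.name] = R
    return results

# Illustrative (made-up) message set, times in ms; tau_bit for a 125 kbit/s bus
demo = [Msg("m1", C=0.5, T=10.0, J=0.1, D=10.0),
        Msg("m2", C=1.0, T=20.0, J=0.1, D=20.0),
        Msg("m3", C=1.0, T=100.0, J=0.2, D=100.0)]
print(wcrt(demo, tau_bit=0.008))
```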
An upper bound on the queuing delay of the second and subsequent instances of message m within the busy period is therefore obtained by replacing the blocking term with the transmission time of the previous instance of the same message. This result suggests a simple but pessimistic schedulability test: an instance of message m can either be subject to blocking B_m due to lower priority messages, or to push-through interference of at most C_m due to the previous instance of the same message, but not both. Hence we can modify (7) by using max(B_m, C_m) in place of B_m. A further simplification is to assume that the blocking factor always takes its maximum possible value C_max, where C_max corresponds to the transmission time of the longest possible CAN message (8 data bytes), irrespective of the characteristics and priorities of the messages in the system. So far we have assumed that no errors occur on the CAN bus. However, as originally shown in [2][3][4][5], the schedulability analysis of CAN may be extended to include an appropriate error model.

In [1] it is assumed that the maximum number of errors present on the bus in some time interval [0, t) is given by a function F(t). No specific detail about this function is assumed, save that it is a monotonic non-decreasing function of t. The schedulability equations are modified to account for the error recovery overhead. The worst-case impact of a single bit error is to cause transmission of an additional 31 bits of error recovery overhead plus retransmission of the affected message. Only errors affecting message m or higher priority messages can delay message m from being successfully transmitted. The maximum additional delay caused by the error recovery mechanism in an interval of length t is therefore given by E_m(t) = (31 τ_bit + max_{k∈hp(m)∪{m}} C_k) F(t), and this term is added to the right-hand sides of the busy-period and queuing-delay recurrences. Again, an appropriate initial value is used to start the iteration. Equation (19) is guaranteed to converge, provided that the utilisation, including the error recovery overhead, is less than 1.

As before, (10) can be used to compute the number of message instances that need to be examined to find the worst-case response time. Equation (20) extends (11) to account for the error recovery overhead. Note that, as errors can impact the transmission of message m itself, the time interval considered in calculating the error recovery overhead includes the transmission time of message m as well as the queuing delay. Equations (20), (12), and (13) can be used together to compute the response time of each message instance and hence find the worst-case response time of each message in the presence of errors at the maximum rate specified by the error model. The sufficient schedulability tests given earlier in this section can be similarly modified via the addition of the term E_m(t) to account for the error recovery overhead [1].
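As a rough illustration of how the error-recovery overhead enters the analysis, the sketch below assumes a simple sporadic fault model with a minimum inter-arrival time between errors; both the model and the parameter values are assumptions for illustration, not taken from [1].

```python
from math import ceil

def error_overhead(t, n_errors, c_max, tau_bit=0.008):
    """Extra delay E(t) from bus errors in an interval of length t: each error costs
    up to 31 bits of error frame plus retransmission of the longest affected frame."""
    return n_errors(t) * (31 * tau_bit + c_max)

def sporadic_errors(t_err):
    """Assumed fault model: at most one error every t_err time units."""
    return lambda t: ceil(t / t_err) if t > 0 else 0

# This term would be added to the right-hand side of the queuing-delay recurrence, e.g.
#   w_new = B + q*C + error_overhead(w + C, sporadic_errors(2.5), c_max=1.0) + interference
print(error_overhead(5.0, sporadic_errors(2.5), c_max=1.0))
```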
Probabilistic Response Time Analysis of CAN Messages
3.1. Probabilistic Bit-Stuffing Distributions. When performing worst-case response-time analysis, the worst-case number of stuff bits is traditionally used. In [7], Nolte et al. introduce a worst-case response time analysis method which uses distributions of stuff bits instead of the worst-case values. This makes the analysis less pessimistic, in the sense that we obtain a distribution of worst-case response times corresponding to all possible combinations of stuff bits of all message frames involved in the response time analysis. Using a distribution rather than a fixed value makes it possible to select a worst-case response time based on a desired probability of violation α; that is, the selected worst-case response time is such that the probability of a response time exceeding it is ≤ α. The main motivation for calculating such probabilistic response times is that they allow us to reason about trade-offs between reliability and timeliness. The number of bits, apart from the data part of the frame, which are exposed to the bit-stuffing mechanism is defined as g, which is in the range {34, 54}. This is because we have either 34 (CAN standard format) or 54 (CAN extended format) bits which are exposed to the bit-stuffing mechanism; 10 bits in the CAN frame are not exposed to the bit-stuffing mechanism (refer to Figure 3). The number of bytes of data in a CAN message frame is defined as b, which is in the range [0, 8].

Recall that a CAN message frame can contain 0 to 8 bytes of data. According to the CAN standard [22], the total number of bits in a CAN frame before bit stuffing is therefore g + 8b + 10, where 10 is the number of bits in the CAN frame not exposed to the bit-stuffing mechanism. Since only g + 8b bits in the CAN frame are subject to bit stuffing, the total number of bits after bit stuffing can be no more than g + 8b + 10 + ⌊(g + 8b − 1)/4⌋. (22) Intuitively, the above formula captures the number of stuffed bits in the worst-case scenario, shown in Figure 2. The expression (22) describes the length of a CAN frame in the worst case. In [6], the number of stuff bits is instead represented as a distribution. By using a distribution of stuff bits instead of the worst-case number of stuff bits, it is possible to obtain a distribution of response times that allows less pessimistic (compared to traditional worst-case) response times to be calculated based on probability.

Firstly, let us define S as the distribution of stuff bits in a CAN message frame. We express S as a set of pairs containing the number of stuff bits together with the corresponding probability of occurrence. Each pair is defined as (j, p(j)), where p(j) is the probability of exactly j stuff bits in the CAN frame. Note that Σ_{j=0}^{∞} p(j) = 1. As shown in [6], we can extract 9 different distributions of stuff bits depending on the number of bytes of data in the CAN message frame. We define S^b as the distribution representing a CAN frame containing b bytes of data. Recall that b is the number of bytes of data (0 to 8) in a message frame.

We define S(α) as the worst-case number of stuff bits n to expect with a probability α, based on the stuff-bit distribution S; that is, Σ_{j=n+1}^{∞} p(j) ≤ α or, to express it another way, the probability of finding more than n stuff bits, based on the stuff-bit distribution S, is ≤ α. Note that the selection of a probability α should be based on the requirements of the application. With a proper value for α, the worst-case mean time to failure should sufficiently exceed what is required. Finally, by assuming (as in [6]) that CAN message frames are independent with respect to the number of stuff bits, we can define the joint distribution corresponding to the combination of several distributions of stuff bits; that is, the number of stuff bits caused by a sequence of messages sent on the bus is described by the multiplicative combination (convolution) of the individual discrete distributions. If the distributions happen to be equal, the joint distribution of equal distributions of stuff bits is defined analogously; that is, the number of data bytes is the same for all messages considered by the expression.

In order to include the bit-stuffing distributions in (12), the transmission time and queuing delay are redefined as functions of α: the transmission time of message m excluding stuff bits is increased by the worst-case number of stuff bits to expect at probability α, taken from the distribution of the total number of stuff bits of all messages involved in the response time analysis for message m. This approach obtains the maximum number of stuffed bits under a given probability α, reducing the pessimism of the worst-case response time and bus-load values.
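The per-frame stuff-bit distributions are empirical and must be extracted from payload statistics as in [6]; the sketch below only illustrates how a joint distribution and its α-quantile could be combined, using made-up per-frame distributions.

```python
def convolve(d1, d2):
    """Multiplicative combination of two discrete stuff-bit distributions,
    each given as {number_of_stuff_bits: probability}."""
    out = {}
    for n1, p1 in d1.items():
        for n2, p2 in d2.items():
            out[n1 + n2] = out.get(n1 + n2, 0.0) + p1 * p2
    return out

def quantile(dist, alpha):
    """Smallest n such that P(stuff bits > n) <= alpha (the S(alpha) of the text)."""
    total = 0.0
    for n in sorted(dist):
        total += dist[n]
        if 1.0 - total <= alpha:
            return n
    return max(dist)

# Hypothetical per-frame distributions (real ones depend on payload statistics, cf. [6])
frame_a = {0: 0.55, 1: 0.30, 2: 0.12, 3: 0.03}
frame_b = {0: 0.60, 1: 0.28, 2: 0.10, 3: 0.02}

joint = convolve(frame_a, frame_b)
print(quantile(joint, alpha=1e-2))   # stuff bits to budget for at exceedance probability 1e-2
```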
Anyu Cheng et al. in [23] extend this work in [7] and gives the probability distribution curves of stuffed bits in message's different lengths by introducing the probability model of stuffed bits.ey design and develop scheduling analysis soware on �xed priority message scheduling.en they use the soware to analyses the schedulability for the messages in a hybrid electric vehicle.Furthermore, a simulation experiment based on CANoe was made to test the design.By comparing the results, it shows that algorithm based on the probability model of stuffed bits is right, and the designed soware is accurate and reliable.
Probabilistic Error Model. e analysis as presented
does not cover the effect of transmission errors.Obviously, detected errors trigger the transmission of an error frame as well as a retransmission which increases the busy window and therefore the response time.On the other hand a longer busy window might increase the probability that successive errors might affect the busy window [24].In order to include effects of errors (e.g., retransmission overhead) different approaches were introduced.
Related Work.
A method to analyse worst-case realtime behaviour of a CAN bus was developed by Tindell et al. [5].By applying processor scheduling analysis to the CAN bus, they showed that in the absence of faults the worstcase response time of any message is bounded and can be accurately predicted.Moreover, the analysis can be extended in order to handle the effect of errors in the channel.
e error recovery mechanism of CAN involves the retransmission at any corrupted messages.An additional term can be introduced into their analysis, called the error recovery overhead function, which is the upper bound of the overhead caused by such retransmissions in a time interval.A very simple fault model is used [5], to show how the schedulability analysis is performed in the presence of errors in the channel.e model is based on a minimum interarrival time between faults.e authors note that the error recovery function can be more accurately determined either from observation of the behaviour of CAN under high noise conditions or by building a statistical model.
Punnekkat et al. [8] extend the work of Tindell et al. by providing a more general fault model which can deal with interference caused by several sources.Punnekkat's model assumes that every source of interference has a speci�c pattern, consisting of an initial burst of errors and then a distribution of faults with a known minimum interarrival time.Except for the more general fault model, the rest of the schedulability analysis is performed like [5].
Both Tindell and Punnekkat use models based on a minimum interarrival time between faults and therefore assume that the number of faults that can occur in an interval is bounded.In the environment where CAN is used, faults are caused mainly by Electromagnetic Interference (EMI) which is oen observed as a random pulse train with a Poisson distribution [24].erefore the assumption made by the bounded model may not be appropriate for many systems because there is a realistic probability of faults occurring closer than the minimum interarrival time.
Unlike Tindell and Punnekkat, Navet et al. [25] propose a probabilistic fault model, which incorporates the uncertainty of faults caused by EMI.e fault model suggested by Navet uses a stochastic process which considers both the frequency of the faults and their gravity.In that model, faults in the channel occur according to a Poisson law and can be either single-bit faults or burst errors (which have a duration of more than one bit) according to a random distribution.is allows the interference caused by faults in the channel to be modeled as a generalised Poisson process.Note that if the occurrence of faults in the channel follows a Poisson law, the maximum number of transmission errors suffered by the system in a given interval is not bounded, so the probability of having sufficient interference to prevent a message from meeting its deadline is always nonzero; therefore every system is inherently unschedulable.Hence Navet's analysis does not try to determine whether a system is schedulable (as [5,8]), but it calculates the probability that a message does not meet its deadline.Obtaining such a probability, named Worst Case Deadline Failure Probability (WCDFP), gives a measure of the system reliability, because a lower value of the WCDFP implies a high resilience to interference.
Navet's analysis uses the scheduling analysis of Tindell to calculate the maximum number of faults that can be tolerated for each message before the deadline is reached.is number is called and only depends on the characteristics (length, priority, period, etc.) of the message set.e worst-case response time that faults would generate is called .Once and are obtained, they are used with the fault model to �nd the probability that a message may miss its deadline.Navet de�nes the WCDFP of a message as the probability that more than errors occur during .is probability can be analytically calculated as the fault model assumed by Navet is a generalized Poisson process.
e main drawback of the analysis is that it includes two inaccuracies which increase the pessimism in the estimation of the WCDFP.e �rst source of pessimism is implicit in the de�nition of WCDFP.e de�nition of WCDFP does not properly re�ect the conditions in which a message can miss its deadline.In order for a message to miss a deadline, faults in the channel is required to occur while the message is queued or in transmission; a fault occurring aer the message has been received cannot delay the message.is condition is more restrictive than the condition used in [25], which is that errors occur at any time during the maximum response time of the message, independently of whether the message has already been received.
e second source of pessimism is an overly pessimistic assumption about the nature of burst errors where a fault causes a sequence of bits to be corrupted.In Navel's analysis, a burst error of duration "" bits is treated as a sequence of single bit faults [25,Equation (7)], each causing a maximal error overhead (an error frame and the retransmission of a frame of higher or equal priority).is assumption is inconsistent with the CAN protocol speci�cation [22] since in reality a burst error can cause retransmission of only one frame, because no message is sent again until the effect of the burst is �nished.is causes pessimism of several orders of magnitude.
A different method to calculate probability of deadline failure in CAN under fault conditions is proposed in [9].is work points out that errors happening during bus idle do not cause any message retransmission, and therefore those errors cause interference lower than the interference typically considered in scheduling analysis.To avoid this source of pessimism when performing scheduling analysis, the effect of errors is modelled with a �xed pattern of interference; this is a simpli�cation of the fault model presented in [8].Due to this determinism, interactions between messages and errors can be analysed through simulation, and then the probability of having a message that misses its deadline can be determined.Nevertheless, this method has important drawbacks.First, an interference pattern for every possible error source is hard to be determined.And second, combination of several error sources increases the complexity of the analysis to such an extent that it becomes infeasible, so random sampling is used.
Modelling arrivals of errors with a random distribution, as done in [10], allow a more generalized solution.Broster et al. [26] propose an analysis that provides an accurate probability of deadline failure without excess pessimism, based on the assumption that faults are randomly distributed.
In [27], an approach is presented to tightly bound the reliability for periodic, synchronized messages.erefore, a reliability metric is de�ned which denotes the probability that CAN communication survives time without a deadline miss.e reliability is calculated based on the hyperperiod, which is the time when the activation pattern of a periodic message set repeats itself.It is de�ned by the least common multiple over all periods.Hence, the complexity of the algorithm depends on the amount of activations in the hyperperiod.is algorithm is suitable for automotive message sets in which periods are typically multiples of 10 ms.However, if messages are not synchronized, or the relative phasing is unknown, the approach is not applicable.In [26], the busy-window approach is used, and a tree-based approach is presented, where different error scenarios are evaluated iteratively.In a second step, these scenarios are translated to probabilities and a worst-case deadline failure probability is calculated.e approach was extended in [28], and the treebased was superseded by a simpler, more accurate approach.However, both methods [26,28] allow only deadlines smaller than the periods, which is a limit for practical use since bursty CAN traffic is not supported.In [24], existing methods are generalized to support arbitrary deadlines and derive a probabilistic response time bound.
Error Model.
In [24,26] the occurrence of errors is modelled using a Poisson model. Practically, a Poisson process models independent single-bit errors (without bursts), where λ specifies the bit error rate. The probability of the occurrence of m error events in a time window of length t is P(m, t) = ((λt)^m / m!) e^(−λt). (25) It is possible that a message is hit by multiple error events and only one retransmission occurs (e.g., after reception when the CRC is checked), but it is assumed that, in the worst-case condition, each error event will lead to exactly one retransmission. Thus, we can directly use (25) to obtain the probability that m error events occur during a given time window, and the probability for the error-free case is e^(−λt).
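A minimal numerical illustration of the Poisson error model of (25); the bit error rate, bus speed, and window length used below are assumed values.

```python
from math import exp, factorial

def p_errors(m, t, lam):
    """Probability of exactly m error events in a window of length t,
    for independent single-bit errors arriving at rate lam (Poisson model)."""
    return (lam * t) ** m / factorial(m) * exp(-lam * t)

def p_more_than(n, t, lam):
    """Probability that more than n errors hit a window of length t."""
    return 1.0 - sum(p_errors(m, t, lam) for m in range(n + 1))

# Assumed example: bit error rate 1e-6 per bit on a 500 kbit/s bus -> lam = 0.5 errors/s;
# window of 10 ms; at most 2 tolerable errors.
print(p_more_than(n=2, t=0.010, lam=0.5))
```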
For the probability of an n-error busy window, it is not enough to just calculate P(n, w_n), because the error events have to occur in certain segments of the busy window; a more efficient technique was used in [27], which can be applied to the general case in which a busy window includes multiple queued activations that can be affected by errors. The approach works as follows: one error event in the entire busy window can happen in two ways. The error may actually lead to a one-error busy window w_1, with probability P_1; or we face an error-free busy window of length w_0 and the error event occurs in the interval (w_0, w_1) (Equation (27)). The value of P_1 can then be obtained by rearranging the equation. Similarly, we can apply this idea to the two-error busy window w_2. Two errors in the time window w_2 may occur in the following mutually exclusive ways: (i) a busy window of length w_2 actually occurs, assuming two error events, with probability P_2; (ii) w_1 occurred, which implies exactly one error in w_1, and the second error must then happen in the interval (w_1, w_2); (iii) w_0 occurred, which implies no error in w_0, and exactly two errors must lie in the interval (w_0, w_2). By rearranging the resulting equation for P_2, we get the probability of a two-error busy window. The same argument is valid for the following n-error busy windows, and (28) is generalized accordingly. Practically, the resulting function denotes a bound on the probability that a response time exceeds a certain threshold, and the probability that a deadline is exceeded can be bounded accordingly.
Conclusion
In this review paper, the worst-case response time analysis of messages in the controller area network and the probabilistic response time analysis of CAN messages have been reviewed. The worst-case response time analysis includes the analysis presented in the early 1990s by Tindell et al. [2][3][4][5] and the analysis by Davis et al. [1] in 2007. Davis et al. [1] pointed out the flaw in the earlier analysis by Tindell et al. and showed that multiple instances of the CAN messages should be analysed to determine the response time and hence the schedulability of the CAN messages. The worst-case response time analysis leads to an excessive level of pessimism; we may choose a pessimistic approach, but with as little pessimism as possible, since the worst case does not always occur. The probabilistic response time analysis of CAN messages is therefore recommended; here two approaches are considered [6,7]: instead of using the worst-case bit-stuffing pattern, we can consider a distribution of possible bit stuffing according to the application and select a number of stuff bits that will only be exceeded with an acceptably small probability, thereby being less pessimistic; another probabilistic approach is to consider the probability of the occurrence of errors [24][25][26].
In worst-case analysis, it is assumed that every error �ag transmitted has a retransmission associated, whereas this is not true, since the same error can cause many error �ags and only one retransmission.is assumption causes some level of pessimism.ere are different methods presented in [6] whereby we can reduce the number of stuff bits, either by using XOR operation on the messages before transmission (encoding) and redoing the XOR aer reception (decoding), thus avoiding having continuous bits of zeros or ones, thereby avoiding bit stuffing.e other method presented in [6] is to choose the priorities such that the identi�er bits do not have continuous ones or zeros, thereby avoiding bit stuffing.Of course in this method the number of priorities that can be used is reduced.Another approach in making the best usage of the bandwidth is to schedule the messages with offsets, which leads to a desynchronization of the message streams.is "traffic shaping" strategy is very bene�cial in terms of worstcase response times [29,30].e Worst-Case Response Time (WCRT) for a frame corresponds to the scenario where all higher priority CAN messages are released synchronously.Avoiding this situation and thus reducing WCRT can be achieved by scheduling stream of messages with offsets.Precisely, the �rst instance of a stream of periodic frames is released with a delay, called the offset, in regard to a reference point which is the �rst time at which the station is ready to transmit.Subsequent frames of the streams are then sent periodically, with the �rst transmission as time origin.e choice made for the offset values has an in�uence on the WCRT, and the challenge is to set the offsets in such a way so as to minimize the WCRT, which involves spreading the workload over time as much as possible.e future work is to present the review of statistical approach to response time analysis.It is proposed that a fusion of methods may be adopted to cater to the requirement of the application; for safety critical application like automotive and industrial application, the worst-case response time analysis is recommended, and for noncritical applications where we can introduce some tolerance we may apply the probabilistic response time analysis.
|
2018-04-03T05:44:36.633Z
|
2013-01-10T00:00:00.000
|
{
"year": 2013,
"sha1": "c385e6f7248a47469febabb54dad1fddf4f7589c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2013/148015",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c385e6f7248a47469febabb54dad1fddf4f7589c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
252492283
|
pes2o/s2orc
|
v3-fos-license
|
Experimental Study and Modelling on the Structural Response of Fiber Reinforced Concrete Beams
In many structural applications, concretes reinforced with short metal or synthetic fibers (fiber-reinforced concrete (FRC)) have a number of advantages over traditional concretes with steel rebar reinforcement, such as easier and more economical production, wear resistance, impact resistance, integrity, etc. In the present study, several concrete mixes were developed and prismatic FRC specimens were fabricated. Their structural behaviors were studied using bending tests until the prisms fractured. Two types of fibers, namely steel and polypropylene (PP), and three different concrete matrixes were investigated, testing in total 12 FRC prismatic specimens. Every group of FRC had the same concrete matrix but a different internal fiber architecture. All specimens were tested by Four-Point Bending Test (4PBT). The analysis was carried out with the goal of determining the workability and flexural tensile strength of all FRC groups, comparing these parameters with fracture modelling results. A single-crack formation and opening model was established: the crack crosses the whole stretched part of the prism's orthogonal cross-section; as the crack opens, the fibers bridge the crack and are pulled out. Load-bearing curves obtained from the model were compared with the experimentally obtained ones.
Introduction
Nowadays, development and introduction of more efficient mass-produced materials are among the main engineering challenges which can be encountered in all industrial areas. Today, concrete is the second most consumed material after water, with three tonnes of concrete being produced for every person living on the globe per year. Twice as much concrete is used in construction as the total of all other building materials [1,2]. Reinforcement of various scale is increasingly used in modern materials and concretes, ranging from reinforcement at the nanoscale [3][4][5][6][7] to the macro-scale reinforcement that has already become classic [8][9][10]. Bars and fibres made of steel and other materials are used as the main or the secondary reinforcement bearing the tensile and shear stresses [11][12][13][14][15][16]. If fibres are randomly uniformly dispersed in the concrete volume, the resulting material is supposed to be homogeneous Fibre Reinforced Concrete (FRC). One important property of FRC is its superior resistance to cracking and crack propagation [17][18][19][20][21]. Concrete matrix can be successfully reinforced with the fibres of various geometry [22][23][24][25][26] and made from various materials [27][28][29][30][31][32][33]. The properties of the concretes with non-metallic fibers are slightly less recognized, especially concretes with new types of the polymer fibers. Additionally, the lack of standardized methods of testing concrete with polymer fibers make their application much more difficult. Designing elements or structures made of fiber-reinforced concrete requires knowledge of its basic mechanical parameters. Unfortunately, the currently available data describing this type structures are insufficient and research in this area is fragmentary. FRC and its modelling are interesting for future practical application. Every investigation in this field is important. Industrial use of the fibers, comparing with steel rebars is only in the starting point. Polypropylene fibre is used in concrete mix design mainly for shrinkage crack arrest, or, in combination with additional reinforcement [34][35][36], also for improvement of the non-linear deformation of FRC ensuring resistance to cracking of the concrete structures [37][38][39]. The bridging effect of the fibres across the cracks inside concrete volume improves the toughness of the concrete after cracking [40][41][42][43]. It is also important to take into consideration that even when the concrete is called "homogeneous", it does not means it is really homogeneous, because during filling the construction formwork (mix flowing process), fibres added to the concrete mix obtain their slightly non-homogeneous distribution and non-random orientation in the fresh concrete volume mix, which inevitably affects the mechanical properties of FRC [44][45][46][47][48].
In the current research, several concrete mixes were developed and prismatic FRC specimens were fabricated, reinforced with three different fibre types: hooked steel 35 mm, polypropylene PP 40 mm, and PP 45 mm. Their structural behaviours were studied using bending tests until the prisms fractured. Every group of FRC had the same concrete matrix but a different internal fiber architecture. All specimens were tested by 4PBT. The analysis was carried out with the goal of determining the workability and flexural tensile strength of all FRC groups, comparing these parameters with fracture modelling results. Load-bearing curves obtained from the model were compared with the experimentally obtained ones.
Concrete Mix Materials
Mix design of an FRC matrix implies combining Portland cement with multi-fractional aggregates and reactive pozzolanic admixtures, such as fly ash, silica fume and nano-silica. When selecting raw materials for production of the concrete matrix, preference was given to locally available mineral materials and cement. The composition of the developed concrete mixture and the exact concentrations of the ingredients are described below. In the current research, the following components were used as FRC mix materials:
3.
Stabilizer AkzoNobel Cembinder 50. A commercial colloidal nano-silica product Cembinder 50 was used. Nano-silica is a new generation of reactive silicon dioxide admixtures. Its high reactivity is explained with chemical purity and extremely high specific surface (>30,000 m 2 /kg) and particle size <100 nm. The size of the silica fume particle corresponds up to 1000 nano-silica particles and, comparing with the size of cement particles, one cement particle corresponds to approximately one billion of nano-silica particles. This determines high chemical reactivity of the admixture and its accelerated effect on the cement hydration. 4.
Coarse aggregates. Crushed limestone (fractions 2/6 mm) was used as a coarse aggregate. The shape of the rough aggregate has a significant influence on the workability of a concrete mix, because water quantity depends on the type of aggregate, particle size distribution, the shape and texture of grains and the quantity of fines.
Silica fume or micro-silica, a very fine pozzolanic material composed of amorphous silica produced by electric arc furnaces as a by-product of the production of elemental silicon or ferrosilicon alloy. Addition of micro-silica to cementitious systems can improve the strength by controlling the structure of C-S-H. Silica fume is high-purity amorphous silicon dioxide with >92% SiO 2 content. Silica fume particles are characterized by spherical shape and diameter in the range of 0.05-0.5 µm or 50-500 nm.
7.
Fly ash. This pozzolanic admixture is produced as a by-product of coal combustion in power plants; it is usually collected with electrostatic filters. Good quality conventional fly ash offers such advantages as improvement of concrete workability properties, improvement of structural tightness, reduction of hydration heat, increase of resistance to chemical aggression, participation of ash in cement binding reactions, higher resistance of concrete over a long period, and reduction of production costs of the concrete mixture. Commercially used fly ash with a cumulative content of oxides SiO 2 + Al 2 O 3 + Fe 2 O 3 equal to 84.7% was employed. It was classified as class F fly ash in accordance with ASTM C618 [49].
Fibers
Short 12 mm long polypropylene fibers (PB Eurofiber MF 1217) have been added in small amount to reduce FRC shrinkage. Additionally, three main types of fibers were used in the research:
1. Steel fibers: Krampe Harex hooked end steel fibers of 35 mm in length were chosen for the first FRC group. Figure 1a shows the fibers and Table 1 summarizes the properties of the fibers;
2. PP fibers: Strux PP 40 mm long fibers were used in the second FRC group to compare the structural properties of two different fiber groups. Figure 1b shows the fibers, their properties are given in Table 1;
3. PP fibers: Durus PP 45 mm long fibers were used in the third FRC group, employing the opportunity to compare the results of the samples with the same type of fiber, but of different fiber length, produced by a different company, and ultimately compare them with the samples made using different types of fiber. Figure 1c and Table 1 provide the illustration and the properties of this fiber, respectively.

As regards the concrete mix design, the water-to-cement ratio was W/C = 0.3 in each group. Other ingredient amounts are given in Table 2. The volumetric concentration of fibres in the mixes was 1.0% and 0.1% of the total volume of the concrete, according to Table 3.
Concrete Mixing
The concrete mix was prepared according to . The mix should have a uniform dispersion of the fibres in order to prevent segregation or balling of the fibres during mixing. Most balling occurs in the process of adding fibres. Increase of the aspect ratio, of the percentage of fibre in the concrete volume, and of the size and quantity of coarse aggregate will intensify the balling tendencies and decrease workability [51]. Fibres were added uniformly into the open end of the concrete mixer, just before taking the concrete out to pour it into the beam mould, so as not to damage the fibres (this remark applies to the non-metallic fibres).
Specimen Sizes
The FRC was placed into prism moulds of size 100 × 100 × 400 mm; all groups contained the small polypropylene fibres and were cast in compliance with code provisions. The prisms were placed in water for curing [52]. The tests for 4PBT flexural strength were done after 28 days.
Testing Technological Properties
Slump Cone Test
The slump cone test is one of the tests done to check the workability of concrete [53]. The workability of the FRC mix was evaluated by means of Abram's cone slump test in accordance with EN 12350-2. Additionally, the time to final cone spread was determined as an additional flowability parameter. Measurements were done for all three FRC groups.
Sieve Segregation Test
The test aims to investigate the resistance of high performance concrete (HPC) to segregation [54] by measuring a portion of a prepared high performance concrete specimen passing through a 5 mm sieve. If the HPC has a weak resistance to segregation, the paste passes easily through the sieve. Thus, the sieved portion indicates whether the HPC is stable or not. This test was also performed for all concrete groups.
The equipment required for this test includes: (1) a sieve with a perforated plate with 5 mm square holes, (2) a frame with Ø = 300 ± 1 mm and a height of 40 ± 0.5 mm, or Ø = 315 ± 1 mm and a height of 75 ± 0.5 mm, (3) a tray of a shape and volume suitable to hold the material passing through the sieve and easily removed by the operator without forcing the passage of excess material, (4) a digital scale with a capacity of ≈10 kg, which can be zeroed, (5) a hard plastic or metal bucket with a maximum inside diameter of 300 ± 10 mm and a capacity of 10–12 L, and a top suitable to cover the bucket to protect the fresh concrete from intensive drying.
The scale was placed in a stable and aligned position. 10 ± 0.5 L of representative fresh FRC was poured into the bucket and covered with the top. It was left to stand for 15 ± 0.5 min. During the waiting period, the empty tray was weighed; its weight was noted as Wp. The sieve was placed on the tray without removing it from the scale. After 15 ± 0.5 min, the surface of the concrete in the bucket was inspected for clear bleed water, and the result was recorded. The scale was zeroed, and concrete in the amount of 4.8 ± 0.2 kg was poured onto the central part of the sieve from a height of 50 ± 5 cm. The weight of the concrete poured onto the sieve was noted as Wc. Two minutes after the concrete was poured, the sieve was carefully removed from the tray without any shaking that could force excess material through the sieve. The tray with the sifted material was weighed and the weight was noted as Wps.
The sifted portion, i.e., the mass fraction of the sample that passed through the sieve, was calculated according to the equation Π = ((Wps − Wp)/Wc) × 100, with the result expressed in percent.
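A minimal sketch of the segregation index calculation described above is given below; the weights are illustrative placeholders, not measured values from this study.

```python
def segregation_index(w_tray: float, w_concrete: float, w_tray_sifted: float) -> float:
    """Segregated portion, in percent: P = ((Wps - Wp) / Wc) * 100."""
    return (w_tray_sifted - w_tray) / w_concrete * 100.0

# Illustrative (hypothetical) numbers: tray 1.20 kg, concrete poured 4.80 kg,
# tray plus sifted paste 1.25 kg -> roughly 1 % of the sample passes the 5 mm sieve.
print(round(segregation_index(1.20, 4.80, 1.25), 2), "%")
```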
Compressive Strength Test
Adhesive bonds between the constituents of sintered and cemented materials are the properties defining the strength of these materials; they can be detected both by non-destructive methods [55–57] and by the classical destructive method in accordance with EN 12390-3, using a Controls Automax 5 testing machine.
Standard cube samples (100 × 100 × 100 mm, 6 samples in the experimental series) were prepared. Fresh samples were covered with plastic film; after two days the samples were demoulded and cured in water for 28 days at a temperature of 20 ± 2 °C until testing.
Four Point Bending Testing Procedure
This test is commonly used to determine the bending strength and bearing capacity of a cracked beam. The bending test is the most often used test to measure the mechanical and fracture properties of the FRC in the regime of crack propagation. The 4PBT was used as a constant bending moment test on the portion of the specimen located between the two upper supports, as shown in Figure 2. The provisions of several standards, such as EN 14651, ASTM C1018-97 and RILEM TC 162-TDF, were taken into consideration. The tests were carried out in accordance with these standards and the results were recorded and considered.
The samples of all three groups of FRC with dimensions of 100 × 100 × 400 mm were experimentally tested.
The prisms were mounted exactly as shown in Figure 2, according to the markings for the load points. A frame with two HBM WA20 LVDT sensors, one on each side, used to record the deformation of the prisms during the test, was then installed at the same marks. The mechanical properties were tested using the 4PBT method applying a Controls Automax 5 loading machine. The load was applied with increments of 0.25 kN in a period of 60 s.
The load was applied monotonically in small increments, and load, strain and stress were recorded at each increment. Resistance strain gauges were installed on the prisms, and the mid-span deflection and support settlements were measured with LVDTs. The load versus vertical deflection (mid-span) curve was obtained for every loading. The measurement data obtained with the HBM Spider-8 data acquisition system were processed, synchronized, and stored in MS Excel files, which were later used to construct the necessary graphs. The deflection of the prism was obtained by summing the values from both sensors and taking the average value. The graphs showing the processes in the FRC, namely the behavior of the fibers under the influence of bending, were created from these files using MS Excel. The test procedure was performed for all three FRC groups in the same way, as shown in Figure 2, and the graphs were plotted using deflection and load values. The load was applied until the specimens failed completely.
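The data reduction described above can be sketched as follows: the mid-span deflection is taken as the average of the two LVDT readings at each load increment, giving one load-deflection pair per recorded step. The variable names and sample values below are hypothetical and serve only to illustrate the procedure.

```python
# Average two LVDT channels into a single mid-span deflection per load step.
loads_kN = [0.25, 0.50, 0.75, 1.00]           # applied load at each increment (hypothetical)
lvdt_left_mm = [0.010, 0.022, 0.035, 0.049]   # deflection readings, left sensor
lvdt_right_mm = [0.012, 0.024, 0.033, 0.051]  # deflection readings, right sensor

curve = [
    (load, (d_left + d_right) / 2.0)
    for load, d_left, d_right in zip(loads_kN, lvdt_left_mm, lvdt_right_mm)
]
for load, deflection in curve:
    print(f"P = {load:.2f} kN, delta = {deflection:.4f} mm")
```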
Experimental Results and Discussion
The resultant segregation value for all three groups of fibre concrete mixtures was zero.
The slump cone test results for the concrete matrix Group CSf are as follows: slump height equal to 24 cm with a settling time of 8 s (see Figure 3a). Based on these results, it can be concluded that this concrete matrix has very low workability. For Group CPPfS, slump height was 4 cm with a settling time of 5 s (Figure 3b). It indicates that the workability of concrete in this group is very good. For Group CPPfD, the slump height was 23 cm with a settling time of 7 s (see Figure 3c). These indicators imply that this group is characterised by very poor workability.
The compressive strength of 6 specimens was determined; the materials compressive test (concrete cube test) was used in accordance with EN 12390-3 [58]. According to standard LVS 156-1 [59], the correction factor 0.95 was implemented; thus, the obtained concrete corresponds to class C70/85.
The results of the tests under the impact of bending forces were presented in the form of Excel values. All values were taken for all three groups, the strength-deflection average values were plotted using MATLAB software for the 4PBT test, and the average graphs were drawn based on the results thereof. During the experiment, a large amount of data was obtained, and the values along the deflection axis often did not coincide with each other.
Therefore, using code written in MATLAB, a selection of experimental data values with a given deflection step was obtained. The resulting database was used to obtain the average strength-deflection values. The graphs in Figure 4 represent the three groups respectively, and all three groups are drawn in a single frame to show the differences in their behaviour.
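Although the processing in this study was done with MATLAB, the same resampling-and-averaging step can be sketched in Python as follows: each specimen's load-deflection record is interpolated onto a common deflection grid with a fixed step, and the curves are then averaged point by point. This is an illustrative reconstruction of the procedure under assumed grid parameters and invented data, not the original code.

```python
import numpy as np

def average_curves(curves, deflection_step=0.05, max_deflection=4.0):
    """Interpolate each (deflection, load) record onto a common grid and average.

    curves: list of (deflection_array, load_array) pairs, one per specimen.
    Returns (grid, mean_load).
    """
    grid = np.arange(0.0, max_deflection + deflection_step, deflection_step)
    resampled = [np.interp(grid, d, p) for d, p in curves]
    return grid, np.mean(resampled, axis=0)

# Hypothetical records from two specimens of one FRC group.
spec1 = (np.array([0.0, 0.3, 1.0, 3.0]), np.array([0.0, 14.0, 9.0, 5.0]))
spec2 = (np.array([0.0, 0.4, 1.2, 3.0]), np.array([0.0, 16.0, 10.0, 6.0]))
grid, mean_load = average_curves([spec1, spec2])
```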
The above plotted curves consist of the following phases:
Group CSf:
1.
The phase of linearly elastic deformation of concrete matrix, indicating the linear deformation from 0.02 mm to 0.25 mm with no formation of visible cracks and without significant load in the fibres.
2.
The phase of critical load, showing the linear deformation of concrete matrix from 0.25 mm to 0.5 mm.
3.
The phase of fibre pull-out, indicating the formation of macro-cracks with the opening width from 0.1 mm to 0.35 mm. The tension force transfer resulted in the resistance to crack propagation, imparting the strength, i.e., the fibres were bridging the cracks, collecting tension forces at the edges of the cracks. The results obtained correlate with the data from the related research [60,61]. The comparison of experimental data of the steel fibre pull-out test and numerical simulation results was carried out. The third phase corresponds to the growth of delamination along the fibre and matrix interface until full debonding of the fibres (crossing micro-cracks), followed by fibre sliding out of the matrix with friction, and partial plastic deformation of the fibres [62,63].
Groups with PP fibres (CPPfS and CPPfD):
1.
The phase of linearly elastic deformation, indicating the linear deformation from 0.01 mm to 0.15 mm with no formation of visible cracks and without significant load in the fibres.
2.
The phase of critical load showing the linear deformation from 0.15 mm to 0.2 mm. 3.
The phase of elastic deformation provided by the PP fibres, increasing the load bearing capacity of the members and resisting the loads. The change in the deformation from 0.2 mm to 2 mm during the fibre pull-out or change indicating the formation of cracks with the opening width from 0.1 mm to 0.35 mm was recorded. The tension force transfer resulted in the increased resistance to crack propagation, imparting the strength to the prism members.
Steel fibres demonstrate good performance in the elastic deformation, as is demonstrated in Figure 4. Mechanical (not chemical) adhesion between steel fibres and concrete is higher compared to the adhesion between PP and concrete. As soon as the concrete absorbs the initial load, fibres carry most of the subsequently applied load until macro-cracks are formed and stretched fibres become visually observable at the fractured walls of the macro-cracks. The concrete-steel combination ensures higher material integrity; fibre sliding starts at higher values of the bending load or deformation in the sample. Although steel fibres demonstrate good load bearing capacity, PP fibres play a major role in resisting the loads due to their elastic nature. The number of micro-cracks increases, and a network of micro-cracks is formed. The formation of macro-cracks occurs in the direction perpendicular to the longitudinal axis, which indicates correct load application. The density of the network of micro-cracks depends on the distribution of the particles in the concrete matrix, as well as on the sizes and concentration of the fibres. The complete pull-out of the fibres occurs during the last stages of application of the critical load. The intensity of load in each fibre depends on the material of the fibre, its shape, length and orientation in relation to the plane of the crack, the direction of the applied pull-out force and the depth of the fibre inside the matrix of the concrete member. During the last stage of loading (deflection higher than 3-3.5 mm), fibres bridging the macro-crack with one tail are pulled out of the concrete. The pull-out of the polymer fibres requires a more constant load, whereas in the case of steel fibres the necessary load decreases more rapidly. The pull-out process finishes with the partial or complete removal of the fibres from the concrete matrix in the areas where friction caused by slipping occurs between the fibre and the concrete matrix.
Once the fibres are fully stretched (fully utilised), the crack width increases. The specimen breaks, indicating the maximum loading capacity and the maximum tensile force absorption by the FRC member.
The differences between steel and PP fibres are worth further discussion. Figure 4 allows considering the major differences between these two types of fibre, which are observed during the breaking (failure) mechanism of the concrete members. At the initial stage of loading, each steel fibre interacts with the surrounding concrete as a common material. Mainly thanks to the hooks on its ends, the steel fibre does not slip out of the concrete when micro-cracks split the concrete. The hooked ends of the steel fibre, in combination with the high value of plastic stress necessary to deform the steel fibre and the value of the friction force between the fibre and the concrete, are the factors that determine the higher resistance of this material. Compared to PP fibres, a higher load is necessary to start the pull-out process (15 MPa for steel fibres as compared with 7.2 MPa for Strux PP fibres and 6.4 MPa for Durus PP fibres). After the first peak on the curves, all samples have macro-cracks. During this pull-out process, steel fibre ends form clouds of micro-cracks. The concrete matrix rapidly loses elastic modulus and toughness, which results in rapidly decreasing load bearing capacity. Polymeric fibres are softer; the pull-out process of each fibre happens along with the deformation of the fibre surface and fibre body. The fibres are not hooked, and the pull-out force is more stable along the entire pulling length.
Several observations can be made considering the results of the research: 1.
It is necessary to take care of the concrete mix during the addition of the fibres to make the mix homogeneous, for example, by adding the fibres in layers. Proper finishing of the top layer is necessary, making sure no marks of fibres are left on the sidewalks.
2.
Fibre pull-out is a process dependent on how far each fibre is pulled out. Some fibres are more effective in the beginning, when the deflection of structural elements is small (deformation of the structural member with a more sophisticated geometry), and some at the final stage, when many cracks in the material are formed. Since the PP fibres are smooth and do not have a proper grip on the surfaces in the concrete mix, they propagate all over the mix, which might cause problems due to the accumulation of several fibres in some places. This may result in variation of the measured values. Homogeneous fibre distribution in the material volume is important for the decrease of the scatter of load bearing results.
3.
The non-corrosive nature is another important benefit of the PP fibres. It is rarely observed in the steel fibres if they are not properly coated with resins. Therefore, care must be taken in this regard if it is planned to create longer spans for any construction.
Modelling
Based on experimental observations, we assume that in the middle span of each beam subjected to four-point bending (see Figure 4) only one single macro-crack finally opens. This crack grows until it reaches the neutral axis, opens, and finally divides the sample into two pieces. It is possible to suggest that at the initial stage micro-cracks are few and only one of them forms the macro-crack that reaches the neutral axis of the beam. Macro-crack growth (increase in length) is characterized by a small opening (at the initial stage not visible by eye). After reaching the neutral axis, the growth stops (further increase in the crack length is small) and the crack begins to open with an increase in the applied external load. Macro-crack opening happens through the fibre pull-out process. At this stage the length of the macro-crack is in general stable; in our experiments it was 80-85 mm. Each fibre crossing the surface of the macro-crack pulls out by its shorter end in the process of crack opening. A polymer (polypropylene) fibre breaks inside the concrete when pulled; only part of the end length is pulled out.
A crack-opening numerical structural model based on suggestions similar to those in [64,65] was accepted and used. The basic assumptions of the model were: (a) the crack length is stable, the crack opens, and the fibres crossing its plane under different angles are pulled out; (b) the distribution of fibres in the volume is random (with respect to orientation angles and the spatial location of each fibre's geometrical centre); (c) each fibre is pulled out as a single fibre, so it is possible to use the experimental data set of pull-out curves for fibres oriented under different angles and embedded at different depths in the concrete (single curves and averaged data) [64]. The number of fibres in every prism with dimensions 100 × 100 × 400 mm is known (on average) from the experiment. The single prism volume was divided into elementary volumes: N elementary volumes lie on one side of the crack and the same number on the opposite side. The number of fibres crossing one particular elementary grain (which is on the surface of the crack) is easy to calculate. For every particular grain it is possible to evaluate the tail length and orientation of each fibre. Using the experimental data obtained in the pull-out experiments and performing an averaging procedure, it is possible to calculate the force applied to a particular grain depending on the "local opening value". The "local opening value" is the distance between two neighbouring grains (when the macro-crack is closed, the distance between neighbouring grains is zero). In the model, the crack mouth opening displacement (CMOD) was changed numerically in steps. At every step, the forces applied to every pair of neighbouring grains were calculated; in this way all forces on the surface of the macro-crack were obtained. From the equilibrium conditions applied to the prism, the external forces were calculated [64]. The commercial software MATLAB was used in the modelling. The modelling results were compared with the experimental curves for the beams. Predictions generated by the model were validated by 4PBT of 100 × 100 × 400 mm prisms.
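A strongly simplified sketch of the grain-based crack-opening model described above is given below: the crack surface is discretized into elementary grains, each crossed by randomly oriented fibres, and the force transmitted across the crack is summed for a stepped crack mouth opening. The pull-out law, fibre counts and geometric details used here are illustrative placeholders (the actual model relies on the experimental pull-out curves of [64]), and the final equilibrium step that converts the bridging forces into the external bending load is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID_N = 10  # elementary grains per side of the crack surface

# Assign to every grain a random set of crossing fibres (count, orientation, tail length).
# Counts, angles and embedment depths are placeholders, not measured distributions.
fibres = []
for i in range(GRID_N):
    for j in range(GRID_N):
        for _ in range(rng.poisson(2.0)):
            fibres.append({
                "row": i,                                # distance from the crack tip
                "angle": rng.uniform(0.0, np.pi / 2.0),  # angle to the crack plane normal
                "embed": rng.uniform(2.0, 17.5),         # shorter embedded tail, mm
            })

def pull_out_force(opening_mm, angle_rad, embed_mm):
    """Assumed single-fibre pull-out law (placeholder for the experimental curves of [64])."""
    peak = 0.25 * np.cos(angle_rad) ** 2 + 0.05  # kN, orientation dependent
    return peak * (opening_mm / (0.1 + opening_mm)) * np.exp(-opening_mm / max(embed_mm / 10.0, 0.1))

def bridging_force(cmod_mm):
    """Sum of fibre forces over the crack plane for a given crack mouth opening (CMOD)."""
    total = 0.0
    for f in fibres:
        # the local opening grows linearly from the crack tip (row 0) to the crack mouth
        local_opening = cmod_mm * (f["row"] + 0.5) / GRID_N
        total += pull_out_force(local_opening, f["angle"], f["embed"])
    return total

# Step the CMOD, as in the model, and print the total bridging force at each step.
for cmod in np.arange(0.1, 2.1, 0.5):
    print(f"CMOD = {cmod:.1f} mm -> bridging force ~ {bridging_force(cmod):.2f} kN")
```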
Modelling Results
Modelling results for the FRC of group CSf are shown in Figure 5. The model approximates the experimental data quite well. The difference between the modelling and experimental data at the first peak is easy to explain: the model describes only the macro-crack opening stage. Macro-crack growth and the stage in which concrete aggregates work as "cohesive" elements are absent in the model. Modelling results for the FRC of groups CPPfS and CPPfD are shown in Figures 6 and 7. The model in general approximates the experimental data. At the same time, the pull-out data for fibres oriented close to 90° are characterized by a short length of the pulled-out tail; such fibres do not carry load and break, see Figure 8. After that, only part of the fibres carries load. The load bearing mechanism at the initial stage of the curve is different. For group CPPfD we obtained the necessary pull-out curve by a back-analysis approach (by minimizing the difference between the modelling and experimental data). The obtained pull-out curves differ from the experimentally obtained ones, see Figure 9. Here we can conclude that the load bearing mechanism in FRC with PP fibres is more complicated than simple pull-out of fibres bridging the crack; a bigger role is played by fibres oriented under angles smaller than 90°.
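The back-analysis mentioned above can be illustrated as a simple least-squares fit: the parameters of an assumed pull-out law are adjusted until the model response matches the measured beam curve as closely as possible. The toy model, the parameterization and the data points below are placeholders; only the fitting idea is shown.

```python
import numpy as np
from scipy.optimize import minimize

# Measured beam response (hypothetical points: deflection in mm, load in kN).
defl = np.array([0.2, 0.5, 1.0, 2.0, 3.0])
load_exp = np.array([12.0, 9.5, 8.0, 6.0, 4.5])

def model_load(params, d):
    """Toy beam model driven by a two-parameter pull-out law (placeholder)."""
    f_peak, decay = params
    return f_peak * np.exp(-decay * d) + 3.0

def misfit(params):
    return np.sum((model_load(params, defl) - load_exp) ** 2)

result = minimize(misfit, x0=[10.0, 0.5], bounds=[(0.1, 50.0), (0.01, 5.0)])
print("Back-calculated pull-out parameters:", result.x)
```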
Conclusions
Based on the results presented and discussed in the paper, the following conclusions have been drawn:
1.
Macro-crack opening in concrete with steel fibers can be described by the fiber pull-out process. Macro-crack opening in concrete with polymer (PP) fibers happens with partial fiber breaking, pulling out of fibers oriented under smaller angles, and partial rotation of the pulled-out fiber part. This non-obvious crack opening mechanism needs additional investigation.
2.
The four-point bending tests of FRC prisms allow analyzing and predicting the load bearing potential of the FRC mix at different stages of damage accumulation in the material. The load bearing capacity of FRC changes during cracking, along with the increase of the deflection of the damaged beam. Cracked beams with opened cracks and polymer fibers start carrying a higher load compared to the samples with steel fibers.
3.
Hooks (or geometry change) on the ends of steel fibers play an important role in ensuring the bending strength.
4.
Failure mechanisms in FRC with metal and polymer fibres may be different. By comparing them it is possible to recognize the micromechanics of the processes happening at the stage of crack opening. The load bearing of steel FRC at this stage is characterized by an intensive pull-out process of the fibres that bridge the macro-crack. In the case of FRC with Durus PP fibres, the micromechanics of load bearing is different.
Hypogean carabid beetles as indicators of global warming?
Climate change has been shown to impact the geographical and altitudinal distribution of animals and plants, and to especially affect range-restricted polar and mountaintop species. However, little is known about the impact on the relict lineages of cave animals. Ground beetles (carabids) show a wide variety of evolutionary pathways, from soil-surface (epigean) predatory habits to life in caves and in other subterranean (hypogean) compartments. We reconstructed an unprecedented set of species/time accumulation curves of the largest carabid genera in Europe, selected by their degree of ‘underground’ adaptation, from true epigean predators to eyeless highly specialized hypogean beetles. The data show that in recent periods an unexpectedly large number of new cave species were found lying in well established European hotspots; the first peak of new species, especially in the most evolved underground taxa, occurred in the 1920–30s and a second burst after the 70s. Temperature data show large warming rates in both periods, suggesting that the temperature increase in the past century might have induced cave species to expand their habitats into large well-aired cavities and superficial underground compartments, where they can be easily sampled. An alternative hypothesis, based on increased sampling intensity, is less supported by available datasets.
Introduction
Experimental studies on the response to recent climate change of epigean vascular plants and animals show increasing evidence of a poleward or (in mountain areas) uphill shift of population and range boundaries [1,2]. Conversely, little is known about the possible response to climatic changes of the hypogean (or underground) compartment of life, the so called 'cave animals' [3], most represented by relict lineages strongly adapted to life in the depths of mountain massifs [4,5].
The insects of the family Carabidae, well known as ground beetles, are inhabitants of the soil surface [6] with various adaptations and modes of life [7]. They may live as arboreal vegetation climbers in the forests [8][9][10], or as herbaceous plant climbers and seed feeders [11,12]. In some partitions of the family, especially in the subfamilies Trechinae, Pterostichinae and Platyninae, a remarkable evolutionary trend towards life in underground compartments has been described, and specific adaptations to subterranean life have been recognized since the first half of the 19th century. The characteristics of the hypogean fauna, especially taxonomy, biogeography, morphology and ecology, were studied by several authors [13][14][15].
The habitat of subterranean beetles comprises all types of holes existing below the soil surface, which older authors distinguished as micro-and macrocaves. The former are given by the complex patterns of cracks and fissures existing in the bedrock, especially in calcareous karstic substrates. The latter is the only compartment accessible by humans and in which beetles are actually detected. Only in the last decades the concept of subterranean habitat was extended to the MSS [16], 'milieu souterrain superficiel' or 'mesovoid shallow substrate', represented by the fissure nets of epikarst, the ground of sinks, rock debris covering mountain slopes, talus, screes, soils rich of stones and blocks, and seepage springs [17].
In this work we aim to demonstrate that underground living beetles are highly sensitive to global warming and that the warming rates of the last century might have impacted the habitat extent of these insects. In the last decades, and especially after 1980, considerable new findings and changes of population sizes in well investigated caves have been recorded [3], and many new species and genera have been detected also in the below ground life hotspots of Europe. Some examples of these findings include a new hypogean taxon attributed [18] to Promecognathinae, a small carabid subfamily with two species in North America and four genera in South Africa.
There is no doubt that the implementation of cave exploring techniques, the use of long term bait traps, as well as the opening of eastern European countries, are reasons for the detection of many new species. Nevertheless, in the last decades unexpected findings of new subspecies, species or even genera of subterranean carabids progressed at such a rate to warrant at least two main hypotheses.
H1-Global warming hypothesis. Given the strong sensitivity of subterranean ground beetles to climate conditions, among others to low winter temperatures, it is possible that 20th century global warming [19] might have caused these populations to expand their hypogean habitat (or 'inhabitable volume') towards the earth's surface, leading to an increase of findings of new genera/species. We address this question by analyzing the species accumulation curves of representative epigean and hypogean genera or higher taxa, and by comparing these curves to the progressive increase of temperature observed in the 20th century. The curves were constructed for 12 generic or suprageneric taxa (table A.3 of the appendix) that are representative of European fauna and of the full range of adaptations from the epigean life to the most specialized troglomorphic habits. Within this context, we hypothesize that the more a group is sensitive to increasing temperatures, the higher will be the number of new species found, especially in the late decades of the 20th century when the surface warming trend is maximum [19]. Basic assumptions are: (a) because of sensitivity to winter frosts, habitat and range extent of hypogean carabid beetles tend to expand if the zone exposed to deep daily/monthly temperatures retreats towards the surface; (b) habitat shifts should have occurred especially after 1970, when the anthropogenic temperature increase was particularly marked [19], inducing for example 'laurophyllisation'-the spread of evergreen broadleaved species in the submediterranean forests [20]; (c) the warming effects on habitat expansion should be more evident in areas with more continental climate or where the winter frosts are relatively severe, e.g. in Alpine chains and the Balkans compared to Western Europe (Pyrenees); (d) Europe is an especially suitable region to study species accumulation curves because investigations of below ground animals started very early, around the middle of 19th century.
H2-Sampling intensity hypothesis. An alternative explanation to H1 is that the number of biospeologists involved in collecting carabid beetles in subterranean environments may have increased in the last decades in such a way that previously neglected caves and taxa were finally discovered. The systematic use of long-time trapping in the MSS and new cave exploring techniques could have given a substantial 'speed up' to new findings, especially by allowing surveys of deeper galleries and shafts. We address this second hypothesis by counting the authors and co-authors of all the descriptions recorded in the crucial taxa of highly evolved cavernicolous carabids, and comparing species/decade statistics with corresponding author numbers, bearing in mind that in most papers (about 70%) collectors and descriptors coincide.
Before addressing these two hypotheses, it is useful to review the ecological classification of hypogean organisms, which has been recently reexamined as follows [15]: (i) troglobionts (strongly bound to hypogean habitats), (ii) eutroglophiles ('essentially epigean species able to maintain a permanent subterranean population'), (iii) subtroglophiles (species inclined to perpetually or temporarily inhabit hypogean habitats but needing epigean environments for some biological functions, e.g. reproduction), (iv) trogloxenes (sporadic in caves). A second evolutionary pathway, that of deep soil dwellers (endogeous forms), is not considered in this work, because they are not strictly adapted to life in caves [4] (table A.1 of appendix).
Plate 1. From epigean to hypogean morphs of carabid beetles (hypogean evolution). From left above to right below: Harpalus distinguendus and Amara lunicollis, Italy, partly seed feeders often found on vegetation; Carabus auronitens, Eastern Alps, terricolous predator; Bembidion eques and B. illigeri, Germany, river bank dwellers.
Concerning the adaptation to the hypogean domain, epigean forms of carabid beetles are mostly winged and show well pigmented, metallic, colorful or black-brown bodies, well developed eyes and appendages of normal length. The cave dwelling entities are normally yellow-brownish or pale, the eyes being almost completely or totally reduced (anophthalmic), and are characterized by longer antennae and legs and probably an enhanced tactile-olfactory sensorial set. Common 'troglomorphic' characteristics are found in the specialization of sensory organs (touch chemoreceptors, hygroreceptors, thermoreceptors, pressure receptors), elongation of appendages and foot modifications, eyes reduction, pigment and wings reduction and increased egg volume [4,13]. Circadian rhythms of locomotory activity also show a progressive regression in cave beetles [21]. There is a general agreement in defining the status of some extremely specialized taxa as a sort of 'blind end' of subterranean evolution. Their legs and antennae are long, and the swollen elytra form a 'subelytral chamber', perhaps a sort of humid air-volume protecting the animals against rapid desiccation. These extreme specialists are called 'aphaenopsian' and represent the most typical troglobionts.
Hypogean beetles are known to be highly stenothermic and stenohygric, able to react quickly to temperature and/or air humidity variations of their habitats [22]. Most hypogean carabids are adapted to temperatures of 10-12 °C or less, and to relative humidity between 95 and 100% [6]. For example Typhlotrechus bilimeki conducts its entire life cycle [23] at a temperature of 4-8 °C, and reproduces [24] during the 'summer' at a stable temperature of 8 ± 1 °C, and air humidity oscillating between 98 and 100%. Several Anophthalmus and Duvalius species are bound to even lower temperatures, sometimes demonstrating impressive activity rates in ice dropping caves or in alpine calcareous soils at temperatures of 0-5 °C. Climate changes of the past are thought to be important driving forces of subterranean evolution [17]; for example during the Messinian [25] salinity crisis or the Ice Ages [26] troglophilic populations were forced into even deeper habitats. The Ice Ages appear to be the main causes of the actual relict distribution of hypogean carabid lineages in the mountains of Middle and Southern Europe [3,27].
Methods
In this study only hypogean 'cavernicolous' forms [4] are included, as defined in table A.1 of the appendix, i.e. only beetles that are seen as inhabitants of the underground compartment (caves and MSS). Table A.2 of the appendix provides an overview of all hypogean taxa of Europe. To reconstruct the time sequence variations observed in discoveries of epigean taxa versus hypogean ones, a set of 12 diverse genera or higher taxa was chosen, which are listed and described in table A.3 of the appendix. On the basis of the fauna Europaea database [28] we ordered chronologically all the findings of hypogean carabids starting from 1840. We then compared the species accumulation curve of the western palearctic region [27] (figure 2(a), over 3000 species) with that of the 12 significant carabid genera/taxa groups selected for the European area ( figure 2(b)). For each genus or taxa group a chronological list was compiled excluding the nominal forms but including all the subspecies ascertained for any single species. The fauna Europaea database was used for the status of all taxa included in this study, and the decade 2001-2010 was completed by recent descriptions and monographs [29]. The actual discovery dates of type specimens were reconstructed at least for the last decades, and overall more than 2800 taxa were included in the counts (table A.3). For all taxa/groups the discovery dates were ordered chronologically and arranged in: (1) species or subspecies descriptions/decade and (2) cumulated values/decade. An additional set of decadal values were calculated counting the hypogean genera for Europe, (inclusive of endogeous forms), but considering only the date of discovery of the first species.
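The construction of the per-decade counts and cumulated values described above can be sketched as follows; the description years are hypothetical and serve only to show the binning step.

```python
from collections import Counter

# Hypothetical description years for one genus (in practice taken from Fauna Europaea).
years = [1856, 1871, 1923, 1924, 1927, 1931, 1955, 1972, 1978, 1983, 1999, 2004, 2008]

decade_counts = Counter((y // 10) * 10 for y in years)    # descriptions per decade
cumulative, total = {}, 0
for decade in range(1840, 2011, 10):
    total += decade_counts.get(decade, 0)
    cumulative[decade] = total                            # cumulated values per decade

for decade in range(1840, 2011, 10):
    print(decade, decade_counts.get(decade, 0), cumulative[decade])
```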
To obtain a scale of increasing adaptation towards the subterranean environment, the degree of adaptation of the taxa of table A.3 was evaluated and ordered on the basis of their troglomorph evolution using the main behavioral and morphological features listed in table A.4 of the appendix, which presents the results of this evaluation by ranking the twelve genera on a scale from −5 (Amara) to 14 (Aphaenops). The twelve genera/taxon groups were roughly classified in eight steps: seed eaters, 'terricolous', river bank dwellers, 'lapidicolous', troglophilous, partly MSS dwellers, MSS or caves, most troglobionts. Two numeric indices were then calculated based on this sequence of genera: (1) the sensitivity index, which quantifies the influence of the last warming phase on the discovery of new taxa (calculated as the percentage of forms found after 1970 on the total of all species/subspecies known so far); (2) the 'median decade', i.e. the decade in which 50% of all forms known for the taxon was reached. This second index provides us with a rough evaluation of the 'resilience' to discovery expressed by each taxon/group.
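Both indices defined above can be computed directly from the list of description years, as in the short sketch below (the years are again hypothetical placeholders).

```python
def sensitivity_index(years):
    """Percentage of taxa described after 1970."""
    return 100.0 * sum(1 for y in years if y > 1970) / len(years)

def median_decade(years):
    """Decade in which the cumulated count first reaches 50% of all known taxa."""
    years = sorted(years)
    half_year = years[(len(years) - 1) // 2]  # year of the description reaching 50%
    return (half_year // 10) * 10

years = [1856, 1871, 1923, 1924, 1927, 1931, 1955, 1972, 1978, 1983, 1999, 2004, 2008]
print(sensitivity_index(years), median_decade(years))
```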
The temperature data used in figures 3(b)-(d) were taken from the half degree resolution dataset produced by the Climatic Research Unit (CRU) of the University of East Anglia (New et al, 2000) averaged over the regions shown in figure S1 (supplementary material available at stacks.iop.org/ ERL/8/044047/mmedia).
To test the H2 hypothesis, all authors of the descriptions of discoveries were ordered per decade (five year periods for the Duvalius species), and the correlation coefficient of authors versus the temporally cumulated taxa was calculated. The number of authors/decade was corrected by adding specialists known to be certainly active in the same period. Furthermore, to test the H1 hypothesis we calculated the correlation coefficient between the cumulated taxa curves of Duvalius, Anophthalmus, Aphaenops, Orotrechus, and all European genera, and the decadal variations of the northern hemisphere land-surface temperatures ( • C) from 1850 to 2010 relative to the 1961-1990 mean, as expressed by the smoothed curve of CRUTEM4 [30], which are in line with the trends in European temperatures. Figure S2 (available at stacks.iop.org/ ERL/8/044047/mmedia) provides a graphic illustration of our temperature hypothesis as applied to a cave rich massif.
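The two correlations used to weigh H1 against H2 can be reproduced, in essence, with a few lines of code: the decade-cumulated taxa counts are correlated first with the decadal temperature anomalies and then with the number of describing authors per decade. The arrays below are illustrative placeholders, not the CRUTEM4 or author data used in the study.

```python
import numpy as np

# One value per decade (1850s ... 2000s); all three series are hypothetical placeholders.
cumulated_taxa = np.array([2, 5, 9, 15, 24, 40, 66, 80, 90, 95, 105, 125, 150, 180, 210, 240])
temp_anomaly = np.array([-0.3, -0.3, -0.25, -0.3, -0.3, -0.2, -0.1, 0.0, 0.0, -0.05,
                         0.0, 0.1, 0.25, 0.4, 0.6, 0.8])
authors_per_decade = np.array([1, 1, 2, 2, 3, 4, 6, 5, 4, 4, 5, 6, 8, 9, 10, 10])

r_temperature = np.corrcoef(cumulated_taxa, temp_anomaly)[0, 1]    # hypothesis H1
r_authors = np.corrcoef(cumulated_taxa, authors_per_decade)[0, 1]  # hypothesis H2
print(f"taxa vs temperature: r = {r_temperature:.2f}; taxa vs authors: r = {r_authors:.2f}")
```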
Results
The geographic distribution of discoveries after 1970 is presented in figure 1. Epigean taxa (Amara, Harpalus, Carabus, (figures 2(a) and (b))) show regular sigmoid curves, while all true cave dwelling taxa (from Duvalius to Anophthalmus) show curves far from saturation and in most cases characterized by two peaks in discoveries, the first around the 1920/30s and the second after the 1970/80s (figures 2(c) and (d)). These periods correspond to times of maximum warming rates globally [19]. Mountain dwellers (Pterostichini) or subtroglophile/eutroglophile forms (Laemostenus, Trechus) show intermediate patterns between the saturation and double peaked ones. Also, the sequence of discoveries of hypogean genera in Europe ( figure 3(a)) shows in the past century a first peak in the 20s and a second, impressive burst in the last three decades. In fact, this last period is characterized by the appearance of several 'aphaenopsian', highly specialized forms, and by the outbreak of non-trechine troglomorph taxa along with the unexpected new subtribe Lovriciina, with three genera and four species [31]. These peaks in new discoveries also correspond to periods of maximum global warming [19]. To illustrate regional differences observed across the European area, we compare accumulation curves of Aphaenops (Pyrenees), Anophthalmus (Eastern Alps) and Duvalius (European-Mediterranean mountains) with mean annual and winter-spring temperatures over the respective areas (figures 3(b)-(d)). It appears that in Western Europe the warming has been relatively continuous in time, without a plateau in the 50s, whereas in the Eastern regions the warming shows two periods of maxima, the 10-20s and after the 70s, corresponding to peaks of trechine findings, with flat temperatures and low trechine discoveries in 1950-1970.
Thereafter we measure the sensitivity of each taxon/group in two ways: (i) counting the % of new species/subspecies found after 1970 (sensitivity index) and (ii) by recording the medium decade in which each taxon scored 50% of the actually known species. All taxa/groups are ordered on the basis on their troglomorph evolution using main behavioral and morphological features (table A.4 of the appendix), obtaining a scale of 12 levels, roughly classified in eight steps: seed eaters, 'terricolous', river bank dwellers, 'lapidicolous', troglophilous, partly MSS dwellers, MSS or caves, most troglobionts. These two indices are shown in figures 4(a) and (b). The sensitivity index for each level of troglomorph evolution (figure 4(a)) shows that for most cave dwellers about 20% of taxa was found after 1970 (lower values are observed in Pyrenean genera). Orotrechus, a genus in which many new highly specialized forms were found in the last decades, shows the most delayed median decade index, but also for other markedly troglomorph groups the median decade index peaks around 1940-50 ( figure 4(b)).
Figures 2 and 3 clearly show that in Europe only troglobitic carabids (and the eutroglophilous Laemostenus species)
were increasingly influenced by the pattern of warming periods, suggesting that the rising temperatures may have allowed previously unknown 'hidden' species/populations to settle new macrocaves and MSS layers closer to the soil surface, thereby facilitating their discovery by humans. Figure 4 quantifies the sensitivity of each taxon/group to the warming trend and confirms that 'resilience' to discovery concentrates in hypogean carabids.
Finally, we compare the correlation coefficients of the cumulated taxa curves of the highly evolved Trechine or European carabid genera versus: (1) the decadal northern hemisphere land-surface temperature anomalies (hypothesis H1) and (2) the number of describing authors per decade (hypothesis H2).
Discussion
In this letter we reconstructed a large set of species/time accumulation curves of the largest carabid genera/taxa groups in Europe and attempted to place these curves within the context of recent warming trends globally and over Europe (figures 2(b)-(d)). We found that new discoveries of European cave species are far from saturation, but show peaks in correspondence of periods of maximum warming, in particular in areas that have been explored since many years. Even in the best explored countries of Europe, as in Northeastern Italy or Slovenia, where the Science of Biospeology started [5], and for apparently 'well known' genera such as Orotrechus and Anophthalmus, an increase of more than 30% of taxa was recorded after 1970 ( figure 4(a)). Moreover, many of these new findings, especially from the Southern Alpine range and in the Dinaric chains, concern highly evolved troglobiont or aphaenopsian forms, normally living deep inside and in the more protected portions of the underground compartment. Last but not least, only in the last decades Europe reveals to be a Pandora's box of carabid (sub)tribes previously unknown from the hypogean environment (Promecognathini, Lovriciina) or shared with other continents (Platynini, Zuphiini). These results suggest that the crowding of highly troglomorph findings in the last decades is not only a consequence of intensified research or technology improvements but also a result of the upward fluctuations of subterranean carabid populations driven by higher temperatures associated with global warming.
In fact, to support this conclusion we quantitatively examined the two hypotheses that the increase in new findings is related to increasing temperatures (hypothesis H1 above) or to an increasing number of biospeologists and sampling intensity (hypothesis H2 above). We found high correlation coefficients between the species/genera accumulation curves of true cave taxa and decadal temperature anomalies of the northern hemisphere continents, in the range of 0.84-0.92. By comparison, the author numbers are less strongly related to the species curves, with correlation values around 0.60 (0.56-0.65), and even 0.15 (for Anophthalmus). This clearly suggests that changes in sampling intensities alone cannot explain the increase in new findings, but that the contribution from rising temperatures might be not only significant, but even dominant.
An alternative explanation for the increase in new findings in the last decades might be related to the fact that small species of insects tend to be discovered later than medium sized or larger ones, and that many cave dwellers are mostly of small size. In fact, a recent paper [32] demonstrates that in the western Palaearctic area the discovery of small species on average started later than that of larger ones, but this conflicts with the fact that in several genera the large size, aphaenopsian forms have been detected only in the last years (Orotrechus, Allegrettia, Dalyat). Moreover, the global warming effect hypothesis is also supported by the fact that a relevant number of new descriptions concerns specimens or populations that appeared suddenly in long-time explored caves (Lessinodytes, several Duvalius, see supplementary materials available at stacks.iop.org/ERL/8/044047/mmedia).
In conclusion, our results indicate that because of their extremely high sensitivity to abiotic factors, in particular temperature, it is possible that hypogean carabids can be intriguing indicators of climate change, and future work will address the role of possible additional factors acting on more local scales, such as precipitation and cave temperature and humidity. The habitat shifts of these beetles may thus be used as an ecosystem response indicator of global warming, which stresses the importance of a more continuous monitoring of selected cave populations.
Acknowledgments
We thank Dr Dimitar Ouzunov (S Fili, Cosenza, Calabria) for help in constructing figure 1 starting from GIS data. The biological part of the research was financed by the Italian Ministry for University and Education (PRIN Project 2009-code 200947YRB9, coordinated by PB). The authors are indebted to two anonymous referees who greatly helped in improving the quality of the letter, and to all speleologists and biospeleologists who in recent years kindly provided literature and unpublished data or help in the field. Author contributions: PB and FW planned the study; PB, FG and AC contributed equally to the manuscript; GC constructed most of the species/genera databases; LM elaborated temperature and climate data at regional scale; RP analyzed species/author/temperature pattern relationships. All authors discussed the analysis and results and commented on the text.
Additional information
The authors declare no competing financial interests.
We reconstructed the species and subspecies descriptions/decade and their cumulated curves in 12 genera or higher groups, selected according to their habitat preferences and adaptation to a subterranean way of life. Starting from the markedly epigean ones, the taxa groups are listed in table A.3.
A.3.2. Ranking the genera/taxa groups according to their ability to colonize the hypogean domain.
Considering behavioral and morpho-ecological characteristics, the genera of table A.3 can be arranged as in table A.4, obtaining a continuous transition from true epigean forms to the most cavernicolous ones. Each trait found in the taxon/group was assigned a positive or negative value based on the following criteria: (a) metallic body (reflection by photonic crystals of the cuticula, well developed in diurnally active sun exposed insects): −1; (b) plant climbing ability: −1; (c) vegetal food intake (plant seeds or sprouts): −1; (d) eyes large or of normal size: −1; (e) depigmented cuticula: 1; (f) eyes small in at least some species (but with more than 50 ommatidia): 1; (g) at least some species microphthalmic: 1; (h) blind (anophthalmic): 1; (i) cuticula testaceous (reddish brown): 1; (k) body entirely yellow: 1, or pale: 2; (l) elongated appendages: 1, extremely elongated: 2; (m) specialized trichobothria: 1; (n) subelytral chamber: 1.
The first five groups (Amara-Pterostichini) are typical trogloxenes; Trechus and Laemostenus are often found in caves, and especially the Laemostenus of the subgenus Antiphodrus may be considered eutroglophiles living in the first five/ten meters of the MSS or of the karstic microcaverns. They represent a good example of transition between trogloxene and eutroglophile habits in Europe. The Duvalius are in most cases anophthalmic and testaceous or yellow, but many species are microphthalmic and darker. The more evolved forms live in caves, while the less evolved are frequently found in the MSS, often at very low temperatures. The pyrenean genus Geotrechus shows similar features, but all species are blind, the legs are short and the elytrae flat, with no subelytral chamber. In Orotrechus the legs vary from short to longer, some late discovered forms are 'aphaenopsian', and they are found both in the MSS as well as in caves. In Anophthalmus there is a clear shift of most species into caves, and many species are tied to very low temperatures (life in 'ice caves'), while in the related genus Aphaenopidius a subelytral chamber is somewhat developed. Finally, in Aphaenops all species show a more or less pronounced 'aphaenopsian' habit, long appendages and a subelytral chamber.
A case of clonal seborrheic keratosis with characteristic dermoscopic features
To the Editor: A 40-year-old woman presented with a gradually enlarging red oval plaque with a rough surface on the lateral aspect of the left leg for 4 years. No ulceration, crust, exudation, or bleeding was seen. Physical examination showed the lesion was a well-defined, red flat plaque, 1.5 cm × 1.0 cm in diameter, medium hardness, and non-tender [Figure 1A]. There were no systematic abnormalities and the patient had no complaints of pain or itching. Dermoscopic examination revealed multiple brown globules, dotted, and globular vessels on the red background. Several follicular keratotic plugs and multiple milia-like cysts could also be seen. The lesion had a demarcated border [Figure 1B and 1C].
After surgical removal, the specimen was sent for pathologic examination. The post-operative pathology revealed that the proliferative nests of basaloid cells contained melanin [ Figure 1D], along with hyperkeratosis and acanthosis [ Figure 1E]. In addition, there were follicular keratotic plugs and horn cysts. Some dilated capillaries can be seen in the upper dermis [ Figure 1E]. The pathologic diagnosis is clonal seborrheic keratosis (SK).
The clonal SK is a rare variant of SK. Dermoscopy is a noninvasive diagnostic tool widely used in the world, which can provide auxiliary information for the diagnosis of SK. The most common dermoscopic characteristics of typical SK are comedo-like openings, sharply demarcated border, milia-like cysts, and hairpin vessels. [1] Although dermoscopy is a useful tool for the correct diagnosis of typical SK, the dermoscopic manifestations of the lesion, in this case, contain only some of the common characteristics of SK. And the distinct feature within the red background is the central brown globules, which increases the difficulty of accurate diagnosis of this case. As reported by Uzuncakmak et al, [2] the overall architecture of clonal SK usually presents a dermoscopic clod pattern, which can be seen in congenital nevi, Spitz nevi, and malignant melanoma. Also, many dotted and globular vessels were observed, but vascular structures were less prominent in previous reports of clonal SK. [2,3] In several cases, we can even see the large blue-gray ovoid nests, which are generally considered a dermoscopic feature of basal cell carcinoma. [4,5] The clonal SK may present a dermoscopic pitfall. [4] Compared with typical SK, clonal SK has many overlapping dermoscopic manifestations, so careful differential diagnosis is needed when using dermoscopy to assist the diagnosis of clonal SK. [2,4] The histopathology of a clonal SK is different from that of a typical SK. [6] In this case, histopathology examination showed that the basaloid cells gathered into nests in the thickened epidermis, clearly bounded by the surrounding cells. This is known as intra-epidermal epithelioma of Borst-Jadassohn [Figure 1D], which corresponds to the "brown globules" seen on dermoscopy. Borst-Jadassohn appearance is only a histopathologic phenomenon that can be found in some diseases, including mainly clonal SK, hydroacanthoma simplex, and Bowen's disease. Besides, there were dilated capillaries in the upper dermis, which manifested as dotted or globular vessels on dermoscopy.
In conclusion, the peculiarity of clonal SK lies in the lack of the characteristic manifestations of typical SK in both clinical and dermoscopic findings. Therefore, histopathologic examination should be performed to establish an accurate diagnosis in this type of lesion and avoid misdiagnosis.
Declaration of patient consent
The authors certify that they have obtained the appropriate patient consent form. In the form, the patient provided her consent for her images and other clinical information to be reported in the journal. The patient understands that her name and initials will not be published and that due efforts will be made to conceal her identity but that anonymity cannot be guaranteed.
Funding
This work was supported by grants from the Beijing Natural Science Foundation (No. 7182127) and the National Natural Science Foundation of China (No. 61871011).
Integrated Physical Therapy in a Unique Case of Holstein-Lewis Fracture With Radial Palsy: A Case Report
The term "Holstein-Lewis fracture" describes a spiral fracture that occurs in the shaft of the humerus at its distal third, which has been linked to radial nerve palsy in adults, and operative treatment is the preferred method of treating the trapped nerve at the fracture site. This paper describes a clinical case involving a 20-year-old male patient demonstrating a humeral fracture syndrome accompanied by complications associated with radial nerve palsy. After the necessary investigation, he was diagnosed with a Holstein-Lewis fracture with radial nerve paralysis; he underwent open reduction internal fixation (ORIF), after which he was referred to physical therapy. Developing a successful postoperative rehabilitation program that consists mostly of functional physical therapy interventions is essential for the treatment of this condition. Outcome measures like the Numerical Pain Rating Scale (NPRS), Disabilities of the Arm, Shoulder, and Hand (DASH) score, and Patient-Rated Wrist Evaluation (PRWE) score were recorded before and after rehabilitation, and pain reduction, improvement in strength, range of motion (ROM), grip strength, and activities of daily living (ADL) were found. The purpose of this case report is to present a comprehensive treatment plan that includes ROM exercises, cryotherapy, and strengthening of grip using a robotic glove for a patient who had a wrist drop and underwent ORIF surgery. This tailored intervention was effective in speeding up the return of functional abilities and improving function in ADLs.
Introduction
Holstein and Lewis described a unique kind of fracture of the humeral shaft with a specific propensity for radial nerve palsy. This fracture is situated within the distal one-third of the humeral shaft, presenting as a spiral type. The distal fragment of bone consistently exhibits proximal displacement, with its proximal end displaying radial deviation. The entrapment of the radial nerve occurs at the fracture site, and the presence of a comminuted fragment introduces a potential risk for nerve damage due to the oblique surface of the distal end of the proximal fragment [1]. The fracture line extends from the proximolateral to the distomedial plane. In accordance with the Orthopaedic Trauma Association (OTA) classification, humeral shaft fractures categorized as type 12A1.3 meet the criteria for classification as a Holstein-Lewis fracture, and the radial nerve is frequently injured together with the fracture line, which is in close proximity to the elbow joint at its distal end. The annual occurrence varies between 13 and 20 per 100,000 individuals and has been observed to increase with age [2]. The impact of the injury causes the proximal fragment to be pushed distally, which displaces the intermuscular septum and the radial nerve that is located within the septum's foramen. Concurrently, it causes entrapment or laceration of the radial nerve between fragments of the bone. The incidence of radial nerve damage in adults with humeral shaft fractures varies from 7% to 17% [3].
The preferred treatment for this type of fracture or injury consists of primary open reduction and internal fixation (ORIF). Several surgical methods have been implemented to treat fractures of the humeral shaft occurring at its distal third. In most clinical settings, the posterior approach is favoured for surgical procedures [4,5]. Operative intervention has demonstrated enhanced reliability for immediate fracture stability, providing predictable alignment and enabling early elbow mobilization, although there are potential complications such as iatrogenic nerve damage, olecranon impingement, infection, and loosening of hardware [6]. An additional issue with such methods is that they can result in muscular weakness and restricted range of motion (ROM) due to injuries to the triceps brachii or brachioradialis. Moreover, regaining pre-injury functions, such as elbow motion, muscle strength, and grasping, is crucial [7]. The injury mostly affects the elbow and shoulder, which has an impact on the person's entire functional capacity. Therefore, the intent of physical therapy must be to enhance upper-limb function [8]. During the recovery stage, exercise and muscle retraining preserve joint ROM and enhance motor function recovery when muscle reinnervation occurs [9]. An efficient rehabilitation program is necessary to avoid joint stiffness and restore ROM following surgery for distal humerus intra-articular fractures [10].
The hallmark of a radial nerve injury is a wrist drop. The non-functioning wrist extensors are overpowered by flexor tone, causing the hand to flex [11]. Individuals with loss of hand function have significant challenges with gripping, grasping, and manipulating objects, which makes it difficult for them to carry out daily tasks on their own [12]. The hand extension robotic orthosis glove was designed through an iterative method to help people with major hand impairments. Portability, lightweight qualities, and ease of setup and use are highlighted in this design. The robotic apparatus is composed of a batting glove equipped with artificial tendons integrated into the fingers of the glove. A linear actuator pulls and pushes the tendons to flex and extend the fingers [13]. In this case report, we present the case of a 20-year-old male with a Holstein-Lewis fracture associated with radial nerve palsy, treated with ORIF, and describe how physical therapy contributed to his improvement. Our study's goal was to assess the impact of using a robotic glove on early wrist mobility and grip strength in the case of a wrist drop related to a distal third humerus fracture.
Patient information
We present the case of a 20-year-old male who experienced a road traffic accident and was brought to the emergency casualty department of our hospital. He had been hit by a four-wheeler, and an X-ray investigation revealed a Holstein-Lewis fracture on the right side. He experienced severe pain in his right arm, which was aggravated during movement and relieved with rest and medications. The pain was acute in onset, continuous, and non-radiating in nature. On observation, the shoulder was adducted with the elbow in 45-degree flexion, and the forearm was in a semi-prone position. A bony deformity was noted over the distal arm. He had no history of head trauma. The patient was managed with ORIF with plate osteosynthesis and radial nerve exploration under nerve block. After surgical repair, a tailor-made physical therapy regimen was started.
Clinical findings
The patient's consent was acquired before the examination. He was cooperative, conscious, and well-oriented to person, place, and time. On clinical examination, the patient was afebrile and maintained hemodynamic stability. On observation, the arm was supported with a sling. Swelling was present over the distal arm. No previous surgical scars, discharging sinuses, or dilated veins were seen. On palpation, the local temperature was raised over the right arm. Grade 3 tenderness was noted over the midshaft humerus. Wrist and finger extension were absent; wrist and finger flexion were present. Upon examination, the patient demonstrated a right-sided wrist drop, exhibiting complete passive wrist joint ROM and restricted active and passive ROM in the shoulder for flexion, abduction, and internal and external rotation. Strength was notably diminished in the right upper limb. The patient reported experiencing intermittent and dull pain, with an activity-related pain rating of 7/10 and a pain rating of 3/10 at rest, as assessed using the Numerical Pain Rating Scale (NPRS).
Radiological findings
The patient underwent an investigational X-ray of the arm, which revealed a displaced distal 1/3rd humerus fracture, as shown in Figure 1.
FIGURE 1: Preoperative X-ray of the arm (lateral view)
The red rectangle indicates a distal 1/3rd humerus fracture (Holstein-Lewis fracture).
Therapeutic intervention
The physiotherapist designed tailored exercise sessions based on the patient's clinical status. Table 1 depicts the physical therapy protocol. On the second postoperative day, the patient started active-assisted physical therapy with elbow movements while wearing a dressing and a wide-arm polysling for comfort.
FIGURE 3: Patient performing wrist and finger flexion using robotic gloves
Figure 4 shows the patient performing wrist and finger extension with the help of robotic gloves.
Follow-up and outcome measures
For four weeks, the patient underwent a structured physical therapy regimen, followed by a subsequent follow-up evaluation. The findings of the manual muscle testing of the upper limb are shown in Table 2 (grading key: 0, no contraction palpated; 3-, some but not complete range of motion against gravity; 4, complete range of motion against gravity with moderate resistance). Table 3 depicts the ROM of the upper limb. The pre- and post-treatment findings (right side) of the outcome measures are shown in Table 4.
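For readers unfamiliar with how the reported questionnaire scores are derived, the sketch below implements the standard DASH scoring rule (mean of the completed 1-5 item responses, rescaled to 0-100, valid only when at least 27 of 30 items are answered). The item responses shown are hypothetical and are not the questionnaire data from this case.

```python
def dash_score(responses):
    """Standard DASH disability score from the 30 five-point items.

    Score = ((mean of completed responses) - 1) * 25, valid only when at
    least 27 of the 30 items are answered; returns None otherwise.
    """
    answered = [r for r in responses if r is not None]
    if len(answered) < 27:
        return None
    return (sum(answered) / len(answered) - 1) * 25

# Hypothetical item responses (1 = no difficulty ... 5 = unable); not the
# actual questionnaire data from this case report.
pre = [4] * 30                      # marked difficulty on every task
post = [2] * 28 + [None, None]      # two items skipped, still valid (28 >= 27)
print(dash_score(pre), dash_score(post))   # 75.0 and 25.0
```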
Discussion
A Holstein-Lewis fracture represents a humeral shaft fracture with a characteristic spiral pattern, potentially leading to radial nerve injury given the nerve's anatomical course around the humerus. Open reduction internal fixation is an ideal course of treatment to stabilize the fracture site, lower the likelihood of non-union, and promote a prompt return to everyday activities. A progressive physical therapy regimen can effectively achieve early restoration of elbow ROM. For wrist drop, a cock-up splint is employed, and physical therapy begins on the second postoperative day. A humerus fracture-related radial nerve palsy has a high incidence of spontaneous recovery [14]. Postoperative physical therapy care is essential in order to achieve early functional recovery and avoid subsequent complications. In this paper, we have discussed the clinical case of a 20-year-old male diagnosed with a Holstein-Lewis fracture and managed surgically with ORIF with plate osteosynthesis and nailing. Following ORIF, the patient had postoperative pain, loss of ROM and strength, and a wrist drop due to radial nerve injury, for which he underwent physical therapy management, which proved to be effective for early functional recovery. Strengthening exercises, ROM exercises, stretching, and joint mobilization are beneficial in helping the person become functionally independent [15].
A study by Casmus et al. demonstrated that a structured rehabilitation program incorporating proprioceptive exercises, plyometric ball activities, resistance tubing exercises, and dumbbell activities can effectively recover early ROM at the elbow joint [16]. A case study concluded that combining mirror therapy with conventional therapy after a humeral shaft fracture improves the function of the affected joint; therefore, it should be included in the fracture rehabilitation regimen [8]. The hand and fingers are vital anatomical structures for executing a wide range of functional activities in everyday life, especially for grasping and handling objects. Correia et al. found that individuals with compromised hand function attributable to spinal cord injury demonstrated significant enhancements in power and pinch grip forces, alongside improvements in the active ROM of the fingers, facilitated by glove assistance [17]. The robotic exoskeletal glove emerges as a viable assistive technology or adjunct rehabilitative intervention to address sensorimotor deficits and maximize upper limb and hand-related functional capacities for those who have hand paresis or paralysis after a stroke [18]. Radial nerve damage from a humerus fracture can result in serious and irreversible disability. Fader et al. found that the strategic application of synergistic upper extremity movement patterns in multiple planes, incorporating neuromuscular irradiation or overflow, and utilizing neuroplasticity led to enhanced strength and ROM [19]. Similarly, we used proprioceptive neuromuscular facilitation and found positive effects. We gave cryotherapy to reduce postoperative pain. According to Khadijah et al., cold compresses change the way pain is perceived, with an increased focus on the sensation of cold, which enhances comfort. Based on current theories and research, it may be concluded that cryotherapy is preferable to warm compresses in terms of reducing pain perception and improving comfort [20].
Conclusions
An integral component of the therapeutic approach for a Holstein-Lewis fracture is determining the optimal postoperative rehabilitation plan that emphasizes elements of functional physiotherapy. Meticulously designed rehabilitation protocols lead to better functional results after surgery. A wearable hand rehabilitation apparatus can help physiotherapists improve early fine motor movement, wrist mobility, grip strength, and the effectiveness of rehabilitation exercises. Physiotherapeutic interventions, encompassing cryotherapy, ROM exercises employing robotic gloves, and muscle-strengthening exercises, prove advantageous for patients with Holstein-Lewis fractures.
Disclosures: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Figure 2 shows the lateral postoperative radiograph of the arm.
FIGURE 2: Postoperative X-ray (lateral view). The red rectangle indicates open reduction internal fixation of the distal 1/3rd humerus fracture with plate osteosynthesis and nailing.
Figure 3 shows the patient performing wrist and finger flexion with the help of robotic gloves.
TABLE 3 : Upper limb range of motion (in degrees, right side)
N/A: not assessable
Optimal Resource Allocation for Joint Sensing and Communication: Multiple Targets and Clutters
We study contactless target probing based on stimulation by a radio frequency (RF) signal. The transmit signal is dispatched from a transmitter equipped with a two-dimensional antenna array. The reflected signals from the targets are then received at multiple distributed sensors. The observations at the sensors are amplified and forwarded to the fusion center. Afterwards, the fusion center performs space-time post-processing to extract the maximum common information between the received signal and the targets' impulse responses. Optimal power allocation at the transmitter and amplification at the sensors are investigated. The sum-power minimization problem turns out to be non-convex. We propose an efficient algorithm to solve this problem iteratively. Under maximum-ratio transmission (MRT), maximum-ratio combining (MRC) of the space-time received signal vector is the optimal receiver at sufficiently low signal-to-interference-plus-noise ratio (SINR). However, zero-forcing (ZF) at the fusion center outperforms MRC at higher SINR demands.
I. INTRODUCTION
Environment sensing aimed at characterizing the targets' materials requires efficient system design and parameter estimation. Targets to be identified can be classified into two categories, namely, active signal sources and passive objects. Assuming active signal sources, the signals are detected by multiple sensors and then forwarded to a fusion center for joint processing. The authors in [1], [2] provide an analytical solution for the power allocation at the sensors under sensor sum-power and individual power constraints. Furthermore, a similar system is investigated in [3]-[6] from different perspectives, including localization, scheduling and energy harvesting. In contrast to active signal sources, passive targets are silent, hence they require stimulation by external signal sources to be activated. The systems that deal with passive targets are known as radar systems. These systems can be designed to be collocated, i.e., the transmitter and receiver are embedded in a single device. The authors in [7] study the optimal waveform design in a collocated multi-input multi-output (MIMO) system with a single extended target. In that work, the mutual information between the transmit and received signals is maximized. Moreover, they study the waveform design for minimizing the mean-squared error of the channel estimate. The authors in [8] study a similar problem assuming multiple extended targets. For distributed transceivers, the authors in [9] study the optimal waveform design that maximizes the so-called Bhattacharyya distance. That work mainly focuses on single-target detection in a single-clutter environment.
Having multiple targets and clutters in the sensing environment, in this paper a radar system is exploited for material characterization purposes. This characterization can be fulfilled by estimating the second-order moment of the materials' responses. We address this radar system by utilizing a two-dimensional multi-antenna transmitter which pre-processes the signal given the position of the objects in the sensing environment. This design allows three-dimensional beamforming, which has been shown to be beneficial [10]-[14]. Here, we assume that the positions of the objects are known. Having obtained this information, for instance by the methods in [15] and the references therein for localization problems, the dispatched signal power at the objects' surfaces is maximized by transmitting in the direction of their steering vectors. The incident signal is then reflected by the objects in the environment. Sensing the response of the objects in a particular spectrum can help classify the material of the objects. For instance, in photo-acoustic imaging, the target is stimulated at a very high frequency spectrum, but the response is captured at the ultrasonic frequency range [16]-[18]. However, the reflection by the targets over the same transmit signal spectrum can also be helpful for identification purposes. Therefore, at the same frequency spectrum, the reflected signals from the objects are sensed, amplified at multiple sensors, and then forwarded to the central processing unit, which is referred to as the fusion center. The fusion center is equipped with multiple single-antenna baseband units with high-capacity links. The receive antennas at the fusion center observe a noisy version of the forwarded signals from the sensors, Fig. 1. Then, the fusion center performs post-processing for target response detection. For this purpose, we minimize the transmit power and sensor amplification under received signal-to-interference-plus-noise ratio (SINR) constraints. This problem turns out to be a signomial program, which is essentially a non-convex problem. We exploit an efficient algorithm to obtain a good sub-optimal solution iteratively.
II. SYSTEM MODEL
We consider a sensing environment with N targets of interest and N′ clutters, i.e., N + N′ objects in total. The RF signal from a single multi-antenna transmitter with an M × M′ planar antenna array stimulates the objects in the sensing environment, where there is a line-of-sight (LoS) between the transmitter and the objects. We assume that the transmit antennas are equi-distantly positioned (uniform linear arrays) along the two Cartesian basis dimensions. Here we consider the antenna at the center of coordinates as the reference antenna. By assuming half-wavelength distance between horizontal and vertical antenna elements, we obtain the steering vector a_i corresponding to object i, whose entries are indexed by m ∈ {0, · · · , M − 1} and m′ ∈ {0, · · · , M′ − 1}. Notice that the azimuth and elevation angles of object i are represented by θ_i and φ_i, respectively. Now, having the steering vectors of the targets, the transmit signal is formed as s = Σ_{j=1}^{N} u_j d_j, where u_j ∈ C^{MM′} specifies the beam direction towards the jth target and d_j is the transmit symbol for the jth target. The power allocated to the jth target is represented by p_j = E{|d_j|²}. Now, the reflected signals from the objects are given by x_i = l_i a_i^H s, ∀i ∈ N ∪ N^c, where N and its complement N^c are the sets of targets and clutters, respectively, i.e., N = {1, ..., N} and N^c = {N + 1, ..., N + N′}. Moreover, the response of object i to the incident RF signal is represented by l_i. Here, we assume that this response is a random variable with a Gaussian distribution. Therefore, estimating the second-order moment E{|l_i|²}, ∀i, helps classify the objects. Now the signal x_i, ∀i, is sensed by K distributed sensors, see Fig. 1. Then, the received signal at the kth sensor is given by y_k = Σ_{i∈N∪N^c} g_ik x_i + n_k, where g_ik ∈ C is the channel from object i towards sensor k and n_k ∈ C is the additive receiver noise at sensor k.
Here, we assume that the noise at all sensors follows identical and independent zero-mean Gaussian distributions, and the noise variance at sensor k is given by σ²_k. The signal is amplified at the sensors and forwarded to the fusion center in different time slots. Then, the post-processed received signal at the fusion center over R antennas and K time instants is obtained with V ∈ C^{KR×N} as the post-processing matrix, where v_j, ∀j ∈ N, denotes the jth target's post-processing vector, i.e., the jth column of V. Moreover, the power of the signal received at target i is represented by δ_i = E{|a_i^H s|²}. The vectors w_i and n′ are the equivalent channel and noise vectors, defined in terms of the amplification factor α_k at the kth sensor and the communication channel f_k ∈ C^R, ∀k ∈ K, from the kth sensor to the fusion center. Note that n_fc ∈ C^{KR} is the zero-mean Gaussian noise at the fusion center with R antennas over K time slots, with covariance matrix σ²_fc I. In what follows, we assume that the steering vectors corresponding to the targets are known at the transmitter, i.e., a_i, ∀i ∈ N. In contrast, the following knowledge is available at the fusion center: 1) a_i, ∀i ∈ N ∪ N^c; 2) g_ik, ∀i ∈ N ∪ N^c, k ∈ K; 3) f_k, ∀k ∈ K. In the next sections we discuss the pre- and post-processing schemes exploited at the transmitter and the fusion center, respectively.
A. Pre-processing
Given the steering vectors of the targets at the transmitter, maximum-ratio transmission (MRT) is the optimal scheme. In MRT, the transmit direction toward the jth target (j ∈ N) is aligned with the corresponding steering vector, i.e., a_j. Utilizing this filter at the transmitter, the received signal power at the ith object (either a target or a clutter) follows accordingly, where MM′ is the antenna gain for the jth target.
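As a rough numerical illustration of the pre-processing described above, the sketch below builds planar-array steering vectors and MRT beamformers for the two-target, one-clutter geometry used later in Section IV, and then evaluates the received power at each object. It is only a sketch: the exact steering-vector phase convention, the unit-norm scaling of the beamformers, and the unit power allocation are assumptions, not expressions taken from the paper.

```python
import numpy as np

def steering_vector(theta_deg, phi_deg, M, Mp):
    """Planar-array steering vector with half-wavelength element spacing.

    Assumed phase convention: pi*(m*sin(theta)*cos(phi) + mp*sin(theta)*sin(phi));
    the paper's exact expression is not reproduced here.
    """
    theta, phi = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
    m, mp = np.arange(M), np.arange(Mp)
    phase = np.pi * np.add.outer(m * np.sin(theta) * np.cos(phi),
                                 mp * np.sin(theta) * np.sin(phi))
    return np.exp(1j * phase).reshape(-1)  # length M*Mp

# Two targets and one clutter, with the angles used in Section IV.
angles = [(20, 40), (45, 30), (70, 85)]   # (azimuth, elevation) in degrees
M, Mp = 2, 2
A = np.stack([steering_vector(t, p, M, Mp) for t, p in angles])  # objects x antennas

# MRT towards the N = 2 targets with illustrative unit powers p_j.
p = np.array([1.0, 1.0])
U = (A[:2] / np.sqrt(M * Mp)).T            # unit-norm beamformers u_j (columns)

# Received power at object i: delta_i = sum_j p_j |a_i^H u_j|^2.
delta = [sum(p[j] * abs(A[i].conj() @ U[:, j]) ** 2 for j in range(len(p)))
         for i in range(len(angles))]
print(delta)  # targets see roughly the full array gain M*Mp, the clutter much less
```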
B. Post-processing
As can be noticed from (4), the post-processed signal includes both desired and interference components, where the jth column of the post-processing matrix V is denoted by v_j ∈ C^{KR}. Now, the post-processing filters, the power allocation per target, and the signal amplification at the sensors need to be designed to guarantee a certain threshold for differentiating the targets l_j, ∀j ∈ N. Here, mutual information is exploited as the information measure, I(z_j; l_j) = log_2(1 + ρ_j), ∀j ∈ N, where ρ_j is the SINR corresponding to the jth target, and mutual information is a monotonically increasing function of ρ_j. We formulate the SINR for target j as ρ_j = δ_j Q_j v_j^H w_j w_j^H v_j / (Σ_{j,int} + Σ_{j,ns} + Σ_{j,nfc}), ∀j ∈ N, where Σ_{j,int}, Σ_{j,ns} and Σ_{j,nfc} are the interference and noise variances, respectively. The equivalent sensor-noise covariance matrix observed in decoding the information of the jth target is denoted by A_n ∈ C^{KR×KR}, a block-diagonal matrix whose kth block is α_k σ_n² f_k f_k^H. Now, given MRT at the transmitter, we consider the following signal-combining strategies at the fusion center: A) maximum-ratio combining (MRC), which maximizes the signal-to-noise ratio (SNR); and B) zero-forcing (ZF), which maximizes the signal-to-interference ratio (SIR). In what follows we discuss these schemes.
1) Maximum-ratio combining: Assuming MRC at the fusion center, the signal-combining vector that maximizes the SNR is v_j^(MR) = w_j, ∀j ∈ N, which is less complex for practical implementations but does not consider the destructive effect of interference in the signal-combining phase. Utilizing MRC, we minimize the sum transmit power plus the sum power amplification at the sensors jointly.
2) Zero-forcing: Here, we force the interference to zero while decoding the signal of the jth target. This can be done in space-time by v_j^(ZF) ∈ null{w_1, ..., w_{j−1}, w_{j+1}, ..., w_{N+N′}}, where the ZF combining vector spans the null-space of the interference dimensions. The optimal combining vector within this null-space for the jth target is the jth column of the resulting zero-forcing matrix.

III. OPTIMIZATION PROBLEM

In this section, we formulate sum-power minimization problems under target SINR constraints. Exploiting maximum-ratio transmission (MRT) at the transmitter and maximum-ratio combining (MRC) at the fusion center, the sum transmit power plus sum sensor-amplification power minimization problem is formulated in (13), where the jth target SINR demand is denoted by ψ_j. Furthermore, the sum transmit power is restricted by P_max and the maximum amplification power of each sensor is limited by α_max. Evidently, the objective function is affine; however, the SINR constraints in (13a) produce a non-convex set. Utilizing MRT at the transmitter, the SINR expression for the jth target follows accordingly. Here, we assume that the thermal noise variances at the fusion center and at the sensors are equal, i.e., σ²_fc = σ²_{n_k} = σ², ∀k.
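Before reformulating the constraints, the following sketch contrasts the two combining strategies on randomly drawn equivalent channels w_i. It assumes unit per-object received powers and spatially white noise, which simplifies the paper's colored-noise model; the ZF combiner is obtained by projecting w_j onto the orthogonal complement of the other objects' channels.

```python
import numpy as np

rng = np.random.default_rng(0)
KR, N, Nc = 12, 2, 1                      # space-time dimension, targets, clutters
# Equivalent channels w_i, drawn at random purely for illustration.
W = (rng.standard_normal((KR, N + Nc)) +
     1j * rng.standard_normal((KR, N + Nc))) / np.sqrt(2)
sigma2 = 0.5                              # illustrative white-noise variance

def sinr(v, j):
    """SINR of target j for combiner v (unit per-object powers, white noise)."""
    desired = abs(v.conj() @ W[:, j]) ** 2
    interference = sum(abs(v.conj() @ W[:, i]) ** 2 for i in range(N + Nc) if i != j)
    noise = sigma2 * np.linalg.norm(v) ** 2
    return desired / (interference + noise)

for j in range(N):
    v_mrc = W[:, j]                                   # MRC: v_j = w_j
    others = np.delete(W, j, axis=1)
    Q, _ = np.linalg.qr(others)                       # orthonormal basis of the interference
    v_zf = W[:, j] - Q @ (Q.conj().T @ W[:, j])       # project w_j away from it (ZF)
    print(f"target {j}: SINR_MRC = {sinr(v_mrc, j):.2f}, SINR_ZF = {sinr(v_zf, j):.2f}")
```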
Having the SINR for the jth target, the constraint (13a) can be reformulated accordingly, where Σ_{j,des}, Σ_{j,ns} and Σ_{j,nfc} are posynomials and Σ_{j,int} is, in general, a signomial.
Lemma 1. Σ_{j,int} is a posynomial if the following constraint holds (∀i ≠ j and ∀k ≠ l, ∀ψ ∈ Z).

Proof. The expression in (16) is rewritten by collecting the terms in braces into Γ(α), with α = [α_1, · · · , α_K]. Notice that Γ(α) is the summation of K² monomial functions. The monomials corresponding to k = l have real positive values, since g_{jk} g*_{ik} g*_{jl} g_{il} = |g_{jk}|² |g_{ik}|² ≥ 0. The monomials corresponding to k ≠ l do not necessarily yield a positive value. Hence, Σ_{j,int} is a signomial function in p_j, ∀j ∈ N, and α_k, k ∈ K. The expression in (22) is positive if the condition of the lemma holds.

Here, we consider the general case where Σ_{j,int} is a signomial. In general, the expression in (16) can be reformulated as the difference of two posynomials, Σ_{j,int} = Σ^{(1)}_{j,int} − Σ^{(2)}_{j,int}. Hence, from the inequality constraint (19), we obtain (24). The left-hand side of the inequality constraint (24) is a ratio of posynomials, which cannot be converted to a convex function. Problem (13) is a signomial program (SP) [19], which can be converted to a complementary geometric program (GP). This program allows an upper-bound constraint on the ratio of two posynomials. The denominator of (24) is approximated by a monomial function (known as the condensation method [20]) based on the lower bound Σ_k c_k μ_k ≥ Π_k μ_k^{c_k}, which states the relationship between the arithmetic and geometric means. This lower bound on the denominator of (24) operates as an upper bound on the whole expression. By defining μ̃_k = c_k μ_k we obtain (25). Now, we utilize this inequality in the SINR constraint of the jth target in (24). The denominator of (24) is rewritten as a summation of monomials, where μ̃_{jk} are the individual monomials. Furthermore, K′ is the number of monomials that the posynomial Σ^{(2)}_{j,int} consists of, which can be quantified according to Lemma 1. Then, from (25) and (26), Σ̃_D is a function of the c_{jk}, which need to be optimized to fulfill the inequality with equality. For that, c_{jk} must be a function of α_k, ∀k, namely c_{jk} = μ̃_{jk}/Σ̃_{j,D}. Due to the inter-dependency of the optimization parameters, we optimize α_k, ∀k, and c_{jk} successively in an iterative fashion. That means c_{jk}, ∀j, k, is optimized for the current iteration based on the solution of μ̃_{jk} in the previous iteration. Notice that the lower bound in (27) is the approximation of Σ_D around any feasible α_k, ∀k, though sub-optimal. Hence, by improving α_k, ∀k, and p_j, ∀j ∈ N, at each iteration, the approximation around the new point (defined by c_{jk}) is utilized for the next iteration. The convergence of the algorithm is numerically illustrated in Section IV.
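The core of the successive approximation is the condensation step: the posynomial denominator is replaced by the weighted-AM-GM monomial lower bound, with the weights recomputed from the previous iterate. The toy example below illustrates this on a one-variable posynomial; the variable, coefficients, and exponents are made up for illustration and do not correspond to the actual problem data.

```python
import numpy as np

# Toy posynomial in one variable alpha: D(alpha) = 2*alpha**1.5 + 3/alpha.
# The coefficients and exponents are illustrative, not the problem data.
terms = [(2.0, 1.5), (3.0, -1.0)]

def posynomial(alpha):
    return sum(c * alpha ** e for c, e in terms)

def condensed_monomial(alpha, alpha0):
    """Monomial lower bound of the posynomial, tight at the expansion point alpha0.

    Weighted AM-GM: sum_k mu_k >= prod_k (mu_k / c_k)**c_k, with condensation
    weights c_k = mu_k(alpha0) / D(alpha0) taken from the previous iterate.
    """
    mu0 = np.array([c * alpha0 ** e for c, e in terms])
    weights = mu0 / mu0.sum()
    mu = np.array([c * alpha ** e for c, e in terms])
    return float(np.prod((mu / weights) ** weights))

alpha0 = 1.0
for alpha in [0.5, 1.0, 2.0]:
    print(alpha, posynomial(alpha), condensed_monomial(alpha, alpha0))
# The monomial equals D at alpha0 and lies below it elsewhere, so replacing the
# denominator of the SINR constraint by it yields a valid (conservative) GP
# constraint; the successive scheme re-evaluates the weights at each new iterate.
```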
IV. NUMERICAL RESULTS
In this section, we provide simulation results for a two-target single-clutter environment, i.e., N = 2, N′ = 1. The number of antennas at the transmitter is assumed to be 4, i.e., M = 2 and M′ = 2 (two antennas per dimension). The fusion center is equipped with 10 antennas, i.e., R = 10. We assume the 2 targets are at the azimuth and elevation angles θ = [20 45] and φ = [40 30], respectively. Moreover, there exists a single clutter at azimuth θ = 70 and elevation φ = 85. Furthermore, the sensors' maximum amplification factor is assumed to be equal to 2, i.e., α_max = 2. The noise variances at the sensors and the fusion center are assumed to be equal to 0.5, i.e., σ²_fc = σ²_{n_k} = 0.5, ∀k. Considering MRC at the receiver, the sum-power minimization problem is solved iteratively under per-target SINR constraints. This problem is a signomial program, which is converted into a geometric program and solved iteratively until convergence. The convergence of the algorithm for the joint transmit power and sensor amplification minimization problem is depicted in Fig. 2(a), where we observe fast convergence. Assuming maximum amplification at the sensors, the transmit power minimization problem is also a signomial problem, which is treated similarly. The minimum sum-power consumption for this case (maximum amplification) is compared to the case with optimal amplification in Fig. 2(b). With maximum amplification at the sensors, MRC is compared with ZF in Fig. 2(b). In this figure, we observe that MRC is optimal when the SINR demands are sufficiently low. However, ZF outperforms MRC as the interference increases; hence, it is efficient to zero-force the interference. Intuitively, with zero-forcing processing at the fusion center, the number of interference-free signaling dimensions becomes less than the number of available dimensions KR. This is due to reserving N + N′ − 1 dimensions for null steering, which leaves KR − N − N′ + 1 signaling dimensions. Therefore, comparing ZF and MRC, we notice the trade-off between sacrificing some dimensions in exchange for obtaining interference-free dimensions, and utilizing all dimensions.
Recurrence of Dupuytren’s contracture: A consensus-based definition
Purpose One of the major determinants of Dupuytren's disease (DD) treatment efficacy is recurrence of the contracture. Unfortunately, lack of agreement in the literature on what constitutes recurrence makes it nearly impossible to compare the multiple treatment alternatives available today. The aim of this study is to bring together an unbiased pool of experts to agree upon what should be considered a recurrence of DD after treatment and, from that consensus, establish a much-needed definition of DD recurrence. Methods To reach an expert consensus on the definition of recurrence we used the Delphi method and invited 43 Dupuytren's research and treatment experts from 10 countries to participate by answering a series of questionnaire rounds. After each round the answers were analyzed and the experts received a feedback report with another questionnaire round to further hone the definition. We defined consensus when at least 70% of the experts agreed on a topic. Results Twenty-one experts agreed to participate in this study. After four consensus rounds, we agreed that DD recurrence should be defined as "more than 20 degrees of contracture recurrence in any treated joint at one year post-treatment compared to six weeks post-treatment". In addition, "recurrence should be reported individually for every treated joint" and afterwards measurements should be repeated and reported yearly. Conclusion This study provides the most comprehensive definition to date of what should be considered recurrence of DD. These standardized criteria should allow us to better evaluate the many treatment alternatives.
Introduction
Recurrence of disease following any technique to correct the contracture(s) is one of the major setbacks in the treatment of Dupuytren's disease (DD). Since present techniques only treat the symptoms of this chronic and progressive disease, recurrence over time is inevitable in the majority of patients. Therefore, assessment of recurrence rates is an essential element in describing and comparing the efficacy of different treatment options for DD. Two separate systematic reviews [1,2] have recently identified a dire need for consensus on how to define recurrence of DD. This lack of a clear definition may partly explain why reported recurrence rates vary from 0% to 100% [3][4][5][6][7][8]. In addition, we have shown that applying the different definitions to a single dataset can change the resulting recurrence rates from 2% to 86% [1].
To obtain an internationally accepted and widely supported definition of recurrence for DD, a consensus agreement based on the experience and knowledge of an international group of renowned experts is needed. Therefore, the goal of this international study was to develop consensus on a single definition of recurrence of DD that is applicable in clinical and research settings.
Methods
In this study we used the Delphi method, which is designed to reach consensus between individuals using questionnaire-based surveys [9]. This expert-based consensus study did not involve participation of study subjects such as patients or non-patient volunteers. Therefore, no institutional review board approval was needed for the present study based on local law. Experts in the field of Dupuytren's disease (DD) were invited to participate in our Delphi study. To identify these experts, we selected all clinical DD-related PubMed articles that were published between 2005 and 2012. In addition, we used the articles from our systematic review to identify experts in the field of DD [1]. Either the first or last author of each article, based on the number of publications in the field of DD, was invited to participate. When multiple experts were identified from the same institution, only the most experienced expert was invited to participate. We excluded experts who no longer participated in the field, for example due to retirement, and authors who had published only a single DD-related paper.
In November 2012, 42 experts from ten countries in four continents were invited to participate. All experts were provided with information on the Delphi study as well as with a draft of our systematic review. Following Delphi guidelines, 51% agreement is considered consensus; however, we aimed for a minimum of 70% agreement for consensus. The identities of the other participating experts were not disclosed to the experts during the process.
In the first round, experts were asked to score the relevance of four different dimensions of recurrence to be included in a single definition of DD recurrence (first two columns of Table 1) using a 0-10 numerical scale and multiple choice questions. For example, we asked "On a scale from 0 to 10, how important is it to include the return of Dupuytren's nodules based on palpation or visual inspection in the definition of recurrence?" After each question, the experts could add a comment or explanation.
The first two authors analyzed the results and discussed the outcomes with the other authors. If 70% of the experts scored five or higher, the item was considered important for further consideration. These included items were discussed more in-depth in the following rounds.
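For concreteness, the two decision rules used in the rounds (item retention from the 0-10 relevance scores, and the 70% agreement threshold for consensus) can be written as a small sketch; the example scores below are hypothetical and not the experts' actual responses.

```python
def item_retained(scores, score_cutoff=5, share=0.70):
    """Keep a dimension for later rounds if at least `share` of the experts
    rated its relevance at `score_cutoff` or higher on the 0-10 scale."""
    return sum(s >= score_cutoff for s in scores) / len(scores) >= share

def consensus_reached(agree_votes, total_experts, share=0.70):
    """A statement is accepted once at least `share` of the experts agree."""
    return agree_votes / total_experts >= share

# Hypothetical round-1 relevance scores from 21 experts for one dimension.
scores = [8, 7, 9, 6, 5, 7, 8, 4, 9, 6, 7, 8, 5, 6, 7, 9, 3, 8, 7, 6, 5]
print(item_retained(scores))          # True: 19 of 21 experts scored >= 5
print(consensus_reached(15, 21))      # True: 15/21 is about 71%
print(consensus_reached(14, 21))      # False: 14/21 is about 67%
```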
In each following round, we provided feedback to the experts by summarizing the answers on the previous round in combination with a synopsis of anonymous comments. After this feedback, we asked the experts to answer each question again on which consensus was not yet reached. Topics on which consensus was reached were also presented but only with the opportunity for the experts to give additional comments. If experts did not complete a previous round before the deadline, they were still invited to the next round.
Results
Twenty-one experts (64%) from 10 countries participated in this study: 7 from North America, 13 from Europe, and 1 from Australia. A total of four rounds were needed to reach consensus. The response rate varied per round between 76% and 90% (Fig 1).
A first dimension scored by the experts was the location of recurrence. Consensus was that recurrence of Dupuytren's disease (DD) should be located in the operated area only, in order to differentiate recurrence from disease extension to other joints. In addition, since DD can affect multiple joints, fingers and hands, consensus was that recurrence should be measured in all treated joints, fingers and hands, regardless of whether full extension was reached during treatment. Experts also reached consensus that all treated joints should be scored individually when calculating a recurrence rate (Table 1).
The second dimension was whether a recurrence should be assessed based on the presence of nodules, cords and/or joint contractures. Experts agreed DD nodules and cords should not be explicitly taken into account and furthermore a recurrent joint contracture of at least 20 degrees in one joint is needed for a recurrence.
A third dimension was the timing of baseline measurements and follow-up. Experts agreed recurrence should be measured at one year post-treatment and should be compared to a baseline measurement. Consensus was that intra-operative measurements should not be used as a baseline value and, therefore, an assessment at six weeks after treatment was selected as a baseline. Since it is presently unclear from literature how recurrence develops over time, experts agreed to recommend yearly repeated measurements when feasible.
A fourth dimension consisted of scoring patients' characteristics, such as diathesis and patient perception of recurrence. Although it is clear that diathesis has a significant influence on recurrence, the experts agreed that information on diathesis should not be included into the definition, although it should be scored in every study. The experts also agreed that, while patient-rated information about recurrence can be relevant, it should not be included in a single definition of recurrence of DD.
After the last round, all 21 experts agreed to define recurrence of Dupuytren's disease after treatment as "an increase in joint contracture in any treated joint of at least 20 degrees at one year post-treatment compared to six weeks post-treatment". Additionally, although not part of the definition, the experts advised the community to 1) conduct studies that repeat measurements yearly to study the development of recurrence, and 2) measure and report recurrence rates for all treated joints individually (Table 2: implementation of the definition).

Table 1. The dimensions of recurrence (1-4) were presented to the experts and the resulting consensus on each dimension is presented. The last column shows the percentage of experts that agreed on each consensus, or a range of percentages when the outcome differed in more than one round of the Delphi study.
Discussion
Since the present lack of a consensus definition for recurrence of Dupuytren's disease makes it impossible to compare results between different studies, we conducted this international study to obtain consensus on a universal definition of recurrence of DD after treatment. Based on this, we propose to define recurrence of DD after treatment as "an increase in joint contracture in any treated joint of at least 20 degrees at one year post-treatment compared to six weeks post-treatment". The definition established in this study was obtained by evaluating four different dimensions of recurrence. The first dimension was the location of recurrence. Consensus was that only the operated or treated area should be considered and that all treated hands, fingers and joints should be included to calculate recurrence rates, which allows recurrence (in the same area) to be distinguished from disease extension (outside of the treated area). In addition, although additional measures such as a total passive extension deficit (TPED) can also be of value, consensus was that individual joint measurements should be used primarily. One expert stated: 'TPED is measured while all joints are being simultaneously passively extended. As such, it represents fixed joint contractures. This will yield a different measurement than the sum of measurements made of individual joint passive extension, while the proximal joint or distal joints in that same ray are allowed to flex.' Furthermore, a disadvantage of a TPED is that it includes non-affected joints and newly affected joints (disease extension), creating possible false-positive recurrence rates.
A second dimension considered including palpable nodules, palpable cords and contractures in the definition of recurrence. The experts unanimously agreed to include increase of contracture in the definition of recurrence. Furthermore, they agreed to exclude nodules and cords. The angular threshold for the contracture to be considered a recurrence was set at 20 degrees. There were two reasons for this threshold. Firstly, inherent measurement errors of goniometry are approximately 5-10 degrees and therefore a larger threshold is needed [10]. Secondly, 15-20 degrees is often considered an indication for a new intervention, for example in the Hueston Table-top test [11].
The exclusion of the presence of nodules and cords in the definition was more controversial in our group of experts. While the main reason to include palpable nodules and palpable cords in the definition was that reappearing nodules and cords are the earliest signs and often the cause of recurrence, the majority of the experts mentioned three main reasons to exclude palpable nodules and palpable cords in the definition. Firstly, nodules and cords by themselves very seldom cause any disability, or require surgical treatment. Secondly, minimal invasive techniques are meant to disconnect Dupuytren tissue that forms cords or nodules. However, these cords and nodules are left in place during these techniques [5,12]. This makes it difficult to identify newly formed nodules and cords because the old ones remain. Thirdly, it is challenging to reliably identify the presence of nodules and cords in the presence of post-surgical scarring.
A third dimension considered the timing of baseline and follow-up measurements. Consensus was to perform baseline measurements at six weeks post-treatment, mainly because experts concluded that wound healing takes time following surgery. Furthermore, hand function returns in approximately two to four weeks, and it has also been demonstrated that results at six weeks post-treatment were better compared with one week post-treatment [13,14]. Therefore, six weeks was considered a first time-point evaluation for treatment success. The follow-up time was more controversial. Experts mentioned that, from a clinical point of view, longer follow-up measurements might express more precisely the number of repeat treatments that are needed. However, from a research perspective, a one-year follow-up may already express the main differences between techniques. One expert stated: 'recurrence progresses with time. But this progression is non-linear. Either our scientific community develops standardized time-to-recurrence charts, or we all decide to evaluate all patients at a given point in time.' After four rounds, consensus was to measure recurrence after one year. In addition, the experts advised yearly repetition of measurements in studies that cover multiple follow-up years, since more knowledge is needed on how recurrence progresses over time.
A last dimension included patient characteristics and patients' perception. Consensus was that patient factors (e.g. diathesis) can predict the risk of developing recurrence but are not a characteristic of recurrence itself [15]. Therefore, they were excluded. In addition, while all experts concluded that patients' perception is very important [16], it was also excluded. One expert stated: 'while we can pat ourselves on the back for a great range of motion improvement, or feel we did not achieve our goal, the patient's own perception is the bottom line of what matters the most. Unfortunately, we do not have very objective measures (of subjective improvement) and any measure will be invariably affected by factors unrelated to the medical treatment delivered'. Since there are no objective measures of patients' perception of recurrence, it is not included in this definition.
Our study has a number of weaknesses and strengths. Firstly, only the minimum number of experts generally assumed to be needed for a Delphi study participated in our study [9]. Unfortunately, the invited experts from the Asian continent did not respond and are therefore not represented in this Delphi study. However, all responding experts represent countries from all over the world and are clearly renowned in the field. Experts completed all rounds with an average response rate of 80% and, at the end of the process, all experts agreed on the final definition of recurrence. Secondly, this Delphi study was conducted with computer-based questionnaires. A disadvantage of this method is that it lacks the ability to stimulate discussion and can lead to misinterpretation of comments given by experts. On the other hand, computer-based questionnaires allow anonymous responses from the experts, thus avoiding possible peer pressure. A third limitation was that the goniometric measurement protocol needed for this definition was not part of the Delphi consensus rounds. To our knowledge, an internationally recognized guideline for measuring joint angle is presently lacking. In our experience, most researchers and clinicians measure joint angle dorsally [17]. As some of the experts as well as a reviewer of this manuscript have correctly noted, it is important to control for the adjacent joints when measuring a specific joint, especially when a cord spans multiple joints. Fortunately, since the present definition is based on a change in joint angle over time, differences between goniometric measurement techniques may lead to different absolute angles, but the differences may be much smaller when analyzing the change in joint angle over time. A final limitation is that, while our goal was to obtain one clinically relevant and easily applicable definition for recurrence of DD after treatment, it may not be possible to reflect the complexity of recurrence of DD in this single definition. Table 2 shows an example of how a typical dataset from a clinical study should be interpreted to calculate a recurrence rate. From this table, it is also clear that this single recurrence rate does not capture the complexity of the data. Therefore, we do not advocate that researchers use only this single measure, but we do advocate it as the minimal measure to report. Additional secondary measures may be needed to also describe the presence of the disease or disease extension, for example the presence of palpable nodules and cords. Also, in addition to using a threshold for recurrence, it could be valuable to describe the average change in joint angle between baseline and follow-up or to report the recurrence rate per joint separately.
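As a minimal sketch of how the definition can be applied to per-joint goniometry data, the snippet below flags a treated joint as recurrent when its contracture at one year exceeds the six-week baseline by 20 degrees or more, reports every joint individually as the experts advise, and derives a patient-level flag as "any treated joint recurred". The measurements and joint labels are hypothetical.

```python
# Hypothetical per-joint extension deficits (degrees) at the six-week baseline
# and at one year post-treatment; patient and joint identifiers are illustrative.
measurements = {
    ("patient_1", "MCP_5"): (10, 35),
    ("patient_1", "PIP_5"): (15, 25),
    ("patient_2", "MCP_4"): (5, 10),
    ("patient_2", "PIP_4"): (20, 45),
}

THRESHOLD = 20  # degrees, per the consensus definition ("at least 20 degrees")

def joint_recurrence(baseline_6wk, one_year):
    """Recurrence of a treated joint: contracture increased by at least 20
    degrees at one year compared with the six-week baseline."""
    return (one_year - baseline_6wk) >= THRESHOLD

per_joint = {key: joint_recurrence(*angles) for key, angles in measurements.items()}
joint_rate = sum(per_joint.values()) / len(per_joint)

# Patient-level recurrence: any treated joint of that patient recurred.
patients = {p for p, _ in measurements}
patient_recurred = {p: any(v for (q, _), v in per_joint.items() if q == p)
                    for p in patients}

print(per_joint)          # joint-by-joint report, as the experts advise
print(f"joint-level recurrence rate: {joint_rate:.0%}")
print(patient_recurred)
```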
In conclusion, we present a uniform definition that for the first time allows comparison between future studies, thereby improving our understanding of the effectiveness of different treatment methods.
Acknowledgments
The Dupuytren Delphi Group consists of the experts who contributed to the Delphi study. All experts answered the questions in each round, provided detailed feedback and made suggestions for the following rounds. In addition, the experts reviewed and corrected the manuscript (all members equally).
Iron(III)-catalyzed asymmetric inverse-electron-demand hetero-Diels–Alder reaction of dioxopyrrolidines with simple olefins
The asymmetric catalytic inverse-electron-demand hetero-Diels–Alder reaction of dioxopyrrolidines with a variety of simple olefins has been accomplished, significantly expanding the applicability of this cyclization to both cyclic hetero-dienes and dienophiles. A new type of strong Lewis acid catalyst based on a ferric salt enables the LUMO activation of dioxopyrrolidines via formation of cationic species; this method yields a range of bicyclic dihydropyran derivatives with exceptional outcomes, including high yields (up to 99%), diastereoselectivity (up to 99 : 1) and enantioselectivity (up to 99% ee) under mild conditions. This facile protocol was applicable to the late-stage modification of several bioactive molecules as well as to transformation into macrocyclic molecules. The origins of enantioselectivity were elucidated based on control experiments.
Introduction
The synthesis of novel heterocyclic scaffolds continues to capture the attention of the organic chemistry community owing to their potential pharmaceutical activities. 1 The inverse-electron-demand hetero-Diels-Alder (IEDHDA) reaction, a significant variant of Diels-Alder reactions, proves adept at incorporating functional groups into hetero-rings (Scheme 1a). Particularly noteworthy is its asymmetric catalytic version, enabling the construction of optically enriched six-membered rings. 2 Mechanistically, this reaction unfolds through the interaction between the LUMO of an electron-deficient hetero-diene and the HOMO of electron-rich dienophiles. In this context, the use of chiral Lewis acids or organocatalysts becomes instrumental in expediting enantioselective transformations, employing three primary strategies: elevating the HOMO of dienophiles, lowering the LUMO energy of dienes, or concurrently activating both through dual modulation. 3 As evidenced in Scheme 1a, the asymmetric catalytic IEDHDA reactions involving α,β-unsaturated carbonyls with electron-rich dienophiles, such as cyclic or acyclic 1,3-dienes, 4 vinyl ethers 5 and related species, 6 or hydrazone-conjugated carbon-carbon double bonds, 7 have been well demonstrated. HOMO-raising strategies, employing enamine 8 or enolate activation 9 with organocatalysts, further broaden the scope of dienophiles. The generation of active dienophiles in situ, exemplified by the conversion of 3-cyclopropylideneprop-2-enone into cyclobutene-fused furans 10 or the oxidation of ethers, 11 facilitates rapid and alternative access to cycloaddition products. In contrast, the IEDHDA reaction of simple alkenes is generally challenging 12,14 and achievable only by lowering the LUMO of hetero-dienes. Two successful examples involve the asymmetric [4 + 2] cycloaddition of β,γ-unsaturated α-ketoesters into chiral oxanes (Scheme 1b). Luo's approach employs a chiral binary acid complex that synergistically combines a chiral phosphoric acid with a metal salt (In(III) or Sc(III)). 13 Sakakura and Ishihara's report features an n-cation copper complex. 14 Critically, both processes aim to enhance the cationic nature of the metal centers for the LUMO activation of dienes. Despite these advancements, further exploration of diene species, such as cyclic enones instead of linear unsaturated α-ketoesters, for the asymmetric catalytic IEDHDA reaction of unactivated alkenes, which may suffer partial polymerization, remains a challenging frontier in current research.
Bicyclic dihydropyrans fused with a γ-lactam or pyrrolidine moiety constitute core structures in bioactive molecules and natural products. 15 The asymmetric IEDHDA reaction of γ-lactam-derived cyclic enones offers an efficient pathway for constructing these valuable backbones (Scheme 1c). 16 We propose that chiral Lewis acid catalysts, specifically those employing N,N′-dioxide ligands, may expedite the cycloaddition by LUMO-activating dioxopyrrolidines 17 for cyclization with simple olefins. This rationale is grounded in the observation that chelation of these tetra-oxygen ligands allows the counterion of the metal precursor to delocalize away from the metal center, generating n-cation-characterized stronger Lewis acid catalysts. 18 Concerning stereoselectivity, challenges extend beyond enantioselectivity to include the endo/exo ratio influenced by orbital-favored transition states and a stepwise 1,4-addition/cyclization process. 19 In this context, we present a chiral iron-complex-catalyzed asymmetric IEDHDA reaction of dioxopyrrolidines with simple alkenes, yielding various optically active bicyclic dihydropyran derivatives with outstanding results: up to 99% yield, 99 : 1 diastereoselectivity, and up to 99% enantioselectivity, all achieved under mild conditions.
Results and discussion
Our investigation of the IEDHDA reaction began with dioxopyrrolidine A1 and styrene B1 as model substrates to optimize the reaction conditions (Table 1). We identified the parameters critical to reactivity and found that the Lewis acidity of the metal salts played a decisive role in whether the reaction occurs (see ESI† for details), which is consistent with the LUMO-activation mechanism of the hetero-diene. The reaction proceeded only in the presence of stronger Lewis acids, such as In(OTf)3, Fe(OTf)3, or Al(OTf)3, to give the desired product C1 with low diastereoselectivity. Comparison with the reaction of heterosubstituted alkenes, which could be performed with Ni(II) complexes, underscored the critical role of the Lewis acidity of the catalyst. 20 Using 10 mol% of In(OTf)3 in CH2Cl2 at 35 °C, the cycloaddition product was isolated in 84% yield and 48 : 52 endo : exo, and this condition was selected to identify optimal chiral ligands to control the stereoselectivity. Notably, the diastereoselectivity is poor, differing from the preferred endo-selectivity of achiral Lewis acid-promoted IEDHDA of β,γ-unsaturated α-ketoesters. 21 Investigation of the asymmetric catalytic reaction showed that the substituents of the chiral N,N′-dioxide ligands at the aniline units strongly affected the diastereo- and enantioselectivity (Table 1, entries 1-3). The reaction catalyzed by In(OTf)3/L3-TQEt2Me delivered the product in 91% yield with 73 : 27 endo : exo and 84% ee for the endo-isomer when 1,1,2-trichloroethane was used as the solvent (entry 4). The use of Fe(OTf)3 instead led to slightly increased endo-diastereoselectivity (entry 5). Further modification of the para-substituents at the aniline units of the ligands was carried out (entries 5-7), and introduction of the sterically hindered 1-adamantyl group at the para-position (L3-TQEt2Ad) led to slightly higher diastereoselectivity and enantioselectivity, with product C1 isolated in 95% yield with a 90 : 10 endo : exo ratio and 97% ee (entry 7). Reducing the amount of styrene to 5 equivalents did not compromise the outcome (entry 8). Re-examination of the amino acid backbone of the N,N′-dioxide ligands showed that the reactivity dropped considerably when L-proline- or L-pipecolic acid-based ligands were employed (entries 9 and 10).
Comparatively, other representative chiral ligands, such as the chiral phosphoric acid L1, bisoxazoline L2 and pyridine-oxazoline L3, resulted in poor reactivity (entries 11-13), which underscores the importance of ligand acceleration in asymmetric Lewis acid catalysis. 22 In these cases, the minor exo-product was obtained with a lower ee value than the endo-product. The absolute configuration of the endo-product C1 was determined to be (2S,4R) by X-ray crystallographic analysis 23 (see ESI† for details).
Scheme 2. The substrate scope of dioxopyrrolidines. a Unless otherwise noted, all reactions were performed with A (0.1 mmol), styrene B1 (0.5 mmol) and Fe(OTf)3/L3-TQEt2Ad (1 : 1, 10 mol%) in CH2ClCHCl2 (1.0 mL) at 35 °C. The yield is the isolated yield. The endo : exo ratio was determined by 1H NMR analysis and the ee value was determined by UPC2 or HPLC on a chiral stationary phase.
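As a point of reference for the stereochemical metrics quoted in the scheme footnote and throughout the text, the diastereomeric ratio and enantiomeric excess follow the standard relations below; these are textbook definitions added here for clarity, not equations reproduced from the original report:

$$\mathrm{dr}\,(endo:exo) = I_{endo} : I_{exo}, \qquad \mathrm{ee} = \frac{A_{\mathrm{major}} - A_{\mathrm{minor}}}{A_{\mathrm{major}} + A_{\mathrm{minor}}} \times 100\%$$

where $I_{endo}$ and $I_{exo}$ are the integrals of diagnostic 1H NMR resonances of the two diastereomers, and $A_{\mathrm{major}}$ and $A_{\mathrm{minor}}$ are the peak areas of the two enantiomers measured on the chiral stationary phase (UPC2 or HPLC).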
The reaction temperature had to be tuned in order to balance the enantioselectivity and yield. Propen-2-ylbenzene participated in the cyclization in moderate yield and with decreased stereoselectivity (D29). Both trans- and cis-propen-1-ylbenzene underwent the reaction with good endo-selectivity and excellent enantioselectivity, but the yields were moderate (D30-D31) after the addition of NaBArF4. Similarly, when trans-methylstyrene bearing 3,4-dimethoxy substituents was subjected to the reaction, only moderate stereoselectivity was obtained (see ESI† for details). Although the trans-configuration of the olefin was delivered into the product without change, and no observable intermediate was detected, a stepwise pathway or a concerted asynchronous one could not be ruled out. Moreover, when α,α-dialkyl olefins or 2,3-dimethylbut-2-ene were used as the dienophiles (D32-D35), good enantioselectivity could be obtained after re-examination of the chiral catalyst as the In(OTf)3/L3-TQEt2 combination. The reaction of a tetrahydronaphthalene bearing an exocyclic double bond proceeded in the presence of Fe(OTf)3/L2-TQEt2Ad to afford only endo-D36 in 52% yield with 67% ee.
To illustrate the potential synthetic utility of the current catalytic system, a scale-up synthesis of C1 was performed. As shown in Scheme 4a, under the optimized reaction conditions, dioxopyrrolidine A1 (3.2 mmol) reacted smoothly with styrene B1 (5.0 equiv.), affording the desired product C1 in 94% yield (1.15 g) with a 90 : 10 endo/exo ratio and 99% and 71% ee, respectively. Under oxidation with RuCl3 and NaIO4, the double bond of the dihydropyrrolone ring of endo-C1 was cleaved to give the O,N-based macrocycle E1 in good yield with a slight loss of stereoselectivity. 23 Alternatively, reduction with H2 in the presence of Pd/C in methanol led to a ring-opening reaction that gave E2 in nearly quantitative yield, although the optical purity decreased slightly (Scheme 4b).
UV-vis absorption spectra were recorded to probe the interaction of the chiral ferric iron catalyst with the diene (Fig. 1a). There was an obvious hyperchromic effect upon mixing Fe(OTf)3 with dioxopyrrolidine A1, especially in the presence of the L3-TQEt2Ad ligand. In addition, investigation of the relationship between the ee value of L3-TQEt2Ad and that of C1 showed a clear linear effect, 24 implying that the catalytically active species was likely the monomeric complex of Fe(OTf)3 and L3-TQEt2Ad (Fig. 1b).
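The linear relationship noted above can be written explicitly; this is the generic form used in nonlinear-effect (Kagan-type) analyses and is included only as an interpretive aid, not an equation taken from the original paper:

$$ee_{\mathrm{C1}} \;\approx\; ee_{\max}\times\frac{ee_{\mathrm{ligand}}}{100\%}$$

where $ee_{\max}$ is the product ee obtained with enantiopure ligand. A straight line of this form is expected when a single ligand molecule is bound in the enantiodetermining species; a positive or negative deviation from it would instead point to aggregated or multi-ligand complexes participating in catalysis.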
Chiral N,N′-dioxide rare-earth metal complexes, such as those of Sc(OTf)3 or Y(OTf)3, which are usually regarded as good Lewis acid candidates, were found to be sluggish in accelerating the cycloaddition (see ESI† for details). Analysis of several crystal structures of N,N′-dioxide-M(OTf)3 complexes provides insight into this cooperation preference (Fig. 1c). It was found that the Fe(OTf)3 complexes of N,N′-dioxides were more likely to form trication-characterized complexes, although the overall structures are neutral. 25 In contrast, at least one anion coordinates to the Sc(III) center, as shown in the related structures. 26 In most cases, the counterions are located around the complexes, interacting via H-bonds with the outward-facing amide subunits. 14 In addition, other rare-earth metal complexes with larger ionic radii are prone to form metal complexes with coordination numbers beyond six, 18,27 which exhibit a coordination geometry different from that of the octahedral iron complexes. Thus, the chiral ferric iron complexes are capable of efficiently lowering the LUMO energy of the dioxopyrrolidine hetero-diene upon coordination.
Based on the observed stereoselectivity and previous work, 20 possible catalytic modes were proposed (Fig. 1d). Initially, the tetradentate L3-TQEt2Ad coordinates to Fe(III) to form an octahedral trication species. Substrate A1 binds to the ferric iron center through its two carbonyls in a bidentate manner, which leaves its β-Si face blocked by one of the amide subunits of the ligand. The free styrene B1 then prefers to undergo cyclization from the β-Re face of A1 with endo- or exo-selectivity (TS1 or TS2). The exo-product is the minor one, in line with the disfavoured steric hindrance between the aryl substituent of the olefin and the tetrahydroisoquinoline backbone of the ligand, as shown in TS2. As a result, the preferred endo-β-Re-face-selective cyclization gives rise to the formation of (2S,4R)-C1 as the major product. Additionally, the exo-cyclization may occur in a stepwise conjugate-addition manner.
Conclusions
In summary, we have developed a highly enantioselective [4 + 2] cycloaddition of a range of simple olefins with the cyclic hetero-diene of dioxopyrrolidines. The reaction proceeded well with the assistance of chiral N,N′-dioxide/Fe(III) complex catalysts, which can form cationic Lewis acid species that lower the LUMO energy of the dioxopyrrolidines. It effectively delivered various optically active bicyclic dihydropyran derivatives with excellent yield (up to 99%), diastereoselectivity (up to 99 : 1) and enantioselectivity (up to 99% ee) under mild conditions, including the late-stage modification of drug-molecule-based dienes. Mechanistic studies support the strategy, and transition states were proposed to elucidate the stereoinduction. Further investigations of asymmetric transformations of simple olefins are currently ongoing in our laboratory.
Table 1. Optimization of the reaction conditions.
Acute appendicitis: Diagnostic accuracy of Alvarado and RIPASA scoring system
Introduction: One of the commonest clinical presentations that requires emergency surgery is acute appendicitis. Much effort has been directed towards early diagnosis and intervention, since delay in diagnosis leads to increased morbidity and costs. In the present study, we aimed to compare two scores, the RIPASA and Alvarado scoring systems, in the diagnosis of acute appendicitis. Materials and methods: The present hospital-based screening study included 100 patients presenting with right iliac fossa pain and clinically diagnosed as acute appendicitis. The diagnosis of acute appendicitis was confirmed by operative findings and histopathological assessment of the appendicectomy specimen. A score of 7 or above was taken as indicating a high probability of acute appendicitis for the Alvarado scoring system, while a score of 7.5 or above was taken as indicating a high probability for the RIPASA scoring system. Results: The sensitivity and specificity of the modified Alvarado score in diagnosing appendicitis were 58.7% and 88%, while the PPV and NPV were 93.6% and 41.5%. The sensitivity and specificity of the RIPASA score in diagnosing appendicitis were 89.3% and 84%, while the PPV and NPV were 94.4% and 72.4%. Overall diagnostic accuracy was 88% and 66% for the RIPASA and modified Alvarado scores, respectively. Both scores showed good efficacy in screening cases of appendicitis on ROC analysis; however, the overall screening efficacy of the RIPASA score was better than that of the modified Alvarado score (AUC 0.891 vs 0.702). Conclusion: The RIPASA score is currently a much better diagnostic scoring system for acute appendicitis than the Alvarado score. RIPASA had significantly higher sensitivity, NPV and diagnostic accuracy in our study group. The 14 fixed parameters can be easily and rapidly obtained in any population setting by taking a complete history and conducting a clinical examination and two simple investigations. In remote or emergency settings, a quick decision can be made with regard to referral to an operating surgeon or observation. The use of RIPASA scoring would help in decreasing unwarranted patient admissions as well as expensive radiological investigations.
Introduction
One of the commonest clinical presentations that requires emergency surgery is acute appendicitis [1]. It is rare in infancy and amongst the elderly, but is common in children, teenagers and young adults [2]. Approximately 6% of the population will suffer from acute appendicitis during their lifetime; therefore, much effort has been directed toward early diagnosis and intervention. This effort has successfully lowered the mortality rate to less than 0.1% for non-complicated appendicitis, 0.6% where there is gangrene, and 5% for perforated cases [3]. Delay in diagnosis leads to increased morbidity and costs. Despite attempts to increase the diagnostic accuracy in cases of acute appendicitis, the rate of misdiagnosis in developed countries has remained constant at 15.3% [4]. The classical signs and symptoms of acute appendicitis were first reported by Fitz in 1886. Since then, it has remained the most common diagnosis for hospital admission requiring laparotomy [5,6].
Simple appendicitis can progress to perforation, which is associated with much higher morbidity and mortality, and surgeons have therefore been inclined to operate when the diagnosis is probable rather than wait until it is certain. As a result of this concern, surgeons create for themselves 'a surgical security zone' which allows them to accept a 15-30% negative laparotomy rate with impunity [7]. Despite more than 100 years' experience, accurate diagnosis still evades the surgeon [7]. However, the surgical principle about acute appendicitis, "when in doubt, take it out", does not hold true in view of the number of major and minor complications following appendectomy. The diagnosis of appendicitis can be difficult, occasionally challenging the diagnostic skills of even the most experienced surgeon. Attempts to increase the diagnostic accuracy of acute appendicitis have included computer-aided diagnosis, imaging by ultrasonography, laparoscopy and even radioactive isotope imaging [8][9][10][11]. Owing to its myriad presentations, acute appendicitis is a common but difficult diagnostic problem. The accuracy of the clinical diagnosis has been reported to range from 76% to 92% and varies greatly depending on the experience of the examiner [12]. A number of scoring systems have been used to aid early diagnosis of acute appendicitis and its prompt management. These scores make use of clinical history, physical examination and laboratory findings. The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) and Alvarado scores are diagnostic scoring systems developed for the diagnosis of acute appendicitis and have been shown to have high sensitivity, specificity and diagnostic accuracy. The RIPASA scoring system includes more parameters than the Alvarado system, which does not contain certain parameters such as age, gender, and duration of symptoms prior to presentation; these parameters have been shown to affect the sensitivity and specificity of the Alvarado scoring system in the diagnosis of acute appendicitis [13]. The RIPASA score is the newer of the two and has been shown to have significantly higher sensitivity, specificity and diagnostic accuracy compared to the Alvarado score, particularly when applied to Asian populations [14][15][16][17][18][19]. Not many studies in India have compared the RIPASA and Alvarado scoring systems in the diagnosis of acute appendicitis. Hence, we prospectively compared the Alvarado and RIPASA scores by applying them to patients attending our hospital with right iliac fossa pain that could probably be acute appendicitis.
Materials and Methods
A hospital-based screening study was conducted at the Department of Surgery of a tertiary care hospital. The study included 100 patients presenting with right iliac fossa (RIF) pain and clinically diagnosed as acute appendicitis. A detailed history, clinical examination and laboratory investigations were obtained for each patient, including routine haematological investigations, urine routine, X-ray KUB and, in some equivocal cases, USG of the abdomen and pelvis. Two specially designed proformas were filled in for each patient: one contained general information about the patient plus the eight variables of the Alvarado scoring system [20], and the other contained similar patient details and the fourteen variables of the RIPASA scoring system [21]. The decision to operate on the patient (vs. a conservative line of management) was based solely on the clinical suspicion of an experienced surgeon who was not involved in the study. The diagnosis of acute appendicitis was confirmed by operative findings and histopathological assessment of the appendicectomy specimen, with the ultimate criterion for the final diagnosis of acute appendicitis being the histological demonstration of polymorphonuclear leucocytes throughout the thickness of the appendix wall. Patients who were treated conservatively and subsequently discharged were reviewed in the surgical outpatient clinic within a week. A score of 7 or above was taken as indicating a high probability of acute appendicitis for the Alvarado scoring system, while a score of 7.5 or above was taken as indicating a high probability for the RIPASA scoring system.
Statistical analysis
All data were recorded in a pre-designed study proforma. Qualitative data were presented as frequency and percentage. Quantitative data were presented as mean ± SD and as median with IQR (interquartile range). Diagnostic accuracy was evaluated by calculating sensitivity, specificity, PPV and NPV using standard formulae. A p-value < 0.05 was taken as the level of significance. Results were graphically represented where deemed necessary. SPSS version 21 was used for most analyses and Microsoft Excel 2010 for graphical representation.
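As an illustration of the "standard formulae" referred to above, the sketch below computes the reported diagnostic metrics from a 2 × 2 confusion matrix. The cell counts in the example are reconstructed from the percentages given in the Results section (75 histopathology-positive and 25 histopathology-negative patients out of 100) and are therefore approximate back-calculations rather than raw study data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Approximate cell counts back-calculated from the reported percentages.
alvarado = diagnostic_metrics(tp=44, fp=3, fn=31, tn=22)  # ~58.7% sens, 88% spec
ripasa = diagnostic_metrics(tp=67, fp=4, fn=8, tn=21)     # ~89.3% sens, 84% spec

for name, metrics in (("Modified Alvarado", alvarado), ("RIPASA", ripasa)):
    print(name, {k: f"{v:.1%}" for k, v in metrics.items()})
```

Running the sketch reproduces the published percentages to within rounding, which serves as a simple sanity check when transcribing 2 × 2 data; the AUC values, by contrast, cannot be recomputed without the underlying score distributions.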
Results
The mean age of the study cases was 27.32 years, with the maximum number of cases between 21 and 40 years of age (67%). Of the 100 patients enrolled in the study, 64 were males (64%) and 36 were females (36%). A diagnosis of acute appendicitis as per the modified Alvarado score was made in 47% of cases, and as per the RIPASA score in 71% of cases. The diagnosis of acute appendicitis was confirmed on histopathology in 75% of cases. The sensitivity and specificity of the modified Alvarado score in diagnosing appendicitis were 58.7% and 88%, while the PPV and NPV were 93.6% and 41.5%. The sensitivity and specificity of the RIPASA score were 89.3% and 84%, while the PPV and NPV were 94.4% and 72.4%. The overall diagnostic accuracy was 88% for the RIPASA score and 66% for the modified Alvarado score (Tables 1 & 2). ROC curve analysis was performed to evaluate the screening efficacy of the modified Alvarado and RIPASA scores. Both scores showed good efficacy in screening cases of appendicitis; however, the overall screening efficacy of the RIPASA score was better than that of the modified Alvarado score (AUC 0.891 vs 0.702) (Graph 1, Table 3).
Discussion
Acute appendicitis is one of the most common conditions seen in the surgical emergency department. It can be treated easily if an accurate diagnosis is made in time; otherwise, delay in diagnosis and treatment can lead to gangrene, perforation and diffuse peritonitis. Acute appendicitis has a lifetime occurrence of approximately 7% and perforation rates of 17-20% [1]. The decision to operate is based on the disease history and physical findings. Often, patients with acute appendicitis are not diagnosed until the occurrence of severe complications while waiting for more evidence for the diagnosis; these patients have higher mortality and morbidity than patients who are diagnosed in time [13]. Thus, timely therapeutic intervention in acute appendicitis is important for decreasing morbidity and mortality through timely, accurate diagnosis. Historically, the main emphasis has been on clinical judgment, which is prone to interobserver variation in diagnosis and in decision-making for surgery. Despite attempts to increase diagnostic accuracy in cases of acute appendicitis, the rate of misdiagnosis in developed countries has remained constant at 15.3%. Despite more than 100 years' experience, accurate diagnosis still evades the surgeon. Simple appendicitis can progress to perforation, which is associated with much higher morbidity and mortality, and surgeons have therefore been inclined to operate when the diagnosis is probable rather than wait until it is certain [4].
However, the surgical principle about acute appendicitis, "when in doubt, take it out", does not hold true in view of the number of major and minor complications following appendectomy. Many scoring systems have been proposed in the past with the objective of providing a clinically accurate, uniform diagnostic tool that can be put into practice for early and timely diagnosis of acute appendicitis without wasting time waiting for other means of diagnosis such as imaging modalities. The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) and Alvarado scores are two diagnostic scoring systems developed for the diagnosis of acute appendicitis and have been shown to have high sensitivity, specificity and diagnostic accuracy. Not many studies in India have compared the RIPASA and Alvarado scoring systems in the diagnosis of acute appendicitis. Hence, we prospectively compared the Alvarado and RIPASA scores by applying them to patients attending our hospital with right iliac fossa pain that could probably be acute appendicitis.
In the present study, a score of 7 or above was taken as indicating a high probability of acute appendicitis for the Alvarado scoring system, while a score of 7.5 or above was taken as indicating a high probability for the RIPASA scoring system. At the respective cut-offs, a diagnosis of acute appendicitis as per the modified Alvarado and RIPASA scores was made in 47% and 71% of cases, respectively. The sensitivity and specificity of the modified Alvarado score in diagnosing appendicitis were 58.7% and 88%, while the PPV and NPV were 93.6% and 41.5%, giving an overall diagnostic accuracy of 66%. The sensitivity and specificity of the RIPASA score were 89.3% and 84%, while the PPV and NPV were 94.4% and 72.4%, giving an overall diagnostic accuracy of 88%. In a study done by Nanjundaiah et al. [18], at an optimal cut-off threshold of >7, the sensitivity and specificity of the Alvarado scoring system were 58.9% and 85.7%, respectively, which is very comparable with the present study; the positive and negative predictive values of the Alvarado score were 97.3% and 19.1%, respectively. In a study done by Chong CF et al. [19], at a cut-off threshold score of 7.0 for the Alvarado score, the sensitivity, specificity, PPV, NPV and diagnostic accuracy were 68.3%, 87.9%, 86.3%, 71.4% and 86.5%, respectively. In the study by Nanjundaiah et al. [18], using the RIPASA score, 96.2% of patients who actually had acute appendicitis were correctly diagnosed and placed in the high-probability group (RIPASA score >7.5), compared with only 58.9% when using the Alvarado score on the same population sample. In the study by Chong CF et al. [19], the RIPASA score correctly classified 98% of all patients with histologically confirmed acute appendicitis into the high-probability group (RIPASA score greater than 7.5), compared with 68.3% for the Alvarado score (Alvarado score greater than 7.0; p-value less than 0.0001).
The comparison of the diagnostic accuracy of the modified Alvarado and RIPASA scores across various studies is summarized below. In a similar study by Pasumarthi V [15], both scores showed good efficacy in screening cases of appendicitis, with the overall screening efficacy of the RIPASA score being better (AUC 0.810 vs 0.77). In a study done by Chong CF et al. [19], the area under the curve for the RIPASA score was 0.9183, which is greater than that for the Alvarado score, 0.8651. Similar results were also observed in the study by Nanjundaiah et al. [17]. Thus, to conclude, the RIPASA score is a useful, rapid diagnostic tool for acute appendicitis, especially in the emergency setting, as it requires only the patient's demographics (age, gender), a good clinical history (RIF pain, migration to the RIF, anorexia, nausea and vomiting), clinical examination (RIF tenderness, localised guarding, rebound tenderness, Rovsing's sign and fever) and two simple investigations (raised white cell count and negative urinalysis performed at triage, defined as an absence of red and white blood cells, bacteria and nitrates). Thus, in the emergency setting, a quick decision can be made upon seeing patients with RIF pain: those with a RIPASA score >7.5 need admission and further management, while patients with a RIPASA score <7.0 can be observed. With its high sensitivity (89.3%) and NPV (72.4%), the RIPASA score can also help to reduce unnecessary and expensive radiological investigations such as routine CT imaging.
Conclusion
The RIPASA score is currently a much better diagnostic scoring system for acute appendicitis than the Alvarado score. RIPASA had significantly higher sensitivity, NPV and diagnostic accuracy in our study group. The 14 fixed parameters can be easily and rapidly obtained in any population setting by taking a complete history and conducting a clinical examination and two simple investigations. In remote or emergency settings, a quick decision can be made with regard to referral to an operating surgeon or observation. The use of RIPASA scoring would help in decreasing unwarranted patient admissions as well as expensive radiological investigations.
Engineered Antibodies as Cancer Radiotheranostics
Abstract Radiotheranostics is a rapidly growing approach in personalized medicine, merging diagnostic imaging and targeted radiotherapy to allow for the precise detection and treatment of diseases, notably cancer. Radiolabeled antibodies have become indispensable tools in the field of cancer theranostics owing to their high specificity and affinity for cancer-associated antigens, which allow for accurate targeting with minimal impact on surrounding healthy tissues (enhancing therapeutic efficacy while reducing side effects), their immune-modulating ability, and their versatility and flexibility in engineering and conjugation. However, there are inherent limitations to using antibodies as a platform for radiopharmaceuticals: their natural activities within the immune system, their large size, which prevents effective tumor penetration, and their relatively long half-life, which raises concerns about prolonged radioactivity exposure. Antibody engineering can address these challenges while preserving the many advantages of the immunoglobulin framework. The goal of this review is to give a general overview of antibody engineering and design for tumor radiotheranostics. In particular, we discuss the four ways in which antibody engineering is applied to enhance radioimmunoconjugates: pharmacokinetics optimization, site-specific bioconjugation, modulation of Fc interactions, and bispecific construct creation. Radionuclide choices and conjugation techniques for engineered antibody radionuclide conjugates, as well as future directions for their innovation and advancement, are also discussed.
Introduction
The goal of the quickly emerging medical area of theranostics is to integrate therapy and imaging into a single platform for use in the next wave of personalized medicine. [1] This strategy relies on imaging to validate the existence of a biological target, followed by therapeutic action. Despite the decades-long combination of imaging and therapy, the field has made significant strides in recent years. [2] Radiotheranostics, the cornerstone of theranostics, uses radiolabeled compounds that can both visualize and eradicate targeted cells, facilitating a dual role in patient management by enabling the assessment of disease presence and the subsequent targeted therapeutic intervention; it has significantly enhanced healthcare worldwide since its introduction into clinical practice. Selecting patients for targeted radiotherapy based on imaging of the same target is a crucial aspect of radiotheranostics, which relies on the premise that the uptake of the radiolabeled compound used for imaging mirrors the uptake of the therapeutic agent, ensuring that the treatment is delivered specifically to the areas identified by imaging. [3,4] One of the most important components of radiotheranostic agents is the targeting moiety, which determines the specificity, affinity, and stability of the agent for the tumor antigen. [9] Antibodies are widely used as targeting moieties for radiotheranostics due to their high specificity and diversity; [10] radiolabeled antibodies are called antibody radionuclide conjugates (ARCs). Creating ARCs involves attaching a radioactive isotope to monoclonal antibodies (mAbs), antibody fragments, or engineered variants such as bispecific antibodies. A variety of radionuclides can be used to label antibodies in both preclinical and clinical settings. These include gamma emitters for single photon emission computed tomography (SPECT) imaging, positron emitters for positron emission tomography (PET) imaging, and beta or alpha emitters for radioimmunotherapy (RIT). [11] RIT has shown promise in the treatment of hematological malignancies, including leukemia and lymphoma, as well as some solid tumors, such as glioblastoma, melanoma, neuroendocrine tumors, and prostate, breast, and ovarian cancer. [12] RIT harnesses the specificity of antibodies to deliver radiation directly to cancer cells and their microenvironment, including the tumor microenvironment, the immune microenvironment, and microenvironmental cells, thereby minimizing the damage to healthy tissues associated with conventional radiation therapy.
Tumor heterogeneity is now an obvious consideration in clinical disease management. Consequently, the use of ARCs in clinical settings has increased with the discovery of more tumor cell surface antigens. [13,14] One instance is the application of RIT in the management of non-Hodgkin's lymphoma (NHL), [15] delivering β-emitting radionuclides to tumor cells via mAbs that recognize lymphoma-specific surface antigens. [15,16] For example, ibritumomab tiuxetan, an anti-CD20 antibody coupled to yttrium-90, a β-emitter with a half-life of 64 h and a tissue penetration of 5 mm, was the first RIT agent authorized by the FDA. [17] It has demonstrated effectiveness in patients with relapsed or refractory NHL, as well as those with rituximab-refractory NHL. [18] Another RIT agent is tositumomab, an anti-CD20 antibody conjugated to iodine-131, a β-emitter with a half-life of 8 days and a tissue penetration of 2.4 mm. [19,20] Tositumomab has also shown efficacy in patients with relapsed or refractory NHL, especially those with follicular lymphoma. [21] mAbs have been envisioned as ideal vehicles for delivering radionuclides to tumors ever since Pressman and Korngold pioneered this concept several decades ago. [22,23] The field has witnessed remarkable progress over the decades, with encouraging clinical outcomes for radioimmunoconjugates labeled with 131I, 225Ac, and 177Lu for radioimmunotherapy and with 89Zr for immunoPET. However, a recurring theme in the literature on immunoPET, immunoSPECT, and radioimmunotherapy is the need to overcome the inherent drawbacks of antibodies as radiopharmaceutical vectors. [24] Antibodies are not merely passive carriers of radionuclides to target cells; they are complex and multifunctional molecules that evolved as part of the immune system. Thus, using antibodies for radiotheranostics entails both benefits and challenges. On the positive side, radiolabeled antibodies provide high specificity and affinity for cancer antigens, good in vivo stability, and significant tumor uptake. On the negative side, they can be difficult to synthesize in a homogeneous and well-defined manner, have lengthy biological half-lives, and may be unintentionally taken up by healthy tissues. Many studies have been conducted to overcome the limitations of antibodies as radiopharmaceutical vectors and enhance their potential as therapeutics, theranostics, and diagnostics. [25] Three primary paths have been taken. One method is "in vivo pretargeting", in which the antibody and the radionuclide are administered separately: the unlabeled antibody is given first and allowed to accumulate at the tumor and clear from circulation, after which a radiolabeled small molecule is administered and captured at the tumor site through a highly specific reaction. [23] This method reduces the dosimetry problems associated with conventional radioimmunoconjugates but adds considerable scientific and logistical complexity. [23] A further strategy utilizes artificial biomolecules such as scaffold polypeptides, DARPins (designed ankyrin repeat proteins), affibody molecules, and other antibody mimetics, which imitate the structure and functionality of antibodies.
[26] These molecules often have better pharmacokinetic properties than radiolabeled antibodies but lack some of the features of IgG-based platforms, such as bivalency, high tumor accumulation, and in vivo stability. The minimal immune response of scaffold polypeptides, the strong, high-affinity binding and economical synthesis of DARPins, and the precise targeting and low toxicity of affibodies represent notable advances in therapeutic design and implementation. In this article, we discuss the third approach: antibody engineering.
Our aim in this review is to provide a broad overview of the design and engineering of antibodies for tumor radiotheranostics. We specifically cover the four ways in which antibody engineering has been used to improve radioimmunoconjugates: pharmacokinetics optimization, bispecific construct generation, site-specific bioconjugation, and regulation of Fc interactions. Along with conjugation methodologies and radionuclide selection for engineered antibody radionuclide conjugates, we also discuss future perspectives for the development and innovation of antibody radionuclide conjugates.
Construction of Antibody Radionuclide Conjugates
The design and synthesis of ARCs have followed a consistent strategy since their inception. ARCs consist of three essential elements: an antibody that recognizes a tumor-specific antigen, a radionuclide that delivers therapeutic or diagnostic radiation, and a connecting linker. However, the choice and optimization of each element can vary widely in ways that affect the pharmacokinetics and efficacy of ARCs (Figure 1). [27,28]
Engineered Antibodies
The term "antibody engineering" describes the process of changing the immunoglobulin scaffold using molecular biology methods to enhance the functionality of antibodies or radioimmunoconjugates.The humanization of antibodies is the most well-known use of antibody engineering, and other excellent reviews have thoroughly examined the different techniques and uses of antibody engineering. [29]Antibody engineering provides a means of optimizing radioimmunoconjugate performance for cancer radiotheranostics without the additional complexities of pretargeting or the drawbacks of antibody mimetics.Reducing the circulation half-life to optimize the pharmacokinetics of antibodies, encouraging site-specific bioconjugation, altering Fc interactions, or developing constructs with multiple target binding capabilities are the primary objectives of antibody engineering.
Reducing the Circulation Half-Life of Antibodies
The serum half-life of a radioimmunoconjugate is determined by its interaction with the neonatal Fc receptor (FcRn), which is expressed in various cells and tissues throughout the body and plays a critical role in managing the immune system. FcRn binds to the Fc region of the radioimmunoconjugate's IgG, preventing its degradation in lysosomes. FcRn also recycles IgG molecules back into circulation, thus extending the serum half-life of the entire radioimmunoconjugate and allowing for high tumor uptake and prolonged radiation exposure. However, long-lived radioimmunoconjugates also tend to circulate in the blood and deposit in healthy tissues, including the liver, spleen, and bone marrow, [30] lowering the tumor-to-background (T/B) ratio.
To increase the T/B ratio, antibody fragment constructs (Figure 2A) have been utilized: their smaller size and increased diffusivity improve tumor penetration and facilitate the clearance of unbound radiotracer from the systemic circulation. Fragments also have different binding properties compared with whole antibodies, such as lower valency or affinity, that can reduce the binding-site barrier effect, where high-valency antibodies bind so effectively to antigens on the tumor surface that they fail to penetrate deeply into the tissue; this can enhance tumor saturation. [31,32] Moreover, lacking the Fc region, antibody fragments are potentially less immunogenic. Commonly used antibody fragments include single-domain antibodies (sdAb, ≈15 kDa; also known as VHH or nanobodies), single-chain variable fragments (scFv, ≈27 kDa), F(ab) fragments (≈50 kDa), diabodies (≈60 kDa), minibodies (≈75 kDa), (scFv)2-Fc fragments (≈100 kDa), and F(ab′)2 fragments (≈110 kDa) (Figure 2A). [33] The choice of radionuclide affects the balance between therapeutic efficacy and safety, considering the pharmacokinetics of the antibody fragment and the half-life of the radionuclide; the faster pharmacokinetics of fragments allows the use of shorter-lived radionuclides (e.g., 64Cu, 18F, 68Ga, and 211At) instead of the longer-lived ones (e.g., 89Zr, 131I, 225Ac, and 177Lu) typically used for radioimmunoconjugates. The goal is to optimize both the treatment impact and safety, as well as the logistical handling, including storage, transport, and disposal, which is easier with shorter-lived radioisotopes. Because shorter-lived radionuclides deliver lower radiation doses and are easier and safer to handle, this substitution improves the dosimetry and logistics of fragment-based agents. Fragments also enhance image clarity and contrast while lowering background radiation. However, the quicker pharmacokinetics of antibody fragments comes at the expense of decreased stability and affinity compared with full-length IgG. Radioimmunoconjugates based on antibody fragments usually have lower tumor uptake than those based on full-length IgG, and some fragments, such as scFv, show high renal uptake and retention during clearance, which raises concerns when these fragments are labeled with radioisotopes. [22] Numerous preclinical studies have demonstrated the potential of antibody fragments for cancer radiotheranostics, particularly for same-day immunoPET, which involves imaging tumors that express specific antigens a few hours after injection using radiolabeled antibody fragments, such as diabodies or minibodies. Zettlitz et al. reported a cys-diabody targeting CD20 in 2019 for same-day immunoPET of B-cell lymphoma. Using transgenic mice expressing human CD20 on mature B cells, they demonstrated the ability of an 18F-labeled diabody, [18F]FB-GAcDb, to identify both cancerous and normal B cells in the liver (Figure 2B). [34] In 2023, Jonatan et al. used a 68Ga-labeled anti-CD70 VHH carrying a C-direct-tag with a free thiol, enabling site-specific conjugation to a NOTA bifunctional chelator, to image CD70 in human cancer xenografts.
[35] The tracer demonstrated the ability to distinguish and quantify CD70-positive tumors, indicating that a comparable imaging strategy might be applied in the clinic to identify and monitor, over an extended period, patients most likely to benefit from anti-CD70 treatment.
(Figure 2 caption, continued) Increased gallbladder (GB) and GI tract activities suggest secondary excretion of radiometabolites through the hepatobiliary system. Reproduced with permission. [34] Copyright 2018, Springer Nature. C) [131I]I-A11 Mb and [177Lu]Lu-DTPA-A11 Mb inhibit PSCA-positive cell growth in vitro. Reproduced with permission. [36] Copyright 2020, Springer Nature. D) Whole-body images of one patient at various times after injection of 89Zr-IAB22M2C (1.5-mg minibody dose). All images show the most intense activity within the spleen, followed by marrow, liver, and kidneys. Reproduced with permission. [40] Copyright 2020, SNMMI.
A related study of a prostate stem cell antigen (PSCA)-targeting minibody labeled with beta-emitting isotopes reported prolonged survival, suggesting that the minibody might be a good vehicle for beta-emitting isotope-based targeted radionuclide therapy (Figure 2C). [36] In another study, Feng et al. reported radioimmunotherapy using a HER2-specific sdAb conjugated to a residualizing 131I-labeled prosthetic group in HER2-positive breast and ovarian cancer models. The radioimmunoconjugate [131I]SGMIB-VHH-1028 achieved a tumor-to-kidney therapeutic index of over 8.5 and almost completely inhibited tumor growth in BT474 xenograft-bearing mice after doses of 18 and 30 MBq. [37] The same group also demonstrated the efficacy of HER2-targeted radioimmunotherapy using the same sdAb conjugated to an identical residualizing prosthetic group labeled with the alpha-emitting radiohalogen 211At. [38] These preclinical findings are promising, but even more remarkable are the few clinical trials that have been conducted recently. In 2022, Morris et al. compared the performance of 89Zr-df-IAB2M, an 89Zr-labeled anti-PSMA minibody, and 68Ga-PSMA-11, a 68Ga-labeled PSMA small-molecule ligand, for PET imaging of localized prostate cancer and correlated the results with multiparametric MRI. The PET tracers showed higher detection rates of prostate cancer than MRI, with 89Zr-df-IAB2M being superior to 68Ga-PSMA-11 in terms of sensitivity, specificity, and accuracy. The PET tracers also provided additional information about the presence and location of the therapeutic target, which could have implications for management change in men with localized prostate cancer. [39] Additionally, the CD8-targeting minibody-based probe [89Zr]Zr-IAB22M2C was recently clinically translated to image CD8-positive T cells in a phase I trial. The results demonstrated that [89Zr]Zr-IAB22M2C was safe, well tolerated, and could effectively target CD8-rich organs such as the spleen and lymph nodes (Figure 2D). [40] Moreover, Scott et al. reported the use of a PEGylated 124I-labeled diabody ([124I]I-PEG-AVP0458) for PET imaging of the tumor-associated antigen TAG-72 in patients with primary prostate cancer, metastatic prostate cancer, or ovarian cancer. [41] As early as one day after injection, the radiotracer showed strong specificity and sensitivity for TAG-72-positive tumors, suggesting its potential as a diagnostic tool for cancer staging and monitoring.
Early-phase clinical trials have demonstrated the benefits of minibodies and other antibody fragments, which include lower immunogenicity and enhanced tumor targeting. However, obstacles, including intricate manufacturing procedures that raise production costs, impede their wider clinical application. [42] Furthermore, compared to full-length antibodies, these fragments frequently show decreased stability and shelf life, which presents logistical difficulties. Clinical acceptance of innovative agents is further delayed by rigorous and time-consuming regulatory routes. [43] Integration into clinical practice is also delayed by the need for substantial clinical evidence demonstrating effectiveness and safety relative to current agents. Despite these challenges, minibodies hold significant promise for targeted therapy. Continued research and development may resolve these problems and pave the way for broader clinical applications in the future.
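To make the pairing of fragment pharmacokinetics with radionuclide physical half-life discussed above more concrete, the sketch below combines physical decay with biological clearance using the standard effective-half-life relation (1/T_eff = 1/T_phys + 1/T_biol). The physical half-lives are approximate literature values; the biological half-lives assumed for a full-length IgG and an sdAb are illustrative placeholders, not figures taken from this review.

```python
def effective_half_life(t_phys_h: float, t_biol_h: float) -> float:
    """Effective half-life (h): 1/T_eff = 1/T_phys + 1/T_biol."""
    return 1.0 / (1.0 / t_phys_h + 1.0 / t_biol_h)

def fraction_remaining(t_half_h: float, t_h: float) -> float:
    """Fraction of initial activity left after t_h hours."""
    return 0.5 ** (t_h / t_half_h)

# Approximate physical half-lives (hours).
T_PHYS = {"Zr-89": 78.4, "Cu-64": 12.7, "F-18": 1.83, "Ga-68": 1.13}

# Illustrative biological half-lives (hours): IgG ~3 weeks, sdAb ~1.5 h (assumed).
T_BIOL = {"full-length IgG": 21 * 24.0, "sdAb": 1.5}

for vector, t_biol in T_BIOL.items():
    for nuclide, t_phys in T_PHYS.items():
        t_eff = effective_half_life(t_phys, t_biol)
        left_24h = fraction_remaining(t_eff, 24.0)
        print(f"{vector:15s} + {nuclide:5s}: T_eff = {t_eff:6.1f} h, "
              f"activity remaining at 24 h = {left_24h:5.1%}")
```

Under these assumptions, a long-lived isotope such as 89Zr only pays off when the vector itself persists in circulation for days, whereas for an sdAb that clears within hours the effective half-life is dominated by clearance, so a short-lived isotope such as 68Ga or 18F sacrifices little signal while simplifying dosimetry and handling.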
Site-Specific Bioconjugation
Radioimmunoconjugates are designed to enable precision medicine, but their synthesis is surprisingly imprecise. Most radioimmunoconjugates are made by randomly attaching amine-reactive prosthetic groups, such as chelators or radiohalogenated moieties, to lysines in the mAb. This method is easy, but it creates heterogeneous products and can affect the mAb's antigen binding.
Variability in conjugation methods can significantly influence the distribution, targeting accuracy, safety, and efficacy of radioconjugates. The pharmacokinetics, stability, tolerability, and effectiveness of early-generation ARCs, for example, were shown to be suboptimal when they were manufactured as heterogeneous mixtures. [44] Consequently, the focus has shifted to creating homogeneous constructs with precise drug loading and controlled attachment sites. In terms of their pharmacological characteristics, homogeneous ARCs have proven to give better results than their heterogeneous counterparts. [45] Additionally, site-specific bioconjugation can be accomplished by altering an amino acid's microenvironment, making it possible to activate a particular amino acid residue in the presence of other reactive species. This precision is crucial for maintaining the functionality of the mAb and ensuring effective delivery of the therapeutic agent to target cells. [45] There are instances of monoclonal antibodies where an effect on antigen binding has influenced the effectiveness of treatment. For example, Entyvio (vedolizumab) is used to treat inflammatory bowel disease (IBD), while Avastin (bevacizumab) is used to treat colon cancer. Since these medications are designed to attach to particular antigens, modifications to this capacity may affect their therapeutic efficacy. [46] The integrity of the mAb's antigen-binding site must therefore be preserved for site-specific bioconjugation to deliver the intended therapeutic effect. The homogeneity and effectiveness of these therapeutic compounds are constantly being enhanced by developments in bioconjugation procedures.
To overcome these problems, various site-specific and site-selective bioconjugation methods have been developed, and the results demonstrate that they generate radioimmunoconjugates that perform better in vivo than those produced by random conjugation. [47] Precision is even more important for fragment-based radioimmunoconjugates because their small size makes it more likely that the cargo will interfere with their antigen-binding domains. The most common site-specific bioconjugation method uses thiol-reactive probes, such as maleimides, to link to cysteines created by reducing the antibody's interchain disulfide bonds. This method is better than the traditional one, but it still has some drawbacks: the maleimide-thiol bond can break under physiological conditions, and a full-length IgG can have 4-8 free cysteines (depending on the reduction conditions), which can cause some heterogeneity. Therefore, the field has increasingly used antibody engineering to achieve more stable and consistent site-specific bioconjugation.
Peptide recognition sites can be added to immunoglobulins for chemoenzymatic labeling by antibody engineering. Rudd et al., for instance, employed the transpeptidase sortase A to attach glycine-linked chelators to an epidermal growth factor receptor (EGFR)-specific Fab bearing a C-terminal LPETG motif in a mouse model of EGFR-positive epidermoid cancer. This yielded 64Cu- and 89Zr-labeled radioimmunoconjugates with very promising in vivo efficacy, demonstrating rapid tumor uptake, high tumor-to-background ratios at 1 h post-injection of 7.54 ± 2.16 and 5.50 ± 0.30, respectively, and tumor retention of up to 18 h, showing superior in vivo performance compared with radiolabeled full antibodies, which require several days between injection of the tracer and imaging (Figure 3). [48] Similarly, Bridoux et al. joined an hPD-L1-binding sdAb with a C-terminal LPETG element to a NOTA variant with a GGGYK tag. They demonstrated that the in vivo performance of the site-specifically labeled radioimmunoconjugate, [68Ga]Ga-NOTA-(hPD-L1), was superior to that of its randomly labeled counterpart. [49] Bioconjugation of antibodies with natural or unnatural amino acids at specific sites is also a promising strategy, especially for fragment-based probes. For instance, Chigoho et al. made a radiotracer, [68Ga]Ga-NOTA-mal-hPD-L1, by attaching a maleimide-linked NOTA variant to a C-terminal cysteine of an hPD-L1-binding sdAb, and showed its promising in vivo performance. [50] A far more complex method introduces artificial amino acids with orthogonal reactive groups through genetic code expansion, providing unparalleled site selectivity. Using this technique, Ahn et al. added four p-azido-methyl-phenylalanine residues to the Fc region of the mAb trastuzumab. [51] They then coupled the antibody to DFO and DO3A that had been modified with dibenzocyclooctyne. The site-specifically modified immunoconjugates were then labeled with 89Zr and 111In, respectively.
Interest in engineering-driven bioconjugation has also increased owing to chemoenzymatic bioconjugation. Vivier et al. truncated the heavy-chain glycans of the HER2-targeting antibody pertuzumab using the enzymes β-galactosidase and endoglycosidase and then linked azide-modified galactose residues to the residual sugars. They next attached the chelator desferrioxamine (DFO) to the azide-bearing glycans using the strain-promoted azide-alkyne click reaction, resulting in site-specifically labeled immunoconjugates containing 89Zr. They showed that, in a humanized mouse model of HER2-positive breast cancer, the site-specific bioconjugation not only increased the homogeneity and reproducibility of the radioimmunoconjugates but also decreased their binding to human and murine FcγRI and improved their tumor uptake and contrast. [52]
Modulating Fc Interactions
The pharmacokinetics and pharmacodynamics of mAbs are influenced by their Fc regions and how they bind to Fc receptors. While FcRn regulates the serum half-life of antibodies, different types of Fcγ receptors, such as FcγRI, FcγRII, FcγRIIIA, and FcγRIIIB, modulate the immune response by forming immune complexes with antigens. Therefore, Fc engineering is a promising strategy to optimize the performance of radioimmunoconjugates for imaging and therapy.
Several studies have sought to modulate the behavior of mAb-based radioimmunoconjugates by either increasing or decreasing Fc receptor binding. By interacting with Fc gamma receptors (FcγR), which are expressed in several organs, particularly by liver Kupffer cells, the Fc region of a mAb can cause off-target binding. To overcome this, certain antibodies are designed with mutations in the Fc region that decrease Fc-receptor binding. Mangeat et al. demonstrated that, in tumor-bearing mice, insertion of the LALA-PG triple mutation in [89Zr]Zr-DFO-antibodies against several targets reduced FcγR binding and liver accumulation. [53] They observed that, compared with wild-type antibodies, the Fc-engineered antibodies showed greater tumor-to-liver ratios and decreased liver uptake. These modifications were linked to improved tumor targeting and imaging with the Fc-engineered antibodies. Similar outcomes were noted by Burvenich et al. using anti-Lewis-Y mAbs labeled with 111In and 177Lu and carrying two mutations (I253A and H310A) that decreased FcRn binding. [54] The heavy-chain glycans of mAbs modulate their Fc receptor engagement, which has inspired the development of glycoengineered radioimmunoconjugates. In contrast to fully glycosylated monoclonal antibodies, Vivier et al. showed that enzymatic removal of glycans from [89Zr]Zr-DFO-trastuzumab decreased FcγRI binding and uptake in healthy tissues in tumor-bearing NSG and huNSG mice (Figure 4). [55] They found that the deglycosylated immunoconjugates had impaired FcγRI binding in vitro and reduced off-target uptake in vivo, especially in the liver and spleen. These reductions were accompanied by increased tumoral uptake and improved tumor-to-healthy-organ contrast and PET image quality. Two glycoengineered variants of the L1CAM-targeting radioimmunoconjugate [89Zr]Zr-DFO-HuE71 were used more recently by Sharma et al. (Figure 5) to demonstrate the effectiveness of this strategy. [56] The afucosylated version with greater FcγRIIIA binding displayed higher accumulation in the liver and lymphoid organs and decreased tumor uptake compared with the parent radioimmunoconjugate; the low tumor uptake was attributed to the liver acting as a major sink.
(Figure 4 caption, continued) 89Zr-DFO-nsstrastuzumab, 89Zr-DFO-sstrastuzumab-EndoS, and 89Zr-DFO-nsstrastuzumab-PNGaseF in NSG mice bearing subcutaneous BT474 xenografts at 24, 48, and 120 h post-injection. Reproduced with permission. [55] Copyright 2022, SNMMI.
To modify the distribution and immune response functions of radioimmunoconjugates, the distinct Fc interactions of the four IgG subclasses, IgG1, IgG2, IgG3, and IgG4, have also been utilized.Bicak et al. synthesized an 225 Ac-labeled IgG3based radioimmunoconjugate of the hexokinase 2-targeting mAb hu11B6 for use in prostate cancer radioimmunotherapy. [59]heir idea was that the increased complement activation and Fc receptor binding of the IgG3 scaffold would result in an enhanced anti-tumor immune response by the active recruitment of effector cells, leading to enhanced radioimmunotherapy.Nevertheless, neither the R435H mutant variant restored FcRn binding nor the [ 225 Ac]Ac-hu11B6-IgG3 had enhanced therapeutic effectiveness over [ 225 Ac]Ac-hu11B6-IgG1.Sharma et al.'s study, which was previously mentioned, not only explored how modifications to the Fc region affect the biodistribution of antibodies targeting L1CAM in vivo, but they also investigated the choice of IgG subclass. 89Zr-immunoPET was performed using two IgG4-based radioimmunoconjugates that target L1CAM.Since this subclass generates fewer effector functions than other subclasses, potentially decreasing the risk of toxicity, it has drawn interest in the realm of immunotherapeutics. [56]However, wildtype IgG4-based radioimmunoconjugate exhibited high levels of nonspecific renal absorption.An engineered variant with a mod-ified hinge region S228P resulted in reduced renal uptake.The modification prevented in vivo Fab arm exchange, a characteristic of IgG4, allowing for a more stable structure similar to IgG1-like interchain disulfides.Lastly, Man et al. went beyond the IgG isotype by examining the in vivo behavior of an IgEbased anti-CSPG4 antibody designed to trigger a more robust immune response using, monitored with immunoSPECT. [60]In the absence of FcRn-mediated recycling, the [ 111 In]In-IgE was eliminated from the blood of tumor-bearing mice significantly more quickly than its [ 111 In]In-IgG counterpart.The faster blood clearance of [ 111 In]In-IgE led to tumor-to-blood activity concentration ratios that were equal to those of homologous IgG, although it accumulated considerably in the liver.
Single-chain fusion proteins composed of Fc domains and antigen-binding fragments have also been used as radioimmunoconjugates.These fusion proteins share several characteristics with full-length mAbs, such as large size, multivalency, and FcRn interaction.The single-chain structure, which does not require simultaneous expression of heavy and light chains, can be produced more easily.Rochefort et al. developed an antibody fragment, 124 I-labeled (scFv)2-Fc, that targets CA19-9 and possesses an H310A mutation that prevents FcRn binding for microPET imaging. [61]They found that the fragment had a similar affinity as the intact antibody and that the mutated fragment had an increased blood clearance rate.Delage et al. described using a 177 Lulabeled anti-TEM-1 scFv-Fc fusion antibody (scFv)2-Fc, 1C1m-Fc for theranostics of a sarcoma model.1C1m-Fc exhibited modest uptake and tumor-to-background ratios in TEM-1-positive xenografts, although they made no mention of Fc engagement or mutations. [62]he process by which therapeutic antibodies deliver cytotoxic payloads into cancer cells is known as endocytosis, and it is essential for the therapeutic potential of both conjugated (e.g., drugconjugated) and naked (unconjugated) antibodies. [63]The Fc region interacts with Fc receptors on leukocytes to elicit various effector functions, one of which is ADCC, a crucial cytolytic mechanism of natural killer (NK) cells.Research is ongoing to enhance the Fc region's interaction with Fc receptors, such as CD16A on NK cells.Copyright 2019, AACR.D) Imaging workup of patient 2 with a history of sigmoid cancer and synchronous liver metastases, treated with chemotherapy and intra-arterial chemotherapy.E) Liver CT and MRI scans performed in the presence of a progressive increase in serum CEA show stable residual liver lesions with no evidence of tumor activity.F) FDG-PET was negative.G-I) The immuno-PET (iPET) revealed multiple liver tumor foci.Surgical specimens of the liver lesions confirmed the diagnosis of metastases from the sigmoid cancer.Reproduced with permission. [69]Copyright 2020, Springer Nature.
Binding Multiple Targets
Antibody engineering has enabled the development of radioimmunoconjugates that use bispecific antibodies (BsAbs) to recognize two different antigens simultaneously. Compared to traditional mAbs, BsAbs offer several benefits, including the ability to attract immune cells, overcome drug resistance, induce synergistic anticancer effects, and block protumor signaling pathways. Generally, full-length BsAbs are made up of a heavy-chain/light-chain pair from one mAb and another pair from a different mAb (Figure 6A). Various bi- and tri-specific formats have also been created using antibody fragments.
Like mAbs, BsAbs can be used as companion imaging agents for their cold equivalents. For example, immunoPET using a 89Zr-labeled BsAb that binds CD3 and carcinoembryonic antigen (CEA) to activate T cells was studied in a clinical trial in patients with gastrointestinal cancer (Figure 6A-C).[64] The CD3/CEA-specific BsAb accumulated in tumor lesions as well as lymphoid organs, offering insights into the diversity of antigen expression. These data could be useful for designing patient-tailored treatments and dosing regimens. Crawford et al. assessed the in vivo behavior of an alternative T-cell-engaging BsAb that targets CD3 and mucin 16 (MUC16) in a mouse model of ovarian cancer. They found that lymphoid organs and tumor tissue had the highest activity concentrations.[65] This distribution was elegantly explained by blocking analysis, which demonstrated that blocking with a CD3-specific mAb preferentially decreased uptake in the lymphoid tissues, while blocking with a MUC16-specific mAb did the same for the tumor.
BsAbs have also been used to create radioimmunoconjugates that can pass through the blood-brain barrier (BBB).[66] The BBB normally prevents mAbs from passing through because of their large size and polarity, which limits the applications of radiolabeled antibodies in neuroimaging and therapy.[67] However, it has been shown that appending a transferrin-binding Fab fragment to a BsAb facilitates transferrin receptor-mediated transcytosis of the bispecific construct across the BBB. Antibody fragments, with their high specificity and low background signal, provide enhanced imaging accuracy for brain targets compared with, e.g., small molecules, whose lipophilicity enables them to cross the BBB but also contributes to non-specific binding. Syvänen et al. created a 124I-labeled Tribody™ to identify amyloid-β protofibrils in the brain, protein aggregates linked to Alzheimer's disease.[68] They created five different fusion proteins (A1-A5) that combine a transferrin receptor antibody with the amyloid-β protofibril-specific antibody mAb158. On PET, 124I-A3 was retained only in mice with Aβ plaque pathology, with a region-of-interest-to-cerebellum ratio that increased from 1 at 2 h post-injection to almost 3 at day 3 and correlated with ex vivo ELISA of soluble Aβ protofibrils.
Pretargeted imaging and therapy, in which tumor targeting is decoupled from delivery of the radioactive payload to reduce off-target exposure and provide more favorable pharmacokinetics, is another way that BsAbs are used in nuclear medicine.[22] The BsAb used in this approach is developed to bind both an exogenous radiolabeled hapten and a tumor antigen. The unlabeled BsAb is injected first, giving it time to accumulate in the tumor and clear from the circulation. The high affinity and selectivity of the BsAb for the radiolabeled hapten, which is administered later, then enable in vivo ligation between the two components. This technique allows the use of short-lived radionuclides that are generally incompatible with mAb-based vectors while lowering radiation dose rates to healthy tissues. In two recent clinical investigations, pretargeted PET was evaluated in patients with breast and colon cancer using a 68Ga-labeled peptidic hapten ([68Ga]Ga-IMP288) and a BsAb (TF2) targeted to CEA. In general, radiolabeled antibodies for nuclear imaging and radioimmunotherapy have demonstrated excellent performance and adaptability through the application of antibody engineering for radiotheranostics. Nonetheless, certain challenges and limitations persist, necessitating focused attention on refining dosimetry and pharmacology, along with assessing safety and effectiveness in clinical studies.
Radionuclides
When selecting a radionuclide for conjugation to an antibody or a fragment, several parameters are taken into consideration. These include whether or not the protein will be internalized by the target cell, the conjugation chemistry, and, for radiometals, the chelation procedure. Once internalized, radiometal-based probes and their daughter isotopes, even when released from their chelators, are retained intracellularly because their polar, charged nature prevents them from crossing cell membranes, leading to accumulation in lysosomes.[71] This retention can both enhance activity in normal tissue, particularly in excretory organs, and increase overall uptake in target tumor tissue. Conversely, free iodide or iodotyrosine released from radioiodinated internalizing carrier proteins is easily expelled from the cell and swiftly removed from the body. 131I is the most commonly used iodine isotope for SPECT and targeted radiotherapy of, e.g., NETs and prostate cancer. While traditional treatment of thyroid cancer, especially differentiated thyroid cancer, relies heavily on radioiodine therapy without a targeting carrier, owing to the thyroid's natural uptake of iodine, advancements in biotechnology have introduced approaches for targeting thyroid cancer cells more specifically, particularly for less responsive types of thyroid cancer. For thyroid cancer, mAbs may be designed to target markers specific to thyroid cancer cells, such as thyroglobulin and the thyroid-stimulating hormone receptor. Conventional radioiodination methods like Iodogen lead to detachment of the isotope after internalization;[72] overall, this decreases background in imaging, but the detached radioiodine can damage healthy thyroid cells much as free radioiodine therapy does, and over time internalization and degradation cause the target (tumor) tissue to become less active.[73] Thus, for internalizing radioiodine probes, a radioiodination method that ensures greater stability of the bond between iodine and the protein is more desirable. For probes that remain on the cell surface, radiopharmaceuticals designed so that the radionuclide detaches or is released from its chelator after binding can be preferable, because this allows potential irradiation of nearby tumor cells (the cross-fire effect), not just the cells to which the radiopharmaceutical is bound. This can be particularly useful in solid tumors, where not all cells may express the targeted antigen at levels sufficient for direct targeting. It is crucial to remember that internalization is not an all-or-nothing process; there is a broad range in the degree and rate of internalization of cell surface antigens. Some cell surface receptors and their bound antigens undergo slow, continuous internalization as part of natural membrane turnover. This steady-state internalization maintains cellular functions and receptor homeostasis, ensuring that cells can respond to environmental signals over time without over-accumulation of specific receptors on the cell surface. In contrast, certain antigens, when bound to their receptors, can trigger a much faster internalization process, often observed in the context of receptor cross-linking, where binding of an antigen (or an antibody) to a receptor leads to aggregation of receptor-antigen complexes that can activate signaling pathways that accelerate endocytosis.[74]
Common radionuclides that emit positrons include 18F, 68Ga, 64Cu, 89Zr, and 124I; common radionuclides that emit single photons are 99mTc, 123I, 131I, and 111In (Table 1). Shorter-lived radionuclides (18F and 68Ga) match well with antibody fragments such as diabodies, single-domain antibodies, and scFvs, which have relatively short biological half-lives, making them appropriate for imaging tumors a few hours after injection. Longer-lived radionuclides such as 64Cu, 89Zr, and 124I pair well with the whole antibody or a larger antibody fragment such as the minibody for the greatest imaging contrast 24 to 48 h or more after injection.[74] Positron yield, positron range, and other emissions falling within the scanner's energy window are additional factors to consider when choosing a radionuclide for imaging. For instance, the larger positron range of 68Ga and 124I compared with 18F results in reduced resolution and additional partial volume effects (PVE) that may affect PET quantification.[75] Some radionuclides, like 64Cu, have dual therapeutic and imaging properties because they also emit high-energy beta particles on decay. To estimate the dosimetry of a therapeutic isotope, a positron emitter can be used as a surrogate, such as the pairing of the iodine isotopes 131I and 124I.[76] RIT selectively delivers therapeutic radionuclides to the tumor site, but if the half-life of the probe is too long, as with intact antibodies, normal organs are unnecessarily exposed to radioactivity. Antibody fragments have the potential to lower the dose to normal organs, particularly the radiosensitive bone marrow; nonetheless, the renal elimination of fragment-sized proteins necessitates monitoring the kidney dose.[77] RIT radionuclides emit beta particles, alpha particles, or Auger electrons that cause cytotoxic DNA damage through a variety of processes, including the generation of reactive oxygen species, single- and double-strand DNA breaks, and the inhibition of DNA repair (Table 2).[78,79] Independent of the radionuclide, an immune response such as antibody-dependent cellular cytotoxicity can also result in cell death, although the protein mass administered in RIT is frequently lower than a standard therapeutic antibody dose. Target density, heterogeneity, and the type and extent of the malignancy all influence radionuclide selection, because beta, alpha, and Auger emissions have different ranges and LETs. Owing to the cross-fire effect of a radiation field spanning several millimeters, beta-emitting radionuclides for RIT can irradiate tissue throughout and around the tumor, a consequence of their higher LET compared with gamma rays (typically around 0.2 keV μm−1) and their relatively long path length in tissue, which can span from one to several millimeters. 131I is a common beta emitter that has been used since 1941 for thyroid diseases and is readily available. In addition to their use in RIT, 131I and 177Lu both emit gamma photons that can be detected by SPECT imaging. Antibodies coupled with 177Lu or 90Y are catabolized to charged radiometal-chelate metabolites, which are retained within tumor cells and thereby enhance tumor retention.
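As a rough quantitative illustration of how physical half-life is matched to a vector's biological half-life, the sketch below computes the fraction of activity remaining after 24 h and the effective half-life 1/T_eff = 1/T_phys + 1/T_bio. The physical half-lives are standard nuclide data; the biological half-lives assigned to the vector formats are illustrative placeholders, not values taken from this review.

```python
import math

# Approximate physical half-lives in hours (standard nuclide data).
PHYSICAL_T_HALF_H = {"18F": 1.83, "68Ga": 1.13, "64Cu": 12.7, "89Zr": 78.4, "124I": 100.2}

def fraction_remaining(t_half_h: float, t_h: float) -> float:
    """Fraction of activity remaining after t_h hours of physical decay."""
    return math.exp(-math.log(2) * t_h / t_half_h)

def effective_half_life(t_phys_h: float, t_bio_h: float) -> float:
    """Effective half-life combining physical decay and biological clearance."""
    return 1.0 / (1.0 / t_phys_h + 1.0 / t_bio_h)

# Illustrative (assumed) biological half-lives for different vector formats.
vectors = {"scFv": 1.0, "minibody": 6.0, "intact IgG": 120.0}

for nuclide, t_phys in PHYSICAL_T_HALF_H.items():
    for vector, t_bio in vectors.items():
        t_eff = effective_half_life(t_phys, t_bio)
        left_24h = fraction_remaining(t_phys, 24.0)
        print(f"{nuclide:>5} + {vector:<10} T_eff = {t_eff:6.2f} h, "
              f"activity left after 24 h = {left_24h:5.1%}")
```

The output makes the pairing logic concrete: a short-lived nuclide on a long-circulating IgG has essentially decayed before the antibody reaches peak tumor uptake, whereas a long-lived nuclide on a rapidly cleared fragment mostly decays after the fragment has been excreted.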
One drawback of 90Y is that it primarily emits beta radiation rather than the gamma rays that conventional SPECT is designed to detect. Conversely, 90Y generates Cerenkov radiation, a type of luminescent emission that occurs when charged particles such as beta particles travel through a dielectric medium (e.g., biological tissue) faster than the phase velocity of light in that medium, as well as bremsstrahlung photons; these can be detected by optical Cerenkov luminescence imaging and SPECT, respectively.[80] Alpha emitters such as 225Ac, 211At, 213Bi, and 223Ra, which are more frequently used for RIT, can be helpful for treating smaller lesions and lesions resistant to beta radiation because of their high LET (80-100 keV μm−1) and much shorter range (a few cell diameters).[81] In vitro and in vivo preclinical investigations have demonstrated the effectiveness of 225Ac-labeled full-length antibodies in killing tumor cells. 225Ac decays to generate four alpha-emitting daughters, and its decay chain also includes the emission of SPECT-visible gamma rays.[82] However, the relatively low abundance and energy of the gamma emissions in the decay chains of alpha emitters, especially at the low therapeutic doses that patients receive, render SPECT ineffective for tracking their biodistribution.
Low-energy Auger emitters, like 125I, are thought to have a high LET (4-26 keV μm−1) and a very short path length in tissue (2-500 nm) (Table 2).[83] Auger emitters, like alpha emitters, might be a better option than beta emitters for limiting damage to healthy tissues.[84] Nonetheless, methods for intracellular delivery, ideally close to the cell nucleus, must be discovered.
Other radiometals, such as 47Sc, 67Cu, 149Tb, 161Tb, 166Ho, and 212Pb, may also be considered for antibody-mediated imaging and treatment and are listed in Table 1. A few of these radionuclides (47Sc, 67Cu, and 166Ho) emit both gamma rays and beta particles, which makes them useful for imaging as well as therapeutic purposes. 149Tb produces alpha particles, positrons, and imageable gamma rays, making it useful for treatment as well as SPECT or PET imaging. These radionuclides have shown therapeutic potential in preclinical and clinical studies when labeled to full-length antibodies and are therefore likely to produce similar effects with antibody fragment-based RIT.[85]
Conjugation Strategies
One of the challenges in developing ARCs is the choice of an appropriate conjugation strategy that can attach radioactive isotopes to antibodies without compromising their stability, immunoreactivity, and pharmacokinetics. [45]Conjugation to lysine residues is a common ADC method due to its convenience and stability; however, it lacks site-selectivity, resulting in a mixture of products with variable properties and potential toxicity.For ARCs specifically, achieving a consistent and predictable biodistribution of the radiolabel is crucial for both therapeutic efficacy and minimizing radiation exposure to non-target tissues.Therefore, methods that allow for site-specific conjugation are being researched and developed, aiming to provide uniform radiotracers, ensuring that each antibody carries a predictable number of radioactive atoms, thus optimizing the balance between therapeutic effect and safety. [86]
Enzyme-Mediated Radiolabeling
Enzyme-mediated radiolabeling, which employs enzymes to install specific functional groups on antibodies that can then react with radionuclide conjugates, is well suited to accomplishing site-specific labeling of antibody vectors. Enzymes are biocatalysts that can perform selective and mild reactions in biological systems. Site-selective modification of antibodies via enzyme-mediated radiolabeling can enhance the homogeneity, stability, and immunoreactivity of the radiolabeled conjugates.[10] A recent study using a chemoenzymatic approach found that, in immunoPET imaging investigations, site-specifically modified 89Zr-DFO-trastuzumab showed enhanced immunoreactivity and stability, performing better than its counterpart created via a random conjugation approach. Site-specific radiolabeling of VHHs with either 18F [87,88] or radiometals has been simplified by the use of sortase A (SrtA).[89,90] Additionally, a distinctive two-step modular technique for conjugating immunoPET probes is available; in this method, SrtA is used to introduce strained cyclooctyne functional groups into the targeting vector of interest. With the development of Ca2+-independent SrtA variants and enhancements of SrtA's catalytic activity, SrtA will provide an adaptable platform for the creation of more complex immunoPET probes.[75,91]
Figure 7. Schematic of a chemoenzymatic methodology for site-specifically grafting cargoes (e.g., a chelator) onto the heavy-chain glycans of an antibody of interest. Reproduced with permission.[93] Copyright 2016, ACS.
However, enzyme-mediated radiolabeling also has some limitations, such as the need to pre-functionalize the radionuclide with the appropriate click handle, the potential interference of endogenous azides or alkynes in biological systems, and the availability and cost of the enzymes.[92] To overcome these limitations, alternative techniques employing other enzymes that modify distinct amino acids or antibody motifs have also been used.
Click Chemistry-Mediated Radiolabeling
Click chemistry-mediated radiolabeling (Figure 7) is a promising method that uses bioorthogonal reactions to attach radioactive isotopes to antibodies that target specific tumor cells.[93] Engineering techniques such as inserting cysteine residues or unnatural amino acids enable bioorthogonal click chemistry. Click reactions include the inverse electron-demand Diels-Alder reaction (IEDDA) between tetrazine (Tz) and trans-cyclooctene (TCO), as well as the copper-free strain-promoted Huisgen cycloaddition.[94] With second-order rate constants as high as 10^5 M−1 s−1, the IEDDA reaction between 1,2,4,5-tetrazines and strained alkenes (such as TCO) is a well-known bioorthogonal reaction and is usually considered the fastest click reaction.[95] IEDDA has proven to be a very useful ligation method for labeling with short-lived radioisotopes because of its exceptionally fast reaction rate under mild conditions such as room temperature, neutral pH, and aqueous media. Additionally, IEDDA-based radioiodination with tetrazines can effectively avoid the deiodination that affects tracers prepared by the traditional radioiodination method of electrophilic substitution. Valliant et al. described rapid IEDDA-based radiolabeling of antibodies: treating a TCO-modified anti-VEGFR2 antibody with a 125I-labeled tetrazine analog for 5 minutes gave the intended product in 69% radiochemical yield. Notably, the radiolabeled antibody produced by this method was 10 times more stable against in vivo deiodination than the same antibody synthesized by direct radioiodination using Iodogen.[96] For diagnostic purposes, IEDDA ligation has also been used to generate a number of radiometal-labeled tracers. Lewis et al. reported using 64Cu or 89Zr to radiolabel norbornene-bearing trastuzumab together with tetrazine-conjugated metal-chelating agents such as DOTA and DFO.[97] This process produced radiolabeled trastuzumab with high specific radioactivity (>2.9 mCi/mg) and high radiochemical yield (>80%). Additionally, PET imaging studies indicated that the radiolabeled antibodies exhibited specific uptake in HER2-positive BT-474 tumors and were relatively stable in vivo. Therapeutic radioisotope-labeled human antibodies 5B1 and huA33 were prepared in 2018 via IEDDA ligation.[98] This work involved the synthesis of a tetrazine-conjugated DOTA chelator labeled with 225Ac; within five minutes, the TCO-modified antibodies reacted with the radiolabeled tetrazine tracer to give the intended products. Better radiochemical yields were obtained with this two-step strategy than with the traditional approaches used in clinical applications, and according to the biodistribution data, the 225Ac-labeled antibodies also showed substantial tumor uptake and comparatively minimal non-specific accumulation in normal organs.
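To illustrate why rate constants of this magnitude permit radiolabeling on the minute timescale, the sketch below estimates the pseudo-first-order half-time of a Tz/TCO ligation when one partner is present in large excess. The rate constant and concentrations are illustrative assumptions for a fast tetrazine/TCO pair, not values reported for the specific reagents above.

```python
import math

def pseudo_first_order_half_time(k2_M_inv_s: float, excess_conc_M: float) -> float:
    """Half-time (s) of a bimolecular ligation when one partner is in large excess."""
    k_obs = k2_M_inv_s * excess_conc_M  # pseudo-first-order rate constant, s^-1
    return math.log(2) / k_obs

# Assumed illustrative conditions: upper-end IEDDA rate constant and a
# micromolar excess of the TCO-modified antibody.
k2 = 1.0e5  # M^-1 s^-1
for conc_uM in (1, 10, 100):
    t_half = pseudo_first_order_half_time(k2, conc_uM * 1e-6)
    print(f"[TCO] = {conc_uM:>3} uM -> ligation half-time ~ {t_half:6.2f} s")
```

Even at low micromolar concentrations the ligation half-time is on the order of seconds, which is why IEDDA is well matched to short-lived radionuclides.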
Pre-targeting Strategy
The goals of pretargeted radioimmunodiagnosis and radioimmunotherapy are to effectively combine therapeutic radioisotopes with anticancer antibodies for high-contrast imaging and high-therapeutic-index (TI) tumor targeting, respectively (Figure 8).[99] Pretargeting strategies, in contrast to traditional radioimmunoconjugates, separate the payload phase from the tumor-targeting stage, increasing tumor uptake while minimizing exposure of normal tissue.[23] A new BsAb platform for highly effective two-step radiohapten pretargeting, combining a tandem single-chain BsAb with a self-assembling-and-disassembling (SADA) domain, was described by Santich et al.[100] To increase the therapeutic index, they created a drug delivery platform that quickly eliminates tumor-targeting proteins from the blood. The platform comprises a BsAb that binds both ganglioside GD2 and DOTA, joined to a SADA domain. SADA-BsAbs were used for two-step pretargeted radioimmunotherapy (PRIT) with various radioisotopes in mouse models of GD2-positive neuroblastoma and GD2-negative melanoma. The PRIT approach delivered high doses of radiation to tumors and eradicated them with negligible toxicity to the bone marrow, kidneys, or liver, demonstrating the potential of SADA-BsAbs as versatile and safe vectors for molecular targeting of cancer.
Figure 8. Representative bispecific antitumor/anti-radiocarrier vectors. Reproduced with permission.[99] Copyright 2022, SNMMI.
A clinical trial of anti-GD2 SADA-PRIT with a 177Lu-DOTA hapten is planned for patients with GD2-expressing, resistant or recurrent metastatic solid tumors, including malignant melanoma, sarcoma, and small-cell lung cancer (NCT05130255). The trial is currently recruiting participants, and no results have been posted yet. Furthermore, a novel BsAb antitumor/anti-chelate-hapten pretargeting system (antigen targets: CD20, HER2, and CEA), based on an anti-1,4,7,10-tetrakis(carbamoylmethyl)-1,4,7,10-tetraazacyclododecane (DOTAM) antibody with femtomolar affinity for lead-DOTAM complexes, was described by teams at Hoffmann-La Roche, Inc. and Orano Med LLC. In particular, they found that the anti-CEA/DOTAM BsAb PRIT-0213 gave dissociation constants of 0.84 pM for lead-DOTAM and 5.7 pM for bismuth-DOTAM. They reported dosimetry for 0.74 MBq of 212Pb-DOTAM in three-step pretargeting in nude mice carrying human cancer xenografts, calculating an absorbed dose of 99.55 Gy to the BxPC3 tumor and TIs of 28, 14, and 91 for the liver, kidneys, and blood, respectively, based on a relative biological effectiveness of 5.[101]
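For orientation, if the therapeutic index is taken as the tumor-to-organ absorbed-dose ratio (a common convention, assumed here rather than stated explicitly above), the reported TIs imply the following approximate organ doses; a minimal sketch:

```python
tumor_dose_gy = 99.55  # absorbed dose reported for the BxPC3 tumor
therapeutic_index = {"liver": 28, "kidneys": 14, "blood": 91}

# Assuming TI = tumor dose / organ dose, back-calculate the organ doses.
for organ, ti in therapeutic_index.items():
    organ_dose_gy = tumor_dose_gy / ti
    print(f"{organ:>8}: TI = {ti:>2} -> absorbed dose ~ {organ_dose_gy:5.2f} Gy")
```

Under that assumption the liver, kidneys, and blood would receive roughly 3.6, 7.1, and 1.1 Gy, respectively, two orders of magnitude below the tumor dose.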
Combined Conjugation Strategies
Integrating different conjugation techniques offers distinct advantages for enhancing ARCs. Enzyme-mediated conjugation alters antibodies at specific sites, avoiding random modification and maintaining the antibody's affinity and functionality, but comes with limitations that can be overcome using copper-free or catalyst-free click reactions, including SPAAC and IEDDA.[90] These methods offer advantages such as higher biocompatibility, faster kinetics, and lower background noise. The combination of enzyme-mediated conjugation and click chemistry ensures that the conjugates are uniform and stable, improving the efficacy and safety of targeted therapies. The common glycosyltransferase-mediated reaction involves the modification of the glycans on the antibody with specific functional groups, such as azide or alkyne, that can then react with the corresponding radionuclide conjugates via click chemistry.[102] The glycosyltransferase-mediated reaction can avoid random modification of lysine or cysteine residues and produce more uniform and well-defined ARCs. Zeglis et al., for instance, developed and verified a protocol for the enzyme- and click-chemistry-mediated site-selective radiolabeling of antibodies on heavy-chain glycans. Using the glycosyltransferase enzyme β-1,4-galactosyltransferase (β4GalT1), they incorporated azide-modified N-acetylgalactosamine monosaccharides into the antibody's glycans. Next, they click-coupled desferrioxamine-modified dibenzocyclooctynes to the azide-bearing sugars without the need for a catalyst. By using the positron-emitting radiometal 89Zr to radiolabel the prostate-specific membrane antigen-targeting antibody J591, they established the antibody's high stability and immunoreactivity both in vitro and in vivo.[102] When using pretargeting techniques, click chemistry facilitates a rapid and strong bond between the pretargeting and the radioactive components, enhancing delivery of the radioisotope to the tumor site while minimizing exposure of healthy tissues. Bioorthogonal click reactions allow quick radiolabeling (10-15 minutes), which is advantageous when using radionuclides with short half-lives, such as 18F.[91] Pretargeted 18F- and 64Cu-PET imaging of in vivo models has been used to create high-contrast images through the bioorthogonal click reaction between Tz and TCO.[103] A pretargeted PET imaging technique based on the bioorthogonal Diels-Alder click reaction between trans-cyclooctene and tetrazine was created by Zeglis et al.; a 64Cu-labeled tetrazine radioligand and an anti-CA19.9 antibody modified with trans-cyclooctene were utilized to image CA19.9-expressing BxPC3 pancreatic cancer xenografts in mice. The pretargeting strategy demonstrated substantial tumor uptake (4.1 ± 0.3 %ID/g) and high tumor-to-background ratios at 4 h post-injection while lowering the radiation dose to normal tissues compared with directly labeled antibodies. Furthermore, the two arms of an anti-EGFR and anti-CD105 bispecific antibody were conjugated using click chemistry between Tz and TCO, and this antibody was then utilized for PET imaging: by connecting two antibody Fab fragments (an anti-EGFR Fab and an anti-CD105 Fab) via bioorthogonal "click" ligation of trans-cyclooctene and tetrazine, Luo et al. created a bispecific immunoconjugate known as Bs-F(ab)2.
[104] Finally, pretargeted radioimmunotherapy investigations using a trans-cyclooctene-modified anti-CA19.9 antibody and a 177Lu-labeled tetrazine radioligand have also been conducted using click chemistry.[105] Among the different radiolabeling approaches, enzyme-mediated radiolabeling excels in specificity but faces challenges in synthesis complexity,[106] click chemistry-mediated radiolabeling is lauded for its efficiency and stable conjugate production,[107] and pretargeting strategies are distinguished by superior tumor targeting and reduced systemic toxicity.[108] These approaches are complemented by the exciting fields of AI for predictive modeling and innovative BBB-crossing techniques, which collectively signal a robust trajectory toward clinical application, as evidenced by recent literature.[66,108]
Conclusion
Synthetic ARCs are a vital radiotheranostic tool with the potential to revolutionize cancer treatment. However, the majority of ARCs created to date have been evaluated only in small patient cohorts or in preclinical studies. More research is required to prove the theranostic utility of the clinically employed ARCs and to translate promising new designs into clinical use. The development of new antibody therapies is transforming disease treatment and driving the creation of more advanced antibody engineering techniques, including ARCs, which can aid physicians in improving their clinical judgment and realizing genuinely individualized medicine, ultimately benefitting patients and society as a whole.
Non-engineered antibodies like J591 continue to be widely used in clinical practice despite their known limitations.This prevalence is largely due to their long history of use, which has established a broad base of data supporting their safety and efficacy.Clinicians and patients alike tend to favor treatments with which they are familiar, and regulatory approvals for these antibodies further support their continued use.
On the other hand, engineered antibodies are increasingly recognized for their potential to overcome some of the limitations of traditional antibody-based therapies. Engineering modifications can improve pharmacokinetics, making these agents better suited for solid tumor imaging and treatment while maintaining the specificity and affinity characteristic of intact antibodies. However, engineered antibodies often face hurdles in clinical adoption due to ongoing clinical trials, limited availability, and higher production costs compared with their non-engineered counterparts.
While further research is needed to fully establish the advantages of engineered antibodies, we believe they hold promise for addressing the shortcomings of conventional antibody therapies.As more data become available and these products reach regulatory approval, we anticipate an increase in their clinical application.
Additionally, certain challenges common to all antibody-based therapies regarding synthesis and delivery remain to be solved, and many approaches have been explored to this end. AI-based prediction has the potential to be a key component in anticipating the behavior of designed antibodies.[109] Machine learning algorithms can analyze large-scale data to forecast the interactions between antibodies and various antigens, a capability that can be very helpful in tailoring therapies for individual patients. Predicting the stability and effectiveness of antibodies under different physiological conditions is another important aspect of AI's contribution to the success of antibody therapy.[110] Furthermore, the BBB is a major barrier to getting therapeutic agents into the brain.[111] Cutting-edge tactics to improve the distribution of radioimmunotherapy agents across the BBB are many and varied.[67] First, radioactive isotopes can be encapsulated in nanocarriers that target brain tumors, enabling higher local concentrations of the therapeutic agent with less systemic toxicity. Second, Trojan-horse approaches couple antibodies to molecules that naturally cross the BBB, such as insulin or transferrin, to expedite delivery of therapeutics to the brain. Last, osmotic agents or targeted ultrasound can temporarily open the BBB to facilitate the passage of antibodies.[111] The development of mRNA-based gene delivery platforms is at the forefront of translating engineered antibodies for use in clinical radioimmunotherapy. These platforms are necessary for the deployment of novel antibodies, such as transient, Fc-free, multivalent, and multispecific variants. Major advancements include the development of chimeric antibodies that enhance immune responses against tumors and the clinical successes of bispecific antibodies for the management of diseases such as rheumatoid arthritis and certain types of cancer. Together with AI's role in anticipating antibody behavior and creative methods for overcoming the blood-brain barrier, these developments are essential to advancing radioimmunotherapy from research to practical clinical settings and ushering in a new age in cancer therapy.
Undoubtedly, the preclinical evidence generated by numerous innovations, such as site-specific bioconjugation techniques, sdAb-based radioimmunotheranostics, and novel approaches to pretargeted imaging and therapy, is cause for excitement. Moving these technologies from the lab to clinical settings is essential, and while that is easier said than done, generating clinical data as quickly as possible is what will ultimately advance their impact on patient care. Further promising preliminary research is likely to proliferate, especially as the cross-pollination of ideas among immunotherapy, antibody-drug conjugates, and radiotheranostics is further encouraged.
In both the therapeutic and diagnostic domains, antibody-based radiopharmaceuticals have a promising future. Their ability to provide accurate imaging facilitates early disease diagnosis, which is essential for prompt treatment, and their capacity to deliver radioactive agents directly to cancer cells while sparing healthy tissue makes them promising agents for targeted therapies with few side effects. Particularly exciting is the development of theranostics, which combines therapeutic and diagnostic properties in a single agent. This approach exemplifies individualized medicine, as it enables the simultaneous imaging and treatment of malignancies. The dual utility of antibody-based radiopharmaceuticals is expected to increase as research advances, with theranostics setting the standard for innovation and providing a window into the future of integrated patient care. The advancement of this field is not about choosing between diagnostics and therapy but rather about integrating both to enhance patient outcomes.
In summary, the future of modified antibodies for radioimmunotherapy will be characterized by a concentrated effort to improve site-specific bioconjugation for increased safety and efficacy, adjust pharmacokinetics for improved tissue targeting, and modify Fc interactions to improve therapeutic indices.In addition, efforts are ongoing to identify novel molecular targets and develop bispecific constructs to improve targeting accuracy and reduce off-target effects.Protein engineering developments will improve control over antibody properties, guaranteeing more efficient tumor delivery.Collectively, these research projects are paving the way for more individualized and effective radioimmunotherapy.
Figure 1. Overview of the antibody fragments, radionuclides, and conjugating methods discussed in this review.
Shifting to radioimmunotherapy, Tsai et al. assessed an anti-prostate stem cell antigen (PSCA) minibody (A11 Mb) labeled with 131I and 177Lu in a human prostate cancer xenograft model for pharmacokinetics and therapeutic effectiveness. The 177Lu-labeled minibody ([177Lu]Lu-DTPA-A11 Mb) delivered a lower radiation dose to the tumor than the 131I-labeled minibody ([131I]I-A11 Mb), and the minibody demonstrated quicker clearance from blood and normal tissues than full antibodies. A single dose of [131I]I-A11 Mb used in radioimmunotherapy demonstrated dose-dependent tumor suppression with low toxicity and improved
Figure 2. A) Commonly used antibody fragments. B) [18F]FBEM-GAcDb immunoPET in human CD20 transgenic mice, 4.3 MBq/12 μg. The signal in the kidneys and the bladder increased over time, indicating clearance/excretion of the tracer primarily through the kidneys and into the urine. Increased gallbladder (GB) and GI tract activities suggest secondary excretion of radiometabolites through the hepatobiliary system. Reproduced with permission.[34] Copyright 2018, Springer Nature. C) [131I]I-A11 Mb and [177Lu]Lu-DTPA-A11 Mb inhibit PSCA-positive cell growth in vitro. Reproduced with permission.[36] Copyright 2020, Springer Nature. D) Whole-body images of one patient at various times after injection of 89Zr-IAB22M2C (1.5-mg minibody dose). All images show the most intense activity within the spleen, followed by marrow, liver, and kidneys. Reproduced with permission.[40] Copyright 2020, SNMMI.
Figure 3. A) Schematic representation of the enzyme-mediated bioconjugation of a chelator to a Fab using SrtA at the C-terminal recognition sequence (LPETG). The threonine-glycine link is broken by the enzyme upon recognition of the short -LPETG- amino acid motif, and a cysteine residue found in the active site is converted to a thioacyl intermediate. The enzyme subsequently creates a new amide bond by accepting an incoming nucleophilic N-terminal glycine. It is possible to modify the C-terminus of the substrate protein by sourcing the incoming glycine from either the cleaved peptide or another peptide in solution that has the necessary N-terminal glycine. B) PET/CT MIPs (scale given in SUV) following administration of [89Zr][ZrL1]-Fab528 and [89Zr][ZrDFOSq]-cetuximab. Reproduced with permission.[48] Copyright 2021, RSC.
Figure 6. Biodistribution of 89Zr-AMG 211. A) 89Zr-AMG 211 healthy-tissue biodistribution at 3 h post tracer administration for the different dosing cohorts used for imaging before (blue) and during (green) AMG 211 treatment. Data are shown as median SUVmean with error bars. B,C) Nonlinear regression curves showing mean SUVmean in the blood pool measured in the thoracic aorta per PET scan time B) before AMG 211 treatment and C) during AMG 211 treatment. Reproduced with permission.[64] Copyright 2019, AACR. D) Imaging workup of patient 2, with a history of sigmoid cancer and synchronous liver metastases, treated with chemotherapy and intra-arterial chemotherapy. E) Liver CT and MRI scans performed in the presence of a progressive increase in serum CEA show stable residual liver lesions with no evidence of tumor activity. F) FDG-PET was negative. G-I) Immuno-PET (iPET) revealed multiple liver tumor foci. Surgical specimens of the liver lesions confirmed the diagnosis of metastases from the sigmoid cancer. Reproduced with permission.[69] Copyright 2020, Springer Nature.
Table 1. Radionuclides used in antibody radionuclide conjugates.
Table 2. Radionuclides used in antibody radionuclide conjugates for RIT.
Impacts of Ageratina riparia (Regel) R. M. King & H. Rob. on natural regeneration of sub-montane forests at Knuckles Forest Reserve, Sri Lanka
Forest gaps and margins of sub-montane forests in the Knuckles Forest Reserve (KFR) are invaded by Ageratina riparia. It creates a dense cover and prevents sunlight from penetrating to the ground, which may affect the seedling establishment of indigenous species in invaded areas. Six forest gaps and four footpaths inside sub-montane forests were sampled for A. riparia cover, density of forest species, soil moisture, soil root density, and canopy openness. Soil seed bank experiments were conducted during the wet and dry seasons. The percentage cover of A. riparia decreased significantly when moving from the center of gaps and footpaths into the forest interior. Mean density and species diversity of forest species decreased with increasing percentage cover of A. riparia. Low root density of forest species was observed in areas with a high density of this invasive species. Higher seedling emergence of A. riparia from the soil seed bank was observed along footpaths (~1,500 seedlings m−2) than in forest gaps (~750 seedlings m−2) during the wet season. A. riparia seedling emergence was higher during the dry season (~22%) than the wet season (7-11%). Lower numbers of forest seedlings emerged in locations with a higher percentage of A. riparia seedlings. Availability of light affects the establishment of A. riparia inside forests. The native species Psychotria zeylanica and Symplocos cochinchinensis can be used to restore forests invaded by A. riparia.
INTRODUCTION
Although Sri Lanka is a small tropical island, the country's wide range of topographic and climatic variation has led to the evolution of many types of ecosystems, with a level of biodiversity per unit area higher than that of most other countries in the region (Bambaradeniya, 2002). The richest biodiversity of the country is found in the southwest, which includes 670 km2 of montane forest and 740 km2 of lowland forest, a total of 1,410 km2 (roughly half being primary forest), or a mere 9% of the original forest cover of almost 16,000 km2 (Myers, 1990). Sub-montane forests occur at 1,000-1,500 m elevation above mean sea level and cover 1.1% of the total land area (Bastiaanssen and Chandrapala, 2003). These forests are important for their high biodiversity and the many ecosystem services they provide, including protection of important watersheds and habitat for many endemic flora and fauna (Doumenge et al., 1995; Weerawardane, 2005).
The introduction of invasive species has become a major threat to biodiversity in many tropical countries, and some consider it the second greatest global threat to biodiversity after habitat destruction (Gould and Gorchov, 1999; Kairo et al., 2003). Characteristics common to invasive species include high dispersal capacity; physiological tolerance of the immediate stresses of new habitats (environmental gradients such as temperature, photoperiod and climate, and resident species acting as competitors or predators) or phenotypic plasticity (Maron et al., 2004); production of small, short-lived seeds that germinate readily, together with short juvenile periods (Goodwin et al., 1999); high reproductive allocation, rapid vegetative growth rates and high potential for acclimation (McDowell, 2002); and production of allelopathic compounds (Orr et al., 2005). A large number of species extinctions have been attributed to the introduction of invasive species, while other invaders contribute to the degradation of catchment areas and irrigation systems, causing severe economic losses (Mooney et al., 1989; Marambe et al., 2001; Vila and Weiner, 2004; Weerawardane, 2005). Invasive species that have escaped from cultivation have infested lawns as pests, displaced native plant species, reduced wildlife habitat, clogged important waterways and altered processes in natural ecosystems (Marambe et al., 2001); it is therefore important to prevent the introduction of new species to pristine environments.
Non-native invasive species usually colonize degraded habitats, and shade-tolerant invasive species can also establish in disturbed forest habitats (Fine, 2002; Brown et al., 2006). Disturbances in forest ecosystems influence soil conditions, propagule availability, species composition and community structure (Fine, 2002). Invasive species tend to dominate disturbed areas, as they are capable of tolerating harsh conditions. If invasive species dominate the initial stage of forest regeneration, they are likely to limit the establishment of native species by competing aggressively with them for resources (Lichstein et al., 2004). Many studies have documented reduced establishment of native species in areas invaded by non-native species (El-Ghareeb, 1991; Pyšec and Pyšec, 1995; Dunbar and Facelli, 1999; Martin, 1999). In forest ecosystems, light is a critical factor affecting the growth of new seedlings of forest species, and in disturbed areas invasive plants may suppress native plant establishment by reducing light availability (Wyckoff and Webb, 1999; Levine et al., 2003; Lichstein et al., 2003). Reduction of soil moisture, alteration of soil conditions, and alteration of soil fauna and microbial communities are some of the other effects of invasive species in an ecosystem that reduce native species establishment (Melgoza et al., 1990; El-Ghareeb, 1991; Belnap and Philips, 2001; Ehrenfeld et al., 2001).
The mist flower, A. riparia (Regel) R. M. King & H. Rob. (Asteraceae), which is native to Central America, was introduced to the Hakgala Botanic Gardens of Sri Lanka in 1905, and by 1918 it had been reported outside the botanic gardens (Weerawardena, 2005; Wijesundera, 1999). Since then, it has become a weed in the hill country of Sri Lanka (McFadyen, 2003). It is observed at the margins and in the interior of disturbed montane and sub-montane forests of Sri Lanka. A. riparia is a moderately shade-tolerant, shrubby perennial that grows very fast and attains a height of about 0.3-0.5 m at elevations between 1,000 and 1,500 m in Sri Lanka. Thus, seedlings of native plant species may be smothered by the growth of A. riparia due to lack of sunlight (Humphries et al., 1991). The most important character that makes this plant a serious threat to native plant species is that many juvenile plants of A. riparia grow from a single primary stem, developing many branches that intertwine with adjacent plants, giving a blanket effect (Zancola et al., 2000). A. riparia is found along roadsides, footpaths inside the forest and forest gaps in the Knuckles Mountain Range of Sri Lanka at elevations of approximately 1,200-1,300 m a.s.l. Moreover, few or no seedlings or saplings of native forest species are observed in A. riparia-invaded sites. In areas where the forest ground layer has been cleared for cardamom cultivation, the invasion is greater than in areas with an intact forest ground cover.
Investigating the impacts of this invasive species on forest regeneration is crucial because of the serious threats it may pose to the functioning of sub-montane forest ecosystems. To our knowledge, no study has been conducted in Sri Lanka on the ecology of A. riparia. We hypothesize that dense stands of A. riparia have a negative impact on the natural regeneration of forest species in these forests by preventing the establishment of seedlings of native plant species. The objectives of the study were to determine the distribution of A. riparia in sub-montane forests, the effect of A. riparia on the regeneration of sub-montane forest species, and the seasonal effect on soil seed bank composition in sub-montane forests infested by A. riparia; to estimate the root density of native tree seedlings and A. riparia; and to determine soil moisture content and canopy openness in areas infested by A. riparia.
Study Sites
The study was conducted from September 2011 to September 2012 in sub-montane forests of the Riverston area at the Knuckles Forest Reserve (KFR), Sri Lanka (7° 21' to 7° 24' N, 80° 45' to 80° 48.5' E). The KFR, situated in the Kandy and Matale administrative districts of central Sri Lanka, covers an area of approximately 21,000 ha and spans the upland and highland peneplains (Bambaradeniya and Ekanayake, 2003). The reserve, which is high in biodiversity, is an important watershed in the country, with several streams draining from the east of the reserve into the lower Mahaweli system (e.g., Hasalaka Oya and Heen Ganga) and from the south-west into the upper Mahaweli system (e.g., Hulu Ganga) (Bambaradeniya and Ekanayake, 2003). The KFR, along with the Peak Wilderness Protected Area and Horton Plains National Park, was declared part of the Central Highlands World Heritage Site by UNESCO in 2010, as this region includes the largest and least disturbed remaining areas of the sub-montane and montane rainforests of Sri Lanka, with high endemism and biodiversity, and provides habitats for many threatened plant and animal species (World Heritage Committee, 2012). However, the biodiversity of the KFR is greatly threatened by cardamom, tea and paddy cultivation and by the introduction of invasive species (Bambaradeniya and Ekanayake, 2003). Common invasive alien plants in the area include Austroeupatorium inulifolium, Lantana camara, Clidemia hirta, Tithonia diversifolia and Eupatorium riparium (Ageratina riparia) (Bambaradeniya and Ekanayake, 2003).
EXPERIMENTAL DESIGN
Six forest gaps and four footpaths infested with A. riparia were selected in the interior of the sub-montane forests at Riverston (Figure 1). The angles of all the slopes were approximately 50°. The direction of each footpath was recorded using a compass. Quadrats (1 m × 1 m) were established across each footpath at 0 m, 5 m and 10 m on either side of the footpath (perpendicular to the footpath) towards the forest interior (Figure 2). Quadrats (1 m × 1 m) were established in the forest gaps at the center, at the edge, and 5 m from the edge into the forest on both sides, along the north-south direction.
Figure 1: Study sites (Source: Google Earth).
VEGETATION SAMPLING
The percentage cover of A. riparia was estimated visually as a percentage of the total quadrat area (Klimes et al., 2003). The number of seedlings (less than 1 m in height) of each species was recorded. Unknown species were identified using the reference collections at the Department of Botany, Faculty of Science, University of Peradeniya, and by comparison with herbarium specimens at the National Herbarium, Royal Botanical Gardens, Peradeniya, Sri Lanka.
SOIL ROOT DENSITY
A total of 24 soil samples (15 × 15 × 10 cm3) were collected, six quadrats along each of the four footpaths. Soil samples were collected into polythene bags and sealed. The samples were then mixed well and sieved (< 2 mm) under running water. The root samples (roots of forest species and of A. riparia) were carefully separated and weighed (Lata et al., 2000). The root samples were oven-dried at 75 °C until a constant weight was obtained (Sheng and Hunt, 1991).
SOIL MOISTURE CONTENT
Soil samples were taken using a soil corer (3.2 cm diameter) to a depth of 6 cm. Two soil samples were taken from the center of each forest gap, and one soil sample was collected from each of the other quadrats. Soil was collected in polythene bags, labeled and sealed tightly. After their fresh weights were taken, all soil samples were oven-dried at 105 °C for 24 hours. Samples were reweighed to obtain the dry weight of the soil. The soil moisture content was expressed as the ratio between the mass of water and the mass of wet soil (wet-mass basis) (Angelis, 2007).
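As a minimal illustration of this wet-mass-basis calculation, the sketch below computes moisture content as mass of water divided by mass of wet soil; the sample masses are hypothetical and chosen only for the example.

```python
def soil_moisture_wet_basis(fresh_mass_g: float, dry_mass_g: float) -> float:
    """Gravimetric soil moisture on a wet-mass basis: mass of water / mass of wet soil."""
    return (fresh_mass_g - dry_mass_g) / fresh_mass_g

# Hypothetical fresh and oven-dry masses (g) for one sample.
fresh, dry = 52.4, 38.1
print(f"Soil moisture = {soil_moisture_wet_basis(fresh, dry):.1%}")
```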
CANOPY OPENNESS
Canopy openness was determined from hemispherical photographs (Nikon Coolpix 5400 camera with a fisheye lens). A single view of the canopy was taken at each location (the center of the forest gaps and of the footpaths) with the fisheye lens oriented vertically on a tripod. All photographs were taken between 10.00 a.m. and 12.30 p.m. to reduce errors due to reflection of light. Canopy openness was analyzed using the Hemiview software (Vincent, 2001).
SOIL SEED BANK
A soil sample was collected from each quadrat on the footpaths and in the forest gaps, except for the quadrats at the center of the gaps, where two samples were collected, using a soil corer (5 cm depth and 3.2 cm diameter) during the wet (February 2012) and dry (June 2012) seasons at the KFR. The samples were laid separately on sterilized soil on trays in the plant house at the Department of Botany, University of Peradeniya, Sri Lanka. Each tray was separated into six subdivisions using aluminum sheets, and one division was kept as a control. The positions of the seed bank trays were changed weekly. The soil in each division was turned and mixed until no more seedlings emerged from the samples. Seedling emergence was recorded for three months, until no more seedlings emerged from the soil samples. The procedure was carried out for both the wet and dry seasons.
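The seed bank densities reported in the results are expressed per square metre. One plausible way such densities can be derived from core counts is to divide the number of emerged seedlings by the sampled core area; the paper does not spell out the exact scaling, so the conversion and the example count below are assumptions for illustration only.

```python
import math

CORE_DIAMETER_M = 0.032                                # 3.2 cm soil corer
CORE_AREA_M2 = math.pi * (CORE_DIAMETER_M / 2) ** 2    # ~8.0e-4 m^2 per core

def seedlings_per_m2(seedlings_per_core: float, cores_pooled: int = 1) -> float:
    """Scale seedling counts from soil cores to a per-square-metre density."""
    return seedlings_per_core / (CORE_AREA_M2 * cores_pooled)

# Hypothetical count: ~1.6 seedlings emerging from one core corresponds to
# roughly 2,000 seedlings m^-2, the order of magnitude reported along footpaths.
print(f"{seedlings_per_m2(1.6):.0f} seedlings m^-2")
```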
DATA ANALYSIS
The mean percentage cover of A. riparia was calculated for each distance, separately for footpaths and forest gaps. The mean number of seedlings of forest species at the footpaths and forest gaps was also calculated for each distance. One-way ANOVA was carried out using MINITAB (2003) to compare the percentage cover of A. riparia and the mean number of forest seedlings among distances for the two location types. One-way ANOVA was also used to compare soil moisture content, soil root density, root moisture content and canopy openness.
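The analyses in the study were run in MINITAB; for readers who prefer an open-source route, the sketch below shows an equivalent one-way ANOVA in Python with SciPy. The cover values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical percentage-cover values of A. riparia at three distances from a
# gap centre (0 m, 5 m, 10 m); six quadrats per distance, values invented.
cover_0m  = np.array([85, 90, 70, 95, 80, 75])
cover_5m  = np.array([40, 55, 30, 45, 50, 35])
cover_10m = np.array([ 5, 10,  0, 15,  5, 10])

f_stat, p_value = stats.f_oneway(cover_0m, cover_5m, cover_10m)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```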
RESULTS
The percentage cover of A. riparia decreased significantly (p = 0.001, df = 2, F = 9.78) when moving away from the center of forest gaps and footpaths into the forest interior (Figure 3). The lowest seedling density of forest species was observed at the center of the forest gaps and at the 0 m distance on the footpaths, where the highest percentage cover of A. riparia was recorded [Figure 3(b)]. However, the highest seedling density of forest species was observed at the edge of the forest gaps [Figure 3(a)]. The number of forest species decreased with increasing cover of A. riparia. The shrub species Psychotria zeylanica had the highest mean density, with 0.75 ± 0.31 seedlings m−2 on the footpaths at 0 m distance and 1.2 ± 0.3 seedlings m−2 at the center of the forest gaps in the presence of A. riparia (Figure 4). However, the regression values indicated that there was no relationship between the mean density of forest seedlings and the percentage cover of A. riparia (footpaths R2 = 0.114, forest gaps R2 = 0.018).
A total of 45 seedling species were identified from the footpaths and forest gaps invaded by A. riparia (Table 1, Appendix 1). The mean density and composition of forest species varied when moving away from the footpaths and forest gaps towards the forest interior. Psychotria zeylanica (Rubiaceae) had the highest abundance, followed by Symplocos cochinchinensis (Lour.) S. Moore (Symplocaceae) (Table 1 and Figure 3). A higher root density was recorded for seedlings of forest species than for A. riparia at all distances in gaps and along footpaths. However, the root density of forest species seedlings was lower in quadrats where the root density of A. riparia was high. The soil moisture content did not differ significantly with distance from the footpaths and forest gaps into the forest. Canopy openness was higher in forest gaps than along footpaths.
Seedling emergence of A. riparia from the soil seed bank was higher during the dry season (~22%; Figure 5a and c) than in the wet season (9%; Figure 5b and d). Lower seedling densities of forest species were observed in the soil seed bank at locations with high seedling densities of A. riparia. However, there was no relationship between the seedling density of A. riparia and that of the tree and shrub species that emerged from the soil seed banks. The A. riparia seedling density along the footpaths (2,021 seedlings m−2 at 0 m distance) was lower than that in the forest gaps (2,280 seedlings m−2 at the center of the gap) during the dry season.
DISCUSSION
A. riparia is not capable of growing in undisturbed forests where the canopy is closed (Zancola et al., 2000; Frohlich et al., 1999; Barton et al., 2003). In our study, A. riparia was recorded only in disturbed areas such as forest gaps, footpaths and roadsides at the margins or in the interior of sub-montane forests. According to Tripathi and Yadav (1987), forest leaf litter can inhibit the seed germination and seedling growth of A. riparia through allelopathic compounds released as the litter decays, or through physical interference and competition for the limited resources needed for seedling growth (Xiong and Nilsson, 1997). Zancola et al. (2000) also reported that the A. riparia volume index was significantly negatively correlated with forest leaf litter biomass. Moreover, branches of adjacent A. riparia plants intertwine with each other, producing a blanket effect and a 100% ground cover (Zancola et al., 2000). Due to this dense cover, seeds of forest species may not be able to reach the soil to initiate germination. Although there was no relationship between the mean density of forest species and the percentage cover of A. riparia, lower numbers of forest species and lower mean densities were observed in quadrats with a higher percentage cover of A. riparia. The deep shade inside the forest can suppress the growth of seedlings of both native and invasive species. Moreover, cardamom cultivation may influence the seedling establishment of forest species in the forest interior. Although cardamom cultivation is legally prohibited inside the KFR, advanced regeneration of sub-montane forests is frequently cleared by locals to enhance the growth of cardamom plants. Thus, the management practices used during cardamom cultivation and the spread of A. riparia play a crucial role in the establishment of sub-montane forest species in the KFR.
The native species P. zeylanica has shown the capability to survive in the presence of A. riparia. Root systems of certain species can affect other species through mechanisms such as allelochemicals or alteration of the microbial populations associated with neighboring plants (Christie et al., 1978). Lower root density of forest species was observed with higher percentage cover of A. riparia. Hence, A. riparia seems to exert strong competition for belowground resources such as water and nutrients, thereby limiting the growth of forest seedlings.
The center of a forest gap is wetter than the forest interior, and soil moisture content can vary with distance from the gap and gap orientation (Ziemer, 1964; Gray et al., 2002). Gálhidy et al. (2006) reported that soil moisture content at the center of a gap always has the maximum value, regardless of gap size, and is usually higher than in the forest interior. However, our results indicate that the centers of the forest gaps had moisture contents similar to the forest interior, probably due to the small size of the gaps in our study. The results suggest that establishment of A. riparia has reduced the soil moisture content in invaded areas.
Figure 4: (a) Mean density of forest species and percentage cover of A. riparia vs. distance from the forest gaps. (b) Mean density of forest species and percentage cover of A. riparia vs. distance from the footpaths.
Figure 5: (a) Seedling density vs. distance from the forest gaps during the dry season. (b) Seedling density vs. distance from the forest gaps during the wet season. (c) Seedling density vs. distance from the footpaths during the dry season. (d) Seedling density vs. distance from the footpaths during the wet season.
A. riparia tends to prefer partially shaded areas over fully open areas such as those adjacent to grasslands. According to Zancola et al. (2000), the availability of sunlight had a significant positive relationship with the volume index of A. riparia plant biomass. Similarly, the higher availability of light in forest gaps than along footpaths may have resulted in higher cover of A. riparia in the forest gaps than along the footpaths.
The invasion of A. riparia into the forest interior may be related to the type and frequency of disturbance. Footpaths are disturbed more frequently than forest gaps, and such frequent disturbance may have a negative effect on the growth of this invasive species. A. riparia prefers the edges of footpaths, probably because light intensities there are lower than at the center of the footpath and because disturbance is less frequent. Another possible reason is that undisturbed forest begins within one meter or less of the edge of the footpath. Along most footpaths, A. riparia grows only as a thin strip (approximately 0.5 m wide) parallel to the footpath.
Understanding the disturbance regimes and regeneration patterns in forests is essential for determining forest health and for recommending restoration strategies that conserve biodiversity and ecosystem services. Natural regeneration in the tropical sub-montane forests of the KFR is affected by cardamom cultivation as well as by the spread of A. riparia. The invasive A. riparia has suppressed the growth of seedlings of forest species by altering microhabitat conditions and resource availability along footpaths and in canopy gaps. Since A. riparia threatens the regeneration of forest species in sub-montane forests, its spread must be controlled. Based on the results of the current study, native forest species including Psychotria zeylanica and Symplocos cochinchinensis can be used to restore these affected sub-montane forest patches.
The spread of A. riparia could be prevented by several methods, including chemical (herbicide), mechanical and biological control. Aerial application of herbicides before the plants produce mature seeds could be effective in restricting further spread of the invasive species (Land Protection, 2006). Mechanical control can be achieved by physically uprooting small plants and disposing of them, either by burning or by placing them in black plastic bags to rot down. Cultivation, grubbing, hoeing and burning, along with replanting of competitive pastures or replacement of A. riparia by native flora, can successfully control A. riparia (Land Protection, 2006). Biological control could also be carried out using agents such as the gall fly Procecidochares alani Steyskal; the plume moth (defoliator) Oidaematophorus sp.; and the leaf spot fungi Cercosporella ageratinae (Nakao et al., 1981) and Entyloma ageratinae sp. nov. (Barreto et al., 1988). However, chemicals can pollute the water sources of local communities, and biological control requires thorough testing before new species are introduced to areas of high endemism such as the KFR. Thus, in order to have minimal impact on the functioning of the natural ecosystem, we recommend mechanical control of A. riparia.
Influence of degree of specific allergic sensitivity on severity of rhinitis and asthma in Chinese allergic patients
Background The association between sensitizations and the severity of allergic diseases is controversial. Objective This study investigated the association between the severity of asthma and rhinitis and the degree of specific allergic sensitization in allergic patients in China. Methods A cross-sectional survey was performed in 6304 patients with asthma and/or rhinitis from 4 regions of China. Patients completed a standardized questionnaire documenting their respiratory and allergic symptoms and their impact on sleep, daily activities, school and work. They also underwent skin prick tests with 13 common aeroallergens. Among the recruited subjects, 2268 provided blood samples for serum measurement of specific IgE (sIgE) against 16 common aeroallergens. Results A significantly higher percentage of patients with moderate-severe intermittent rhinitis were sensitized to outdoor allergens, while the percentage of patients sensitized to indoor allergens increased with increasing severity of asthma. Moderate-severe intermittent rhinitis was associated with the skin wheal size and the level of sIgE to Artemisia vulgaris and Ambrosia artemisifolia (p < 0.001). Moderate-severe asthma was associated with increasing wheal size and sIgE response to Dermatophagoides (D.) pteronyssinus and D. farinae (p < 0.001). Moderate-severe rhinitis and asthma were also associated with an increase in the number of positive skin prick tests and sIgE responses. Conclusions Artemisia vulgaris and Ambrosia artemisifolia sensitizations are associated with the severity of intermittent rhinitis, and D. pteronyssinus and D. farinae sensitizations are associated with increasing severity of asthma in China. An increase in the number of allergens to which patients are sensitized may also increase the severity of rhinitis and asthma.
Background
The prevalence of asthma and allergic rhinitis symptoms varies considerably across the world [1,2]. In China, the prevalence of allergic rhinoconjunctivitis symptoms varies from 8.7 to 24.1% documented by self-reported telephone interviews conducted between 2004 and 2005 in 11 cities [3]. The prevalence of respiratory allergy is increasing in China [3,4] and an international comparative study found that in the city of Guangzhou, the prevalence of asthma symptoms among children aged 13-14 years increased from 3.4% in 1995 to 4.8% in 2001 [4] and to 6.1% in 2009 (unpublished data).
Atopic sensitization is a risk factor for the development of upper and lower respiratory symptoms [5,6]. Exposure to allergens to which patients are sensitized may exacerbate symptoms of rhinitis and asthma by promoting airway inflammation, airflow limitation, and airway hyperresponsiveness (AHR). Sensitization to indoor allergens correlates well with indoor allergen exposure in pre-school and school-age children [7,8]. Furthermore, exposure and sensitivity follow a dose-dependent relationship [9]. Evidence supporting this relationship is particularly strong for house dust mite (HDM) sensitization [9]. Allergic rhinitis can also be caused by pollens from grasses and trees, which are the most important sources of outdoor sensitizing allergens [10,11]. We have previously performed an epidemiological study of the prevalence of sensitization in patients with asthma and/or rhinitis in mainland China [12]. For indoor and outdoor allergens, we found that house dust mite sensitization was consistently associated with asthma, whereas Artemisia vulgaris and Ambrosia artemisifolia pollen sensitizations were associated with the development of rhinitis [12].
Both rhinitis and asthma are diseases of variable severity. Many studies have shown that the degree of allergic sensitivity, as reflected by elevated serum allergen-specific IgE levels or allergen skin wheal size, is related to asthma severity [13,14]; however, other studies [15,16] did not find this relationship.
Thus, the influence of the degree of allergic sensitivity on the disease severity of allergic asthma and rhinitis remains uncertain. The aim of this study was to investigate the relationship between size of skin test or level of serum specific IgE and the severity of asthma and rhinitis in Chinese patients based on data from a recently conducted nation-wide multicentre epidemiology study.
Study population and definitions
The study was a cross-sectional epidemiologic survey conducted from February 2006 to March 2007 in 17 cities, with 24 participating centers from the northern, eastern, south-western and southern coastal regions of China. The study covered the mid-temperate, warm-temperate, subtropical and tropical zones of China. Patients aged 5 to 65 years attending outpatient clinics at the 24 centers and diagnosed with rhinitis and/or asthma were invited to participate in this survey. Based on history, questionnaire and relevant tests, rhinitis was defined as having symptoms of sneezing, or a running, itchy or blocked nose when the patient did not have a cold or flu. Asthma was defined by a history of recurrent dyspnea, wheezing or cough episodes, positive airway reversibility testing (FEV1 increasing ≥12% and 200 ml after inhalation of 400 µg of salbutamol) or positive airway responsiveness testing (FEV1 decreasing ≥20% at a cumulative histamine dose of ≤7.8 μmol). The study was approved by the Ethics Review Board of each study center and all patients gave written consent before the study.
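For illustration only (not part of the study protocol, which applied this criterion clinically), the bronchodilator reversibility rule stated above amounts to a simple check:

```python
def is_reversible(fev1_pre_ml: float, fev1_post_ml: float) -> bool:
    """Positive airway reversibility per the criterion above:
    FEV1 increases by >= 12% of baseline AND by >= 200 mL after salbutamol."""
    delta = fev1_post_ml - fev1_pre_ml
    return delta >= 200 and delta >= 0.12 * fev1_pre_ml

# Example: baseline 2000 mL, post-bronchodilator 2300 mL -> +300 mL (+15%) -> positive
print(is_reversible(2000, 2300))  # True
```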
Questionnaire
The standardized questionnaire was administered by the trained physicians or research nurses face-to-face with questions regarding demographic characteristics, family history of allergic diseases, symptoms of rhinitis, wheezing or coughing, eczema and burning or itchy eyes, smoking habits, environmental exposure factors, animal pet ownership and dietary habits. Questions about impact of allergic symptoms on daily activities, work or school, night-time sleep, and use of medications for controlling the symptoms were also documented.
Assessment of severity of rhinitis and asthma
According to the Allergic Rhinitis and its Impact on Asthma guidelines [17], rhinitis was classified as "mild" and "moderate/severe" depending on the severity of symptoms and their impact on sleep, daily activities, school and work evaluated by the questionnaire. Severity of asthma was classified according to the 2006 version of Global Initiative for Asthma guidelines [18].
Skin prick test (SPT)
The sensitivity to thirteen common aeroallergens was tested including Dermatophagoides (D.) pteronyssinus, D. farinae and Blomia tropicalis, dog, cat, Periplaneta americana, Blatella germanica, Artemisia vulgaris, Ambrosia artemisifolia, mixed grass and tree pollen, mould mix I and IV. Allergen extracts and control solutions were obtained from ALK (Horsholm, Denmark). Histamine (10 mg/ml) and diluent were used as positive and negative controls. SPT was performed on the volar side of the forearm. The wheal reaction after 15 minutes was measured as the mean of the longest diameter and the length of the perpendicular line through its middle. A positive skin reaction was defined as a wheal size 3 mm greater than the negative control. The result was also expressed as skin index (SI = mean size of allergen wheal/mean size of histamine wheal). Atopy was defined as the presence of at least one positive skin reaction to any allergen tested.
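The wheal measurement, positivity threshold, and skin index described above reduce to simple arithmetic; a minimal illustrative sketch (not part of the study's software):

```python
def wheal_size(longest_mm: float, perpendicular_mm: float) -> float:
    """Mean of the longest diameter and the perpendicular line through its middle."""
    return (longest_mm + perpendicular_mm) / 2

def is_positive(allergen_wheal_mm: float, negative_control_mm: float) -> bool:
    """Positive if the allergen wheal is at least 3 mm larger than the negative control."""
    return allergen_wheal_mm - negative_control_mm >= 3

def skin_index(allergen_wheal_mm: float, histamine_wheal_mm: float) -> float:
    """SI = mean allergen wheal size / mean histamine wheal size."""
    return allergen_wheal_mm / histamine_wheal_mm

# Example: 7 mm x 5 mm allergen wheal, 1 mm negative control, 4 mm histamine wheal
w = wheal_size(7, 5)                # 6.0 mm
print(is_positive(w, 1), skin_index(w, 4))  # True 1.5
```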
We originally collected 6411 questionnaires and 6393 skin test reports. Among the 6411 questionnaires, 107 were invalid because of a lack of proper diagnosis, incomplete answers or a missing skin test report. Of the 6393 skin test reports, 89 were rejected for missing questionnaire data, incorrect coding, or missing histamine and normal saline readings. Hence, we restricted the final valid dataset to 6304 patients.
Serum specific IgE Analysis
Among the 24 centers, 14 obtained serum samples from their subjects for sIgE analysis. With written consent, peripheral blood was obtained from patients in these centers only after completion of the questionnaires and skin prick tests. Finally, 2268 of the 6304 patients (806 with rhinitis alone, 773 with asthma alone and 689 with both rhinitis and asthma) from four regions provided blood for measurement of serum allergen-specific IgE (sIgE). Ten ml of blood from each subject was coagulated at room temperature, centrifuged and stored at -20°C. The sIgE against D. pteronyssinus, D. farinae, cat, dog, Periplaneta americana, Blatella germanica, Penicillium, Cladosporium, Fusarium, sycamore, willow, cottonwood, elm, grass pollen, Artemisia vulgaris and Ambrosia artemisifolia was measured with the ADVIA Centaur® immunoassay system (Bayer Healthcare LLC, Tarrytown, New York, USA) [19]. A sIgE result was defined as positive if the measurement was ≥ 0.35 kU/L.
Quality control
Standardized protocol, questionnaire, allergen skin prick testing set, and operating procedures were used by all the centers. All questionnaire interviewers and performers of skin prick testing were trained before the study. Results of questionnaire and skin prick tests were sent every month to Guangzhou, where the data were input and analyzed. Quality control reports were then prepared for each center. Each completed questionnaire and skin test report was verified by the center supervisor and the results were double-checked by the principal investigator and fed back to each center. All questionnaires and skin test data were coded and input into a programmed database by two persons independently. The entered data were checked for out-of-range values and logic mistakes.
Statistical analysis
For all analyses, p < 0.05 was regarded as statistically significant. Prevalences of sensitization to various groups of allergens are presented. Differences in sensitization rates between different severities of rhinitis and asthma were determined by chi-square tests. Skin prick test mean wheal diameters were used as raw data. The relationship between the quantitative mean skin wheal diameter and the severity of rhinitis or asthma was analyzed using logistic regression. Fitted predicted probability curves of moderate-severe rhinitis and asthma according to the wheal size of skin sensitizations were plotted using the results from the logistic regression. For the quantitative evaluations, ORs are presented for different skin prick test mean wheal diameters, expressing the increased risk of severe rhinitis and asthma associated with increasing skin wheal size. For associations between sIgE concentrations and different severities of rhinitis and asthma, we calculated the prevalence of rhinitis and asthma severities at different sIgE levels against D. pteronyssinus, D. farinae, Artemisia vulgaris and Ambrosia artemisifolia, and the statistical significance of the differences was determined using chi-square tests. All data were categorized and analyzed using the Statistical Package for the Social Sciences (SPSS Inc., Chicago, IL, USA) for Windows Release 13.0 and Microcal Origin 6.0 (Microcal Software Inc., Northampton, MA, USA).
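A minimal sketch of this quantitative step (logistic regression of severity on wheal diameter, per-millimetre OR, and a fitted probability curve), using entirely hypothetical data rather than the study data; the study itself used SPSS 13.0:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: skin prick test wheal diameter (mm) and moderate-severe status (0/1)
wheal_mm = np.array([0, 0, 3, 5, 5, 8, 10, 12, 15, 20], dtype=float)
severe = np.array([0, 1, 0, 0, 1, 1, 1, 1, 1, 1])

X = sm.add_constant(wheal_mm)            # intercept + wheal diameter
fit = sm.Logit(severe, X).fit(disp=0)    # logistic regression

beta = fit.params[1]
print("OR per mm:", np.exp(beta))
print("OR at 10 mm vs 0 mm:", np.exp(10 * beta))

# Fitted predicted-probability curve across wheal sizes (for plotting)
grid = np.linspace(0, 20, 50)
probs = fit.predict(sm.add_constant(grid))
```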
Results
Of the 6304 patients, 967 subjects had mild intermittent rhinitis, 452 had moderate-severe intermittent rhinitis, 1729 had mild persistent rhinitis and 1154 had moderate-severe persistent rhinitis. Asthma was under control in 741 patients, while 441 patients had intermittent asthma (step 1), 735 mild persistent (step 2), 948 moderate persistent (step 3) and 915 severe persistent asthma (step 4). Patients with moderate-severe intermittent rhinitis had a significantly higher prevalence of sensitization to dog, Artemisia vulgaris, Ambrosia artemisifolia, mixed grass pollen and mixed tree pollen (p < 0.001) by skin prick tests. They also showed a significantly greater percentage of multiple sensitizations (p < 0.05). The prevalence of sensitization to D. pteronyssinus, D. farinae, Blomia tropicalis, dog and cat increased with increasing disease severity in patients with asthma. Furthermore, with increasing severity of asthma, there was a higher proportion of patients with multiple sensitizations (Table 1).
Serum specific IgE against 16 common aeroallergens was measured in 2268 patients, of whom 175 were classified as having mild intermittent rhinitis, 281 moderate-severe intermittent rhinitis, 596 mild persistent rhinitis and 339 moderate-severe persistent rhinitis. Among asthma patients, 405 were at the mild intermittent stage, 313 at the mild persistent, 335 at the moderate persistent and 628 at the severe persistent stage. By sIgE measurement, D. pteronyssinus and D. farinae were the most prevalent allergens in patients with rhinitis and asthma, followed by Artemisia vulgaris and Ambrosia artemisifolia. A significantly higher percentage of patients with moderate-severe intermittent rhinitis were sensitized to Artemisia vulgaris (p < 0.001), Ambrosia artemisifolia (p < 0.001), willow (p < 0.01), elm (p < 0.05) and grass pollen (p < 0.05). Elevated levels of sIgE against D. pteronyssinus and D. farinae in patients with asthma were associated with increasing severity (p < 0.001). Multiple sensitizations were significantly associated with increasing asthma severity (p < 0.001) (Table 2).
Allergen skin test sizes and severity of rhinitis and asthma
Using allergen skin prick test wheal size as a continuous variable, the risk of having moderate-severe rhinitis in our patients was around 40%-42.5% when they were not sensitized to Artemisia vulgaris (Figure 1A) or Ambrosia artemisifolia (Figure 1B), or to any tested allergen (Figure 1C). However, the risk increased significantly with increasing skin wheal size to Artemisia vulgaris (OR 1.12, 95% CI 1.07-1.14, p < 0.001) and Ambrosia artemisifolia (OR 1.19, 95% CI 1.13-1.41, p < 0.001), corresponding to ORs of 4.29 and 4.85 at 10 mm wheal size, and 11.52 and 23.71 at 20 mm, respectively (Figure 1A-1B). Similarly, when patients were not sensitized to D. pteronyssinus (Figure 1D) and D. farinae (Figure 1E), or to the tested allergens (Figure 1F), the probability of having moderate-severe asthma was around 66%, but the risk increased 1.21-fold per mm increase in skin wheal size to D.
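As a reading aid only, the per-millimetre odds ratios and the odds ratios at a given wheal size are linked, under a single-predictor logistic model, by the identity below; the specific values quoted above come from the fitted models and may additionally reflect how those models were specified, so the identity should not be expected to reproduce them exactly:

```latex
\operatorname{logit} P(\text{moderate-severe} \mid d) = \beta_0 + \beta_1 d, \qquad
\mathrm{OR}_{\text{per mm}} = e^{\beta_1}, \qquad
\mathrm{OR}(d \text{ vs. } 0\ \text{mm}) = e^{\beta_1 d} = \left(\mathrm{OR}_{\text{per mm}}\right)^{d}.
```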
Allergen sIgE levels and severity of asthma and rhinitis
Among patients with rhinitis, a significantly higher percentage of patients with moderate-severe intermittent rhinitis had higher levels of sIgE to Artemisia vulgaris and Ambrosia artemisifolia (p < 0.001), but not to D. pteronyssinus and D. farinae (Figure 2). For asthma patients, sIgE levels against D. pteronyssinus and D. farinae, but not Artemisia vulgaris and Ambrosia artemisifolia, were significantly associated with increasing asthma severity (p < 0.001) (Figure 3).
Discussion
In this nation-wide multicentre epidemiologic study of more than 6300 asthmatic and rhinitis patients with varying disease severity in China, we found D. pteronyssinus and D. farinae sensitizations were significantly associated with severity of asthma while Artemisia vulgaris and Ambrosia artemisifolia sensitizations were related to severity of rhinitis. Furthermore, multiple allergen sensitization was also associated with severity of rhinitis and asthma as determined by either skin prick test or sIgE measurements.
In this paper, our data show that the severity of asthma was significantly correlated with the skin index of reactivity to D. pteronyssinus, D. farinae and Blomia tropicalis. Furthermore, we also found that elevated levels of sIgE to D. pteronyssinus and D. farinae correlate significantly with increasing severity of asthma. Our findings support the concept that sensitization against indoor allergens may affect asthma severity [13,20]. Allergens induce sensitization in persons at high risk, and repetitive exposure to the allergens may lead to allergic inflammatory reactions in the airway mucosa [21]. Airway inflammation may be variably associated with changes in airway hyperresponsiveness, airflow limitation, respiratory symptoms, and disease chronicity [22]. Our finding that patients with HDM sensitization were more likely to have more severe asthma, compared to those without sensitization, is consistent with many other studies in children and adults [23]. Platts-Mills et al. [24] reported that the load of house dust mites is associated with the onset of respiratory allergic conditions, especially bronchial asthma, and that there exists a threshold of HDM exposure for inducing symptoms of asthma. Even exposure to low levels of mite allergens (0.02-2.0 μg/g dust) was found to be a significant risk factor for sensitization [25]. A few studies have found several species of HDM in indoor environments in China [26,27], and relatively high levels of HDM group 1 allergens (>10 μg/g dust) have been detected in a very high proportion of dust samples from southern China [28].
Not surprisingly, we demonstrated a quantitative association between skin test size and specific IgE levels to pollens, especially Artemisia vulgaris and Ambrosia artemisifolia, and moderate-severe intermittent rhinitis. Although we did not stratify the data by region and season in this paper, we expect that these patients come mainly from the northern parts of China and were sampled during the season from July to September [12].
One recently published study [11] demonstrated that sIgE levels to birch and grass pollen at baseline, as well as during the pollen season, were associated with seasonal symptom severity of rhinitis and use of rescue medications. Adult patients with seasonal allergic rhinitis have also been investigated in this respect by several studies. Some investigators found a positive association between sIgE levels and clinical symptoms [29,30], although symptoms were also dependent on other factors, such as the ease of histamine release by basophils.
Other studies did not find strong associations or reported inconsistent findings [31,32]. This inconsistency may be explained by differences in allergens, age or other characteristics of the patient populations studied. At least this seems to be the reason for the marked variability in the outcomes of studies investigating the capacity to predict symptomatic allergy from sIgE levels in children [33]. We therefore assume that some of the above-mentioned differences among studies of respiratory allergies may be explained by the varying parameters of the allergens studied, the age of the patients and the measures of clinical disease severity. Surprisingly, we failed to find a relationship between HDM skin test size or specific IgE levels and the severity of any type of rhinitis, especially persistent rhinitis; however, our finding supports the fact that outdoor allergens affect rhinitis significantly [13,20]. Many studies have shown that pollens such as Artemisia vulgaris and Ambrosia artemisifolia are larger allergens than HDM allergens and are mainly deposited in the upper airway, where they induce local inflammatory or pathological changes, whereas the enzymatic activity of pyroglyphid mites seems to be important in the pathogenicity of lower airway and systemic inflammation [34,35]. We have extended this observation by demonstrating the same associations for the Chinese weed pollens Artemisia vulgaris and Ambrosia artemisifolia within the group of patients defined as atopic using standard definitions [17]. These findings also indicate that IgE-mediated sensitization is not dichotomous in its relation to the expression, severity and temporal pattern of upper and lower respiratory allergic diseases.
In this study, we also found, by both skin test and sIgE measurements, that patients sensitized to multiple allergens were significantly more likely to have more severe rhinitis and asthma. Our results are in agreement with the study by Simpson et al. [36], who investigated a group of adults with asthma and showed that sensitization to dust mite, cat, dog, and mixed grasses, as well as multiple sensitizations, were all independently associated with asthma. The data of another study [13] suggested that the development of a specific IgE response to multiple indoor allergens is an important factor in the persistence of bronchial obstruction in children with asthma.
In summary, the results of the current study emphasize the importance of sensitization to indoor allergens in asthma severity and to outdoor allergens in severity of rhinitis. Sensitization to more than one allergenic source also significantly increases the possibility of developing moderate-severe rhinitis and asthma.
Novel Salmonella Phage, vB_Sen_STGO-35-1, Characterization and Evaluation in Chicken Meat
Salmonellosis is one of the most frequently reported zoonotic foodborne diseases worldwide, and poultry is the most important reservoir of Salmonella enterica serovar Enteritidis. The use of lytic bacteriophages (phages) to reduce foodborne pathogens has emerged as a promising biocontrol intervention for Salmonella spp. Here, we describe and evaluate the newly isolated Salmonella phage STGO-35-1, including: (i) genomic and phenotypic characterization, (ii) an analysis of the reduction of Salmonella in chicken meat, and (iii) genome plasticity testing. Phage STGO-35-1 represents an unclassified siphovirus, with a length of 47,483 bp, a G + C content of 46.5%, a headful strategy of packaging, and a virulent lifestyle. Phage STGO-35-1 reduced S. Enteritidis counts in chicken meat by 2.5 orders of magnitude at 4 °C. We identified two receptor-binding proteins with affinity to LPS, and their encoding genes showed plasticity during an exposure assay. Phenotypic, proteomic, and genomic characteristics of STGO-35-1, as well as the Salmonella reduction in chicken meat, support the potential use of STGO-35-1 as a targeted biocontrol agent against S. Enteritidis in chicken meat. Additionally, computational analysis and a short exposure time assay allowed us to predict the plasticity of genes encoding putative receptor-binding proteins.
Introduction
Salmonellosis is one of the most frequently reported zoonotic foodborne diseases [1,2]. The causative agent, Salmonella, can be transmitted to humans along the farm-to-fork continuum, commonly through contaminated foods of animal origin [3]. Salmonella spp. are estimated to cause 93.8 million cases of acute gastroenteritis and 155,000 deaths globally [4]. The Centers for Disease Control and Prevention (CDC) estimates that Salmonella spp. cause 1.2 million illnesses, 23,000 hospitalizations, and 450 deaths in the United States each year, resulting in an estimated USD 400 million loss in direct medical costs [2]. In Europe, during spective of using them for the biocontrol of Salmonella [11]. To this end, comparative genomics has the potential to strongly increase our understanding of phage diversity and function [24] and of genome packaging strategies [25,26]. Significantly, lifestyle identification is critical for determining the role of individual phage species within ecosystems, their effect on host evolution, and their safety for use in biocontrol. Lifestyle classification includes temperate and virulent categories [27,28]. Temperate phages have high genomic plasticity, which may involve mechanisms of gene flow caused by recombination of the genome concatemers that are packaged into the virion [25,26]. Additionally, the modular organization of temperate phage genomes, recognized as a mosaic of genomic regions with similar sequences within pairs of dissimilar genomes, drives gene exchange between phages of different phylogenetic origins [27]. This gene flow tends to occur between recombinases, transposases, and nonhomologous end joining, suggesting that both homologous and generalized recombination contribute to gene flow [27]. On the other hand, it should be noted that there are phages with low gene flux, which are usually virulent phages, with genes encoding functions involved in cell energetics, nucleotide metabolism, DNA packaging and injection, and virion assembly [27].
The packaging machinery corresponds to a powerful molecular motor composed of the portal protein (which provides a portal for DNA entry), the large terminase subunit (whose ATPase activity drives DNA translocation), and a small terminase subunit (which recognizes the viral packaging site) [29], together with the action of headful nucleases (which cut the viral genomes before and after DNA packaging) [30][31][32]. Therefore, the phylogenetic origin of the phage, the type of terminase, and the packaging mechanism should be related [26]. For example, representative termini of different phages have been described: 5′ cos (lambda), 3′ cos (HK97), pac (P1), headful without a pac site (T4), DTR (T7), and host fragment (Mu) [26]. Some Salmonella phages have been classified within the Siphoviridae family (currently the siphoviral morphotype), and they present a pac site-directed headful packaging mechanism; these include the S. enterica phages 9NA, FSL_SP-062, FSL_SP-069, Sasha, Sergei and Solent [33]. Standardization, especially for circularly permuted phages, will facilitate the comparison of phage genomes and the identification of homologs [26].
Siphoviral phage genome organization is modular and prone to horizontal gene exchange; nevertheless, related functionalities can be recognized between different modules across phage genomes [28]. The amino acid sequences in these gene products share conserved protein domains (CDDs), which contain common sequence patterns or motifs characterized as functional and/or structural units in a polypeptide sequence [28]. In molecular evolution, these domains may have been used as building blocks and may have been recombined in different arrangements to modulate protein function, so their analysis can yield important information about the genetic plasticity of the phage genome [34][35][36]. In addition, this modular conformation is also manifested in adhesin-like tail genes, which have been described in the Salmonella podovirus P22 and in siphoviruses. In these genes, a common genetic origin has been demonstrated, with minor mutations in the central segments that could impact the folding of the encoded proteins [37]. The aim of this study was to comprehensively characterize a newly isolated Salmonella phage, STGO-35-1 (vB_Sen_STGO-35-1), for which in silico genomic (via comparative genomics) and phenotypic analysis may identify promising and rapid approaches for better understanding phage-based biocontrol.
Isolation and Characterization of Phage vB_Sen-STGO-35-1
2.1.1. Propagation Conditions
A Salmonella phage, which we named vB_Sen_STGO-35-1 (STGO-35-1), was isolated from a backyard chicken flock, using Salmonella Enteritidis strain DR028 as an isolating and propagating host [38]. The phage was isolated as previously described [38]. Briefly, the isolation consisted of the enrichment of 1 g of a cloacal swab in a 100 mL culture of four S. enterica serovars (Infantis, Heidelberg, Typhimurium, and Enteritidis) grown in tryptic soy broth (TSB, Becton-Dickinson, Franklin Lakes, NJ, USA). Cultures were diluted tenfold, incubated at 37 °C for 18 ± 2 h, and then centrifuged. The supernatants were filtered through 0.22 µm filters to generate a primary lysate. Subsequently, the lysate was mixed with each host individually, with 100 µL of phage lysate and 300 µL from a 1:10 v/v dilution of an exponentially growing culture of each serovar. The mixture was added to soft trypticase soy agar (TSA; 0.7% Becton-Dickinson, Franklin Lakes, NJ, USA), plated on TSA, and incubated for 18 ± 2 h at 37 °C. From that plate, one lysis plaque was selected to be purified with the host for at least three subsequent passages. For propagation, plaques with confluent lysis were flooded with 10 mL of sodium-magnesium (SM) buffer (50 mM Tris-HCl pH 7.5, 0.1 M NaCl, 0.01 mM MgSO4) and filtered (0.22 µm). This lysate of the original phage is referred to hereafter as the wild type of STGO-35-1.
The host range assay (Table S1) for this phage was performed using the double-layer agar method, as previously described [38], against 23 different Salmonella serovars. Briefly, the phage host range was characterized by spotting 5 mL of phage lysates (approximately 3 × 10^5 PFU/mL) on a host cell lawn prepared with a 1:10 dilution of an overnight culture of the host strains in 4 mL of soft agar (0.7% TSA). Plates were incubated for 16 to 18 h at 37 °C and then examined for lysis (present or not present), with clear and turbid plaques indicating lysis and the absence of plaques indicating no lysis. Experiments were performed in two independent replicates.
One-Step Growth
A one-step growth curve was plotted according to a previously described standard protocol [39]. Briefly, S. Enteritidis was infected with the STGO-35-1 phage at a multiplicity of infection (MOI) of 0.01 and incubated at 37 °C. Then, starting 10 min after infection, two 100 µL samples were collected every 10 min and centrifuged at 13,500 rpm. The supernatant was separated to determine both the viral titer (PFU/mL) and the bacterial cell count (CFU/mL). The burst size was calculated through quantification of the infective centers (the average of the three highest viral titers divided by the average of the three lowest viral titers). The latency period was calculated as the mean between the time point immediately post-lysis and the previous time point (immediately pre-lysis). This assay was conducted in triplicate.
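A minimal sketch of how burst size and latency can be derived from such one-step growth data; the titers and the two-fold "rise" threshold used to detect the onset of lysis are hypothetical assumptions for illustration:

```python
def burst_size(titers_pfu_ml):
    """Burst size as described above: average of the three highest
    titers divided by the average of the three lowest titers."""
    s = sorted(titers_pfu_ml)
    return (sum(s[-3:]) / 3) / (sum(s[:3]) / 3)

def latency_minutes(times_min, titers_pfu_ml, rise_factor=2.0):
    """Latency estimated as the midpoint between the last pre-lysis time
    point and the first time point where the titer has clearly risen."""
    for i in range(1, len(titers_pfu_ml)):
        if titers_pfu_ml[i] >= rise_factor * titers_pfu_ml[0]:
            return (times_min[i - 1] + times_min[i]) / 2
    return None

# Example with hypothetical titers sampled every 10 min
times = [0, 10, 20, 30, 40, 50, 60]
titers = [1e5, 1e5, 1.2e5, 5e5, 5e6, 1.2e7, 1.3e7]
print(burst_size(titers), latency_minutes(times, titers))
```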
Transduction Efficiencies
Transduction efficiencies were determined by estimating the frequency of transduction, quantified as the phage-mediated acquisition of a resistance gene by S. Typhimurium 14028s, as previously described [40]. The transduction frequency was calculated as the number of infectious centers (CFU) per PFU. Experiments were performed in duplicate.
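The transduction frequency described above reduces to a simple ratio; a minimal illustrative sketch with hypothetical counts:

```python
def transduction_frequency(transductant_cfu: float, input_pfu: float) -> float:
    """Transduction frequency: antibiotic-resistant transductant colonies (CFU)
    per input phage particle (PFU)."""
    return transductant_cfu / input_pfu

# Example: 30 resistant colonies obtained from 1e7 PFU -> 3e-6 CFU/PFU
print(transduction_frequency(30, 1e7))
```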
Microscopic Characterization of Phage vB_Sen-STGO-35-1
Transmission electron microscopy (TEM) was used to ascertain the morphological traits. For this, a first scan of the sample was carried out to measure the viral particles, as recommended by Ackermann [41]. Briefly, phage particles, which were previously purified and precipitated with polyethylene glycol PEG8000 (Sigma-Aldrich, St. Louis, MO, USA), were used and stored in SM buffer. Next, phage particles were washed with 0.1 M ammonium acetate, centrifuged at 21,000× g in a microcentrifuge (Thermo Fisher Scientific, Waltham, MA, USA), deposited onto 150-200 mesh carbon-coated Formvar film copper grids, stained with 1% phosphotungstic acid (PTA, pH 7.4), and imaged with a JEOL 1400 Flash TEM at magnifications of 50,000× to 100,000× at 85 kV. Images were analyzed using Fiji3 [42].
Genomic and Phylogenetic Analyses of Phage vB_Sen-STGO-35-1
For genomic and taxonomic characterization, DNA was extracted using the phenol-chloroform method and then precipitated with ethanol, as previously described [38]. To eliminate exogenous genomic material, we treated phage stocks (titer > 5 × 10^10 PFU/mL) with 2 mM CaCl2, 5 µg/mL DNase-I (Promega BioScience, Madison, WI, USA) and 30 µg/mL RNase-A (Sigma-Aldrich, Darmstadt, Germany) for 30 min at room temperature. To inactivate the enzymes, we incubated samples at 65 °C for 10 min, and then 2 mg/mL Proteinase K (Promega BioScience, Madison, WI, USA) was added. The rest of the Sambrook and Russell protocol was followed [43]. Then, DNA concentration and quality were determined using a Maestro Nano Pro spectrophotometer (Maestrogen Inc., Hsinchu, Taiwan), as previously described [15].
The assembled genome was re-oriented to begin at the large terminase subunit, and reads were mapped to the assembly to check that coverage across the new junction was consistent with the rest of the assembly. The re-oriented assembly was annotated with RASTtk (with the pipeline customized to run "annotate-proteins-phage" before "annotate-proteins-kmer-v2") [50], and ARAGORN v1.2.41 was used to identify tRNA genes [51]. A genome map of STGO-35-1 was generated with Artemis and DNAPlotter [52]. Subsequently, lifestyle prediction (temperate or virulent) was conducted using the BACterioPHage LIfestyle Predictor (BACPHLIP v0.9.6), which detects the presence of conserved domains and uses these data to predict lifestyle with a random forest classifier trained on a dataset of 634 phage genomes [24], and PhageTerm [25] was used to predict the packaging mechanism and the characteristics of the termini. PhageTerm uses raw reads from a phage sequenced with a technology based on random fragmentation, together with its genomic reference sequence, to determine the termini position; it first segments the genome according to coverage using a regression tree.
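A minimal sketch of the re-orientation step only, assuming Biopython and a known 0-based start coordinate for the large terminase subunit gene; the file names and the coordinate below are placeholders, not the authors' actual values or pipeline:

```python
from Bio import SeqIO

def reorient_to(record, start):
    """Rotate a circularly permuted assembly so that it begins at `start`
    (e.g., the first base of the large terminase subunit gene, 0-based)."""
    record.seq = record.seq[start:] + record.seq[:start]
    return record

# Hypothetical usage: the coordinate must come from the annotation
rec = SeqIO.read("STGO-35-1_assembly.fasta", "fasta")
rec = reorient_to(rec, start=12345)  # placeholder coordinate
SeqIO.write(rec, "STGO-35-1_reoriented.fasta", "fasta")
```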
In addition, the phage genome was compared to nucleotide sequences available from the GenBank-NIH database. This analysis was carried out with a complete alignment of the nucleotide sequences using BLASTn, and relatedness was evaluated using JSpeciesWS [53], considering the best query score, highest identity (close to 100%), and best probability (e-value close to 0.0). The similarity between sequences was plotted using EasyFig [54]. Subsequently, a phylogenetic analysis [55] was carried out using COBALT [56], with STGO-35-1 as the reference; a maximum sequence difference of 0.75 (>0.5) was considered for this analysis. Additionally, a phylogenetic analysis based on the large terminase subunit was also conducted using COBALT [55,56]. The evolutionary distance between two sequences was modeled as the expected fraction of amino acid substitutions per site, given the fraction of mismatched amino acids in the aligned region, using the model proposed in [57]. The maximum sequence difference used was 0.75, and the minimum distance to form the groups was 0.01.
Structural Proteome Analysis of Phage vB_Sen-STGO-35-1
Structural proteome analysis of STGO-35-1 phage particles was conducted using LC-MS/MS, as described previously by Wagemans et al. [58]. Briefly, phage lysate was centrifuged for 10 min at 13,000 rpm. Then, the supernatant was filtered through 0.22 µm syringe-attached filters. This procedure was performed three times and, subsequently, the phage lysate was precipitated with polyethylene glycol PEG8000 (Sigma-Aldrich, St. Louis, MO, USA) and stored in SM buffer. Purified lysates were analyzed in the Department of Biosystems, KU Leuven, Belgium. Phage proteins were extracted from a PEG-purified phage stock (10^11 PFU/mL) using a chloroform:water:methanol extraction (1:1:0.75). The protein pellet was resuspended in SDS-PAGE loading buffer (40% glycerol, 200 mM Tris-HCl pH 6.8, 4% sodium dodecyl sulphate (SDS), 0.4% bromophenol blue, 8 mM EDTA, 5% beta-mercaptoethanol) and separated on a 12% SDS-PAGE gel. After Coomassie staining, gel slices were picked across the whole lane and processed for mass spectrometry analysis using LC-MS/MS on an Easy-nLC 1000 liquid chromatograph (Thermo Scientific), coupled to a mass-calibrated LTQ-Orbitrap Velos Pro via a Nanospray Flex ion source (Thermo Fisher Scientific) using sleeved 30 µm ID stainless steel emitters, as described previously by Shevchenko et al. [59] and Ceyssens et al. [60]. The raw data were analyzed using SEQUEST v1.4 (Thermo Fisher Scientific) and Mascot v2.5 (Matrix Science). The amino acid sequence analysis of phage structural proteins (putative proteins/functions), previously predicted with LC-MS/MS, was conducted with HHpred multiple sequence alignment [61] against the Protein Data Bank (PDB), UniProt, the NCBI CDD conserved domain database, and SCOPe [62]. Then, the amino acid sequence that presented the best probability, score, identity, and query coverage was selected.
Assay in Chicken Meat
An assay to evaluate the reduction of S. Enteritidis in artificially contaminated chicken meat was conducted as previously described [13]. For this, a 500 g piece of chicken meat was bought from local retail and transported to the laboratory under refrigerated conditions. Before the experiments, molecular screening was performed to confirm the absence of S. enterica and of phages similar to STGO-35-1 in the chicken meat used for the assays. We used invA PCR, as previously described, to test for Salmonella presence [63]. Primers targeting STGO-35-1 were designed with Primer-BLAST (https://www.ncbi.nlm.nih.gov/tools/primer-blast/, accessed on 9 March 2022): the forward primer 5′-GGACGCGTAGCTTAATTGGT-3′ and reverse primer 5′-GTGGACACGGACGGATTTGA-3′, located in a tRNA gene of STGO-35-1, were used. Both PCR tests were conducted before the assays.
Chicken meat was prepared by cutting pieces of approximately 2 cm^2, which were deposited on a Petri dish. The surface of each piece was inoculated by spotting 50 µL of a suspension at 4 × 10^6 CFU/mL of a nalidixic acid-resistant strain of S. Enteritidis (SE nalr), which was then allowed to air dry for 30 min at room temperature under aseptic conditions. Subsequently, 50 µL of phage lysate was added to the surface of the meat pieces at 4 × 10^6 PFU/mL (MOI = 1), and the samples were incubated as described below. Two controls were prepared: one with only 50 µL of SE nalr on the chicken meat pieces in the absence of phage, and another with only the phage added in the absence of SE nalr; both controls were dried under the same conditions. Inoculated chicken meat was incubated at 4 °C for 7 days. Each day, samples were collected to determine bacterial concentrations and phage titers. For this, three different pieces of meat of approximately 2 g were used at each sampling time. Meat pieces for Salmonella and phage quantification were placed in tubes with 5 mL of 0.8% NaCl, incubated at room temperature for 10 min, and mixed by shaking at 50 rpm. Samples were then centrifuged at 12,000 rpm for 30 min, bacterial pellets were resuspended in 1 mL of TSB, and this suspension was used to quantify the bacterial concentration by plate counts on TSA plates supplemented with nalidixic acid, expressed as CFU/mL. The meat pieces were also resuspended in 5 mL of SM buffer and then separated from the supernatant, which was filtered with a 0.22 µm filter after the addition of 1% chloroform; this was used to quantify the phage titer via spot-testing in double agar with the original host, expressed in PFU/mL. The experiment was repeated three times for each experimental group. The daily difference in the CFU/mL of S. Enteritidis relative to controls was statistically tested using an ANOVA test (with p < 0.05 considered a significant difference). The statistical software Infostat was used for this analysis (released 2016: https://www.infostat.com.ar, accessed on 9 March 2022).
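A minimal sketch of the log-reduction and significance calculation, using hypothetical plate counts rather than the study data (the study itself performed the ANOVA in Infostat):

```python
import numpy as np
from scipy import stats

# Hypothetical CFU/mL counts on one sampling day (three meat pieces per group)
control_cfu = np.array([2.0e6, 1.5e6, 2.4e6])
treated_cfu = np.array([5.0e3, 8.0e3, 6.0e3])

# log10 reduction of the treated group relative to the control
log_reduction = np.log10(control_cfu.mean()) - np.log10(treated_cfu.mean())

# One-way ANOVA on log-transformed counts
f_stat, p_value = stats.f_oneway(np.log10(control_cfu), np.log10(treated_cfu))

print(f"log10 reduction: {log_reduction:.2f}")
print(f"ANOVA p-value: {p_value:.4f}")  # p < 0.05 taken as significant
```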
Genome Stability Testing
To determine the stability of phage STGO-35-1, we followed two approaches: (i) an in silico analysis and prediction of putative RBPs and (ii) an exposure assay followed by sequencing. For the in silico prediction of RBPs, we used amino acid sequences and structural proteome data, as described previously (Section 2.1.6). The proteins representing the best predictions were compared with similar sequences via Clustal-O alignment [64]. From this, each similar sequence selected as the main RBP was analyzed through comparison to the NCBI_CDD database [62]. We analyzed putative conserved domains in three portions of the protein (amino-terminal, central, and carboxy-terminal). With this information, the most similar sequence was selected using a Markov model in the Modeller program [65]. Modeller provided the most probable three-dimensional structure of the protein and described the surface amino acid residues available to interact with different molecules of interest (such as bacterial receptors). In addition, it provided information on the origin of the crystallized structure in Protein Data Bank (PDB) format. For the exposure assay, phage and S. Enteritidis were co-cultured at an MOI of 0.1, with a phage concentration of 4 × 10^7 PFU/mL and S. Enteritidis at 4 × 10^8 CFU/mL. This mixture was incubated for 60 min at 37 °C, blended with soft agar (TSA 0.7%; Becton-Dickinson, Franklin Lakes, NJ, USA), inoculated on TSA (Becton-Dickinson, Franklin Lakes, NJ, USA), and incubated for 18 ± 2 h. The morphology of each lysis plaque was visually examined after one day. Three different plaque morphologies (P1, P2, and P3) were selected and propagated (as described in Section 2.1.1) for further comparative genomic analysis. For this purpose, phage DNA was purified and sequenced (following the previously described protocol, Section 2.1.5). Mutations were identified using the McCortex (v0.0.3) pipeline [66] ("vcfs" argument, links and joint calling, both bubble and breakpoint callers, and a kmer size of 81) with the wild-type phage assembly as the reference. The wild-type phage reads were also included as a control to account for spontaneous mutations. The genes containing mutations were further explored with InterProScan [67] and HMMER [68]. The mutations found in P1, P2, and P3 were classified by: (i) the effect of the mutation on amino acid substitution (based on charge and polarity, as described by Hanada et al. [69]), distinguishing radical nonsynonymous substitutions (RNS), synonymous substitutions (SS), and conservative nonsynonymous substitutions (CNS); and (ii) frequency compared to the frequency in the control (wild-type) phage.
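A minimal sketch of the substitution classification described above, applied to translated codons; the charge/polarity grouping is a simplifying assumption for illustration and may differ from the exact partition used by Hanada et al. [69]:

```python
# Coarse physicochemical grouping of the 20 standard amino acids (an assumption).
GROUPS = {
    "positive": set("KRH"),
    "negative": set("DE"),
    "polar": set("STNQCYW"),
    "nonpolar": set("GAVLIPMF"),
}

def group_of(aa: str) -> str:
    """Return the physicochemical group of a one-letter amino acid code."""
    for name, members in GROUPS.items():
        if aa in members:
            return name
    raise ValueError(f"unknown amino acid: {aa}")

def classify_substitution(ref_aa: str, alt_aa: str) -> str:
    """SS: synonymous (encoded residue unchanged); CNS: conservative nonsynonymous
    (same charge/polarity group); RNS: radical nonsynonymous (group change)."""
    if ref_aa == alt_aa:
        return "SS"
    return "CNS" if group_of(ref_aa) == group_of(alt_aa) else "RNS"

print(classify_substitution("K", "R"))  # CNS: both positively charged
print(classify_substitution("D", "L"))  # RNS: negative -> nonpolar
```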
Phenotypic Characterization of STGO-35-1 Phage
STGO-35-1 was isolated on the S. Enteritidis strain DR028 (Table S1), drawn from a backyard poultry flock in central Chile. The isolated phage generated clear lysis plaques, with an average plaque size of 2 mm, after three consecutive purifications with its S. Enteritidis host. The phage was purified, and a host range analysis of STGO-35-1 showed a narrow host range (Table S1). The phage was capable of lysing 4 out of 23 tested strains, representing Salmonella serovars Enteritidis (Group D1 O:9), Braenderup (Group C4 O:6), Panama (Group D1 O:9), and Agona (Group B O:4), all serovars of public health importance [6]. The host range of Salmonella serovars demonstrated by this phage was narrow relative to other phages isolated at the same time from backyard poultry production systems. However, this narrow host range in vitro does not determine the efficacy of a phage for application in biocontrol [38], and the ability to lyse S. Enteritidis, the most common Salmonella serovar, is relevant for biocontrol purposes; additionally, a phage specific to this important serovar could represent a promising precision tool. The one-step growth curve analysis of Salmonella phage STGO-35-1 under standard growth conditions indicated a burst size of 122 (±10) viral particles with a latency period of 30 min (±10 min) (Figure 1). Parameters such as burst size and latency period have been previously described as relevant to characterize the lytic capacity of a phage [70]. TEM analysis showed a capsid and long flexible tail ( Figure 2).
Transduction efficiencies were tested to rule out whether this phage could transduce a resistance gene to a significant level, which would have been a key obstacle against its use as a biocontrol agent. The transduction efficiency studies demonstrated that STGO-35-1 is not capable of transducing an antimicrobial resistance gene, with tested volumes of 1, 5, and 20 µL of the phage at a concentration of 10^8 PFU/mL (Table S2). As for the positive control (phage P22 HT int), antibiotic-resistant colonies (km r) were observed, which increased in number along with the volume of phage transducer used. The frequencies of transduction varied for the control phage from 3 × 10^−5 to 2 × 10^−6 PFU/CFU. While we used an MOI that could have missed some level of transduction, and the control was a high-transduction mutant, further assays to test for low levels of transduction are necessary before using phages as part of biocontrol.
Comparative Genomics of Phage STGO-35-1
The assembly had a length of 47,483 bp, a G + C content of 46.5%, and an average read coverage of 2045× (Figure 3). An analysis of the packaging strategy of this phage was conducted with PhageTerm [25]. The configuration and mapping statistics showed that 83% of reads mapped, with whole-genome coverage of 253 for the general controls, and revealed the presence of preferred termini with terminal redundancy and partially circular permutations, which is consistent with the "headful" strategy of packaging. The pac site is located in the P1 EcoRI fragment of approximately 20 bp [25]. A headful (pac) mode of packaging is concluded when there is a single obvious terminus on only one strand [25]. The pac site-directed headful packaging mechanism has been described in 9NA, FSL_SP-062, FSL_SP-069, Sasha, Sergei, and Solent [33]. Other phages similar to 9NA have also been described, corresponding to P22, P1, SPP1, Sf6, and ES18 [71][72][73][74][75][76]. This mechanism consists of the sequential encapsidation of DNA from a cleavage that produces the set of redundant and permuted molecules found in the progeny phages, as described for Escherichia coli bacteriophage P1 [30,31]. However, it should be noted that a limitation of this method is related to the protocol used to prepare the nucleic acid libraries before sequencing [25].
Eighty-nine features were identified, including eighty-eight coding sequences (CDSs) and one tRNA (Table S3). Forty-three annotated CDSs were identified as homologs of known phage genes, including sixteen genes encoding putative proteins involved in phage structure; nine encoding putative proteins/enzymes associated with DNA replication, repair, and recombination; and five genes responsible for lytic activity, in addition to the identified tRNA-Met (CAT). The presence of tRNA genes in phages has been associated with interactions with bacterial hosts, and these genes could participate in translational processes, but their particular role remains unknown [77]. Additionally, a positive correlation between the number of tRNA genes and genome length has previously been reported [77]. This implies that longer phage genomes (e.g., 80 kb) would have more tRNAs than average-sized genomes (e.g., 35 kb). The predicted phage tail genes were organized in a module between nucleotide positions 24,475 and 38,191. This module was flanked by a tRNA-Met gene and a putative helicase-encoding gene (Figure 3).
Genes encoding putative lysis-associated proteins, including a lysozyme muramidase, a lysin_SAR-endolysin (gp18), and a putative holin, were also identified. The genome contains genes whose products are associated with DNA metabolism, including a putative restriction alleviation protein, a DNA polymerase III, a putative transcriptional regulator, a single-stranded DNA-binding putative protein, an exonuclease, a putative ATP-dependent helicase, a DNA primase, and the phage terminase (large subunit) (Figure 3). The remaining 58 CDSs encode putative proteins of unknown function (hypothetical proteins or phage proteins; see Table S3). No genes related to a temperate lifestyle, toxin production, virulence, or antibiotic resistance were identified. BACPHLIP [24] predicted a virulent lifestyle (with 96.25% probability). It is possible that the low probability of a temperate lifestyle (3.45%) is explained by low gene flow, reflected, for example, in the genes coding for putative transcriptional regulators, DNA-binding proteins, phage-associated recombinases, exodeoxyribonuclease VIII, and DNA helicases (CDS 43, 67, 68, 69, 72, and 73, described in Table S3). Therefore, the combined evaluation of the experimental lifestyle assessment (Section 2.1.3), together with the bioinformatic prediction obtained with BACPHLIP, presents conclusive evidence that the STGO-35-1 phage is a virulent phage infecting Salmonella Enteritidis [24] (Figure 3). According to the current criteria of the International Committee on Taxonomy of Viruses (ICTV), to belong to the same genus, phage genomes should have at least 50% nucleotide identity, a similar G + C%, similar tRNA numbers, and similar coding sequences. In addition, a comparison of predicted proteomes and a phylogenetic analysis must be performed [56,57]. On the basis of these criteria, we conclude that the STGO-35-1 phage represents a new species belonging to the class Caudoviricetes, order Caudovirales, with a siphoviral morphotype, and we consider it to be an unclassified siphovirus (NCBI txid196894).
Future studies should consider the criteria proposed by the ICTV to evaluate all these unclassified siphoviruses [22], and further description of new siphoviruses is necessary. On the basis of average nucleotide identity (ANI), STGO-35-1 was found to be similar to 20 available siphoviral phage genomes (Table S4). The most similar phage was Salmonella phage Akira, with an ANIb of 88.53% across 54.15% of aligned sequences (47.94% ANI across the entire genome) (Table S4). Other similar phages included Salmonella phage D10, Escherichia phage C1, and Shigella phage DS8 (Table S4). Through a whole-genome phylogenetic analysis of STGO-35-1 and the 20 similar phages, a tree with minimal differences of 0.06 was obtained (Figure 4); the closest group to STGO-35-1 consisted of Salmonella phage KFS-SE2, Salmonella phage Akira, and Salmonella phage 64795_sal3. The whole-genome alignment of the phages most closely related to STGO-35-1 (Figure 4) demonstrates the similarity between their genomes and their organization into modules (capsid proteins, tail proteins, DNA metabolism proteins, and lysins).
Structural Proteome Analysis of Phage vB_Sen-STGO-35-1
We experimentally verified the computationally predicted STGO-35-1 structural putative proteins (Figure S3). Eighteen CDSs were predicted to encode structural putative proteins (Table S3), and 18 putative proteins were also identified via mass spectrometry, with sequence coverage between 4.91% and 40.7% (Table 1). The identified peptides correspond to five capsid putative proteins, three minor and one major capsid putative protein, one lysin putative protein, and six tail putative proteins, including the tail tube, tape measure, minor tail, tail tip protein L, putative phage tail, and tail spike putative proteins. In addition, one hypothetical protein and one DUF5681 family protein were retrieved (Table 1). Finally, a phylogenetic analysis was conducted on the gene encoding the putative terminase of Salmonella phage vB_Sen_STGO-35-1, which belongs to the phage terminase large subunit PBSX family, superfamily cl12054 (pfam04466). This terminase showed a close relationship with the following siphoviral terminase sequences: NCBI Protein ID DAH84866.1, Siphoviridae sp. ID DAK93255.1, and E. coli ID EIH4118707.1 (Figure S4). No further details were obtained on the origin of the most closely related terminase sequences because some of these sequences do not correspond to the complete genome of a phage.
The assay in chicken meat subsequently showed a significant (p < 0.05) reduction of 2.5 log10 of S. Enteritidis in chicken meat treated with phage STGO-35-1 (Figure 5). We observed that the phage concentration remained relatively stable after a first increase on day 1, with an average of 5 × 10⁸ PFU/mL (7.7 log10) (Figure 5). The reduction of S. enterica serovar Enteritidis observed here was similar to that previously reported after phage application in chicken meat [78–80]. Importantly, only one phage was tested in this assay; it could be combined with other phages in the future to achieve greater reductions. Additionally, we observed that the phage tested here remains viable and stable for long periods of time at low storage temperatures [81]. Phage characterization was conducted at 37 °C, and the assay in chicken meat at 4 °C, matching the conditions found at retail. While it is not clear whether phage STGO-35-1 can propagate under this condition, we observed an increase in the viral titer on day 1, and our results in meat showed that phage titers were similar in the control and treated groups, since both increased on day 1. This observation may indicate an initial lysis of another host in the control group. Furthermore, the immobilization of phage and bacteria on the food surface has been described, but further research is needed to better understand the interactions between Salmonella, phage, and food components [81], and future studies should also analyze the emergence of phage-resistant mutants [81].
Finally, regarding post-harvest interventions for food safety in chicken meat, our results can be compared with the work of Hungaro et al. [82], who compared phage reductions against chemical interventions in chicken meat and found that 2% lactic acid and 100 ppm peroxyacetic acid each reduced Salmonella by only 0.8 log CFU/cm², demonstrating that phage treatment was more efficient than the tested chemical interventions. Importantly, the 2.5 log reduction found here is above the reduction accomplished by current chemical interventions. Additionally, dose-response models for Salmonella in chicken meat have shown that ingesting approximately 4 log CFU of Salmonella causes illness; therefore, a 2.5 log reduction in chicken meat could have a substantial public health impact, which needs to be further determined and quantified [83].
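For readers who want to reproduce the arithmetic behind log reductions such as the 2.5 log10 value discussed above, a minimal sketch follows; the CFU values are illustrative placeholders, not measurements from this study.

```python
import math

def log10_reduction(cfu_control: float, cfu_treated: float) -> float:
    """Log10 reduction of viable counts in the treated sample vs. the control."""
    return math.log10(cfu_control) - math.log10(cfu_treated)

# Hypothetical counts chosen only to illustrate a ~2.5 log10 reduction:
cfu_control = 1.0e6   # CFU/g, untreated chicken meat
cfu_treated = 3.2e3   # CFU/g, after phage treatment

print(f"reduction: {log10_reduction(cfu_control, cfu_treated):.1f} log10")
```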
Genome Stability
The prediction of RBPs showed that, in this phage, the genes encoding putative tail proteins lie in the tail module of the genome, between nucleotide positions 24,475 and 33,389. The identified tail putative proteins corresponded to the phage tail tube (gp55), tail tape measure (gp60), phage minor tail (gp61), tail tip protein L (gp62), putative phage tail (gp64), and tail spike protein (gp65). Of these, six tail proteins had previously been identified using MS/MS (Table 2). The protein sequences with the highest identity to homologs in public databases (PDB, UniProt, NCBI CDD and SCOPe) were the gp60 tail tape measure-2 (94.74%) and the gp65 tail spike (93%). Gp60 was identified as an approximately 81 kDa protein, with 31.3% LC-MS/MS sequence coverage (Table 2). This putative protein controls the tail length by blocking tail tube polymerization, and it is probably released from the tail shaft during infection to facilitate DNA translocation into the host cell, possibly stabilized by the covering tail assembly proteins [84,85]. Gp60 is associated with the O-antigenic polysaccharide polymerization gene (rfbD), an endorhamnosidase-type Salmonella phage receptor, which has been identified in Salmonella belonging to serogroups A, B, and D1 [86]. Gp65 is approximately 72 kDa in size, with 35.8% sequence coverage (Table 2), and protein BLAST analysis indicated that this tail spike was 93.4% similar to the tail spike (QAX98701.1) of the unverified Salmonella phage Segz_1. Subsequently, HHpred [61] showed its high level of identity with the P22 tail spike putative protein (TaxId:10754/b.80.1.6) [87,88], with 100% probability and an E-value of 4.8 × 10⁻¹⁹². The conserved domains of this putative tail spike protein were aligned with Clustal-O [64] against the P22 tail superfamily (pfam09251), and the alignment showed a distance of 0.2. The central portion (amino acids 129 to 559) is 93.22% identical to the tail spike protein of Salmonella phage Segz_1, and the best match was with the putative conserved domains of the P22 tail superfamily (pfam09251) (Figure S1). This tail spike-like protein has been associated with the adsorption process in the Salmonella phages P22 (Podoviridae) [88] and Det7 (Myoviridae) [87], specifically in the endorhamnosidase binding of rhamnosyl residues of Salmonella enterica (MJP01973.1) and of residues of the lipopolysaccharide O-antigen [87]. Both RBP predictions therefore suggest that the LPS of S. Enteritidis is the receptor for phage STGO-35-1.
The relationship found between the tail spike putative proteins of phage STGO-35-1 and the tail spike putative proteins of the podovirus P22 has been described previously (Figure S1) [88]. In this sense, a crystal structure analysis of 9NA and P22 revealed that both phages use similar tail spikes for LPS recognition. Together with the high homology of the tail spike-like proteins (gp65 in STGO-35-1) and their distinct phylogenetic origins, identified previously by Merrill et al. [26], this may support the hypothesis of plasticity of some gene-encoded products, with mechanisms of mosaicism that could be driven by encoded recombinases, but with a low gene-content flux rate in STGO-35-1 [27].
To obtain an experimental approximation of the plasticity of this phage, we evaluated putative protein plasticity in STGO-35-1 after exposure to S. Enteritidis. We selected three variants (plaques 1, 2, and 3) based on lysis plaque morphology after exposure (Figure S5) and observed two conservative nonsynonymous substitutions in gp65, with frequencies ranging from 67.67% to 100% (Table 3). The mutations observed in gp60 and gp65 from the plaque 1, 2, and 3 variants (Figure S5) suggest that their encoded proteins could act as the main RBPs and might be subject to selection based on host availability/resistance [89–92]. Furthermore, mutations in gp65 occurred at two different genome positions (Table 3), which is consistent with the reported plasticity of the P22 tail spike sequence [89,91]. Therefore, this prediction analysis makes it possible to identify the sequence plasticity of proteins that possibly function as RBPs. In addition, many mutations have been identified in the gene encoding the tail spike of Salmonella phage P22 that affect the folding and stability of this protein. These mutations primarily alter amino acids located in the central domain of the tail spike putative protein, which suggests a high plasticity of this protein domain [86,93] (Table 3).
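A conservative nonsynonymous substitution, as reported for gp65, is a codon change that replaces the encoded amino acid with a chemically similar one. The sketch below shows one way to classify a codon change using Biopython; the codons are hypothetical examples and do not correspond to the actual gp60/gp65 positions listed in Table 3.

```python
from Bio.Seq import Seq  # assumes Biopython is installed

def substitution_type(ref_codon: str, alt_codon: str) -> str:
    """Classify a single-codon change as synonymous or nonsynonymous."""
    ref_aa = str(Seq(ref_codon).translate())
    alt_aa = str(Seq(alt_codon).translate())
    return "synonymous" if ref_aa == alt_aa else f"nonsynonymous ({ref_aa}->{alt_aa})"

print(substitution_type("GAT", "GAA"))  # nonsynonymous (D->E), a conservative change
print(substitution_type("GAT", "GAC"))  # synonymous
```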
Another finding suggesting genetic plasticity in STGO-35-1 is the use of the same headful packaging strategy by three phages of distinct phylogenetic origin (9NA, P22, and STGO-35-1 [33]), given the modular organization of phage genomes [27]. We selected variants using lysis plaque morphology because of evidence suggesting that changes in plaque morphology are related to the infectivity of phage tail proteins, as in T-even phages [89–92]. The variants found could reflect small adaptive changes of the phage to its host [89]. The mutations found here occurred randomly under laboratory conditions in the presence of the host, as previously described [84,85]. Additional studies have shown changes in the specificity of the original RBPs [16,17,81], which could be related to overcoming bacterial resistance to phage action (bacterial receptor switching). A recently published study [17] demonstrated that in vitro evolution of phages can be used to expand the host range and limit the emergence of phage-resistant bacteria during phage-based control of Listeria monocytogenes.
Our study highlights the importance of identifying RBPs and their plasticity. In the future, determining the mutation rate of these particular genes in complex systems, such as chicken meat or other environments, will be necessary to better understand Salmonella-phage interactions in the settings in which phages will be applied.
Conclusions
The phage described in this study, STGO-35-1, belongs to the siphoviral morphotype (formerly family Siphoviridae) and was characterized using comparative genomic and phenotypic tools that help to delimit the genomic unit of diversity within this morphotype. In the future, this may contribute to the classification of new, similar phages. In addition, the short-exposure assay allowed us to predict the sequence plasticity of proteins that possibly function as RBPs. Finally, the successful biocontrol trial in chicken meat supports the potential use of STGO-35-1 as a biocontrol agent targeting S. Enteritidis.
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/10.3390/microorganisms10030606/s1: Figure S1: Alignment of amino acid sequences of the tail spike (gp65); Figure S2: Adsorption rate of phage STGO-35-1 on S. Enteritidis strain DR028; Figure S3: SDS-PAGE analysis of phage structural proteins; Figure S4: Terminase phylogenetic tree constructed to describe the closeness between amino acid sequences of the large terminase from different phages; Figure S5: Morphological plaque; Table S1: List of Salmonella isolates used; Table S2: Transduction frequency of phage STGO-35-1; Table S3: Annotated CDSs of the wild-type Salmonella STGO-35-1 phage; Table S4: Relatedness and taxonomic classification of similar siphoviral-morphotype phage sequences. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The authors of this study assure that the data shared are in accordance with the consent provided by the participants on the use of confidential data.
Circulating cell-free DNA, peripheral lymphocyte subsets alterations and neutrophil lymphocyte ratio in assessment of COVID-19 severity
Cell destruction results in plasma accumulation of cell-free DNA (cfDNA). Dynamic changes in circulating lymphocytes are features of COVID-19. We aimed to investigate whether cfDNA level can serve in the stratification of COVID-19 patients, and whether cfDNA level is associated with alterations in lymphocyte subsets and the neutrophil-to-lymphocyte ratio (NLR). This cross-sectional comparative study enrolled 64 SARS-CoV-2-positive patients. Patients were subdivided into severe and non-severe groups. Plasma cfDNA concentration was determined by real-time quantitative PCR. Lymphocyte subsets were assessed by flow cytometry. There was a significant increase in cfDNA among severe cases when compared with non-severe cases. cfDNA showed a positive correlation with NLR and an inverse correlation with T cell percentage. cfDNA positively correlated with ferritin and C-reactive protein. ROC curve analysis to differentiate severe from non-severe cases revealed that cfDNA at a cut-off of ≥17.31 ng/µl, with an AUC of 0.96, yielded 93% sensitivity and 73% specificity. In summary, excessive release of cfDNA can serve as a sensitive COVID-19 severity predictor. There is an association between cfDNA up-regulation, NLR up-regulation and T cell percentage down-regulation. cfDNA level can be used in stratification and personalized monitoring strategies in COVID-19 patients.
Introduction
COVID-19 was declared an outbreak in January 2020. Shortly afterwards, it became a pandemic, and by mid-March 2020, COVID-19 had generated 24 times more cases than the severe acute respiratory syndrome (SARS) outbreak, 1 which highlights the need for personalized biomarkers that predict COVID-19 severity.
Circulating cell-free DNA (cfDNA) is extracellular DNA found in plasma or serum. Accumulation of cfDNA in plasma may be the result of excess release from massive cell destruction, insufficient elimination of dead cells, extracellular DNA traps formed during inflammation, or a combination of all these causes. 2 cfDNA is detected in plasma and other body fluids. Release of cfDNA into the circulation is likely due to cellular breakdown mechanisms, such as apoptosis and necrosis, as well as active DNA-release mechanisms. 3 In the context of many pathological conditions, cfDNA is an excellent biomarker candidate for clinical application, particularly circulating cell-free mitochondrial DNA. 4 Leukocytes are a major source of cfDNA, which is produced from apoptotic and dying cells in COVID-19 patients and can be released by dying lymphocytes. More than 60% of patients with COVID-19 have lymphopenia, which can produce abundant free DNA in these patients. 5 Phagocytic cells usually remove apoptotic debris, decreasing the consequences of the presence of dead cell material. 6 In a state of disease, when cell death exceeds the clearance capacity, this phagocytic system is overwhelmed. 7 Previous data suggested a link between the level of inflammation and the amount of cfDNA released from damaged cells into the circulation, which points to the potential of cfDNA for determining COVID-19 severity.
The level of lymphocytes is thought to be a means of early identification of risk factors for severe COVID-19. The neutrophil-to-lymphocyte ratio (NLR) has been claimed to serve as an indicator of the systemic inflammatory response in COVID-19. 8 NLR is calculated as the absolute count of neutrophils divided by the absolute count of lymphocytes. A high NLR points to a predominance of the inflammatory component. 9 Dynamic changes in peripheral blood lymphocyte subsets have also been reported in COVID-19 patients. 1 Still, the link between peripheral lymphocyte subset alterations and cfDNA needs to be explored.
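Since NLR is simply the absolute neutrophil count divided by the absolute lymphocyte count, the calculation can be sketched as follows; the counts used are illustrative values, not patient data from this study.

```python
def neutrophil_lymphocyte_ratio(anc: float, alc: float) -> float:
    """NLR = absolute neutrophil count / absolute lymphocyte count."""
    if alc <= 0:
        raise ValueError("absolute lymphocyte count must be positive")
    return anc / alc

# Example with made-up counts (x10^3 cells/mm^3): a lymphopenic patient.
print(round(neutrophil_lymphocyte_ratio(anc=6.2, alc=0.9), 2))  # 6.89
```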
Hence, our aim was to find out whether levels of plasma cfDNA and NLR may serve as monitoring biomarkers and predictors of severity in COVID-19 disease, and to assess their role in strategies for the stratification of patients with COVID-19. We also aimed to investigate the presence of a link between cfDNA level, NLR, lymphocyte subset alterations and COVID-19 severity.
Type of study
This was a comparative cross-sectional study.
Study participants
A total of 64 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-positive patients were enrolled in this study. The sample size was calculated using the PASS 11 program, setting power at 80% and alpha error at 0.05. Patients were subdivided into a severe group (n = 34) and a non-severe group (n = 30). Patients were recruited from Ain-Shams University Specialized Hospital.
Candidates were informed about the aim of the study and gave their informed consent before enrolment in the study. The study was done after approval from the Research Ethical Committee of Faculty of Medicine Ain-Shams University.
Inclusion criteria: adult patients (>18 yr) with a positive real-time reverse transcriptase-PCR (RT-PCR) result for SARS-CoV-2 RNA in nasal swab specimens. Cases were diagnosed based on the interim guidance of the World Health Organization. 11 Non-severe patients met all of the following conditions: (a) history of exposure to a confirmed SARS-CoV-2 patient, (b) fever or other respiratory symptoms, and (c) typical chest computed tomography abnormalities compatible with viral pneumonia. Severe patients additionally met at least one of the following conditions: (a) shortness of breath with a respiratory rate ≥ 30 breaths/min, (b) oxygen saturation (resting state) ≤ 93%. 12
Assessment of participants
Personal history: name, age, sex, occupation, and contact with a known positive case of SARS-CoV-2 infection. History of present illness: onset (date of diagnosis), disease severity (non-severe; severe or critical, needing intensive care unit (ICU) admission or mechanical ventilation), and duration of disease symptoms (fever, cough, shortness of breath and fatigue), in days. Drug history and past history: drug intake (especially immunomodulators) and chronic diseases, especially respiratory disease (e.g. bronchial asthma).
Sample collection
Approximately 10 ml of venous blood was drawn from each COVID-19 patient and divided into five aliquots: the first aliquot of 2 ml blood was transferred to a plain tube for serum ferritin, C-reactive protein (CRP), liver enzymes and creatinine. The second aliquot of 2 ml was transferred to a heparin tube for flow cytometry, to be analysed within 24 h. The third aliquot of 2 ml blood was transferred into an EDTA tube for complete blood count. The fourth aliquot was transferred into a citrate tube for D-dimer measurement. The fifth aliquot was transferred into an EDTA tube and centrifuged at 1370 g for 10 min; the plasma was then removed carefully and re-centrifuged at 18,894 g for 10 min. The supernatant was then transferred to microcentrifuge tubes and stored at −80 °C for extraction and measurement of cfDNA by real-time quantitative PCR (qPCR).
Flow cytometry assay
Flow cytometry assay was conducted at the Clinical Pathology Department, Al-Zahraa Hospital, Al-Azhar University, using a four-colour FACSCalibur (BD Biosciences, San Jose, CA). CellQuest Pro software (BD Biosciences) was used for data analysis. The compensation setting was established before acquiring the samples using colour-calibrated beads. The acquisition count was raised to 100,000 events to allow analysis of lymphopenic samples in COVID-19 cases, and an isotype control was acquired to define the positivity cut-off.
Three tubes were used, each with 50 µl of fresh blood sample. The first tube was incubated with a 5 µl cocktail of mouse anti-human isotype controls IgG1 FITC/IgG2a PE (catalogue no. 34240, lot no. 90642). The second tube was incubated with 5 µl of FITC-conjugated anti-human CD3/PE-conjugated anti-human CD16+CD56 cocktail (catalogue no. 95131, lot no. 6012680, BD Biosciences, USA). The third tube was incubated with 5 µl of FITC-conjugated anti-human CD3/PE-conjugated anti-human CD19 cocktail (catalogue no. 349217, lot no. 79439, BD Biosciences, USA). All tubes were incubated for 20 min. Then, lysing reagent (BD Biosciences) was added for 8 min to destroy red blood cells before washing with FACS buffer, and the sample was then centrifuged at 500 g.
Regarding the gating strategy for identification of lymphocyte subsets, initial gating was taken from the lymphocyte area on the forward scatter/side scatter (FS/SS) plot [R1]. T lymphocytes were then identified as CD3-positive, NK cells as CD16+CD56-positive and CD3-negative, and T-natural killer (TNK) cells as CD16+CD56-positive and CD3-positive. Finally, B cells were identified as CD19-positive cells (Figure 1).
Real-time PCR for cfDNA quantitation
Real-time PCR for cfDNA quantitation was conducted at the Clinical Pathology Department, Al-Zahraa Hospital, Al-Azhar University. Extraction of cfDNA was done manually from 200 µl of plasma using the QIAamp DNA Blood Mini kit (Qiagen, Germany) according to the protocol provided by the manufacturer; DNA was eluted in 30 µl of elution buffer and concentrations were measured using a QIAxpert instrument (Qiagen, Germany).
Plasma cfDNA was determined by real-time quantitative PCR for the β-actin gene, which is found in all nucleated cells. The primer sequences were forward (5′–3′): GCGCCGTTCCGAAAGTT; reverse (5′–3′): CGGCGGATCGGCAAA. The PCR was performed using QuantiTect SYBR Green Master Mix (Qiagen, Hilden, Germany) on a Rotor-Gene Q detection system (Qiagen, Hilden, Germany). The real-time PCR was carried out in a 25 µl total reaction volume containing 12.5 µl SYBR Green master mix (Qiagen, Hilden, Germany), 1 µl of each primer, and 10.5 µl of the extracted plasma supernatant. Fluorescence measurements were made in every cycle. The cycling conditions were as follows: initial activation step at 95 °C for 15 min, then 40 cycles of denaturation at 94 °C for 15 s, annealing at 55 °C for 30 s and extension at 70 °C for 30 s. Melting curve analysis was performed to confirm the specificity of the PCR product. The absolute DNA concentration was calculated from a standard curve generated by serial dilutions of genomic DNA ranging from 0.00001 to 100 ng/ml.
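Absolute quantification against a standard curve, as described above, amounts to fitting Ct versus log10(concentration) for the serial dilutions and inverting the fit for each sample. A minimal sketch is given below; the Ct values are invented for illustration and are not data from this study.

```python
import numpy as np

# Serial dilutions of the genomic DNA standard (same concentration units as the
# curve described in the text) paired with hypothetical Ct values.
conc = np.array([1e-5, 1e-3, 1e-1, 1e1, 1e2])
ct   = np.array([38.1, 31.5, 24.8, 18.2, 14.9])

slope, intercept = np.polyfit(np.log10(conc), ct, 1)   # Ct = slope*log10(conc) + b

def quantify(sample_ct: float) -> float:
    """Convert a sample Ct back to a concentration using the fitted line."""
    return 10 ** ((sample_ct - intercept) / slope)

print(f"slope = {slope:.2f}, estimated cfDNA for Ct 26.0 = {quantify(26.0):.3f}")
```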
Statistical analysis
Data were coded and entered using the Statistical Package for the Social Sciences (SPSS) version 26 (IBM Corp., Armonk, NY, USA). Data were summarized using mean ± SD for quantitative parametric data and median (interquartile range) for non-parametric data. In addition, frequency (count) and relative frequency (percentage) were used for categorical data. Comparisons between quantitative variables were done using the Student t-test for parametric measures and the Mann-Whitney test for non-parametric measures. For comparing categorical data, the Chi-square (χ²) test was performed. Correlations between quantitative variables were assessed using the Spearman correlation coefficient. Receiver operating characteristic (ROC) curves were constructed, with area under the curve (AUC) analysis performed to detect the best cut-off values of different parameters for differentiating severe and non-severe COVID-19 infections. P values less than 0.05 were considered statistically significant.
Results
Our study included 64 COVID-19 patients divided into two groups: group 1 included moderate and ICU-admitted severe cases of COVID-19 (n = 34) with a median age of 60 yr, and group 2 included non-severe, non-hospitalized (home-isolated) cases (n = 30) with a median age of 27 yr. The majority of patients were females (56.3%); 50% were asymptomatic, 15.6% had fever, 20.3% had cough, 4.7% had fever with cough, 6.3% had diarrhoea, and 3.1% had fever with diarrhoea. By the end of the study only two patients (3.1%) required mechanical ventilation and died. Ten patients (15.5%) were on continuous positive airway pressure therapy, while all other severe patients were on high-flow nasal cannula or venturi masks. The median duration between first symptoms and hospital admission in severe cases was 4.5 d. Hypertension, diabetes mellitus, and bronchial asthma were the most frequently reported co-morbidities, in 34.4%, 26.6%, and 14.1%, respectively. Comparative data between severe and non-severe cases are shown in Table 1. The correlation study of cfDNA with other parameters showed significant positive correlations with CRP, ferritin, total leukocyte count (TLC), absolute neutrophil count (ANC), B cells (%) and NLR. Significant negative correlations were found with absolute lymphocyte count (ALC), platelets, T cells (%), NK (%) and TNK (%) (Table 2). The comparison of mean cfDNA values in the severe and non-severe groups showed significantly higher values in severe cases than in non-severe cases (P < 0.001) (Figure 2). On comparing the alteration of lymphocytes in the severe and non-severe groups, we found that the frequency of circulating T lymphocytes was significantly decreased in severe cases compared with non-severe cases (P < 0.001) and the frequency of B cells was significantly higher in severe cases than in non-severe cases (P < 0.001) (Figure 3).
The output data of the performed ROC curves revealed that cfDNA at a cut-off of ≥17.31 ng/µl and an AUC of 0.96 yielded a specificity of 73% and a sensitivity of 93%, with P < 0.001. The NLR at a cut-off of ≥3.1 and an AUC of 0.98 yielded a specificity of 93% and a sensitivity of 91%, with P < 0.001. ALC at a cut-off of ≥1.5 × 10³/mm³ and an AUC of 0.92 yielded the lowest sensitivity, 76%, with a specificity of 93% and P < 0.001, for differentiating severe from non-severe cases (Table 3, Figure 4).
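A sketch of how such cut-offs can be derived from ROC analysis (here using scikit-learn and Youden's J statistic) is shown below; the severity labels and cfDNA values are mock data, and the study's reported cut-offs come from the patients' actual measurements, not from this code.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Mock data: 1 = severe, 0 = non-severe; scores are hypothetical cfDNA levels.
severity = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
cfdna    = np.array([24.1, 28.3, 19.5, 22.0, 26.4, 14.2, 16.8, 12.9, 15.5, 17.0])

fpr, tpr, thresholds = roc_curve(severity, cfdna)
auc = roc_auc_score(severity, cfdna)

# Youden's J = sensitivity + specificity - 1, maximized over thresholds.
best_cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {auc:.2f}, suggested cut-off = {best_cutoff:.2f}")
```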
Discussion
The current coronavirus pandemic is the most dramatic healthcare crisis linked to an acute and highly infectious disease. The main goal of all predictive and monitoring strategies is to match the number of severely ill patients to the limited capacity of the corresponding health systems, providing adequate care and therefore keeping morbidity at a minimum. 13 The fact that Egypt is facing this COVID-19 pandemic surge with limited hospital beds, resources, and healthcare personnel highlights the importance of biomarkers for predicting severity, to prevent the pandemic scenario of China and Europe.
Inflammatory responses contribute to immune response imbalance. Therefore, circulating biomarkers that are able to define inflammation and immune status are potential predictors of the severity of COVID-19. 8 Accumulation of cfDNA in plasma may be the result of excess release from massive cell destruction and inflammation. 2 This justifies our aim to investigate the utility of plasma cfDNA in the assessment of COVID-19 severity. In addition, our objective was to investigate the presence of a link between cfDNA level, NLR, alterations in lymphocyte subsets and COVID-19 severity.
The current study showed a significant increase in cfDNA level among severe cases (mean ± SD = 24.562 ± 4.387) as compared with non-severe cases (mean ± SD = 15.018 ± 4.068) (P < 0.001). This was in line with Zuo et al., who reported that cfDNA was higher in hospitalized patients receiving mechanical ventilation as compared with hospitalized patients breathing room air. 14 This can be explained by cfDNA being released into the circulation as a result of the inflammatory process, 2 either through cellular breakdown mechanisms, such as apoptosis, or through active DNA-release mechanisms by neutrophils. 3 cfDNA can also amplify the inflammatory process. Liu et al. reported that cfDNA can produce severe inflammation mediated by C-type lectin receptors encoded within the NK complex family, MHC class I genes, type I interferons, and some DNA sensors, and that this inflammation ends in a cytokine storm. 6 Similarly, Wu et al. reported that cfDNA can circulate and activate different immune cells to produce a huge amount of cytokines, leading to cytokine storm. 15 cfDNA has also been reported to destroy vascular endothelial cells directly, which adds to the multiple organ dysfunction caused by the cytokine storm. 16 The present study revealed a significant increase in CRP in severe cases when compared with non-severe cases, with a median of 29.5 in severe cases versus 1.5 in non-severe cases (P < 0.001). This was in line with Ali and Chen et al., who reported that patients with severe disease courses had far more elevated levels of CRP than non-severe patients. 17,18 Wang et al. proposed CRP as a predictive marker for the aggravation of non-severe COVID-19 patients, with an optimal threshold value of 26.9 mg/l. 19 Huang et al. reported that CRP can also be used to monitor disease improvement, besides being used as a prognostic marker in COVID-19. 20 There was significant hyperferritinemia (P < 0.001) in severe cases when compared with non-severe cases, which was in agreement with Chen et al. 21 In a meta-analysis performed by Taneri et al. based on combined estimates from 29 studies and 13,620 individuals, serum ferritin was higher in severe COVID-19 individuals compared with moderate cases. 22 Daher et al. reported that hyperferritinemia is linked to the systemic level of inflammation, as during the inflammatory state IL-6 stimulates ferritin and hepcidin synthesis. 23 Kernan and Carcillo claimed that ferritin is a direct mediator of the immune system, and reported a feedback mechanism between ferritin and cytokines, as cytokines can induce ferritin expression and ferritin can induce the expression of pro- and anti-inflammatory cytokines as well. 24 On top of that, Gómez-Pastora et al. added that ferritin is a mediator of the inflammatory process, stimulating inflammatory pathways, which initiates a vicious pathogenic immune loop. 25 The current study revealed a significant increase in TLC and ANC (P = 0.002 and P < 0.001, respectively) in severe cases, while there was a significant decrease in ALC in severe cases (P < 0.001). This was in agreement with Chen et al. and Henry et al., who reported that patients with severe COVID-19 had significantly increased TLC and ANC and decreased lymphocytes compared with non-severe disease. 21,26 A comparison of platelet count showed a non-significant difference between severe and non-severe cases (P = 0.07); this was in disagreement with Henry et al. 26 and Zhao et al., 27 who reported that an early decrease in blood platelet count is associated with poor prognosis.
The NLR was significantly higher in severe cases (P < 0.001). This was in line with Liu et al., who further added that NLR was an independent risk factor for in-hospital mortality in COVID-19 patients. 28 A meta-analysis performed by Lagunas-Rangel stated that increased NLR levels reflect an enhanced inflammatory process, which suggests a poor prognosis. 29 This increase in NLR may be due to the increased ANC in severe cases, as neutrophils become activated and migrate from the venous system to the immune system. Neutrophils release large amounts of reactive oxygen species (ROS) that can free the virus from the cells. Thus, antibody-dependent cell-mediated cytotoxicity (ADCC) may kill the virus directly, expose viral antigens, and stimulate cell-specific and humoral immunity. 30 Neutrophils can also be triggered by virus-related inflammatory factors, such as IL-6, IL-8, TNF-α and IFN-γ, produced by lymphocytes. The human immune response triggered by viral infection relies mainly on lymphocytes, whereas systemic inflammation significantly decreases T lymphocytes. Thus, virus-triggered inflammation increases NLR. 8 There was a significant decrease, in severe cases, in circulating T lymphocytes, TNK and NK cells (P < 0.001, P = 0.04, and P < 0.01, respectively), while the frequency of B cells was significantly higher in severe cases than in non-severe cases (P < 0.001). This was in line with Chen et al. 21 Qin et al. reported that the number of T cells decreased in severe cases of COVID-19; the increase in the frequency of B cells could be due to the more pronounced decrease in T lymphocytes in severe cases as well. 31 He et al. reported that all lymphocyte subsets, including T, B and NK cells, were significantly lower in the severe group, but emphasized that only the T-lymphocyte count correlated with the disease course in COVID-19 pneumonia. 32 Wang et al. also reported that peripheral lymphocyte subset alteration was associated with the clinical characteristics of COVID-19. 33 There was a significant decrease of haemoglobin (Hb) in severe cases (P < 0.001), which could be explained by the fact that angiotensin II regulates normal erythropoiesis and stimulates early erythroid proliferation; binding of SARS-CoV-2 to host ACE2 increases dysregulation of erythropoiesis through the downstream angiotensin II pathway. 34 Cheng et al. also reported increased erythroid turnover in COVID-19 patients and that a portion of plasma cfDNA was derived from the destruction of erythroid cells. They added that possible causes of anaemia in COVID-19 patients include: (a) excessive inflammation and cytokine storm, (b) haemophagocytosis in relation to inflammation, and (c) consumption in microthrombi. 18 Correlation studies revealed that cfDNA positively correlated with the inflammatory marker CRP (r = 0.577; P < 0.001), which highlights the role of cfDNA in the inflammatory state. Endothelial cell inflammation is documented in patients with COVID-19. Rauch et al.'s study revealed the presence of vascular inflammation and severe endothelial injury as a direct consequence of SARS-CoV-2 infection and the ensuing host inflammatory response in COVID-19. 35 Ng et al., in addition, reported that levels of the neutrophil extracellular traps (NETs) contributing to circulating cfDNA formation correlated with CRP as well. 36 cfDNA also positively correlated with ferritin (r = 0.58; P < 0.001).
This could be explained by excess intracellular iron interacting with molecular oxygen, generating ROS that contribute to oxidative damage of cellular components of different organs and to apoptosis, which is a source of cfDNA. Programmed cell death mediated by iron-dependent peroxidation mechanisms in inflammatory pathologies is called ferroptosis. 37 On top of that, it has been documented that hyperferritinemia is linked to coagulopathy, as oxidized iron accelerates serum coagulation by interacting with proteins of the coagulation cascade. 38 Our study showed a weak positive correlation of cfDNA with TLC (r = 0.276, P < 0.05), while a moderate positive correlation was seen with ANC (r = 0.482; P < 0.001). This was in line with Ng et al., who stated that the NET markers cfDNA, citrullinated histone H3, and neutrophil elastase correlated with TLC and neutrophils. 36 During the inflammatory process there is activation of neutrophils and an increase in the production of NETs, which are microbicidal proteins and oxidant enzymes released by neutrophils to contain infections. 2
The concentration of circulating histones is directly proportional to the degree of inflammation and end-organ dysfunction in trauma or sepsis-like conditions. 14,39 Our study revealed an inverse correlation of cfDNA with ALC (r = −0.683; P < 0.001); this was in line with Laurent et al., who stated that haematopoietic cells, including lymphoid and erythroid cells, are the major components of cfDNA. 40 There was a positive correlation of cfDNA with NLR (r = 0.698; P < 0.001), which can be attributed to activation and migration of neutrophils to the bloodstream during COVID-19 and their release of large amounts of ROS, which can induce cell DNA damage and release of circulating cfDNA. Likewise, the associated lymphopenia due to lymphocyte apoptosis contributes to further release of cfDNA. NLR was considered an independent biomarker indicating poor clinical outcomes by Yang et al. 8 There was a negative correlation of cfDNA with the percentage of T cells (r = −0.438; P < 0.001) and NK cells (r = −0.317; P < 0.05), while no correlation was seen with TNK cells. Interestingly, there was a significant positive correlation with B cells (r = 0.556; P < 0.001). It is worth mentioning that He et al. claimed T cells to be an independent predictor of COVID-19 severity. 32 The study also revealed a weak inverse correlation of cfDNA with platelets (PLTs) (r = −0.279; P < 0.05). This can be explained by the fact that COVID-19 disease is characterized by hyper-inflammation and endothelial injury as a direct consequence of intracellular SARS-CoV-2 infection and the ensuing host inflammatory response. Injury to endothelial cells thus contributes to the release of cfDNA, which then promotes coagulation, leading to the widespread formation of microthrombi, provoking microcirculatory failure or large-vessel thrombosis and consumption of PLTs in thrombi. 35 The cfDNA at a cut-off of ≥17.31 ng/µl and an AUC of 0.96 yielded a specificity of 73% and a sensitivity of 93% (P < 0.001) to differentiate severe from non-severe cases. The NLR at a cut-off of ≥3.1 and an AUC of 0.98 yielded a specificity of 93% and a sensitivity of 91% (P < 0.001); ALC at a cut-off of ≥1.5 × 10³/mm³ and an AUC of 0.92 yielded the lowest sensitivity (76%) with a specificity of 93% and P < 0.001, which indicates that cfDNA is the most sensitive of the studied markers in discriminating severity, proposing it as a monitoring marker and a target of therapy.
This study was limited due to lack of determination of tissues of origin of the cfDNA; this needs to be addressed in future studies. Despite this limitation, our study proposed a minimally invasive severity indicator which can provide immediate insights into the dynamics of COVID-19. The sample size may not be enough to generalize our findings. Nonetheless, we compared non-hospitalized with hospitalized patients. More comprehensive investigations are therefore needed to confirm and further refine the observations reported herein. Nevertheless, the conclusions of this study are consistent with the conclusions of other scholars.
Conclusion
Excessive release of cfDNA can serve as a sensitive COVID-19 severity predictor. There is an association between excessive release of cfDNA and NLR increase and T-cell percentage down-regulation. cfDNA level can be used as a strategy for stratification and monitoring of COVID-19 patients. Altered distribution of lymphocyte subsets is a feature of COVID-19. The study proposes cfDNA as a marker for personalized prediction, a monitoring biomarker for COVID-19 severity, and as a target for novel therapeutic interventions in COVID-19. The output data of the ROC curve showed an accepted discriminative power of cfDNA, NLR and ALC to differentiate between severe and non-severe infections with COVID-19.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Dark Matter that can form Dark Stars
The first stars to form in the Universe may be powered by the annihilation of weakly interacting dark matter particles. These so-called dark stars, if observed, may give us a clue about the nature of dark matter. Here we examine which models for particle dark matter satisfy the conditions for the formation of dark stars. We find that in general models with thermal dark matter lead to the formation of dark stars, with few notable exceptions: heavy neutralinos in the presence of coannihilations, annihilations that are resonant at dark matter freeze-out but not in dark stars, some models of neutrinophilic dark matter annihilating into neutrinos only and lighter than about 50 GeV. In particular, we find that a thermal DM candidate in standard Cosmology always forms a dark star as long as its mass is heavier than about 50 GeV and the thermal average of its annihilation cross section is the same at the decoupling temperature and during the dark star formation, as for instance in the case of an annihilation cross section with a non-vanishing s-wave contribution.
Introduction
The first stars, also referred to as Population III stars, are the first luminous objects in the Universe. They contribute to the reionization of the interstellar medium, they provide the heavy elements (metals) that eventually become part of the later generations of stars, and they may be the seeds of the very massive black holes observed in quasars.
It was shown in [1,2,3] that the first stars to form in the Universe may be powered by the annihilation of dark matter particles instead of nuclear fusion. These dark-matter powered stars, or dark stars for short, constitute a new phase of stellar evolution. Besides the assumption that dark matter is made of weakly interacting massive particles (WIMPs) that can self-annihilate into ordinary particles, three conditions are necessary for the formation of a dark star.
The first condition is that the density of dark matter at the location of the (proto)star must be high enough for dark matter to efficiently and rapidly annihilate into ordinary particles, releasing a large amount of energy. The first stars are believed to form at the center of dark matter halos when the Universe was young (redshift z ∼ 10-50) and denser than today. Not only was the dark matter density at the center of those early halos high, but, as the baryonic gas contracted into the first protostars, more dark matter was gathered around the forming object by the deepening of the gravitational potential (gravitational contraction). Cosmological parameters and the evolution of the gas density completely determine the resulting density of dark matter at the location of the first protostars. Analytic and numerical evaluations [1,2,4] lead to a resulting density which is high enough to satisfy the first condition for the formation of a dark star.
The second condition is that a large fraction of the energy released in the dark matter annihilation must be absorbed in the gas that constitutes the (proto)star. The fraction f_Q of annihilation energy deposited into the gas depends on the nature of the annihilation products. Typical products of WIMP annihilation are charged leptons, neutrinos, hadrons, photons, W and/or Z bosons, and Higgs bosons. The latter (W, Z, and Higgs) decay rapidly into leptons and hadrons. The hadrons themselves (which are mostly charged and neutral pions) decay rapidly into charged leptons, neutrinos, and photons (although a small number of stable particles like protons can also be produced). After ∼ 10⁻⁸ seconds, all unstable elementary particles, including the muon, have decayed away, and only protons, electrons, photons and neutrinos survive. Protons have a large scattering cross section with the protostar medium and are quickly absorbed. Electrons and photons can ionize the medium and/or generate electromagnetic showers. For WIMPs with mass m ≳ 0.5 GeV, electromagnetic showers are the dominant process. By the time the third condition for a dark star (below) is satisfied, the protostar has a diameter of more than 40 radiation lengths, implying that all the energy released in protons, electrons, and photons is absorbed inside the protostar. Only the fraction of energy carried away by the neutrinos is lost for what concerns a dark star.
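Because only the neutrino fraction of the annihilation energy escapes the protostar, f_Q can be thought of as one minus the energy-weighted neutrino yield summed over annihilation channels. The sketch below illustrates this bookkeeping with invented branching ratios and neutrino energy fractions; the value f_Q ≃ 2/3 adopted in [1] comes from MSSM simulations, not from these numbers.

```python
# Illustrative bookkeeping for f_Q: fraction of annihilation energy deposited in
# the gas, i.e. not carried away by neutrinos. All numbers below are assumptions.
channels = {
    # channel:   (branching ratio, fraction of channel energy lost to neutrinos)
    "b bbar":    (0.60, 0.10),
    "tau+ tau-": (0.20, 0.40),
    "W+ W-":     (0.15, 0.25),
    "nu nubar":  (0.05, 1.00),
}

lost_to_neutrinos = sum(br * f_nu for br, f_nu in channels.values())
f_Q = 1.0 - lost_to_neutrinos
print(f"f_Q ~ {f_Q:.2f}")
```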
The third condition for the formation of a dark star is that the heating of the (proto)star gas arising from the dark matter annihilation energy must dominate over any cooling mechanism that affects the evolution of the (proto)star. In [1], it was shown that the dark matter heating rate Q_DM, in energy deposited per unit time and unit volume, is given by the expression Q_DM = f_Q σv_ds ρ²/m, where ρ is the dark matter density inside the (proto)star, which is determined by the cosmological model, and σv_ds is the average value of the dark matter annihilation cross section σ times the WIMP relative velocity v inside a dark star. To the extent that electromagnetic showers are generated, i.e. for m ≳ 0.5 GeV, all dark star properties depend on the particle physics model only through the quantity f_Q σv_ds/m. Ref. [1] fixed the annihilation cross section to σv_ds = 3 × 10⁻²⁶ cm³/s and examined a range of WIMP masses m from 1 GeV to 10 TeV. In addition, Ref. [1] assumed f_Q = 2/3, based on simulations of neutralino dark matter annihilation in the Minimal Supersymmetric Standard Model (MSSM). For this range of Q_DM, they compared the heating and cooling rates along protostar evolution tracks from [5], and concluded that there is a time during the evolution of the protostar at which the dark matter heating dominates over all cooling rates. This finding led to the realization that dark stars may be possible.
In this paper, we examine the possible values of Q DM for a large selection of particle physics models, and verify if the third condition above is satisfied in these models. We find that not all particle dark matter models lead to the formation of dark stars, although the models that do not form dark stars are either tuned to resonant annihilation or rather artificial. Figure 1: Condition for the formation of a dark star, in terms of the protostar gas density and temperature. The gray band shows possible evolution tracks of the protostar obtained through numerical simulations in a ΛCDM cosmology [5]. The red lines show critical curves on which the heating rate from dark matter annihilation equals the total cooling rate of the protostar gas. Critical curves are labeled by the value of f Q σv ds /m in units of cm 3 s −1 GeV −1 . A dark star forms at the intersection of a critical line with the gas evolution track. No dark star can form for f Q σv ds /m < 10 −32 cm 3 s −1 GeV −1 (solid critical line on the right).
The restriction imposed by the third condition for dark star formation is best expressed in terms of a condition on the quantity f_Q σv_ds/m. Following [1], we have computed the critical lines in the gas temperature-density plane at which the heating rate from dark matter annihilation equals the total cooling rate. These lines are shown in Figure 1 for a wide range of values of f_Q σv_ds/m, from 10⁻¹⁸ cm³ s⁻¹ GeV⁻¹ to 10⁻³² cm³ s⁻¹ GeV⁻¹. Below the latter value, the heating-cooling critical line no longer intersects the thermodynamic track of the protostellar gas, indicated by the gray band obtained through numerical simulations of the formation of the first stars in a ΛCDM cosmology [5]. In other words, for f_Q σv_ds/m < 10⁻³² cm³ s⁻¹ GeV⁻¹, the protostar is expected to contract to a regular Population III star powered by nuclear fusion without passing through the dark star phase. At the other side of the f_Q σv_ds/m range, the critical line reaches a limiting curve given by the vertical line labeled f_Q σv_ds/m = 10⁻¹⁸ cm³ s⁻¹ GeV⁻¹. Larger values of f_Q σv_ds/m give the same vertical line. Thus, as expected, if the annihilation rate is large, a protostar passes through the dark star phase. Therefore the third condition for the formation of a dark star is f_Q σv_ds/m ≳ 10⁻³² cm³ s⁻¹ GeV⁻¹. The choice σv_ds = 3 × 10⁻²⁶ cm³/s in [1] was motivated by the assumption that the dark matter WIMPs are produced thermally in the early Universe. That is, the WIMPs are generated in matter-antimatter collisions at temperatures higher than T_fo ∼ m/20, which is the temperature after which WIMP production "freezes out" and the comoving WIMP number density remains (approximately) constant. Ref. [1] used the simple inverse-proportionality relation between the present WIMP density Ω_χ and the annihilation cross section σv_fo at the time of WIMP freeze-out, Ω_χ h² ≃ 3 × 10⁻²⁷ cm³ s⁻¹ / σv_fo. Furthermore, ref. [1] simply assumed that the velocity-averaged annihilation cross section times relative velocity at the time of freeze-out and in a dark star have the same value, σv_ds = σv_fo. In reality, the relation between Ω_χ and σv_fo is more complex, and in addition σv_ds may differ from σv_fo because σv may depend sensitively on the WIMP velocity v. In this regard, we notice that the average WIMP speed at freeze-out is of the order of 0.2c, whereas inside a dark star it is of the order of the orbital speed (GM/R)^{1/2}, namely ∼ 30 km/s for a newly-born 1-M⊙ dark star of 1 AU radius or ∼ 300 km/s for a mature 600-M⊙ dark star of 5 AU radius. A neutralino in the MSSM provides an example of a more complex relation between Ω_χ and σv_fo. At the same time, it allows the direct evaluation of both σv_ds and f_Q, and in general it has σv_ds ≠ σv_fo. Section 2 explores this case.
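The two ingredients above, the approximate thermal-relic relation and the formation condition f_Q σv_ds/m ≳ 10⁻³² cm³ s⁻¹ GeV⁻¹, can be combined into a quick model check, sketched below; the relic-density formula is only the order-of-magnitude estimate quoted in the text and ignores the complications (p-wave terms, resonances, coannihilations) discussed in the following sections.

```python
def omega_h2(sigma_v_fo: float) -> float:
    """Approximate relic density from the inverse-proportionality relation."""
    return 3e-27 / sigma_v_fo          # sigma_v_fo in cm^3/s

def forms_dark_star(f_q: float, sigma_v_ds: float, m: float) -> bool:
    """Third condition: f_Q * sigma_v_ds / m >= 1e-32 cm^3 s^-1 GeV^-1."""
    return f_q * sigma_v_ds / m >= 1e-32

sigma_v = 3e-26  # cm^3/s; for a pure s-wave cross section sigma_v_ds = sigma_v_fo
print(f"Omega_chi h^2 ~ {omega_h2(sigma_v):.2f}")                 # ~0.1
print(forms_dark_star(f_q=2/3, sigma_v_ds=sigma_v, m=100.0))      # True
```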
Kaluza-Klein dark matter is examined in Section 3, where it is concluded that in these models σv_ds generically tends to be larger than or comparable to σv_fo.
Leptophilic models of dark matter proposed to explain the PAMELA positron excess and the Fermi and HESS cosmic-ray electron-positron data provide another example in which σv ds may not be the same as σv fo . They are examined in Section 4.
Finally, we push f Q down using dark matter particles that annihilate exclusively into neutrinos ("neutrinophilic" models). In these models, even if annihilation produces predominantly neutrinos that escape the forming star, W-and Z-bremsstrahlung processes may generate enough charged leptons to actually form a dark star. We examine this case in Section 5.
MSSM
Because of its many free parameters (more than 100), the MSSM provides a variety of examples in which the annihilation cross section in the dark star differs from the annihilation cross section at the time of freeze-out, or more precisely σv_ds ≠ σv_fo.
There are several ways in which the equality σv_ds = σv_fo can be violated in the MSSM [6]. First, the quantity σv may depend on the relative velocity v. This includes three cases: (i) p-wave annihilation, in which σv = a + bv² is dominated by the bv² term at freeze-out (here a and b are constants); (ii) resonant annihilation, in which σv follows a Breit-Wigner resonance profile characterized by constants δ, γ and c; and (iii) threshold annihilation, in which an annihilation channel is kinematically accessible at freeze-out but not in a dark star, thanks to the higher particle kinetic energies at freeze-out. Second, the annihilation reactions that determine the freeze-out time may be unrelated to the neutralino-neutralino annihilation that occurs inside a dark star, in that the freeze-out temperature may be high enough to convert neutralinos into heavier supersymmetric particles that annihilate much faster (a phenomenon called coannihilation). It is then the annihilation cross section of the heavier supersymmetric particles that determines the neutralino relic density, and this cross section is in general not the same as the neutralino-neutralino annihilation cross section; thus σv_ds ≠ σv_fo. We remark in passing that resonant annihilation and coannihilations are not rare phenomena in the MSSM, and are actually essential to obtain neutralino dark matter in minimal supergravity or constrained MSSM models.
To illustrate these four cases (p-wave annihilation, resonant annihilation, threshold annihilation, and coannihilation), it is sufficient to consider a so-called effective MSSM (effMSSM) with eight free parameters fixed at the electroweak scale [21]. These parameters are: the CP-odd Higgs boson mass m A , the ratio of neutral Higgs vacuum expectation values tan β, the Higgs mass parameter µ, the gaugino mass parameters M 1 and M 2 , the slepton mass parameter ml, the squark mass parameter mq, the ratios Aτ /ml, At/mq and Ab/mq involving the trilinear couplings Aτ , At and Ab of the third generation of sleptons and squarks (the three ratios are assumed to be equal).
We consider the parameter region of the effMSSM in which the lightest neutralino is the lightest supersymmetric particle and its relic density Ω_χ is within the cosmological range 0.098 < Ω_χ h² < 0.122. In this region, we compute σv_ds as the value of σv at v = 0. For each point in this region we also compute f_Q as the fraction of annihilation energy that does not go into neutrinos. In obtaining f_Q, it is safe to assume that the particle cascades after annihilation develop in vacuum, since muons, taus and light mesons produced in the annihilation decay to neutrinos before being stopped in the dark star medium. Figure 2 shows the values of the combination f_Q σv_ds/m_χ obtained in the way just described, as a function of the neutralino mass m_χ. There are four classes of points: (i) the spread of points along the direction sloping down to the right is due to p-wave annihilation; (ii) the V-shaped feature at m_χ ∼ 45 GeV is due to resonant annihilation through the Z boson (other resonant annihilations, through the lightest Higgs boson of mass varying from 115 GeV to 120 GeV, are visible at m_χ ∼ 60 GeV); (iii) the "fingers" of points dropping from the p-wave band arise from threshold annihilation; and (iv) the shaded region to the right of m_χ ∼ 100 GeV corresponds to possible coannihilation with staus (the dashed line shows the similar boundary for sneutrino coannihilations). These four cases are described in the following.
In p-wave annihilation, the dominant contribution at freeze-out comes from the p-wave term bv² in σv. The p-wave contribution to σv, which is instrumental in providing the correct neutralino relic density, is suppressed as far as the evolution of the dark star is concerned. In fact, for a newly-born 1-M⊙ dark star, one has (v_ds/v_fo)² ∼ 2 × 10⁻⁹. Thus in this case σv_ds ≃ a while σv_fo ≃ b⟨v²⟩ ≃ b/20, and in general they differ. Their exact ratio depends on the particle physics parameters contained in the coefficients a and b. In our effMSSM scan, p-wave annihilation gives rise to a spread in f_Q σv_ds/m of about one order of magnitude (band of points sloping down to the right in Figure 2).
The Z resonance at m_χ ∼ 45 GeV provides an example of resonant annihilation. The resonant part of the neutralino-neutralino annihilation cross section is given by a Breit-Wigner expression in which β_f is the speed of the final products in units of the speed of light, and g_eff contains the coupling constants and the mixing angles of the neutralinos and of the final particles involved. The velocity dependence of (σv)_Z can be obtained by writing s = 4m_χ²(1 + v²): neglecting the mass of the final products, the resonant denominator becomes proportional to (δ + v²)² + γ², where δ = 1 − m_Z²/(4m_χ²) and γ = Γ_Z m_Z/(4m_χ²). On resonance, that is for 2m_χ = m_Z, or δ = 0 and γ = Γ_Z/m_Z = 0.0273, the velocity-averaged (σv)_Z has very different values at freeze-out (v ≃ 0.2c) and in a dark star (v ≃ 0). At freeze-out, the thermal average of (σv)_Z on resonance is, using T_fo = m_χ/20, (σv)_Z,fo = 0.97 g_eff⁴/m_χ². On the other hand, in a dark star on resonance one has, for v = 30 km/s, (σv)_Z,ds = 1.3 × 10⁻¹³ g_eff⁴/m_χ². While (σv)_Z,fo ∼ 3 × 10⁻²⁶ cm³/s provides the correct relic density, (σv)_Z,ds is thirteen orders of magnitude smaller. These very different values of σv_fo and σv_ds give rise to the V-shaped feature in Figure 2 around m_χ = m_Z/2 ∼ 45 GeV. (Similar resonant features through the lightest Higgs boson appear superposed at m_χ = m_H/2 ∼ 60 GeV.) Threshold annihilation occurs when the neutralino mass is slightly smaller than half the total mass of the final annihilation products in a specific channel (for example, χχ → WW). In this case, kinetic energy is required for the reaction to occur. This kinetic energy is available at the time of freeze-out thanks to the relatively high temperature of the neutralinos, but is not available at the lower velocities of neutralinos in a dark star. Therefore, annihilation into the specific channel (χχ → WW in the example) occurs at freeze-out but not in a dark star. The cross section σv_ds is thus smaller than σv_fo. In Figure 2 this is illustrated by the "fingers" of points dropping from the p-wave band at m_χ ∼ 80 GeV (the WW channel) and m_χ ∼ 190 GeV (the tt̄ channel). In neither case is the suppression of σv_ds severe enough to bring the points outside the parameter region in which dark stars can form.
For coannihilations, the relic density is determined by an effective annihilation cross section σv eff , which is an average of the annihilation cross sections of all reactions between the neutralino and the coannihilating particles. In minimal supergravity models, which are a subset of MSSM models, coannihilations occur in specific regions of the parameter space in which the stau τ̃ is very close in mass to the neutralino χ, and, with more tuning of the parameters, when the stop t̃ is very close in mass to χ. In the general MSSM, coannihilations may also occur between the lightest and second lightest neutralino, and between the neutralino and the chargino.
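A standard way to realize such an average is the Boltzmann-weighted (Griest-Seckel) form; the sketch below assumes this is what Eq. (2.3) approximates, and all masses and cross sections in it are purely illustrative.

```python
import math

def coann_weights(masses_GeV, dofs, T_GeV):
    """Relative equilibrium abundances r_i ~ g_i (m_i/m_1)^{3/2} exp(-(m_i - m_1)/T), normalized."""
    m1 = masses_GeV[0]
    w = [g * (m / m1) ** 1.5 * math.exp(-(m - m1) / T_GeV)
         for m, g in zip(masses_GeV, dofs)]
    total = sum(w)
    return [x / total for x in w]

def sigma_v_eff(masses_GeV, dofs, sv_matrix, T_GeV):
    """Effective cross section sum_ij <sigma v>_ij r_i r_j (Griest-Seckel average)."""
    r = coann_weights(masses_GeV, dofs, T_GeV)
    n = len(r)
    return sum(sv_matrix[i][j] * r[i] * r[j] for i in range(n) for j in range(n))

# Illustrative neutralino/stau system near freeze-out (T ~ m_chi/20); all numbers hypothetical.
m_chi, m_stau = 100.0, 105.0                 # GeV
sv = [[1e-28, 1e-27],                        # <sv>_{chi chi}, <sv>_{chi stau}
      [1e-27, 3e-25]]                        # <sv>_{stau chi}, <sv>_{stau stau}, in cm^3/s
print(f"sigma_v_eff = {sigma_v_eff([m_chi, m_stau], [2, 1], sv, T_GeV=m_chi/20):.2e} cm^3/s")
```

Even a modest Boltzmann weight on the strongly annihilating stau channel raises the effective cross section well above the χχ value alone, which is the mechanism the text exploits.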
For the sake of illustration in the context of dark stars, we focus on stau coannihilations, because the experimental lower bound on the stau mass (m τ̃ ≳ 98 GeV) is smaller than the lower bound on squark masses and thus the coannihilation region in parameter space is larger. In the case of stau coannihilations, the effective annihilation cross section is given approximately by Eq. (2.3). Here σv χχ , σv χτ̃ and σv τ̃τ̃ are the total annihilation cross sections for χχ → anything, χτ̃ → anything, and τ̃τ̃ → anything, respectively. Since reactions like τ̃τ̃ → τ τ are electromagnetic processes, σv τ̃τ̃ ∼ α 2 /m 2 τ̃ , which is much larger than the cross section for χχ → τ τ , σv χχ ∼ α 2 m 2 τ /m 4 τ̃ . In fact, for m τ̃ = 100 GeV (1 TeV), their ratio is approximately σv τ̃τ̃ / σv χχ ∼ m 2 τ̃ /m 2 τ ≈ 3 × 10 3 (3 × 10 5 ). Thus with an appropriate choice of the mass difference m τ̃ − m χ in Eq. (2.3), one can obtain an effective annihilation cross section three or more orders of magnitude larger than the χχ annihilation cross section, and a relic density three or more orders of magnitude smaller than without coannihilations. This argument allows us to estimate a lower limit on σv ds in a dark star using just the annihilation cross section for χχ → τ τ , without having to compute the relic density in the presence of coannihilations. The annihilation cross section for χχ → τ τ can be computed analytically and can be limited from below by keeping only the diagram with τ̃ exchange and choosing appropriate neutralino and stau mixings. In this way, we obtain the lower limit in Eq. (2.4). Then we set m τ̃ = m χ as appropriate for stau coannihilations. Moreover, bremsstrahlung (χχ → τ τ γ) gives a contribution to σv ds that exceeds the lower limit just computed in Eq. (2.4) at large neutralino masses. For m χ = m τ̃ we estimate the bremsstrahlung contribution in Eq. (2.5). In addition, we compute f Q by examining the fraction of energy that escapes into neutrinos in the decay chains of the τ lepton. Eqs. (2.4) and (2.5) are used to plot the shaded region to the right of m χ ∼ 100 GeV in Figure 2. For the bremsstrahlung of gamma rays, we take f Q = 1. Dark stars can form for m χ ≤ 880 GeV. If f Q ≤ 0.86, the bremsstrahlung does not play any role in determining the largest possible mass of a neutralino forming dark stars, and dark stars can form for m χ ≤ 830 GeV. The annihilation cross section in dark stars can be as low as the lower edge of this shaded region, while the correct relic density is obtained through a much larger effective annihilation cross section. We notice that in most of the shaded region dark stars can still form, except at the higher masses where the shaded region crosses the boundary of the area marked 'no dark star.' Other coannihilations may arise in the MSSM. For instance, one might have coannihilations between the neutralino and a sneutrino or a selectron or a smuon. These may lead to even smaller σv ds than the case of stau coannihilations we use as an example, and so lead to a situation in which dark stars do not form. The worst case for dark stars is coannihilation with sneutrinos, since the neutrinos generated in the final state escape from the forming protostar without depositing energy. Similarly to the neutrinophilic case discussed below, three-body annihilation channels need to be considered, in particular internal and final state bremsstrahlung of charged leptons. A simple estimate of Z bremsstrahlung in the final state gives Eq. (2.7), where we took f Q = 1/2 as a representative value.
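As a numerical check of the quoted ratio, taking m τ ≈ 1.78 GeV (an input supplied here):

```latex
\frac{\sigma v_{\tilde\tau\tilde\tau}}{\sigma v_{\chi\chi}}
 \sim \frac{m_{\tilde\tau}^2}{m_\tau^2}
 \approx \left(\frac{100}{1.78}\right)^{\!2} \approx 3\times 10^{3},
\qquad
\left(\frac{1000}{1.78}\right)^{\!2} \approx 3\times 10^{5}.
```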
In the process of virtual internal bremsstrahlung, the Z can take away a sizable fraction of the energy, and f Q may be sizable. Eq. (2.7) is plotted in Figure 2 as the dashed line near the edge of the shaded coannihilation region. In terms of dark star formation, this case is similar to coannihilation with the τ̃ . Dark stars can form up to m χ = 1 TeV. For a different choice of f Q , the crossing point is at m χ = (2f Q ) 1/3 TeV (m χ ≃ 800 GeV for f Q = 1/4). We therefore conclude that, except in very special cases, namely on top of the Z resonance or for coannihilations of heavy sleptons or sneutrinos (m χ ≳ 800 GeV), dark stars can form in the MSSM.
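A quick check of the quoted crossing-point scaling:

```latex
m_\chi = (2 f_Q)^{1/3}\,{\rm TeV}:
\qquad f_Q = \tfrac12 \;\Rightarrow\; m_\chi = 1~{\rm TeV},
\qquad f_Q = \tfrac14 \;\Rightarrow\; m_\chi = (1/2)^{1/3}~{\rm TeV}\approx 0.79~{\rm TeV}\approx 800~{\rm GeV}.
```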
Kaluza-Klein Dark Matter
If the Standard Model lives in five or six dimensions and the extra dimensions are compactified at a radius ≃ 1/TeV, the Kaluza-Klein (KK) number can be preserved in a consistent way with all the interactions involving an even number of odd KK number particles. In this setup, the lightest KK particle (LKP) is stable and can be a good dark matter candidate [7]. In particular, there are two interesting KK candidates for the dark matter: the KK photon (more precisely the KK modes of the U (1) Y gauge boson) and the KK neutrino.
The KK photon annihilates to quarks and leptons through the t-channel exchange of KK fermions, and its relic abundance is compatible with observation for a mass around 1 TeV. If the right-handed KK electron, muon, and tau are nearly degenerate (i.e. if the mass difference is ≲ 1%), the KK mass needed for the right relic density can drop to 700 GeV, since in this case the coannihilation cross section is very small compared to the self-annihilation cross section, leading to a smaller effective cross section.
As far as the KK neutrino is concerned, its annihilation cross section to quarks and leptons proceeds via t-or s-channel exchange of gauge bosons, while annihilations to gauge bosons are mediated by t-channel KK lepton exchange or s-channel gauge bosons. If one flavor of the KK neutrino is considered, the correct relic density is obtained for a mass around 1.5 TeV. Including three flavors the effective cross section becomes smaller due to coannihilations between different flavors and the mass leading to the correct relic density is around 1 TeV. An additional coannihilation process with the KK left-handed electron is also possible when the latter has a smaller mass splitting with the KK neutrino, but this effect is almost negligible.
In the case of KK dark matter, the s-wave annihilation cross section is always sizable both for the KK photon and for the KK neutrino, so that there is little difference between σv ds and σv fo . Moreover, the temperature in the dark star is very low compared to the freeze-out temperature, so coannihilations with other particles give no contribution to the effective cross section except in the case of exact degeneracy of the masses. In any case, since for KK dark matter the coannihilation cross section is either smaller than the one without coannihilation or the difference between the two is negligible, σv ds is expected to be always larger than or comparable to σv fo .
Two interesting exceptions to the scenario described above are resonant annihilation with level-2 KK particles [8] and coannihilation with the KK gluon [9]. In principle these effects can enhance the effective cross section at the freeze-out temperature, so that, if the latter is normalized to that of a thermal relic, the cross section in the dark star can be suppressed. In particular, the s-channel annihilation at one loop through the exchange of the second KK Higgs with mass m h (2) is discussed in [8]. The canonical value for a thermal relic σv fo = 3 × 10 −26 cm 3 s −1 is obtained for a KK photon with mass m KK ≃ 800 GeV if the relation m h (2) = 2m KK holds up to 5%. This new enhancement changes the cross section only by 10 to 20%. Therefore, the largest possible difference between σv fo and σv ds can be at most 10 to 20%. Unlike the MSSM, there is no p-wave suppression for the KK photon annihilation and the cross section at the freeze-out temperature is large enough due to the t-channel exchange of KK fermions. The same is true for KK neutrinos through the t-channel exchange of KK Z and KK W bosons. The KK Z and KK W bosons have similar masses, and in this case f Q ≪ 1 is not possible. Thus f Q σv ds /m KK ≥ 10 −32 cm 3 s −1 /GeV is safely satisfied. Coannihilation with the KK gluon is the last interesting possibility [9]. If the KK gluon and the KK photon are degenerate with an accuracy much less than 1%, the correct relic density is obtained for m KK ≃ 5 TeV. By comparing it to the case without coannihilation (m KK = 700 GeV), one can see that coannihilation with the KK gluon can enhance the effective cross section by a factor of 50. Even in the worst scenario in which coannihilation with the KK gluon is effective at the freeze-out temperature but is absent in the dark star, f Q σv ds /m KK ≥ 4 × 10 −32 cm 3 s −1 /GeV (assuming a conservative value f Q = 1/3) and the dark star can form.
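A numerical check of the worst-case KK-gluon estimate quoted above:

```latex
\frac{f_Q\,\sigma v_{\rm ds}}{m_{\rm KK}}
 \gtrsim \frac{1}{3}\cdot\frac{(3\times10^{-26}/50)~{\rm cm^3\,s^{-1}}}{5000~{\rm GeV}}
 = 4\times10^{-32}~{\rm cm^3\,s^{-1}\,GeV^{-1}}
 \;>\; 10^{-32}~{\rm cm^3\,s^{-1}\,GeV^{-1}}.
```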
As a consequence, if σv fo = 3 × 10 −26 cm 3 s −1 is imposed in order to explain the observed relic density, the condition for the dark star formation is always satisfied. We can conclude that KK dark matter that explains the observed relic density can always form dark stars.
Leptophilic Models
Leptophilic DM models [10] have recently become popular in order to explain simultaneously the excess in PAMELA positrons [11], the excess in Fermi-LAT electrons [12], as well as the excellent agreement between the observed antiproton spectrum and the corresponding standard expectation [13]. In order to explain the excesses, the mass of the DM is also constrained to be larger than about 100 GeV. This constraint might be more stringent if the electron FERMI-Lat data are taken into account, m > 400 GeV.
In leptophilic models the DM particles generically annihilate exclusively to charged leptons, either of only one type (electrons, taus or muons) or democratically to all the three families. It is also possible to consider annihilation to neutrinos, but for simplicity we will not consider this case, which would simply imply a straightforward generalization (see Section 5 for annihilation into neutrinos only). In particular, in this section we discuss the case of democratic annihilation to the three lepton families. In this case, using PYTHIA [20] one gets f Q ≃ 0.56, almost constant in the range 200 GeV ≲ m ≲ 2 TeV. In order to explain the PAMELA and Fermi-LAT excesses, large annihilation cross sections 10 −25 ≲ σv gal /cm 3 s −1 ≲ 10 −23 are needed at the velocity of DM particles in our Galaxy, v gal ≃ 300 km/s. Assuming σv gal = σv fo (s-wave annihilation), these values are up to two orders of magnitude larger than the value σv fo ≃ 3 × 10 −26 cm 3 s −1 compatible with a standard thermal relic abundance in agreement with observations. Clearly, the case of p-wave suppression of σv gal would be even worse. Several mechanisms have been devised in order to explain this discrepancy, such as a non-thermal production of the DM particles, a non-standard evolution history of the Universe or an enhancement of the annihilation cross section at low velocities (Sommerfeld effect). In this sense leptophilic models represent another interesting possibility in connection with the formation of a dark star: a very large annihilation cross section throughout the history of the Universe and/or at the low temperatures where dark stars are formed.
Assuming σv gal = σv ds , the previous discussion fixes the range of the combination f Q σv ds /m in a dark star for 100 GeV < m < 2 TeV. This is shown in Fig. 3, where, along with the intervals required to explain the PAMELA and Fermi/LAT excesses, the present constraints on the combination f Q σv ds /m are summarized as a function of m. In this plot we have assumed that σv does not depend on the temperature, in order to directly compare constraints relative to different epochs.
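For orientation, evaluating the combination with f Q ≈ 0.56 and σv ds = σv gal at the ends of the quoted ranges gives (illustrative arithmetic only):

```latex
\frac{f_Q\,\sigma v_{\rm ds}}{m}\bigg|_{\rm min}
 \sim \frac{0.56\times10^{-25}~{\rm cm^3\,s^{-1}}}{2000~{\rm GeV}}\approx 3\times10^{-29},
\qquad
\frac{f_Q\,\sigma v_{\rm ds}}{m}\bigg|_{\rm max}
 \sim \frac{0.56\times10^{-23}~{\rm cm^3\,s^{-1}}}{100~{\rm GeV}}\approx 6\times10^{-26}
 ~{\rm cm^3\,s^{-1}\,GeV^{-1}},
```

both well above the 10^-32 cm^3 s^-1 GeV^-1 level of Eq. (1.3).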
In particular, the thick and thin solid line closed contours show the range of values compatible with the PAMELA positron excess [11] and the FERMI-LAT e + +e − data [12], respectively. On the other hand, the two solid open lines represent conservative 2 σ C.L. upper bounds for f Q σv ds /M χ obtained from the flux of e + + e − observed by FERMI [12] (thin line) and the e + + e − flux measured by HESS [15] (thick line).
In the same Figure, we also plot with the dotted line the upper bounds on f Q σv ds /m obtained by comparing the expected gamma-ray flux produced by Inverse Compton (IC) scattering of the final state leptons to the diffuse flux of gamma-rays measured by FERMI at intermediate Galactic latitudes [17].
Finally the long and short dashed line shows the upper bound on f Q σv ds /m obtained by considering the imprint on the Cosmic Microwave Background Radiation (CMB) from the injection of charged leptons from DM annihilations at the recombination epoch [16].
It is clear from Fig. 3 that the sizable values of the combination f Q σv ds /m are compatible with the formation of a dark star (Eq. 1.3) when σv ds = σv gal , with the range m ≳ 1 TeV disfavored [14] by several constraints. On the other hand, by assuming a Sommerfeld enhancement of the annihilation cross section one may have σv ds ≳ σv gal depending on whether the enhancement effect is already saturated at the velocity v ≳ 10 km s −1 inside the dark star, possibly implying in this case an even more favorable situation for dark star formation. However, in the presence of a non-saturated Sommerfeld enhancement at the recombination epoch the CMB constraint could be stronger, since at z ≃ 1100 DM particles are slower (v ≃ 10 −8 c ≃ 10 −3 km s −1 [16]) than inside a dark star. In this case leptophilic DM could explain the PAMELA and Fermi/LAT excesses only by assuming a boost factor of astrophysical origin such as clumpiness. In any case, barring some specific cases such as the presence of resonances in the annihilation cross section associated with bound states [18], even in this circumstance the bound in Eq. (1.3) would be easily satisfied and a dark star would be formed.
We can thus conclude that leptophilic models also satisfy the condition to form a dark star.
Neutrinophilic Models
As discussed in the previous sections, the most popular examples of thermal dark matter candidates, namely the neutralino in the MSSM and the KK photon or KK neutrino in Kaluza-Klein DM, can easily produce a dark star, provided that the annihilation cross section at the temperature of the dark star is similar to that at freeze-out and is not suppressed by mechanisms such as p-wave annihilation or by the fact that the annihilation cross section is resonant at the freeze-out temperature but not inside the dark star. In this Section we wish to generalize this statement to the general case of a thermal DM candidate, discussing the minimal conditions to form a dark star once σv ds is normalized to the canonical value σv fo = 3 × 10 −26 cm 3 /s. Moreover, we will also briefly comment on the case σv ds < σv fo .
For this purpose, we need to give a general discussion of the energy fraction f Q released by DM annihilation into the gas, a quantity that is in general model dependent. Our approach to this problem is to consider in this Section the most conservative case of a DM candidate annihilating exclusively into neutrinos, i.e. "neutrinophilic" Dark Matter.
Naively one would expect that the energy fraction of neutrinophilic DM going into visible particles vanishes: DM annihilation would generate only neutrinos that would escape from the collapsing gas freely. If this were indeed the case, neutrinophilic dark matter annihilations would not be able to support a dark star phase. However Z and W bosons are expected to be produced from bremsstrahlung radiation of the final state neutrinos, so some visible energy, increasing with the mass of the DM particle, is expected to be produced by the decay of the Z and/or W. Electroweak bremsstrahlung in the annihilation of neutrinophilic DM has already been considered in the context of DM indirect detection [19].
Let us first assume that neutrinophilic dark matter has an annihilation cross section in the dark star equal to the cross section that provides a thermal relic density, namely σv ds = 3 × 10 −26 cm 3 s −1 . At tree level the branching ratio to ν ν̄ is 1 if bremsstrahlung radiation is neglected. However, when 2m > m W (or m Z ), on-shell production of W bosons (or Z bosons) dominates the bremsstrahlung process, which can be viewed as a three-body process followed by the subsequent decay of the W or Z gauge bosons. Since the visible energy fraction of the W- and Z-boson decays is of order 1, one finds an estimate of f Q in terms of E W , the energy of the W boson. In this case, as can be easily checked numerically, the condition for forming a dark star is always fulfilled. On the other hand, when 2m < m W , off-shell bremsstrahlung occurs, which for 2m ≪ m W can be treated as a four-body process in the limit of a 4-Fermi interaction.
The visible energy fraction f Q is plotted as a function of the DM mass m in Fig. 4. In this Figure we have used PYTHIA [20] to calculate the subsequent decay of the final state particles in the radiative correction, since the value of f Q critical for the formation of the dark star is near the threshold for the production of an on-shell W boson, where a narrow-width approximation or a 4-Fermi interaction is not reliable. From Fig. 4 we can conclude that a dark star can be formed if the mass of the neutrinophilic dark matter is larger than ∼ 50 GeV.
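The condition of Eq. (1.3) can be turned around into a minimal visible-energy fraction at a given mass. The short sketch below assumes the threshold 10^-32 cm^3 s^-1 GeV^-1 and σv ds = 3 × 10^-26 cm^3 s^-1, and reproduces the order of magnitude behind the ∼50 GeV statement when compared with the f Q (m) curve of Fig. 4.

```python
# Minimal f_Q allowed by the dark-star condition f_Q * <sigma v>_ds / m >= 1e-32 cm^3 s^-1 / GeV,
# taking <sigma v>_ds = 3e-26 cm^3/s as in the text; the threshold value is that of Eq. (1.3).
SIGMA_V_DS = 3e-26   # cm^3/s
THRESHOLD = 1e-32    # cm^3 s^-1 per GeV

def min_fQ(m_GeV):
    """Smallest visible-energy fraction that still allows a dark star at mass m_GeV."""
    return THRESHOLD * m_GeV / SIGMA_V_DS

for m in (10, 50, 100, 1000):
    print(f"m = {m:5d} GeV  ->  f_Q >= {min_fQ(m):.1e}")
# m = 50 GeV requires f_Q >= ~1.7e-5, roughly the level at which the electroweak
# bremsstrahlung curve of Fig. 4 crosses the constraint line.
```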
Figure 4: Energy fraction f Q released by neutrinophilic dark matter annihilation inside a dark star as a function of the DM particle mass m. The black curve represents f Q for an ideal neutrinophilic DM model where the W and Z bremsstrahlung effects are calculated using PYTHIA [20]. The blue dashed line shows the constraint given in Eq. (1.3). Only models above the blue line can form a dark star.

If the DM particle is a scalar or a Majorana fermion, it is possible that the annihilation cross section in the dark star is significantly different from the cross section at decoupling. A simple example is a scalar neutrinophilic dark matter particle φ annihilating through t/u-channel exchange of a heavy fermion. In particular, the s-wave contribution vanishes if the heavy fermion mediating the annihilation is a Dirac fermion of mass M and if it interacts chirally with the dark matter φ (of mass m) and the neutrinos. In this case the annihilation cross section is purely p-wave, and since the average velocity of the dark matter in the dark star is ∼ 30 km/s, one has σv ds = σv fo (v ds /v fo ) 2 ∼ σv fo v 2 ds (m/T fo ) ∼ 10 −32 cm 3 /s. So p-wave annihilating neutrinophilic DM cannot make a dark star.
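Putting numbers into the estimate above, with v ds ≈ 30 km/s ≈ 10^-4 c and T fo ≈ m/20:

```latex
\sigma v_{\rm ds} \sim \sigma v_{\rm fo}\, v_{\rm ds}^2\,\frac{m}{T_{\rm fo}}
 \approx 3\times10^{-26}\times\left(10^{-4}\right)^{2}\times 20
 \approx 6\times10^{-33}~{\rm cm^3\,s^{-1}} \;\sim\; 10^{-32}~{\rm cm^3\,s^{-1}}.
```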
Notice that in our evaluation of neutrinophilic DM annihilation we have not included internal bremsstrahlung of charged particles. Internal bremsstrahlung would increase the annihilation cross section of neutrinophilic dark matter, and extend the mass limit for the formation of dark stars to values below ∼ 50 GeV, but would depend on the specific particle physics model.
Strictly speaking, a purely neutrinophilic DM model is not natural, since in most specific scenarios other tree-level annihilation channels contributing to the visible energy are usually expected, and a contribution of the latter at the level of ∼ 10 −5 or larger would be sufficient to form a dark star. Thus neutrinophilic DM represents a limiting case, allowing us to show that, as long as σv ds = σv fo , any thermal DM candidate heavier than ∼ 50 GeV can lead to the formation of a dark star.
Conclusions
The first stars to form in the Universe may be powered by the annihilation of weakly interacting dark matter particles [1]. In this paper we explored several popular examples of thermal dark matter models in order to discuss whether they can satisfy the conditions for the formation of a dark star: the neutralino in an effective MSSM scenario; leptophilic models that might explain recent observations in cosmic rays; the KK photon and the KK neutrino in UED models; a conservative neutrinophilic model where the dark matter particles annihilate exclusively to neutrinos. We find that, in general, models with thermal dark matter lead to the formation of dark stars, with few notable exceptions: heavy neutralinos in the presence of coannihilations; annihilations that are resonant at dark matter freeze-out but not in dark stars; neutrinophilic dark matter lighter than about 50 GeV. In particular, the discussion of the latter conservative scenario allows us to conclude that a thermal DM candidate in standard Cosmology always forms a dark star as long as its mass is larger than ≃ 50 GeV and the thermal average of its annihilation cross section is the same at the decoupling temperature and during the dark star formation, as for instance in the case of a cross section with a non-vanishing s-wave contribution.
Therefore, we can conclude that the formation of a first generation of stars powered by dark matter annihilation is an almost inevitable consequence of thermal dark matter when a standard thermal history of the Universe is assumed and if the mechanism of Ref. [1] is at work. So a dark star is always there whenever there is thermal dark matter.
Education in the Anthropocene: assessing planetary health science standards in the USA
The environmental crises defining the Anthropocene demand ubiquitous mitigation efforts, met with collective support. Yet, disengagement and disbelief surrounding planetary health threats are pervasive, especially in the USA. This scepticism may be influenced by inadequate education addressing the scope and urgency of the planetary health crisis. We analysed current K-12 science standards related to planetary health throughout the USA, assessing their quality and potential predictors of variation. While planetary health education varies widely across the USA with respect to the presence and depth of terms, most science standards neglected to convey these concepts with a sense of urgency. Furthermore, state/territory dominant political party and primary gross domestic product (GDP) contributor were each predictive of the quality of planetary health education. We propose that a nation-wide science standard could fully address the urgency of the planetary health crisis and prevent political bias from influencing the breadth and depth of concepts covered.
Introduction
Global environmental change due to overpopulation, overexploitation of resources, climate change, biodiversity loss and interrelated factors endanger the future of humanity [1,2].Their effects include food and water crises, health declines and increasing rates of natural disasters [3][4][5].Moreover, these effects are disproportionately pronounced for impoverished and underprivileged communities [6][7][8] and will continue to intensify in magnitude and inequity without rapid, ubiquitous intervention [9].Mitigating the catastrophic impacts of anthropogenic planetary change will require informed, urgent and collective action.Hence, comprehensive education of concepts surrounding human influences on the biosphere is crucial, especially in countries which have the largest influence on the biosphere, such as the USA [10].Without proper understanding of global health crises, disengagement and disbelief surrounding planetary health threats ensue, posing serious barriers to solutions as these represent key predictors of individual and collective action [11].Adequate public school education on planetary health topics offers a critical tool to promote widespread support for mitigative action by informing citizens [12], fostering climate change concerns among parents [13] and ultimately stimulating collective sustainable behaviours [14,15].
The nascent field of planetary health seeks to address the consequences of anthropogenic environmental change for 'the health of human civilization and the state of the natural systems on which it depends' [16].While many universities in the USA offer curricula on planetary health and related subjects, only a minority of USA citizens (34.6%) attain a bachelor's degree, whereas most (89.7%)successfully complete secondary education [17].Though education on planetary health topics like climate change has mixed results among adults [18], education fosters concern and mitigation behaviours in adolescents [19], which can positively influence parental beliefs and actions [13].Moreover, universally accessible education will empower and reduce inequities for communities most affected by planetary health threats [6,8] and achieve solidarity from those who are less impacted [20].
It is essential that every student receives an unbiased education on planetary changes that characterize the Anthropocene.Therefore, teaching a comprehensive and frequently renewed understanding of these subjects in public schools should be an objective of the USA education system.Several studies have sought to assess if and to what degree subjects related to planetary health are being taught in USA public schools.Research evaluating how topics such as evolution [21], sustainability [22], ecology [23] and global climate change [24] are portrayed in state standards and curricula identify high variation among states in how thoroughly concepts are portrayed and discussed.In one study of high school textbooks, the language used to describe research on climate change was found to be vague and often contained no explicit cause-effect language that connected human activities with climate change [25].In many cases, scientists' views on human-induced climate change are framed as controversial or portrayed with uncertainty and doubt.A set of white papers recently produced by the National Center for Science Education reviewed how climate change is addressed in all 50 state science standards, and found that some standards even ask students to 'debate the issue', serving as a means to bring non-evidence-based perspectives into science classrooms [26].Similarly, work assessing how specific topics such as sustainability and the environment are portrayed in the Next Generation Science Standards (NGSS), a set of national standards adopted by approximately one-third of all states and territories, identified abstract language that portrayed the environment as a loosely defined entity rather than an interconnected set of complex biological systems which include humans [27,28].Together, these studies call into question the quality and consistency of sustainability-focused science education in the USA and highlight the need for further investigation.
Here, we evaluate the capacity of the USA K-12 public education system to prepare students to understand, cope with, and help mitigate the current trajectory of planetary health.Using a comprehensive list of major concepts and issues, we gauged the scope of planetary health education in USA science standards.We measured planetary health education quality using key terms indicative of planetary health concepts for all USA state/territory science standards based on: (i) the depth of term presence, (ii) the degree to which terms are described as having anthropogenic causes and/or effects (i.e.human interactions), and (iii) the level of urgency presented with relevant terms.While variation in language use surrounding planetary health concepts in education standards have previously been assessed on a nation-wide scale [24,29], little attention has been paid to attempt to identify potential state-level political and economic drivers of this variation.Here, we used measures of state dominant political party and economic factors to identify state characteristics predictive of planetary health education quality across the USA.We also evaluated how the NGSS performed in comparison to those developed by individual states.
Methods (a) State science standards and metadata
We evaluated state science standards for USA public education as of July 2020 for the presence and framing of topics related to planetary health.Standards provide a crucial backbone for which teachers, textbook publishers, standardized test makers and others use to establish education goals [30].While most states and territories adhere to their own standards, 17 states, 1 USA territory and Washington D.C. have fully adopted the NGSS; a set of national standards meant to improve and unify USA science education developed by Achieve, a nonprofit education organization, in collaboration with the National Research Council (NRC) and other partners [31].The current science standards for all states and territories for which standards could be located (i.e.American Samoa, Guam and Puerto Rico), as well as Washington D.C., were compiled and categorized into those that use NGSS and those that follow their own standards.States were also characterized by the dominant political party of their state legislature the year of science standard adoption [32].Further, we recorded the geographical region [33], major economic industry as determined by the primary GDP contributor [34], level of climate change preparedness [35] and average household income [36] for each state and territory.All science standards, raw data and code are available via Dryad (doi:10.5061/dryad.rn8pk0phr).
(b) Assessment terms and dimensions
All state-level standards, including NGSS, were evaluated for five fundamental concepts and 10 major issues chosen based on terms we found to be critical for a comprehensive understanding of planetary health and the current trajectory of the global climate crisis, following a review of recent literature published on sustainability [37], climate change impacts [3] and biodiversity loss [38][39][40] (electronic supplementary material, table S1). Terms were searched throughout the standard to identify the most descriptive text segments (sentences and paragraphs) associated with the topic and were ranked based on three separate categories: (i) Term Presence, (ii) Human Interaction and (iii) the Level of Urgency conveyed within standards (electronic supplementary material, table S2), hereafter referred to as 'dimensions'. We used 'Term Presence' to assess the level to which each search term or phrase was presented within science standards. This dimension was scored from 0 to 3, representing 'absent' (0), 'indirectly mentioned' (1), 'briefly mentioned' (2) and 'described in-depth' (3). We assessed 'Human Interactions', which measured the degree to which each term was conveyed as being affected by humans and/or affecting humans. This dimension was also scored from 0 to 3 based on the level of connectedness, ranging from 'absent' (0), to 'indirect' (1), indicating an indirect or implied connection to humans, 'unidirectional' (2), denoting that the standard explicitly tied the term or phrase to affecting humans or being affected by humans (but not both), and 'bidirectional' (3), indicating the term or phrase was both affected by and affecting humans. Lastly, we evaluated 'Level of Urgency', which quantified how pressing or critical the term was conveyed to be, based on language that indicated urgency or threat to human well-being. This dimension ranged from 0 to 2, with scores representing 'absent' (0), 'moderate or implied' (1), and 'high' (2). Examples of language indicative of each rank can be found in electronic supplementary material, §1. The five foundational concepts (ecology, evolution, biodiversity, ecosystem and ecosystem services) were not assessed for Level of Urgency, as this dimension was not relevant for these terms. For clarification on the methodological nomenclature used, refer to electronic supplementary material, §2.
(c) Reviewer assessment and statistical analyses
The education standards for each state were assessed by three separate, randomly assigned reviewers.Randomized assignment of state education standards was performed using the RANDARRAY function in EXCEL [41].Reviewers assessed each state education standard in its entirety as it related to biology, including general biology, Earth and planetary sciences and environmental sciences from each state.For NGSS, both the published standards as well as the NRC's K-12 Framework for Science Education from which they were developed [42] were assessed.Final ranks for each term within each standard were calculated as the average rank of the three reviewers.To account for differences between reviewers for Term Presence, the variance was calculated and any ranks with a variance of 0.667 or higher were re-evaluated by the reviewers.Mean dimension scores (i.e.Mean Term Presence Score, Mean Human Interactions Score and Mean Level of Urgency Score) were calculated for each standard as the per cent of the total possible sum of all ranks within each dimension across all terms.
Composite scores were also calculated for each standard as the average of all mean dimension scores (see electronic supplementary material, table S3 and §3 for more details). Mean dimension scores as well as composite scores were also calculated for each term across all unique standards (i.e. all NGSS states considered as an individual standard; electronic supplementary material, table S4).
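As an illustration of the scoring arithmetic described above, the following sketch uses hypothetical reviewer ranks; the dimension names, maximum ranks and the 0.667 variance cutoff follow the description in the text, while the data values are made up.

```python
import statistics

# Hypothetical ranks (0-3, or 0-2 for urgency) from three reviewers for a few terms in one standard.
ranks = {
    "climate change": {"term_presence": [3, 3, 2], "human_interactions": [2, 3, 2], "urgency": [1, 1, 0]},
    "biodiversity":   {"term_presence": [2, 2, 2], "human_interactions": [1, 2, 1], "urgency": None},  # foundational concept: no urgency rank
}
MAX_RANK = {"term_presence": 3, "human_interactions": 3, "urgency": 2}

def mean_dimension_scores(ranks):
    """Per-dimension score: sum of mean reviewer ranks over terms, as % of the maximum possible sum."""
    scores = {}
    for dim, top in MAX_RANK.items():
        terms = [t for t in ranks.values() if t[dim] is not None]
        achieved = sum(statistics.mean(t[dim]) for t in terms)
        scores[dim] = 100.0 * achieved / (top * len(terms))
    return scores

def flag_for_rereview(ranks, cutoff=0.667):
    """Flag Term Presence ranks whose between-reviewer variance reaches the cutoff."""
    return [term for term, t in ranks.items()
            if statistics.variance(t["term_presence"]) >= cutoff]

dims = mean_dimension_scores(ranks)
composite = sum(dims.values()) / len(dims)
print(dims, f"composite = {composite:.1f}%", flag_for_rereview(ranks))
```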
We performed restricted maximum-likelihood linear mixed models, using the statistical package ASREML-R in R version 4.0 [43], to determine which state characteristics were predictive of variation in mean dimension scores among individual state and territory standards.Modelling was done iteratively, first with all hypothesized predictive factors run individually as a fixed factor, while all other factors were treated as random factors.Wald's significance tests were performed on individual models to identify factors that significantly predicted mean dimension scores (see electronic supplementary material, table S5 for model details).
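The same modelling step could be mimicked outside ASREML-R; the sketch below uses Python's statsmodels MixedLM as a stand-in (a substitution of ours, not the authors' pipeline), with a hypothetical data layout in which dominant political party is the fixed factor of interest and geographical region is a random grouping.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical composite scores with dominant party and region (made-up values).
df = pd.DataFrame({
    "composite": [73.8, 20.8, 55.0, 61.2, 48.3, 70.1, 52.5, 66.4],
    "party":     ["R", "R", "D", "D", "N", "D", "R", "D"],
    "region":    ["South", "South", "West", "Northeast",
                  "Midwest", "West", "Midwest", "Northeast"],
})

# Fixed effect of interest: dominant political party; random intercept: region.
model = smf.mixedlm("composite ~ C(party)", data=df, groups=df["region"])
result = model.fit()
print(result.summary())  # z-tests on the fixed-effect coefficients stand in for the Wald tests
```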
(a) Planetary health education among states and territories
Composite scores for state/territory science standards varied widely (figure 1a; electronic supplementary material, table S3), ranging from 20.8% (North Carolina) to 73.8% (Mississippi). Composite scores for terms (figure 2; electronic supplementary material, table S4) also exhibited high variance, ranging from 31.6% (endangered species) to 91.8% (ecosystem). Mean dimension scores for state standards showed strong positive correlations (electronic supplementary material, figure S1), with Term Presence and Human Interactions showing the strongest correlation (adjusted R 2 = 0.671), followed by Human Interactions and Level of Urgency (adjusted R 2 = 0.625), and finally Term Presence and Level of Urgency (adjusted R 2 = 0.480). These correlations between the three dimensions suggest that the inclusion of a term within a science standard also indicates that the term is more likely to be presented with some degree of human interconnection and urgency. While any given individual dimension is thus informative, considering all three dimensions together provides a measure of the overall quality of planetary health education. Tests for significant interactions between mean scores and standard lengths indicated no significant relationships (electronic supplementary material, figure S2).
(b) Interrater reliability
We assessed interrater reliability using Krippendorff's α with data defined as 'ordinal'.Bootstrapped values were calculated using 20 000 replicates along with 95% confidence intervals using the ICR R package [44].Interrater reliability was high, with a 95% confidence interval between 0.87 and 0.89 (electronic supplementary material, figure S3).
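An equivalent reliability check can be run in Python with the third-party krippendorff package (a substitute for the ICR R package used here); the ratings below are invented for illustration, and the bootstrap step is omitted for brevity.

```python
# Interrater reliability analogous to the check described above, using the third-party
# `krippendorff` package (pip install krippendorff) instead of the ICR R package.
import numpy as np
import krippendorff

# Rows = the three reviewers, columns = term-level ordinal ranks; np.nan marks a missing rating.
reliability_data = np.array([
    [3, 2, 1, 0, 2, 3, 1],
    [3, 2, 1, 1, 2, 3, 1],
    [2, 2, 1, 0, 3, 3, np.nan],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha (ordinal) = {alpha:.3f}")
```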
(c) Trends across states and territories
Across all terms and individual state/territory standards, Term Presence was moderately high (figure 2; electronic supplementary material, figure S4), with approximately half of the terms being at minimum briefly mentioned (8 out of 15 scored ≥ 66.7%; mean = 72.2%).However, most standards did not describe terms in-depth, and only six had a mean Term Presence score above 83.3%(i.e. on average, more than half of the terms were described in-depth).Human Interactions scores were generally lower (mean = 62.3%) than for Term Presence, still about half of the terms were described as having either direct anthropogenic causes or effects (8 out of 15 scored ≥ 67%).The five foundational concepts had higher Term Presence than the 10 major issues (mean = 78.7%versus 69.0%) while the reverse was true for Human Interactions (mean = 54.9%versus 66.0%).
From our analyses, it is apparent that most concepts required for understanding planetary health challenges lacked language that conveyed urgency.Across all standards and terms, the average Level of Urgency score was 24.9%, the lowest among dimension mean scores (electronic supplementary material, table S4 and figure S4).The terms conservation, extinction and endangered species were associated with the lowest Levels of Urgency (respective mean; 16.7%, 12.0% and 10.6%) while waste/pollution received the highest mean score (41.2%).Many standards associated no urgency with major issues, with 123 of the total 360 terms across all standards receiving a Level of Urgency rank of 0 (electronic supplementary material, figure S4).
(d) Determinants of science standard scores
As major economic industry and dominant political party strongly influence individuals' views on planetary health [45], another goal of our study was to assess whether these factors play a role in driving differences in planetary health education among USA states and territories.Using restricted maximum-likelihood linear mixed models, we examined whether state dominant political party, economic industry, climate change preparedness, average household income or geographical location were significant predictors of mean dimension scores.
Dominant political party was the strongest predictor of the quality of planetary health education across most ranking metrics (figure 3). While it did not predict Term Presence (p = 0.09), it did predict differences in Human Interactions and Level of Urgency (p < 0.005; electronic supplementary material, table S5). We found that Democrat-led states received predicted mean scores 18% higher for Human Interactions and 33% higher for Level of Urgency than Republican-led states.
Nonpartisan states (n = 5) exhibited the largest degree of variation in prediction and typically had the lowest predicted mean compared with Republican- and Democrat-led states.
Of the states which scored below the mean composite score (i.e.59.3%), 86% were Republican-led states while 14% were Democrat-led (figure 1b).This trend is partially explained by the proportion of Democrat-led states that have adopted NGSS (65%) versus Republican-led states (14%), given that NGSS received the third highest composite score.Aside from dominant political party, state/territory major economic industry was the only other significant predictor of the quality of planetary health education (figure 4; electronic supplementary material, table S5).States and territories with economies dominated by agricultural industries had the highest predicted composite score (69.8%), while states with major manufacturing industries had the lowest predicted composite score (25.9%).Interestingly, it appears that states with industries that are dependent on environmental conditions (e.g.agriculture and tourism) have higher predicted composite scores (68.0%) compared with those with industries that are not dependent on environmental conditions (42.9%) (e.g.manufacturing and fossil fuels).
Discussion
We investigated whether topics essential for developing a robust understanding of the challenges of the Anthropocene are present in K-12 science standards across USA states and territories.We found that many topics are present in science standards and conveyed with conceptual depth.However, we found that nearly all major issues of planetary health assessed here are severely lacking any sense of urgency within science standards.Moreover, there was significant variation in the quality of planetary health education across USA states and territories, with dominant political party (i.e.Republican-versus Democrat-led) being the strongest predictor of science standards depth and breadth.While decades of research and discussion have implicated schools as a form of social control and education as inherently political [29], to our knowledge, we are the first study to statistically quantify and connect state dominant political party with variation in the depth and breadth of planetary health education in USA public schools and identify additional state characteristics (i.e.dominant economic industry) associated with planetary health education.
The lack of urgency surrounding aspects of planetary health education across the USA has been noted previously, particularly in reference to climate change [25]. Vague language and a lack of concrete discussion on the implications of global change leave students ill-equipped to cope with and mitigate threats to their health and livelihood. At present, a large proportion of the USA population, including both children and adults, fail to recognize the inextricable connections between sustainable ecosystems, human health and ultimately societal stability [46]. Neglecting to explicitly define the anthropogenic connections of planetary health and the urgency of its current prognosis does little to convey the deep significance human actions play in the sustainability of our own livelihoods and those of future generations. Our emphasis on urgency seeks to foster optimistic mitigatory and adaptive action within the USA.
Perhaps one explanation for the variation in how major planetary health issues in USA science standards are conveyed is political influence on standard development and/or implementation. Our findings suggest that partisanship of state and territory legislatures influences the quality of planetary health education in public school science standards. This relationship is likely a direct result of the heavy politicization of issues like climate change influencing state officials involved in the adoption of state-specific legislation, including public school education standards [47][48][49]. In addition to trends in planetary health education between Democrat- and Republican-led states, we observed possible indications of political influence in the process of education standard adoption in the language of several standards. For example, the South Dakota science standards stated that 'not all viewpoints can be covered in the science classroom' when referencing climate change and evolution and requested that 'parents engage their children in discussions' in order to allow students to 'draw their own conclusions'. The current process of standard adoption can result in politically biased standards influenced by non-experts, as has been seen in other science topics such as evolution [30]. Non-science-based political views extending into the classroom and thus shaping the beliefs of millions of Americans is a failure of the public education system. Politicization of scientific topics can drive science scepticism and in turn directly impact individual behaviour, often to the detriment of public health, a phenomenon which has contributed to the severity of the COVID-19 health crisis [50]. Removing measurable political bias from state science education standards is essential for addressing and surviving the challenges posed by the Anthropocene.
Economic industries unexpectedly appear to influence K-12 planetary health education standards.Our findings indicate that the major economic industry of the state (as characterized by GDP) was significantly associated with planetary health education quality (figure 4).Interestingly, there appears to be little research assessing the influence of major economic industry on state-level climate change education.However, studies have explored how industries' internal policies respond to the threat of climate change [51] namely, finding that most organizations select the path that is most like the status quo or 'business as usual', avoiding incorporation of more sustainable practices.Perhaps this trend explains the correlation between industry and education standards we observed, with states that benefit economically from maintaining status quo resisting higher-quality standards.Further, it is well known that industries differ in their vulnerability to climate change [52][53][54][55] and that specific occupations can shift ecopsychological views to be more concerned about climate change [56][57][58][59].Hence, policymakers in states with environmentally dependent industries (e.g.farming and tourism) may be more aware of the environmental degradation caused by these systems, which may in turn foster support for more comprehensive planetary health education.Alternatively, the negative environmental impacts of industries such as farming and technology may simply be more apparent to local educators, motivating them to incorporate more extensive planetary health standards.This apparent association between dominant economic industry and planetary health education warrants further investigation to determine the causal nature of the relationship.
As many Americans do not receive institutional education beyond high school [17], we cannot rely on universities to disseminate the cutting-edge knowledge of the impacts of anthropogenic environmental change. In order to better serve all students, we suggest enacting a unified science education standard across the USA which not only encompasses topics necessary for a comprehensive understanding of planetary health, but also presents them with the appropriate level of urgency while negating partisan influence on politicized issues. We maintain that ubiquitous adoption of the NGSS would be the best first step towards achieving this goal. In this study, NGSS had the third highest composite score among individual standards, ranking first in Human Interactions, fourth in Level of Urgency and sixth in Term Presence, and is therefore one of the highest performing standards in this analysis. NGSS was developed by a team of administrators, educators and researchers from 26 states evenly representing both major political parties (14 Democrat-led, 12 Republican-led) [22]. We found that topics that have been heavily politicized (e.g. evolution, climate change and sustainability) were presented in-depth and regularly shown to have human interconnections. Currently, 19 states and territories have adopted NGSS in full and another 13 directly reference them in their standards. Thus, full adoption of NGSS by the remaining 35 states and territories could curtail politicization of science concepts in addition to increasing planetary health education consistency and quality across the nation. We acknowledge that every state faces unique environmental challenges that warrant attention in science standards and that universal adoption of NGSS may seem unappealing to those states that have developed learning objectives dedicated to local issues (e.g. New York and Florida, among others). However, rather than hindering education on state-specific topics, we propose that adoption of NGSS would allow state-level educators to devote more resources to developing supplements dedicated to local issues. While NGSS would serve as a suitable starting point for a universal science standard, this analysis indicates it is still wanting in setting requirements for comprehensive planetary health education. We are not alone in this finding; a report released by the National Center for Science Education and the Texas Freedom Network Education Fund in 2020 identified similar trends among climate-change-related USA science education standards [26]. Specifically, both this study and the NCSE report assigned the highest score/grade to states that have not adopted NGSS; Mississippi in this study and Wyoming, Alaska, Colorado, New York and North Dakota in the NCSE report. These findings suggest that several states have incorporated a more thorough discussion of topics related to planetary health within their standards and may serve as examples for improving NGSS. The composite score for NGSS was 71.8%, indicating a need for improvement on the topics assessed, particularly in relaying urgency (mean Level of Urgency score was 45%). Topics such as habitat loss/degradation, endangered species and extinction had the lowest scores for this dimension in both NGSS and non-NGSS standards, indicating that these topics are not being presented with the urgency necessary for the alarming rates at which they are occurring. Moreover, the framework from which current NGSS standards were developed was published in 2011, meaning information on topics that are subject to very active research, such as climate change and biodiversity loss, requires frequent updating. The pace of planetary health degradation and coinciding research calls for updated standards and a reform process that functions at a similar pace. We suggest implementation of systems to allow for efficient dissemination and incorporation of up-to-date planetary health findings into K-12 science education standards. Enacting universal science standards would facilitate this process as the task of updating standards would be centralized to a single entity rather than dispersed among legislatures, ensuring that students who are expected to prosper in the Anthropocene will be literate in the planetary health issues they
will face.
As we have focused solely on state-level science standards, we acknowledge that they do not necessarily dictate classroom curricula across all districts.Some teachers may go beyond state standards regarding the topics under evaluation as a result of well-funded districts, ample professional development opportunities and/or personal values.Teachers that lack resources in their districts may not have the means to meet standards, regardless of their desire to produce wellprepared students.Educators may also face hostile local communities, causing them to avoid discussing potentially controversial topics in the classroom such as climate change, planetary health and evolution.Moreover, internal bias among teachers may influence how these topics are framed in the classroom or if they are even discussed at all [30,60].Systemic solutions such as a mandated, universal science standard would promote equity among teachers in their ability to comprehensively educate students on planetary health topics by obligating districts to adopt curricula that meet those standards.Doing so may also help bridge the gap in planetary health education between K-12 education and post-secondary institutions and promote awareness and activism among all students regardless of their level of education.
Enacting a unified science standard across the USA will not be a simple task.There is a great deal of work that goes into the development of standards and requires collaboration between educators, policymakers and scientists [61].Additionally, the quick and efficient implementation of updated standards would require educators to have access to professional development tools and training needed to teach novel curricula and best serve their students.While enacting this strategy may be a large undertaking, it would better equip students to deal with massive global issues which will need to be addressed within the next decade to truly affect current global trajectories [62].Once a leader of environmental policies and initiatives, many of the first environmental movements took place in the USA in response to intense industrialization and exploitation of natural resources in the nineteenth century [63].Today however, the USA lags behind China and India in renewable energy investment and falls far behind other developed nations including Iceland, Denmark, Norway and France in establishing and meeting climate change initiatives [64,65].The underperformance of the USA response to climate change could very well be connected to the lack of consistent and comprehensive planetary health education across the nation.The unrelenting pace of global change has created unprecedented urgency for all corners of society to adapt, especially in the field of education [66][67][68].While we cannot speak to global trends, assessing planetary health education in the world's second largest greenhouse gas emitter [69] is imperative for anticipating trends in the success of sustainability practices.We affirm that comprehensive, accessible and appropriately urgent public education on planetary health is indispensable for confronting the environmental challenges to come.While the task of updating science standards to keep pace with the current trajectory of planetary change is daunting, we hope that our findings will identify a way forward and begin conversations for decisive action.
Figure 1.Planetary health education quality across individual state/territory standards.Educational science standard quality across the United States (a) is represented by each state/territory composite score.The inset box shows composite scores for Washington, D.C. (top left; DC), American Samoa (top right; AS), Guam (bottom left; GU) and Puerto Rico (bottom right; PR).States and territories that have fully adopted NGSS are outlined in bright green.The deviation from the mean for each state/territory science standard composite score (b) is shown along with state dominant political party.
Figure2.The distribution of planetary health education quality across all terms and dimensions.Performance of individual terms for Term Presence (teal circles), Human Interactions (yellow triangles) and Level of Urgency (red squares) is represented by the mean dimension score for each term across all standards and the associated standard error.Also shown is the composite score for each term (dark blue diamonds).All states and territories that adopted NGSS in full were considered as a single standard.
Figure3.Political influence on state/territory science standards for planetary health education.Represented by least-squares (LS) means and the 95% CI of dimension scores across state/territory dominant political party the year state standards were adopted.LS-means and significance were calculated using a linear mixed effects model.Term Presence was not significantly correlated with the dominant political party of the state at time of implementation (Wald's test: p = 0.09), whereas Human Interactions and Level of Urgency were significantly correlated with dominant political party at the time of state standard implementation (Wald's test: p < 0.005).Nonpartisan states show the greatest degree of standard error of prediction (statistical model and results presented in electronic supplementary material, tableS5).
Figure 4. Dominant state industry influence on state/territory science standards for planetary health education. [Axis terms: manufacturing, banking, healthcare, fossil fuel, media, insurance, farming, technology, tourism, primary GDP industry.] Least-squares means and the 95% CI calculated from the ASREML-r model presented in electronic supplementary material, table S5. The figure highlights how state/territory science standard composite scores are predicted by the largest contributor to GDP. The largest GDP industry of each state was determined from data collected by the Bureau of Economic Analysis.
[…] topics necessary for a comprehensive understanding of planetary health, but also presents them with the appropriate level of urgency while negating partisan influence of politicized issues. We maintain that ubiquitous adoption of the NGSS would be the best first step towards achieving this goal. In this study, NGSS had the third highest composite score among individual standards, ranking first in Human Interactions, fourth in Level of Urgency and sixth in Term Presence, and is therefore one of the highest performing standards in this analysis. NGSS was developed by a team of administrators, educators and researchers from 26 states evenly representing both major political parties (14 Democrat-led, 12 Republican-led).
Anlotinib overcomes multidrug resistance of colorectal cancer cells via inactivation of the PI3K/AKT pathway
Background: Anlotinib is a multi-tyrosine kinase inhibitor that has been reported to have activity against colorectal cancer. However, the functional mechanisms by which anlotinib acts against drug-resistant colorectal cancer (CRC) have not been fully described; in particular, the potential mechanisms through which it inhibits proliferation and induces apoptosis remain largely unknown. Methods: MTT assays were used to detect cell viability and calculate the resistance index. Colony formation was used to evaluate the proliferation of resistant cells. DAPI staining was used to detect cell apoptosis morphologically, and Annexin V-FITC with PI staining was used to detect early and late-stage apoptosis. Cell cycle distribution was determined by flow cytometry. Transwell assays were performed to examine migration and invasion. Cyclin D1, survivin, CDK4, Bcl-2, Bax and changes in the PI3K/AKT pathway were detected by western blotting. The PI3K inhibitor LY294002, used alone or in combination with anlotinib, was employed to verify whether anlotinib inhibits drug-resistant CRC cells by lowering PI3K/AKT activity. Results: HCT-8/5-FU cells showed multidrug resistance, with resistance indices for 5-FU, ADM and DDP of 390.27, 2.55 and 4.57, respectively. Anlotinib inhibited the viability of HCT-8/5-FU and HCT-8 cells at 24 h and 48 h in a dose- and time-dependent manner. After treatment with anlotinib (0 μM, 10 μM, 20 μM and 40 μM) for 24 h, HCT-8/5-FU cells were sensitive to anlotinib, and their sensitivity at 24 h was greater than that of the parental cell line (HCT-8). Furthermore, anlotinib significantly reduced the number of cell colonies and had a marked inhibitory effect on the cell cycle, mainly by blocking the G1-to-S phase transition. Moreover, anlotinib down-regulated the expression of survivin, cyclin D1, CDK4, caspase-3, Bcl-2, MMP-2, vimentin, MMP-9 and N-cadherin, while up-regulating cleaved caspase-3, Bax and E-cadherin. Anlotinib inhibited the activity of the PI3K/AKT pathway and induced apoptosis in HCT-8/5-FU cells. Using LY294002, a specific PI3K inhibitor, we found that anlotinib can inhibit drug-resistant CRC cells by reducing PI3K and p-AKT activity and thereby inducing apoptosis. Conclusions: Anlotinib inhibited the proliferation and metastasis of, and induced apoptosis in, HCT-8/5-FU cells; the mechanism may be that anlotinib overcomes the multidrug resistance of colorectal cancer cells by inactivating the PI3K/AKT pathway.
Introduction
Colorectal cancer is a very common cancer associated with high mortality worldwide; its death rate ranks third among all malignant tumors [1][2][3][4]. In China, the incidence of CRC is increasing by an average of 4% to 5% each year [5]. In recent years, great advances have been made in therapeutic regimens for primary CRC, and median survival has lengthened significantly. Prognosis, however, remains poor, and the long-term overall survival rates of CRC have remained largely unchanged over the past two decades [6,7].
Currently, the main treatment for CRC is surgery. However, some patients with advanced-stage or recurrent CRC are not suitable for surgery and are instead treated with chemotherapy using 5-fluorouracil (5-FU), cisplatin (DDP), doxorubicin (ADM), vincristine (VCR) and other agents. Resistance to these drugs is one of the prime causes of the failure of clinical treatment. Accordingly, it is essential and pressing to explore novel therapeutic agents.
Anlotinib is a novel small-molecule, multi-target tyrosine kinase inhibitor that has been used to treat various types of cancer, including CRC, based on its known ability to block angiogenesis [8]. It has demonstrated strong inhibitory activity against many target receptors, for example PDGFR, FGFR and VEGFR [9][10][11]. Recently, many tumor cells have been shown to be sensitive to anlotinib [8]; however, the underlying mechanism has not been clearly elucidated. It is therefore essential to understand the intrinsic mechanism of anlotinib in the treatment of multidrug-resistant (MDR) colorectal cancer in order to improve its efficacy in clinical application.
Anlotinib hydrochloride preparation
Anlotinib hydrochloride was purchased from CTTQ (Chia Tai Tianqing Pharmaceutical Group Co., Ltd.).
Validation of cell resistance
An MTT assay was used to validate cell drug resistance. HCT-8/5-FU and HCT-8 cells in the logarithmic growth phase were seeded at a density of 1 × 10^5 cells/mL in 96-well culture plates (100 μL/well) and cultured at 37 °C in a 5% CO2 incubator. When the cells reached 50%-60% confluence, the original culture medium was removed and different drugs were added: 5-FU (0-25600 μM), ADM or DDP. For the colony formation assay, 1000 collected cells were inoculated in 6-well plates for 10 days. When colonies appeared in the wells of the culture plate, culture was stopped, the cells were washed three times with PBS and then fixed with 4% paraformaldehyde for 10 min. The paraformaldehyde was discarded, the cells were washed with PBS and stained with crystal violet for 15 min. Colonies were counted manually in three different fields.
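The resistance index referred to above is typically computed as the ratio of the IC50 of the resistant line to the IC50 of the parental line, each estimated from the MTT dose-response data. The following is a minimal Python sketch of that calculation, assuming a four-parameter logistic (Hill) fit; the concentrations and viability values are illustrative and are not the study's raw readings.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    # Four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

def fit_ic50(conc_um, viability_pct):
    conc = np.asarray(conc_um, dtype=float)
    viab = np.asarray(viability_pct, dtype=float)
    midpoint = (viab.max() + viab.min()) / 2.0
    ic50_guess = conc[np.argmin(np.abs(viab - midpoint))]   # rough starting value
    p0 = [viab.min(), viab.max(), ic50_guess, 1.0]
    params, _ = curve_fit(hill, conc, viab, p0=p0, maxfev=20000)
    return params[2]                                         # fitted IC50

# Hypothetical 5-FU viability readings (%) for parental and resistant cells.
conc = [1, 10, 100, 1000, 10000, 25600]
parental = [95, 80, 45, 20, 8, 5]
resistant = [99, 97, 92, 80, 55, 35]

ri = fit_ic50(conc, resistant) / fit_ic50(conc, parental)
print(f"Resistance index: {ri:.1f}")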
Cell cycle assay
Cells were treated as above. At the end of treatment, the cells were fixed with 70% pre-cooled ethanol at 4 °C overnight and washed three times with PBS, then stained with 500 μL PI/RNase A reaction solution in darkness for 30 minutes and analysed in the FL2 channel of a flow cytometer.
DAPI staining
Cells were treated as above. One millilitre of 4% paraformaldehyde was added to fix the cells for 10 min, then 1 mL of DAPI staining solution was added to each well and the plate was kept in the cell incubator. After staining, the cells were washed three times with PBS, then observed and photographed under a microscope (200×).
Early cell apoptosis assays
Cells were treated as above. Annexin V/PI staining was used to detect early apoptosis.
Cell treatment was the same as for the colony formation assay. The supernatant and cells were collected and resuspended in 500 μL binding buffer. Subsequently, 5 μL Annexin V-FITC and 5 μL PI were added to the cells, which were incubated in darkness for 15 min and analysed in the FL1 channel by flow cytometry.
Statistical analysis
Statistical analyses were based on a minimum of three independent experiments and were conducted with IBM SPSS Statistics 21. Comparisons were made against the control group, and P values of 0.05 or less were considered statistically significant.
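As an illustration of the comparisons described above (each treated group tested against the control, with P values of 0.05 or less taken as significant), the following SciPy sketch runs a one-way ANOVA across doses followed by unpaired t-tests of each dose against control. The study itself used SPSS; the triplicate viability values below are hypothetical.

from scipy import stats

control = [100.0, 98.5, 101.2]              # hypothetical triplicate viability (%)
doses = {
    "10 uM": [85.1, 82.4, 88.0],
    "20 uM": [61.3, 64.9, 58.7],
    "40 uM": [30.2, 27.8, 33.1],
}

f_stat, p_anova = stats.f_oneway(control, *doses.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

for label, values in doses.items():
    t_stat, p_val = stats.ttest_ind(values, control)
    flag = "significant" if p_val <= 0.05 else "not significant"
    print(f"{label} vs control: t = {t_stat:.2f}, p = {p_val:.4f} ({flag})")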
HCT-8/5-FU cells had multidrug resistance but were sensitive to anlotinib
An MTT assay was used to validate cell multidrug resistance and to determine the viability of HCT-8/5-FU and HCT-8 cells.
Anlotinib induced apoptosis of HCT-8/5-FU cells
To confirm whether anlotinib inhibited cell growth by causing apoptosis, we tested its pro-apoptotic activity in HCT-8/5-FU cells by DAPI and Annexin V/PI staining. Compared with control cells, anlotinib decreased the number of HCT-8/5-FU cells and induced dot-like apoptotic body formation in a dose-dependent pattern, as shown in Figure 3A. To confirm the pro-apoptotic effect of anlotinib, Annexin V/PI staining was analysed by flow cytometry. As shown in Figure 3B, cells were treated with varying concentrations (0, 10, 20 and 40 μM) of anlotinib, and the percentages of apoptotic cells were 3.47 ± 0.03%, 5.54 ± 0.77%, 8.02 ± 0.80% and 28.44 ± 2.54%, respectively. Moreover, the expression of the apoptosis-related proteins Bcl-2, Bax, caspase-3 and cleaved caspase-3 was examined. As shown in Figures 5A and 5B, treatment with 0, 10, 20 and 40 μM anlotinib up-regulated the expression of Bax and cleaved caspase-3 but inhibited Bcl-2 and caspase-3. Together, these data indicated that anlotinib's ability to sensitize HCT-8/5-FU cells may be due to its pro-apoptotic activity.
Anlotinib inhibited migration and invasion of HCT-8/5-FU cells
To assess the effect of anlotinib on the metastatic potential of HCT-8/5-FU cells, migration and invasion assays were carried out. Anlotinib inhibited the migration and invasion of HCT-8/5-FU cells in a dose-dependent fashion. As shown in Figures 4A and 4B, after treatment with 0, 10, 20 and 40 μM anlotinib the numbers of migratory cells were 106 ± 5, 35 ± 4, 9 ± 2 and 1 ± 1, respectively, and the numbers of invasive cells were 71 ± 4, 61 ± 4, 49 ± 2 and 8 ± 1, respectively. To investigate in greater depth the mechanism of the inhibitory activity of anlotinib against epithelial-mesenchymal transition (EMT), its effect on related proteins was examined. As shown in Figures 5A and 5C, anlotinib significantly decreased the expression levels of N-cadherin, MMP-2, MMP-9 and vimentin, but increased the expression of E-cadherin.
Anlotinib inhibited the activation of the PI3K/AKT pathway in HCT-8/5-FU cells
Phosphatidylinositol 3-kinase/protein kinase B (PI3K/AKT) is a common cancer-related signaling pathway associated with cell proliferation, apoptosis, survival, migration, invasion, metastasis and other intracellular transport functions during the origination and development of tumor cells [12,13]. To clarify the action of anlotinib on the PI3K/AKT pathway, we determined the expression of PI3K, AKT and p-AKT; the results in Figures 6A and 6B show that anlotinib significantly inhibited the PI3K/AKT pathway. Next, LY294002 (an inhibitor of PI3K) was used, and we found that LY294002 blocked the activity of PI3K and p-AKT, as shown in Figures 6C and 6D. These data indicated that anlotinib was able to sensitize HCT-8/5-FU cells because it could block the PI3K/AKT pathway.
Discussion
Despite recent advances in treatment options, colorectal cancer has remained a deadly disease for many years, with an increasing global incidence [14,15]. It is widely known that drug resistance, recurrence and metastasis of tumor cells remain the main reasons for chemotherapy failure. The multidrug resistance of tumor cells makes chemotherapy drugs lose their efficacy, which increases the difficulty of tumor treatment and reduces the survival rate of tumor patients [16].
Anlotinib is a new type of oral small-molecule, multi-target tyrosine kinase inhibitor (TKI). On May 9, 2018, the China Food and Drug Administration (CFDA) officially approved anlotinib hydrochloride as a third-line therapy for patients with advanced non-small-cell lung cancer (NSCLC) [26,27]. Anlotinib has been approved as an anti-tumor agent for several cancers, such as AMTC, MRCC, NSCLC and ASTS [28,29]. Currently, most research focuses on the anti-angiogenic effect of the drug on tumor cells and its therapeutic effect in non-small-cell lung cancer [9,10,26,27]. However, the effect of anlotinib on the proliferation and apoptosis of drug-resistant colorectal cancer cells has rarely been reported. Therefore, we believe that the study of anlotinib hydrochloride has great significance for multidrug-resistant tumor cells.
It is well established that many cell proliferation, survival, adhesion and migration functions are achieved through PI3K/AKT/ERK (MAPK) signaling pathways [30][31][32][33][34]. The PI3K/AKT signaling pathway is of significant importance for regulating CRC cell multiplication and EMT [35][36][37][38]. After activation, AKT, a key protein in the pathway, activates or inhibits its downstream target proteins through phosphorylation, playing a key part in regulating cell growth and proliferation as well as in inhibiting apoptosis [39][40][41]. In addition, studies have reported that the activation of AKT kinase is essential to a number of events in the metastatic pathway, including the tumor microenvironment, tumor immune escape, activation of proliferation, blocking of apoptosis and activation of angiogenesis [42].
With this knowledge in mind, we studied proliferation and apoptosis, reasoning that the PI3K/AKT signaling pathway would be particularly relevant during the migration and invasion of CRC cells. Hence, we investigated how anlotinib might regulate the invasion and migration of HCT-8/5-FU cells, with the PI3K/AKT signaling pathway participating in EMT-related protein expression, in order to determine the underlying signaling mechanisms. For further verification, we also examined cell protein expression and signal transduction pathway activation in CRC by western blot. The results showed that anlotinib hydrochloride down-regulated the expression of survivin, cyclin D1, CDK4, Bcl-2, caspase-3, N-cadherin, MMP-2, MMP-9 and vimentin, and up-regulated the expression of Bax, cleaved caspase-3 and E-cadherin, by inhibiting the activation of the PI3K/AKT pathway (Figure 6A).
To further verify this pathway, we also used a PI3K inhibitor and measured the changes in PI3K and p-AKT after its addition. The results showed that the PI3K inhibitor reduced the activation of PI3K and p-AKT, and that the addition of anlotinib hydrochloride enhanced this inhibitory effect on PI3K and p-AKT (Figure 6B).
As a multi-target drug, the potential use of anlotinib certainly warrants further investigation, as it may act on other pathways or molecules as well. Although anlotinib is regarded as a promising agent, further in vivo experiments will be needed to profile its side effects and adverse reactions before its safe clinical application in patients with colorectal cancer can be assured and widely adopted.
Conclusion
Anlotinib inhibited the proliferation and metastasis of, and induced apoptosis in, HCT-8/5-FU cells; the mechanism may be that anlotinib overcomes the multidrug resistance of colorectal cancer cells by inactivating the PI3K/AKT pathway.
Data Availability
The data used to support the findings of this study are original and available from the corresponding author upon request.
Systemic inflammatory response syndrome (SIRS) in Mosul: Clinical characteristics and predictors of poor outcome
Objectives: Systemic inflammatory response syndrome is one of the most important causes of intensive care unit (ICU) morbidity and mortality worldwide. The aim of this study is to explore the spectrum of diseases responsible for SIRS admission in Mosul, and to identify the mortality rate and the factors associated with poor outcome. Methods: Fifty patients with sepsis or non-infective SIRS were studied during the period from June 1 to November 3
Localized inflammation is a physiological protective response which is generally tightly controlled at the site of injury. Loss of this local control results in an exaggerated systemic response which is clinically identified as systemic inflammatory response syndrome (SIRS). SIRS may be initiated by infection or by non-infectious causes such as trauma, autoimmune reactions, malignancy, cirrhosis and pancreatitis (1). SIRS associated with suspected or proven infection is called sepsis. Morbidity and mortality of sepsis remain unacceptably high. It is still one of the most prevalent causes of intensive care unit (ICU) morbidity and mortality worldwide (2,3), with as many deaths annually as those from myocardial infarction (4).
In 1991, in an attempt to stratify the spectrum of sepsis, a consensus conference organized by the American College of Chest Physicians (ACCP) and the Society of Critical Care Medicine (SCCM) was held in the USA to clinically define the terms SIRS, sepsis, severe sepsis, septic shock and multiple organ dysfunction syndrome (MODS). To meet the definition of sepsis, patients need to satisfy at least two out of four SIRS criteria, in association with having a suspected or confirmed infection (5). The aim of this study is to identify the spectrum of diseases responsible for SIRS admission in a medical ICU and the general medical ward in Mosul. We intended to study the severity of illness, the requirement for ventilator therapy, the overall mortality and the factors associated with poor outcome. The study utilized the definitions adopted by the 1992 statement of the ACCP/SCCM consensus conference (which was retained by the 2001 international sepsis definitions conference) (6).
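As an illustration of the screening logic just described, the sketch below counts the four ACCP/SCCM SIRS criteria (temperature >38 °C or <36 °C; heart rate >90/min; respiratory rate >20/min or PaCO2 <32 mmHg; white cell count >12,000/mm3 or <4,000/mm3 or >10% band forms) and labels SIRS (two or more criteria) and sepsis (SIRS plus suspected or confirmed infection). The function and example values are illustrative and are not taken from the study's case records.

def sirs_criteria_met(temp_c, heart_rate, resp_rate, paco2_mmhg, wbc_per_mm3, band_fraction):
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                      # temperature
        heart_rate > 90,                                     # tachycardia
        resp_rate > 20 or paco2_mmhg < 32,                   # tachypnoea or hypocapnia
        wbc_per_mm3 > 12000 or wbc_per_mm3 < 4000 or band_fraction > 0.10,  # leucocytosis/leucopenia or >10% bands
    ]
    return sum(criteria)

def classify(patient, infection_suspected):
    n = sirs_criteria_met(**patient)
    has_sirs = n >= 2
    has_sepsis = has_sirs and infection_suspected
    return n, has_sirs, has_sepsis

example = dict(temp_c=39.1, heart_rate=112, resp_rate=24, paco2_mmhg=36,
               wbc_per_mm3=15800, band_fraction=0.05)
print(classify(example, infection_suspected=True))   # -> (4, True, True)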
Patients and methods
Fifty patients were studied prospectively; they were collected from the medical ICU and the general medical wards of Ibn-Sina Teaching Hospital in Mosul during the period from 1st June to 30th November. Patients included in the study were systematically evaluated. Careful history taking included the details of current symptoms and associated co-morbidities. Enquiry was made regarding occupation, residence and past medical or surgical events. The patients were classified as having either community-acquired or hospital-acquired illness (the latter being those who developed their illness while admitted for other conditions or had been referred from other medical, surgical or obstetric and gynaecological departments after developing the acute illness in their original wards). Patients suspected of H1N1 influenza infection were excluded, as were those younger than 10 years and those who stayed less than 24 hours in hospital. The acute physiology and chronic health evaluation II (APACHE II) score was calculated for every patient. This is the most widely used scoring system to assess the severity of illness and the expected mortality of critically ill patients. The score utilizes the worst values of 12 physiological variables during the first 24 hours following admission, along with an evaluation of the patient's chronic health prior to admission (7). Chest x-ray was the only imaging performed as a routine. Ultrasound examination of the abdomen was done for most patients. A few patients had CT scans and, to a lesser extent, magnetic resonance imaging.
Patients were considered to have an infection if this was microbiologically documented or at least clinically suspected on the basis of evidence such as the presence of white blood cells in a normally sterile body fluid, an acutely inflamed abdominal organ, a chest x-ray consistent with pneumonia or a clinical syndrome associated with a high probability of infection (6,8).
Septic shock was defined as acute circulatory failure characterized by persistent arterial hypotension unexplained by other causes. Hypotension was defined as a systolic blood pressure <90 mmHg, a mean arterial pressure <60 mmHg, or a reduction in systolic blood pressure of more than 40 mmHg from baseline, despite adequate volume resuscitation and in the absence of other causes of hypotension. MODS was considered to be dysfunction of more than one organ, requiring intervention to maintain homeostasis (6). The source of sepsis and the cause of non-infective SIRS were determined, and daily follow-up was made to record the last stage of sepsis reached, the duration of hospital stay and the final outcome (survival or death).
All variables were expressed as numbers and percentages and were compared with the unpaired t-test, ANOVA, the Fisher-Freeman-Halton test, Fisher's exact test and the chi-square test. The analysis was conducted using the SPSS package version 16; a p-value <0.05 was considered statistically significant and a p-value <0.001 was considered highly significant.
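For illustration, the following SciPy sketch shows the kind of 2 x 2 comparison used in this analysis (Fisher's exact and chi-square tests of mortality against a binary risk factor); the counts are hypothetical, and the study itself used SPSS version 16.

from scipy.stats import fisher_exact, chi2_contingency

#            died  survived
table = [[15,  7],        # patients with the risk factor (hypothetical counts)
         [ 7, 21]]        # patients without the risk factor

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)

print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
print(f"Chi-square test: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.4f}")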
Results
Fifty consecutive patients with SIRS were included in the study. Their ages ranged from 12 to 89 years, with a mean of 41.52 ± 20.53 years. Twenty-six of them were males (52%) and twenty-four were females (48%). Their mean duration of illness prior to admission was 7.43 ± 6.68 days.
Sepsis was the major cause of SIRS in our study (43 patients, 86%), while non-infective SIRS was found in 7 patients (14%). Pneumonia was the leading cause of sepsis in our series, responsible for 21 (48.8%) cases, four of them nosocomial. Acute pyelonephritis, intra-abdominal infections and central nervous system infections were responsible for four cases each (8%). Two patients (4%) were found to have infective endocarditis. Although sepsis was suspected, the source of infection was not established in 3 patients. Community-acquired sepsis represented 82% of cases; the remainder (18%) were hospital acquired. There was no statistically significant difference regarding the severity of illness (assessed by APACHE II score) or mortality between hospital- and community-acquired cases. Four of the seven patients with non-infective SIRS were found to have disseminated malignant diseases (carcinoma of the breast, carcinoma of the prostate, teratoma and acute leukemia). Two patients had acute pancreatitis and one woman was diagnosed with active systemic lupus erythematosus (Table 1).
Sepsis was bacteriologically confirmed in 19 patients (44.2%). Confirmation was based on a positive blood culture in 8 patients (18.6% of all sepsis cases), sputum culture in 6 patients, and CSF, pleural fluid, ulcer swab, urine and stool culture in one patient each. In the remaining 24 patients (55.8%), sepsis was suspected clinically, supported by laboratory and imaging results. There was no statistically significant difference between patients with positive and negative blood culture results in relation to APACHE II score, length of hospital stay or mortality. However, patients with a positive blood culture reached a higher stage of illness (septic shock or MODS) compared with those with a negative blood culture (p=0.217). The diagnosis of SIRS was based on two diagnostic criteria in 19 patients (38%), three criteria in 18 patients (36%) and all four criteria in 13 patients (26%). An increasing number of diagnostic criteria on which the diagnosis of SIRS was made was strongly associated with a more advanced stage of illness (p<0.001) and higher mortality (p=0.0097). An even stronger association was found between the level of consciousness assessed by the Glasgow Coma Scale (GCS) and the subsequent stage of SIRS reached and the mortality rate. Patients with initially reduced consciousness (GCS of 14 or less) reached a higher stage of illness (more commonly progressing to septic shock and MODS) and had higher mortality (77.3% versus 22.7%) compared with those having a normal GCS on admission (p<0.001) (Table 2). Anaemia was present in 22 patients (44%); these patients had significantly higher mortality than non-anaemic patients (68.2% versus 31.8%, p<0.001). On the other hand, an elevated ESR had no significant effect on the APACHE II score (p=0.115) or mortality rate (p=0.243), even when it exceeded 70 mm/hr. Hyperglycemia, defined as fasting blood sugar ≥7.8 mmol/l, developed in 17 patients (34%), of whom 10 (58.8%) were non-diabetic before their current illness (stress hyperglycemia). Patients with hyperglycemia had a significantly higher APACHE II score (p=0.006), a longer stay in hospital (p=0.029) and a more advanced stage of SIRS (p=0.0109) (Table 3). The mortality rate of these patients (58.8%) was higher than that of those who remained normoglycemic (36.4%) (p<0.001). Elevated serum urea (>7 mmol/l), rather than creatinine, was associated with an excess mortality rate (77.3% in those with high blood urea on admission, compared with 22.7% in patients with normal levels) (p=0.014). Serum albumin level correlated significantly with the APACHE II score: the highest scores were encountered in those with serum albumin below 30 g/l (p=0.007), and these patients reached more advanced stages of septic shock and MODS compared with those having normal serum albumin levels (p<0.001). There was also a non-significant association between hypoalbuminaemia and a higher mortality rate and a longer stay in hospital (p=0.101 and p=0.301 respectively) (Table 4).
Overall, the most common organ dysfunction noticed in our study was related to the central nervous system (36% of cases), followed by the cardiovascular system (30%), kidneys (28%), liver (28%), lungs (22%) and blood (10%). Two of our patients were already on ventilator therapy for respiratory paralysis caused by Guillain-Barré syndrome before the development of sepsis (ventilator-associated pneumonia). Eight patients (16%) required ventilator therapy to treat ARDS or to support comatose patients. The APACHE II score showed a highly significant association with the stage of SIRS reached and the mortality rate (p<0.001 for each). The in-hospital mortality rate of our group of patients was 44%. Patients with sepsis had a mortality rate of 39.5%, while patients with non-infective SIRS had a mortality rate of 71.4%; this difference did not reach statistical significance (p=0.122). Nine patients (18%) had sepsis which did not progress further; only one of them died (mortality rate of 11.1%). Twenty patients (40%) reached a stage of severe sepsis without further progression; four of them died (mortality rate of 20%). Septic shock and MODS complicated severe sepsis in 6 (12%) and 15 (30%) patients, with mortality rates of 50% and 93.3%, respectively. Overall, the mortality rate of all patients who reached severe sepsis was 51.2%. Twenty-four of our patients were in the ICU (48%) and 26 were in the general medical wards (52%). Despite a higher mean APACHE II score (22.8 versus 15.2) and a more advanced stage of SIRS among patients admitted to the ICU, there was no significant difference in the mortality rate between the two groups (p=0.802). All ventilated patients were in the ICU.
Discussion
Sepsis represented the majority of cases of SIRS in this study (86% of cases). Hernándes et al noticed a similar proportion: they diagnosed sepsis in 79% of their patients, with the remaining 23% having non-infective SIRS (9). Sepsis was hospital acquired in only 18% of cases. Such cases constituted a much higher percentage in a recent Spanish study (49.5%) (8). The lower impact of hospital-acquired infections reflects the limited use of instrumentation (including intravenous catheterization and mechanical ventilation) in our hospital. Pneumonia was the commonest cause of sepsis in both community- and hospital-acquired cases in our study (48.8%). Almost all recent studies in the field found the lungs (pneumonia) to be the major source of sepsis (4,8,10-14); this ranged from 40% in a large multicentre trial in the USA (4) to 86% in a pan-European study published in 2006 (10). The only notable exception was a recent Mexican study, where abdominal infection predominated over pulmonary infection (15). Abdominal and urinary tract infections were the second and third causes of sepsis in our study, as in most other similar studies (8,10,12-14).
Sepsis was more frequently suspected than microbiologically documented. Pereira et al from Portugal reported a similar percentage of culture-proven cases (14) (39% compared with 44% in our study). However, in three other larger studies, 60%-64% of sepsis cases were microbiologically documented (8,10,16). It seems that over-reliance on empirical therapy in our centre has largely replaced a thorough and careful search for microbiological confirmation. Blood culture was positive in 18.6% of cases, a percentage quite similar to two other studies conducted by Rangel-Frausto et al (17) (17%) and Pereira (14) (20%), and a little less than the results of Selberge et al (18) (30%), who tried their best to differentiate sepsis cases from non-infective SIRS in order to compare certain biochemical markers. The low percentage of positive blood cultures in general reflects the fact that sepsis does not indicate the presence of viable bacteria in the bloodstream, but rather an uncontained inflammatory response to infection. Moreover, many patients had received frequent courses of antibiotics before being admitted with sepsis (which reduces the chance of a positive blood culture), and infections caused by non-bacterial pathogens are undetectable by standard cultures. Variation in the number of blood culture-positive cases in different studies is also influenced by the location of infection; for example, peritoneal infection results in more frequent release of bacteria into the circulation compared with pulmonary infection (18). Positive blood culture was associated with a higher prevalence of septic shock and MODS. Rangel-Frausto et al found a stepwise increase in the percentage of positive blood cultures with increasing stage of sepsis (17%, 25% and 69% for severe sepsis, septic shock and MODS, respectively) (17). Two multi-centre trials in Portugal (14) and France (16) found bacteraemia (manifested by positive blood culture) to be a risk factor for early mortality. Despite the higher mortality rate in blood culture-positive patients in our study (75% vs 38%), the small sample size did not allow statistical significance to be reached. The increasing number of diagnostic criteria on which the diagnosis of SIRS was made correlated strongly with a more advanced stage of illness and higher mortality. Sprung et al found that fulfilling more than two criteria carries a higher risk of subsequent development of severe sepsis, septic shock and MODS (19). This finding was confirmed by Rangel-Frausto et al, who stated that "SIRS with only two criteria -as initially proposed -is less helpful in defining a subset of ICU and ward patients who are at especially high risk of severe sepsis than SIRS with three or all four criteria" (17). Our findings regarding anaemia in sepsis patients are consistent with the accumulating evidence that anaemia in critically ill patients is common and correlates with poor outcome (20,21). The mechanism of anaemia in these patients is similar to that of anaemia of chronic disease, except that the onset is generally rapid (21). Despite the deleterious effect of anaemia of critical illness, aggressive treatment with blood products can be as detrimental as no treatment, with an associated increase in morbidity and mortality (21,22). The use of erythropoietin-stimulating agents is rapidly gaining acceptance as a substitute for transfusion therapy (22).
A high ESR had no relation to the severity of illness assessed by APACHE II score or to mortality. This could be because the ESR is a crude, indirect measure of the acute phase response. Even an ESR higher than 70 mm/hr was not found to be an index of poor outcome in these patients.
Acute hyperglycaemia is frequently present in situations of stress in both diabetic and non-diabetic patients (23,24). The prevalence of hyperglycaemia in critically ill patients depends on the defining criteria. In one study conducted in a medical ICU, an admission blood glucose above 11.1 mmol/L was present in 23% of patients (25). In another study, conducted in a surgical ICU, the admission glucose level was >6.1 mmol/L in 86%, and almost all patients became hyperglycaemic during the ICU stay (26). Applying our definition of 7.8 mmol/L, the prevalence of 34% in our study is broadly similar. The strong association between ICU hyperglycaemia and excess morbidity and mortality noticed in our study was also shown by similar studies. Van den Berghe et al reported a dramatic (42%) relative reduction in mortality in a surgical ICU when blood glucose was normalized to 4.4-6.1 mmol/L by means of insulin infusion (compared with 10-11.1 mmol/L in the control group) (26). The benefit of glucose reduction in the medical ICU was less certain (27,28).
The adverse effect of hypoalbuminaemia in acute illness has been confirmed in a meta-analysis. Hypoalbuminaemia was found to be a potent and dose-dependent predictor of mortality, independent of nutritional status or inflammation. Each 10 g/L decline in serum albumin concentration significantly raised the odds ratio of mortality by 137%, morbidity by 84% and ICU stay by 28% (29). However, the use of albumin for volume resuscitation of critically ill patients with a serum albumin concentration ≤25 g/L was not associated with a reduction in mortality, duration of ICU stay or mechanical ventilation (30,31). A potential beneficial role of albumin in patients with sepsis requires further study (31). The association of low serum albumin with disease severity was clearly shown in our study, but a significant correlation with mortality rate and hospital stay was not reached, perhaps because of the small sample size. The overall mortality rate of sepsis in our study was somewhat high (39.5%). In recent epidemiological studies, the mortality rate of sepsis has ranged from 9% (17) to 48.2% (13). In the above-mentioned pan-European study (10), wide variation in the mortality of severe sepsis was noticed in different centres around Europe, being lowest in Switzerland (10%) and highest in Portugal (64%). In comparison, our result of a 51.2% mortality rate for these patients seems acceptable.
Despite the more advanced stage of SIRS reached and the higher mean APACHE II score of our ICU patients compared with those in the general medical wards, there was no significant difference in mortality between these two groups. This result is in agreement with Guidet et al, who found a mortality rate of 49% in severe sepsis patients in the general medical wards and 42% in ICU patients (32). Blanco et al showed a mortality rate of 55% in septic patients in the general wards and 48% in the ICU (8). The similar mortality rate (despite the less severe illness of SIRS patients who remained on the general wards) calls for serious consideration of ICU admission for most cases of SIRS, especially for those who develop severe sepsis.
Additional investigations were ordered according to the requirements of the individual cases; these included:
1. Other biochemical investigations such as serum amylase.
2. Hepatitis viral serology.
3. Sputum Gram stain, Ziehl-Neelsen stain and culture.
4. Urine culture.
5. Pleural fluid analysis and culture.
6. Cerebrospinal fluid examination and culture.
7. Wound or ulcer swab and culture.
Table (2): The association of Glasgow Coma Scale with the severity of sepsis and outcome.
Table (3): The association of blood glucose level with the severity of sepsis and outcome.
Table (4): The association of serum albumin with the severity of sepsis and outcome.
Systematic review with meta‐analysis: IBD‐associated colonic dysplasia prognosis in the videoendoscopic era (1990 to present)
The prognosis of dysplasia in patients with IBD is largely determined from observational studies from the pre‐videoendoscopic era (pre‐1990s) that does not reflect recent advances in endoscopic imaging and resection.
| INTRODUCTION
Patients with ulcerative colitis or Crohn's disease colitis have an increased risk of colorectal cancer. The overall cancer prevalence was reported as 3.7% in the landmark meta-analysis by Eaden et al published in 2001. 1 The cumulative probabilities of developing cancer were reported to be 2% by 10 years, 8% by 20 years and 18% by 30 years of disease duration. However, more recent population-based cohort studies and updated meta-analyses suggest that the risk of cancer is lower than previously thought, although overall this rate is still significantly higher than the non-IBD population. [2][3][4][5][6][7] Cumulative risks of colorectal cancer in a more recent meta-analysis were 1% by 10 years, 2% by 20 years and 5% after more than 20 years of disease duration. 3 It is unclear whether this reduced incidence of IBD cancer is related to optimisation in endoscopic surveillance, medical therapy or timely colectomy in the last decade. 8,9 A Cochrane systematic review and meta-analysis observed lower incidence rates of cancer in IBD patients on surveillance compared to IBD patients not on surveillance. 10 The ratio of early stage vs late stage cancers detected was also higher in the surveillance group compared to the nonsurveillance group. 10 Taking tumour stage into account, ulcerative colitis patients are still more likely to die of colorectal cancer than patients without ulcerative colitis, but this risk appears to be declining over time. 7 The purpose of colonoscopic surveillance is to detect dysplasia which is defined as an unequivocal neoplasia of the epithelium confined to the basement membrane, without invasion into the lamina propria. 11 The 21st century has witnessed advances in endoscopic surveillance technology such as high definition imaging and chromoendoscopy, which have led to increased detection of dysplasia. 12 13 Advanced dysplasia resection techniques such as endoscopic submucosal dissection and hybrid techniques have also allowed the resection of flat nonpolypoid lesions, previously destined for surgical management only. 12,14 However, the impact of these advances on rates of metachronous advanced neoplasia developing during surveillance follow-up remains uncertain. As there are no randomised controlled trials comparing endoscopic surveillance with colectomy, international society guidelines have based their recommendations for the management of higher risk lesions on small numbers of observational studies. 8,9,15,16 The small patient cohorts, limited follow-up times, contradictory lesion characterisation terminology and inclusion of data from the pre-videoendoscopic era, have limited the interpretation of outcomes from these studies. 8,9,15,16 The most recent international guidelines are summarised in Table S1.
If dysplasia is able to be completely endoscopically resected, all guidelines recommend continued regular colonoscopic surveillance. 8,9,15,16 If dysplasia is endoscopically unresectable or invisible (ie the dysplasia is detected on random biopsy of the colonic mucosa without any corresponding visible dysplastic lesion detectable by the endoscopist), guidance on management is less clear due to the wide variation in reported progression rates to advanced neoplasia. 8,9,15,16 The management options in these cases are recommended to be 'individualised after discussion of the risks and benefits of surveillance colonoscopy and colectomy' by gastroenterologists and colorectal surgeons with their patients within a multidisciplinary setting. 15 However, acknowledgement of the uncertainty in the prognosis of dysplasia makes these discussions even more challenging.
Laypersons can respond to the communication of uncertain individualised cancer risk estimates with heightened cancer-related worry, 'ambiguity aversion' (ie avoidance of engaging in decision making) and 'risk aversion' (ie avoidance of choosing the option associated with more perceived risk, despite its overall advantage). 17,18 The wide variation in the reported prognosis of dysplasia may in part be due to the inclusion of older data hailing from the pre-videoendoscopic era (pre-1990s). The high-definition image resolution now achieved with the more sophisticated videoendoscopes available far exceeds what was possible with the older fibre-optic endoscopes. The endoscopic community now recognises that some dysplasia once characterised as 'invisible' in the pre-videoendoscopic era may actually have represented missed flat lesions. These lesions are now more visible and better characterised with higher definition, blue light imaging and chromoendoscopy. 19 Clarity with regard to the risks of colorectal cancer is needed to help patients decide between endoscopic or surgical management for their IBD-dysplasia.
When informing patients of their cancer risk, it may be useful to describe the rate of 'prevalent' (otherwise described as 'synchronous' or 'concurrent') cancers that are found at colectomy performed usually within 6 months of a pre-operative diagnosis of dysplasia.
These incidental cancers detected within the colonic specimen may represent cancers that were missed during the last pre-operative colonoscopy. The risk of progressing to a more advanced neoplastic lesion, if an immediate colectomy is not performed after the dysplasia diagnosis, should also be explained to patients. This is often reported in studies as the 'advanced neoplasia progression rate' which is the incidence of new high-grade dysplasia (HGD) or cancer found during surveillance follow-up performed more than 6 months after a low-grade dysplasia (LGD) diagnosis, or the incidence of new cancer found during surveillance follow-up performed more than 6 months after a HGD diagnosis. 20
| Search strategy and study selection
The present systematic review is reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 23 An a priori protocol was followed and registered at the international prospective register of systematic reviews (PROSPERO registration no. CRD42019105736). An electronic literature search of English language articles was performed using MEDLINE, EMBASE and the Cochrane Library Database, from the dates 1990 to February 2020. The full electronic search strategy can be viewed in Appendix 1 and was designed and conducted with the assistance of an experienced medical librarian. Study eligibility for inclusion was undertaken independently by two researchers (MK and RF) and discrepancies resolved after consensus discussion.
Meta-analyses, randomised controlled and observational studies were included if the intervention and outcome met the eligibility criteria. Reference lists of the included full-text articles were also scanned for additional articles. Conference proceedings were also considered.
The inclusion criteria were as follows: (a) studies of ulcerative colitis, Crohn's colitis and indeterminate colitis patients who had a colectomy or at least one follow-up colonoscopy performed after an initial diagnosis of dysplasia was made on colonoscopy; (b) the cohort could be subdivided according to the dysplasia severity grading (ie indefinite, low-grade and high-grade), endoscopic visibility and lesion morphology (polypoid or nonpolypoid as per the SCENIC criteria 15 ); and (c) reported on the incidence of advanced neoplasia found during surveillance or in the colectomy specimen according to the endoscopic visibility, morphology and grade of the index dysplasia. Articles were excluded if all or most patients included had their dysplasia diagnosed in the pre-videoendoscopic era, which was taken to be before 1990, or it was not possible to extract incidence rates according to visibility, morphology and severity of the index dysplasia from the data provided. Studies reporting on primary sclerosing cholangitis patients were included, except for those cohorts with exclusively these patients as this could skew the data. In the event where a number of publications had been reported by the same group, with duplication of results from the same dataset, the most recent publication was included for analysis of each outcome.
Dysplasia detected on random biopsy of the colonic mucosa without any corresponding visible dysplastic lesion detectable by the endoscopist was termed 'invisible'.
| Data extraction
Data were extracted independently by two investigators (MK and RF). All discrepancies were resolved and consensus achieved after discussion. For each study included, data were retrieved on the country where it was performed, year of publication, enrolment period, study design, subclassification of the IBD, colitis extent, number of patients with dysplasia within the extent of colitis, proportion of endoscopically resected dysplasia, whether the dysplasia histology had been reviewed by a second gastrointestinal expert histopathologist, proportion with multifocal dysplasia or primary sclerosing cholangitis, the mean or median follow-up time, and the number and incidence rate of new histologically proven high-grade dysplasia and colorectal cancer cases found on follow-up or at colectomy.
| Prevalent cancer rate
Incidental cancers found anywhere in the colectomy specimen when colectomy surgery was performed for a pre-operative diagnosis of dysplasia (usually performed within 6 months of the dysplasia diagnosis).
| Advanced neoplasia progression rate
Incidence of new HGD or colorectal cancer found during surveillance follow-up performed more than 6 months after a LGD diagnosis, or incidence of new cancer found during surveillance follow-up performed more than 6 months after a HGD diagnosis (Table A2). The overall quality of the evidence for each outcome was assessed using the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) criteria 26,27 and is detailed in the summary of findings Tables 1 and 2.
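A minimal sketch of how these outcome definitions can be operationalised is given below; it is illustrative only and is not the review's actual data-extraction code. The 6-month cut-off and the grade-dependent definition of progression follow the text above; the function and argument names are hypothetical.

def classify_outcome(index_grade, months_since_index, finding, found_at_colectomy=False):
    # index_grade: "LGD" or "HGD"; finding: "HGD" or "cancer".
    if found_at_colectomy and finding == "cancer" and months_since_index <= 6:
        return "prevalent cancer"
    if months_since_index > 6:
        if index_grade == "LGD" and finding in ("HGD", "cancer"):
            return "advanced neoplasia progression"
        if index_grade == "HGD" and finding == "cancer":
            return "advanced neoplasia progression"
    return "not counted for these outcomes"

print(classify_outcome("LGD", 4, "cancer", found_at_colectomy=True))   # prevalent cancer
print(classify_outcome("LGD", 18, "HGD"))                              # advanced neoplasia progression
print(classify_outcome("HGD", 12, "cancer"))                           # advanced neoplasia progression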
| Statistical methods
Meta-analysis methods were used to pool together the cancer prevalence rates from different studies. The Stata software package (version 15.1) was used. The DerSimonian-Laird random effects method was used for the analysis, regardless of the degree of heterogeneity between the study results. The Freeman-Tukey double arcsine transformation was performed before analysis, which was used to stabilise the variances when the proportions were close to 0 and 1, and therefore the normal approximation to the binomial distribution did not hold.
The heterogeneity between studies was assessed based on the significance of the between-study heterogeneity, and also on the size of the I 2 value. Substantial heterogeneity was assumed if the I 2 value was above 50% or the chi-squared test P value was less than 0.05.
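For illustration, the pooling approach described above (Freeman-Tukey double arcsine transformation, DerSimonian-Laird random effects and the I2 statistic) can be re-implemented in a few lines of Python. The review itself used Stata 15.1; the event counts below are hypothetical, and the back-transformation uses the simple sin^2(t/2) approximation rather than the exact inverse.

import numpy as np

def pool_proportions(events, totals):
    x = np.asarray(events, dtype=float)
    n = np.asarray(totals, dtype=float)
    # Freeman-Tukey double arcsine transform and its approximate variance.
    t = np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))
    v = 1.0 / (n + 0.5)

    w = 1.0 / v                                    # fixed-effect weights
    t_fixed = np.sum(w * t) / np.sum(w)
    q = np.sum(w * (t - t_fixed) ** 2)             # Cochran's Q
    k = len(t)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_star = 1.0 / (v + tau2)                      # DerSimonian-Laird random-effects weights
    t_pooled = np.sum(w_star * t) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

    back = lambda val: np.sin(val / 2) ** 2        # approximate inverse transform
    ci = (back(t_pooled - 1.96 * se), back(t_pooled + 1.96 * se))
    return back(t_pooled), ci, i2

# Hypothetical prevalent-cancer counts from three studies.
estimate, ci, i2 = pool_proportions(events=[8, 1, 3], totals=[36, 33, 25])
print(f"Pooled rate = {estimate:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}, I2 = {i2:.1f}%")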
| Systematic search results
The search strategy is described in detail in the PRISMA flow diagram (Figure 1). The initial search yielded 5095 citations and from these 33 studies were eligible for inclusion. All of these were observational studies: one was a prospective cohort study, 18 were retrospective cohort studies and 14 were retrospective case series (see Table A2). All studies were from academic centres, which were based in the USA (n = 21), UK (n = 4), Japan (n = 4), the Netherlands (n = 2), Korea (n = 1), Italy (n = 1), Belgium (n = 1) and Portugal.
[Footnotes to the summary of findings Tables 1 and 2: LGD, low-grade dysplasia. The prevalent cancer rate is the proportion of patients who have colectomy surgery performed for a pre-operative diagnosis of dysplasia (usually within 6 months of the diagnosis) in whom a colorectal cancer is found incidentally anywhere in the colectomy specimen; the advanced neoplasia progression rate is defined as above; the colorectal cancer progression rate is the incidence of new colorectal cancer found during surveillance follow-up performed more than 6 months after the dysplasia diagnosis. Some studies reported numbers of polyps rather than numbers of patients by dysplasia grade, so patient numbers could not be extracted from them. Under the GRADE approach, longitudinal cohort studies of prognosis are initially rated as high quality and downgraded by one level or more for: high risk of bias (modified Joanna Briggs Institute Critical Appraisal Checklist median scores below eight out of 10, very small sample sizes, median follow-up under 36 months, or failure to measure important confounders such as the proportion of dysplasia that was endoscopically resected or located within the colitis extent); inconsistency of results between studies (I2 above 50% or wide variation in incidence rates); imprecision (wide 95% confidence intervals around the pooled estimate); indirectness (most invisible dysplasia data pre-date high-definition imaging and chromoendoscopy, or included lesions had to meet specific criteria such as size greater than 10 mm); and suspected publication bias (very small number of published studies, or case series selected for a specific intervention such as endoscopic submucosal dissection).]
A proportion of the studies (n = 5) did not report whether the dysplasia diagnosis was confirmed by a second expert gastrointestinal histopathologist.
According to the GRADE criteria, overall the evidence supporting the outcomes produced from these studies was deemed to be of low quality (our confidence in the estimate is limited) or very low quality (we have very little confidence in the estimate and the true prognosis is likely to be substantially different from the estimate).
Further explanation of the grading of evidence is provided in the summary of findings Tables 1 and 2.
Five studies met the inclusion criteria to extract the primary outcome.
| Prevalent cancer rate
Three of the included cohort studies and two of the case series reported on prevalent cancer rate and are described in more detail in Table A3. Pooled estimates of the prevalent cancer rates were calculated using meta-analysis methods, and the results are summarised in Table 1.
| Visible HGD
Two cohort studies and one case series reported on the rates of prevalent cancer found in the surgical colectomy specimen resected after a diagnosis of visible HGD. 29 The studies with the highest prevalent cancer rates only included cases where the dysplasia was located within the extent of inflamed mucosa, rather than proximal to it. 29,30 The pooled estimated cancer prevalence rate from all these studies was 13.7% (95% CI: 0.0%-54.1%). The results are shown in a forest plot in Figure A4. The significant heterogeneity test (P < 0.001) and the high I2 statistic value of 91.9% suggest considerable heterogeneity between studies.
| Invisible HGD
Two cohort studies reported prevalent cancer rates after a pre-operative diagnosis of invisible HGD as 22.2% (n = 8/36) where the dysplasia was specified to be within inflamed mucosa 29 and 3.0% where it was not specified (n = 1/33). 28 The pooled estimated cancer prevalence rate from all these studies was 11.4% (95% CI: 4.6%-20.3%). The number of studies was too small to be able to assess heterogeneity for this patient group.
| Invisible LGD
Two cohort studies and one case series reported on prevalent cancer rates after a pre-operative diagnosis of invisible LGD; the results are shown in a forest plot in Figure A6. All of these studies included data mainly from an era when chromoendoscopy and high-definition imaging had not been adopted into widespread practice. There was substantial between-study heterogeneity: although the heterogeneity test did not reach statistical significance (P = 0.12), the calculated I2 statistic was still 53% and therefore suggestive of substantial between-study heterogeneity.
| Indefinite for dysplasia
Two cohort studies reported prevalent cancer rates, with a preoperative diagnosis of indefinite for dysplasia, of 0.0% (n = 0/22) 32 and 2.6% (n = 1/38). 29 The latter study included a much higher proportion of patients with extensive colitis and primary sclerosing cholangitis. 29 The pooled estimated cancer prevalence rate from these studies was 1.2% (95% CI: 0.0%-6.6%). The number of studies was too small to be able to assess heterogeneity for this patient group. Table 2 presents the summary of findings for the incidence rate of advanced neoplasia progression categorised by dysplasia grade, morphology and the lesions followed up after endoscopic resection (post-polypectomy).
| Visible HGD
There were 14 observational studies reporting on advanced neoplasia progression rates during surveillance follow-up after an initial diagnosis of visible HGD. 33
| Visible polypoid HGD-post-polypectomy
Seven studies (five cohort studies and two case series) reported on cancer progression rates after endoscopic resection of polypoid HGD (Table 3). 33 A study using high-definition chromoendoscopy also showed no progression to cancer over a median follow-up of 4.5 years. 34 In two studies where chromoendoscopy was not used, there was progression to cancer at a rate of 25.0% (n = 3/12) over a median of 1.7 years 37 and 40.0% (n = 2/5) over a median of 4 years. 36 In the former study, high-definition white light colonoscopy and polypectomy were performed by accredited endoscopists. 37
| Visible nonpolypoid HGD-postpolypectomy
Six studies (all case series) reported on cancer progression rates after endoscopic resection of nonpolypoid dysplasia, which included a mixture of HGD and LGD polyps, making pooled analysis inappropriate. In one series of HGD in which only 67% could be resected with histologically clear margins, half progressed to cancer. 46
| Invisible HGD
No studies reporting on cancer progression rates during surveillance follow-up of invisible HGD in the videoendoscopic era were found.
| Visible LGD

Outcomes for endoscopically resected visible LGD have been described in Section 3.3.6 and Table 3 (polypoid LGD followed up after polypectomy) and Section 3.3.7 and Table 4 (nonpolypoid LGD followed up after polypectomy). In the largest study of visible LGD progression (n = 155) by Choi et al, 13 polypoid morphology was significantly associated with a better prognosis than nonpolypoid LGD. The cumulative incidence of advanced neoplasia for polypoid LGD was 3.5% at 1 year and 6% at 5 years, whereas for nonpolypoid LGD it was significantly higher at 37% at 1 year and 62.5% at 5 years. 13 However, this discrepancy is not only secondary to the morphology of the lesion. About 93% of the polypoid LGD in this patient cohort was successfully endoscopically resected, versus 39% of the nonpolypoid LGD. The nonpolypoid lesions were also more likely to be multifocal than the polypoid lesions. Studies such as this, which have reported on advanced neoplasia progression rates for visible LGD where not all of the lesions have been endoscopically resected or endoscopic resection has not been specified, are presented in Table A7.
| Visible polypoid LGD-post-polypectomy
Nine studies (seven cohort studies and two case series) reported on advanced neoplasia progression rates while on surveillance follow-up after endoscopic resection of the index polypoid LGD (Table 3). [33][34][35][36][37]40,48,52,53 The majority of these studies included a mixture of LGD and HGD polyps, again making pooled analysis inappropriate. Resection techniques included hot and cold biopsy, cold snare, and piecemeal and en bloc endoscopic mucosal resection. Completeness of resection was based on endoscopist judgement rather than histological assessment. The average polyp diameters were 15 mm or less. (Note: in Table 3, R0 indicates that resection margins were histologically cleared of dysplasia.)
| Visible nonpolypoid LGD-post-polypectomy
Eight studies (one cohort study and seven case series) reported on advanced neoplasia progression rates while on surveillance follow-up after endoscopic resection of nonpolypoid LGD (Table 4). [41][42][43][44][45][46]52,54 Again, the majority of the studies included a heterogeneous mixture of LGD and HGD polyps in small numbers, with short follow-up times, making pooled analysis inappropriate, but the majority showed no progression to HGD or cancer. 41 In one series, 40.0% of the LGD progressed to cancer after endoscopic submucosal dissection or hybrid techniques; again, these lesions were all high risk, with sizes all greater than 10 mm, significant submucosal fibrosis, and R0 resection achieved in only 67%.
| Invisible LGD
Eight studies (seven cohort studies and one case series) reported on advanced neoplasia progression rates during surveillance follow-up after an initial diagnosis of invisible LGD (Table 5). 13
| Indefinite for dysplasia
Six studies (five cohort studies and one case series) reported on advanced neoplasia progression rates while followed up after an initial diagnosis of indefinite for dysplasia (Table 6). 32 The study with the highest progression rates had a very small cohort of seven cases, all of which were invisible. 48 The three largest cohort studies have found lower advanced neoplasia progression rates. 32,58,59 Won-Tak Choi et al followed up 84 patients with a diagnosis of indefinite for dysplasia over a mean of 2.3 years. 59 The advanced neoplasia progression rate was 2.4% and the cancer progression rate was 1.2%.
Lai et al followed up a cohort of 59 patients over a longer mean follow-up of 6.8 years and found a 13.6% advanced neoplasia progression rate and a 5.1% cancer progression rate. 32 The importance of histological re-review is substantiated in a pooled analysis of 26 patients with invisible indefinite for dysplasia, diagnosed across six Dutch academic centres. 55 Van Schaik et al initially demonstrated a cumulative incidence of advanced neoplasia of 21% at 5 years, but after histological re-review by an expert gastrointestinal pathologist and reclassification of the indefinite for dysplasia, the cumulative incidence reduced substantially to 5% at 5 years. 55
| DISCUSSION

| Prevalent cancer rate
This meta-analysis calculated pooled estimated rates of prevalent cancer found at colectomy performed for a pre-operative diagnosis of visible HGD, invisible HGD, visible LGD, invisible LGD and indefinite for dysplasia to be 14%, 11%, 3%, 2% and 1% respectively. In contrast, studies in which the dysplasia was diagnosed in the pre-videoendoscopic era have shown much higher rates of prevalent cancer. Part of the variation between studies may reflect case selection: some studies may have included dysplasia located outside the extent of inflamed mucosa, 28 whereas other studies only included dysplasia cases located within the inflamed mucosa 29,30 and thus displayed higher prevalent cancer rates. There is consensus that colonic dysplasia located outside the extent of colitis has a much better prognosis than true colitis-associated dysplasia, and is similar to that of a sporadic adenoma. 9,15 The only category to show satisfactorily low between-study heterogeneity was visible LGD. The GRADE quality of evidence supporting the meta-analysis outcomes was determined to be low for visible LGD and very low for visible or invisible HGD, invisible LGD and indefinite for dysplasia, due to the risk of bias in the study design, publication bias, indirectness and inconsistency in the results (Table 1).
| Advanced neoplasia progression rate
In one large cohort, LGD was associated with a 14.6% (n = 58/396) progression rate to advanced neoplasia over a mean surveillance period of 12 years. Studies in which high-definition imaging and/or chromoendoscopy surveillance was used have generally reported lower advanced neoplasia progression rates. 13,34,[41][42][43]45,53 The only studies to show significant advanced neoplasia progression despite use of these adjuncts were one study in which high-definition imaging, but no chromoendoscopy, surveillance was used after resection of LGD and HGD polyps 37 and two studies in which high-definition chromoendoscopy surveillance was performed after resection of high-risk, large nonpolypoid LGD lesions with significant submucosal fibrosis and R0 resection rates of less than 70%. 46,54 Inconsistency in the outcomes also appears to be associated with differences in baseline risk: factors such as active inflammation, multifocality, previous dysplasia and concomitant primary sclerosing cholangitis were significant risk factors for cancer development. Therefore, a patient with a combination of one or more of these risk factors should be surveilled more intensively by expert endoscopists. A lower threshold for elective colectomy should also be considered when deciding on management for these higher-risk patients.
No studies which reported on cancer progression rates after diagnosis of invisible HGD were identified. This is because most of these patients undergo colectomy rather than continue surveillance, based on the high incidences seen in pre-videoendoscopic era studies. Further data on outcomes of patients who have been followed up with surveillance for invisible HGD would need to be published before clinicians would consider recommending surveillance over colectomy. This review has shown that there remains considerable variation in the prognosis with invisible LGD; however, the rate of progression to cancer appears to have reduced in the most recent time period with routine use of high-definition imaging and/or chromoendoscopy. 51 It is therefore not unreasonable for patients with unifocal invisible LGD to be closely monitored with surveillance rather than proceeding to a colectomy. However, other high-risk factors, such as multifocality, concomitant primary sclerosing cholangitis and family history, all need to be borne in mind when making decisions with these patients.
The largest cohorts of patients followed up after a diagnosis of indefinite for dysplasia 32,58,59 showed lower rates of advanced neoplasia progression compared to invisible LGD. The best prognosis was achieved after confirmation of the diagnosis of indefinite for dysplasia by a second expert gastrointestinal histopathologist. 55 Due to the interobserver incongruity in histological interpretation of indefinite for dysplasia, particularly on a background of inflammation, it seems entirely appropriate to repeat a surveillance colonoscopy after a period of therapy to reduce the background inflammation.
| CONCLUSIONS
As endoscopic techniques in IBD surveillance, such as optical characterisation and resection of dysplasia, advance, clinicians have witnessed changes in the reported natural history of these lesions.
Increasing expertise in dysplasia characterisation by histopathologists, as well as optimisation in medical therapy and standardisation of surveillance intervals are also likely to have produced better outcomes.
The results of this meta-analysis suggest that dysplasia detected during surveillance in the videoendoscopic era is associated with lower rates of incidental cancer found at colectomy than previously thought. Dysplasia found within an inflamed colonic segment appears to be associated with higher prevalent cancer rates at colectomy than dysplasia detected proximal to the colitis extent. However, due to the very small numbers of heterogeneous studies involved, the quality of the evidence obtained from this meta-analysis remains low.
The findings from this systematic review suggest that the lowest rates of progression to advanced neoplasia, for dysplasia not managed with immediate colectomy but followed up with surveillance, tend to be where high-definition imaging and/or chromoendoscopy surveillance has been used and endoscopic resection of visible dysplasia has been histologically confirmed. When quoting individualised cancer risks to patients, clinicians need to be aware of the cumulative effects of various risk factors such as active inflammation, multifocality, previous history of dysplasia and the presence of primary sclerosing cholangitis. This review highlights the low quality of evidence on the prognosis of dysplasia in the videoendoscopic era. The lowest quality evidence is for nonpolypoid dysplasia that has been successfully resected. Current evidence is based mainly on retrospective cohort studies and case series with small, heterogeneous cohorts and short follow-up times. Interpretation of the data obtained from these studies is therefore limited. Larger, prospective studies are needed in order to make the evidence base used in shared decision-making more robust.
ACKNOWLEDGEMENTS
Jacqueline Kemp, Medical Librarian for Imperial College London, assisted in designing and conducting the full electronic literature search. All the authors approved the final version of the article.
Individual, social and environmental factors influencing physical activity levels and behaviours of multiethnic socio-economically disadvantaged urban mothers in Canada: A mixed methods approach
Background: Existing data provide little insight into the physical activity context of multiethnic socio-economically disadvantaged (SED) mothers in Canada. Our primary objectives were: (1) to use focus group methodology to develop tools to identify the individual, social, and environmental factors influencing utilitarian and leisure time physical activities (LTPA) of multiethnic SED mothers; and (2) to use a women-specific physical activity survey tool to assess psychosocial barriers and supports and to quantify individual physical activity (PA) levels of multiethnic SED mothers in Canada.

Methods: Qualitative focus group sessions were conducted in Western, Central and Eastern Canada with multiethnic SED mothers (n = 6 focus groups; n = 42 SED mothers) and with health and recreation professionals (HRPs) (n = 5 focus groups; n = 25 HRPs) involved in community PA programming for multiethnic SED mothers. The women-specific Kaiser Physical Activity Survey (KPAS) tool was administered to consenting SED mothers (n = 59).

Results: More than half of SED mothers were employed, and employed mothers had higher total PA scores (with occupation included) than unemployed mothers. However, nearly 60% of both groups were overweight or obese. Barriers to LTPA included the lack of available, affordable and accessible LTPA programs that responded to cultural and social needs. Concerns for safety, nonsupportive cultural and social norms and the winter climate were identified as key barriers to both utilitarian and LTPA.

Conclusions: Findings show that multiethnic SED mothers experience many barriers to utilitarian and LTPA opportunities within their communities. The varying LTPA levels among these multiethnic SED mothers and the occurrence of overweight and obesity suggest that current LTPA programs are likely insufficient to maintain healthy body weights.
Background
Physical inactivity is common among Canadian women of varying ethnicities and immigrant status [1][2][3][4]. This sedentary lifestyle plays a significant role in the health status of those women who are further disadvantaged by low socio-economic status [5]. A 22 year Canadian cohort study recently identified women, and especially those disadvantaged with respect to income and education, as the likeliest to experience decreasing trajectories of leisure time physical activity over their lifetimes [6]. Additionally, there is also evidence that both motherhood [7,8] and socio-economic status [6,9] are strong predictors of physical inactivity.
In 1974 the Lalonde Report in Canada recognized physical activity (as a sub-domain of lifestyle) as one of the four major determinants of health [10]. Recent research summarizing twenty year trends in leisure time physical activity (LTPA) among Canadian adults has suggested that despite an increase in the proportion of active Canadian adults over the last 2 decades, there has been an increase in the prevalence of self-reported inactivity-related diseases [11]. It has been suggested that more physical activity research should include domain specific physical activities (e.g. occupational, daily living) [11] and how activity in these domains may influence PA behaviour of socio-economically disadvantaged (SED) women [7]. Implicit in these suggestions is the need to address the individual, social and environmental determinants of physical activity so as to reduce the social inequalities and disparities of access [12,13]. Both the Integrated Pan-Canadian Healthy Living Strategy [14] and the Women's Health Surveillance Report of the Canadian Institute of Health Research [9] have detailed specific recommendations for increased research to understand and to address these determinants of physical activity for SED mothers who may have limited opportunities to be physically active.
Because existing data provide little insight into the physical activity contexts of mothers of varying ethnicities and immigrant status living under socio-economically disadvantaged (SED) conditions in Canada, the Canadian Association for the Advancement of Women and Sport and Physical Activity (CAAWS) undertook a two year project (2007 to 2009) with the objective of identifying the individual, social and environmental factors that influence physical activity levels and choices of urban multiethnic SED mothers. Focus group discussions were conducted with multiethnic SED mothers and female health and recreation professionals (HRPs) involved in community physical activity programming for multiethnic SED mothers. The intent was to identify the individual, social and environmental factors that influence both utilitarian (daily living activities, including occupation) and LTPAs of urban multiethnic SED mothers. A secondary objective was to understand the type and amount of physical activity of multiethnic SED mothers in each of these domains and to characterize their psychosocial determinants of LTPA (perceived barriers, social support and selfefficacy) using the women specific Kaiser Physical Activity Survey (KPAS) tool [15]. An outcome of this project was the development of tools to assist HRPs in the planning and development of LTPA programs for SED mothers within their respective communities.
Experimental approach and recruitment
A purposive sampling strategy was used to recruit HRPs and multiethnic SED mothers from across Canada to participate in focus groups and to complete several questionnaires about physical activity between March and August 2008. To identify participants, study researchers collaborated with Federal Government regional managers in the Canada Prenatal Nutrition Program [16] and the Community Action Program for Children [16] to identify socioeconomically disadvantaged urban communities in three regions across Canada (West, Central, East). Regional managers from these agencies and local community partners identified female community health organizers/promoters to act as local site coordinators and to recruit SED mothers and HRPs involved in community physical activity programming for multiethnic SED mothers. Community partners represented the physical activity interests of multiethnic groups, including francophone immigrant and resource centers, francophone sports federations, Family Services Early Childhood Programs, Immigrant Services, Aboriginal Head Start Programs, and parks and recreation and regional health units. Ethics approval was obtained from McGill University.
Inclusion and exclusion criteria
Both HRPs and mothers had to be able to read and comprehend the project pamphlet and consent form in either English or French and to be able to meet at a designated time and place for participation in either an English or French focus group (HRPs, mothers) or to complete the physical activity survey (mothers) in English or French or have a community partner volunteer translator do this with them together with the local site coordinator. In addition, participating mothers had to self-identify as socio-economically disadvantaged (SED) using the MacArthur Scale of Subjective Social Status [17], have at least one child ≤14 yrs of age still living at home, be urban dwelling and not currently limited because of an illness, injury or disability. Each mother was given the choice to participate in both or either the focus group or the physical activity survey session. No honorarium was paid to SED mothers or HRPs but transportation to and from their focus group session was covered, as was child care for the duration of the session. Food vouchers were given to each SED mother at the end of her focus group and/or physical activity survey session(s). To maintain confidentiality and privacy, each participant was assigned a numeric identifier and the term "socio-economically disadvantaged" was removed from all study documents so as not to stigmatize participating/potential participating mothers.
Focus group activities
A total of 42 multiethnic SED mothers participated in 6 focus groups and 25 HRPs participated in 5 focus groups in 3 regions of Canada (west, central and east). Initial pilot tests with 3 SED mothers and 2 HRPs determined the time needed for each component and ensured that the predetermined focus group questions were appropriate for generating discussion on the individual, social and environmental factors influencing utilitarian and LTPA choices of multiethnic SED mothers within their respective communities. All questions were framed in the context of social cognitive theory constructs (behavioural, environmental, personal) which theorize that self-efficacy, attitudes towards physical activity, perceived barriers, and past behaviour all influence intention and shape physical activity behaviour [18]. Bilingual (French, English) moderators facilitated each of the focus group discussions, allowing for flexibility in the focus group questions for generation of new areas of inquiry related to physical activity and to revisit earlier topics. A co-researcher took notes and wrote key points on a board for participants to view. These were used for summarizing the main points at the end of the discussion and asking participants if there were any missing key ideas. Each focus group lasted approximately two hours and was digitally recorded with permission of the participants.
Each focus group with multiethnic SED mothers began with an icebreaker in which mothers brainstormed for all the words that described what physical activity meant to them. Each mother created a physical activity pictogram describing the utilitarian and LTPA and relative levels (low/medium/high) that contributed to her lifestyle from childhood to present. The facilitator used the pictograms to engage the mothers in a discussion on their observed changes in physical activity levels since childhood. Mothers were probed on their barriers and social supports of a physically active lifestyle throughout their lifespan and their perceptions of their required supports for a physically active lifestyle in the future.
Each focus group with HRPs began with three questions to ascertain their beliefs about the health and fitness benefits of physical activity. HRPs were asked for their perceptions regarding the types and relative levels (low/ medium/high) of utilitarian and LTPAs that contribute to multiethnic SED mothers' lifestyles. This was used to facilitate a discussion on HRPs' perceptions regarding the individual, social and cultural, organizational, and community level barriers and supports that influence physical activity behaviours and choices of multiethnic SED mothers. HRPs rated their self-efficacy for promoting and positively influencing multiethnic SED mothers' physical activity behaviours and choices. HRPs were also asked how health professionals, community members, and organizations could better promote opportunities for multiethnic SED mothers to participate in physical activities within their communities.
Survey tools
MacArthur scale of subjective social status

All participating mothers were asked to self-identify their socio-economic status, within the context of the Canadian community, using the MacArthur Scale of Subjective Social Status [17]. In a simple pictorial format, it presents a "social ladder" and asks individuals to place an "X" on the rung on which they feel they stand. Individuals placing themselves at rung 5 or lower were considered to be socioeconomically disadvantaged (SED). Local site coordinators described the ladder to each potential participating mother as follows: "At the top of the ladder are the people who are the best off in Canada: those that have the most money, the most education, and the most respected jobs. At the bottom are the people who are the worst off, who have the least money, least education, and the least respected jobs or no job. The higher up you are on this ladder, the closer you are to the people at the very top; the lower you are, the closer you are to the people at the very bottom. Where would you place yourself on this ladder?"
Assessment of physical activity and psychosocial determinants of LTPA
The interviewer-led Kaiser Physical Activity Survey (KPAS) tool [15] was used to understand how the experience of physical activity in the context of everyday life has influenced the type and amount of physical activity of multiethnic SED mothers and to assess the psychosocial determinants of LTPA (perceived barriers, social support and self-efficacy). The KPAS tool has been validated in multiethnic populations of women with varying physical activity habits [7] and has demonstrated good reliability and accuracy to detect specific, habitual, daily activities such as housework/care giving, occupation and sports or exercise activities, the general level of physical activity involved in daily routines during the past year and personal feelings about exercise including perceived barriers, social support and self-efficacy [15]. The survey contains 75 items and takes approximately 30 minutes to complete in English or French (and up to an hour for those women who required a community partner volunteer oral translator for other languages). Timing for completion of the KPAS tool was entirely dependent on the availability of the SED mothers.
The KPAS tool and its scoring procedures are described in detail elsewhere [7]. Briefly, the first four sections of the KPAS tool allow classification of physical activity status. Categorical responses, ranging from 1 for "never" to 5 for "always" regarding frequency of participation in sports/exercise and active living domains, and ranging from 1 for "none" to 5 for "more than 30 hours/week" of employment, created three semi-continuous activity indices (sport/exercise, active living, occupational). For the household/care giving section, a 4-level categorical response ranging from 1 for "none" to 4 for "more than 20 hours per week" reflected the weekly time spent in care giving activities. For the sports/exercise section, an activity score describing the energy cost of physical activities was calculated from the mode, frequency, and duration of reported organized and recreational sports/exercise activities. These were converted into an LTPA score expressed as MET hours (kcal/kg per hour) and summed over all reported sports/exercise activities. MET values calculated from the KPAS tool were then categorized into 3 levels of LTPA used in most surveillance studies (expressed in kcal/kg/day (KKD)): physically inactive < 1.5 KKD; moderately physically active 1.5-3.0 KKD; and physically active ≥ 3.0 KKD [11]. Two total activity scores were created: Total Activity Score I (sum of all activity indices except occupation for all mothers) and Total Activity Score II (sum of all activity indices including occupational index for employed mothers only). The psychosocial section of the KPAS tool grouped responses into perceived barriers (external obstacles, health constraints, lack of motivation, and time constraints), social support and self-efficacy.
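As a rough illustration of this scoring logic, the sketch below converts reported sports/exercise activities into a daily energy-expenditure score and assigns one of the three surveillance categories. The MET values and example activities are assumed for illustration only and are not the actual KPAS compendium values or scoring syntax.

```python
# Illustrative sketch of the LTPA scoring logic described above.
# MET values and example activities are assumed placeholders, not the
# official KPAS compendium; 1 MET-hour is approximately 1 kcal/kg.

ACTIVITY_METS = {"walking": 3.5, "aerobics": 6.0, "swimming": 6.0}  # assumed values

def ltpa_score_kkd(activities):
    """activities: list of (name, sessions_per_week, hours_per_session)."""
    weekly_met_hours = sum(
        ACTIVITY_METS[name] * freq * duration
        for name, freq, duration in activities
    )
    return weekly_met_hours / 7.0  # kcal/kg/day (KKD)

def ltpa_category(kkd):
    if kkd < 1.5:
        return "physically inactive"
    if kkd < 3.0:
        return "moderately physically active"
    return "physically active"

mother = [("walking", 5, 0.5), ("aerobics", 1, 1.0)]  # hypothetical respondent
kkd = ltpa_score_kkd(mother)
print(f"{kkd:.1f} KKD -> {ltpa_category(kkd)}")
```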
Assessment of other covariates
In addition to completing the KPAS tool, mothers were also asked to self-report their current age, weight and height. BMI was calculated as self-reported body weight (kg) divided by height squared (m²). BMI was further categorized into normal weight (< 25.0 kg/m²), overweight (25.0-29.9 kg/m²) and obese (≥ 30.0 kg/m²). Ethnicity of mothers was based on the profile of the community group they were recruited from (i.e. Aboriginal Centre, African Immigrant Centre, Francophone Resource Centre, etc.).
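For completeness, the calculation and categorization above can be expressed as a small function; the example values below are arbitrary.

```python
def bmi_category(weight_kg, height_m):
    """Self-reported BMI (kg/m^2) and the categories used in this analysis."""
    bmi = weight_kg / height_m ** 2
    if bmi < 25.0:
        return bmi, "normal weight"
    if bmi < 30.0:
        return bmi, "overweight"
    return bmi, "obese"

print(bmi_category(72.0, 1.62))  # arbitrary example values
```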
Evaluation of focus group data
All focus group sessions were summarized using the notes and the verbatim transcribed digital recordings of the sessions. Ethnograph text analysis software (Ethnograph 6.0.1.0, Qualis Research, Colorado, USA) was used to organize the data. A codebook of common themes according to social cognitive theory constructs (behavioural, environmental, and personal) was created by the two facilitators and a framework organized through iterative reading of the transcripts. Sub-themes were determined using this framework and the text coded to identify patterns within the themes that showed a high degree of inter-rater agreement before key findings were established. Participants' words were used to emphasize the key themes and sub themes.
Barriers and supports to utilitarian and LTPA were classified into three categories representing the themes (individual, social and environmental) and summarized with examples, using explanatory quotations from focus group participants. Utilitarian physical activities are those activities of daily living, including active transport (e.g. walking or biking to work, school, shopping) and physical activity accumulated on the job and while performing household chores and childcare. LTPA included both organized and unstructured physical activities. Organized physical activities were considered planned and typically occurred within a specific community setting. This included participating in a sports night (e.g. volleyball), taking an activity class (e.g. aqua fitness) or following a formal exercise program (e.g. strength and conditioning). Unstructured LTPA was performed outside of a formal, organized or structured setting. Examples included swimming, playing sports with children and traditional dancing with family and friends.
Statistical analysis
Statistical analyses were performed using SAS for Windows (version 9.2; SAS Institute, Inc., NC). The KPAS tool domain specific activity indices were characterized by means, medians and 25th-75th percentiles. Distributions of demographic variables (age, BMI, leisure time physical activity levels) were characterized by means and standard deviations. The distributions of the four semi-continuous activity indices (sport/exercise, active living, occupational, household/care giving) were tested for normality using the Shapiro-Wilk test. T-tests for comparison of means contrasted sports/exercise levels amongst employed and unemployed mothers. Chi-square tests were used to assess for differences in the proportions of perceived barriers, social support and self-efficacy of employed versus unemployed SED mothers. A level of P < 0.05 was considered statistically significant. A scatter plot of BMI and kcal/kg/day (KKD) was included using cut points, as defined previously, for leisure time physical activity levels (sedentary, active and moderately active) [11].
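The analysis plan maps onto standard statistical routines; a minimal sketch using Python/scipy is shown below for orientation. The data and variable names are simulated placeholders, and the original analyses were run in SAS as stated above.

```python
# Minimal sketch of the analysis plan above using scipy (the original
# analyses were performed in SAS); all data below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sports_employed = rng.normal(3.0, 1.9, size=33)    # sports/exercise index, employed
sports_unemployed = rng.normal(2.8, 1.8, size=26)  # sports/exercise index, unemployed

# Normality of an activity index (Shapiro-Wilk test).
w_stat, p_norm = stats.shapiro(np.concatenate([sports_employed, sports_unemployed]))

# Compare mean sports/exercise levels between employed and unemployed mothers.
t_stat, p_ttest = stats.ttest_ind(sports_employed, sports_unemployed)

# Compare proportions reporting a perceived barrier (chi-square on a 2x2 table).
barrier_table = np.array([[20, 13],   # employed: barrier yes / no
                          [22, 4]])   # unemployed: barrier yes / no
chi2, p_chi2, dof, _ = stats.chi2_contingency(barrier_table)

print(f"Shapiro-Wilk p={p_norm:.3f}; t-test p={p_ttest:.3f}; chi-square p={p_chi2:.3f}")
```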
Quantitative methods -Kaiser physical activity survey (KPAS) tool
Quantitative assessment of physical activity levels of 59 multiethnic SED mothers was completed with the KPAS tool. All 59 mothers completed the first four sections, but only 52 completed all five sections, due to personal time constraints. More than half of all mothers surveyed were employed. More than half of both unemployed and employed SED mothers were overweight or obese, but there were no significant differences in age, BMI, or ethno-cultural grouping between the unemployed and employed SED mothers (Table 1). Variability was greatest for the occupational index (interquartile range 2.1) and smallest for the household/care giving index (interquartile range 0.9). There were no differences in the total activity scores (Total Activity Score I) between employed and unemployed SED mothers when the occupational index was excluded; however, inclusion of the occupational index significantly increased employed mothers' total activity level (Total Activity Score II) (p < 0.0001). Only two SED mothers reported no sports/exercise in the past year. The most popular activity was walking (41%), followed by aerobics (19%) and swimming (8%) (data not presented). The mean energy expenditure in sports/exercise was highly variable and did not differ by employment status (Table 1). There was no apparent relationship between SED mothers' self-reported leisure time sports/exercise energy expenditure and BMI (Figure 1).
Psychosocial determinants of leisure time physical activities (organized and recreational sports/exercise) of SED mothers derived from the KPAS tool were not significantly different between unemployed and employed mothers ( Table 2). Key factors preventing most of the mothers from getting the amount of exercise they felt they wanted or needed were time constraints and a lack of childcare, a lack of intrinsic motivation (self-discipline), interest or enjoyment, a lack of energy, and a lack of sport skills or knowledge ( Table 2). The majority of SED mothers indicated that they lacked the social supports for getting the amount of exercise they wanted or needed. When responses were grouped into the 6 overall categories of external barriers (4 categories), social support (1 category) and self-efficacy (1 category) there were no significant differences in any of the psychosocial determinants of exercise behaviour between employed and unemployed mothers with the exception of lack of social support which was significantly higher for unemployed mothers ( Figure 2).
Qualitative methods -focus groups with SED mothers
Just over one third (n = 22) of the SED mothers who completed the KPAS also chose to participate in a focus group discussion. A total of 35 SED mothers participated in one of 6 focus groups in 3 regions across Canada. Participant profiles of SED mothers reflected the ethnicity of the communities from within which they were recruited (Table 1). In each focus group SED mothers discussed what "physical activity and exercise" meant to them, their personal levels and types of physical activities over their lifespan, their barriers to physical activity, and the supports that have enabled them to be physically active.
Lifetime trends in physical activity
Personal pictograms captured each mother's trends in utilitarian and LTPA throughout her lifespan. These consistently highlighted that LTPA was highest throughout the secondary school years. Utilitarian physical activity began in adolescence when young girls began to share the responsibility for household tasks including care of their siblings. This and occupational activity (starting with their first jobs) were the most important contributors to daily physical activity levels as girls transitioned into young adulthood. Life changes that occurred with motherhood affected young women's physical activity behaviours, decreasing leisure time sports and exercise and increasing household/childcare and activities of daily living. Motherhood was supportive of leisure time physical activities only in those mothers who played with their school aged children.
"Like if you have kids, you have to teach them, they study at home, they need more activity. Playing together, riding bicycles, walking, dancing, yoga, swimming, shopping, walking the malls, traveling." For many mothers however, motherhood resulted in minimal LTPA with their children.
"Suppose I try to do the dancing with music, with my kids, I have two years-old daughter, it is very hard to do the yoga thing with her because she doesn't like yoga -it's just one-on-one because our children, she's just four years-old so she can't play soccer or something like this." Promoting overall health was the most salient personal factor that influenced SED mothers' participation in LTPA. Self-identified health benefits included feeling good, getting fit and having fun, better energy levels, stress reduction, improved self-esteem, weight loss and healthy aging. Mothers stressed the important role of physical activity for weight loss, in particular the loss of that weight gained during pregnancy: "When I was school-age or teenager, I never thought about losing weight, when I got pregnant I gained lots of weight, now I think oh my god how can I lose the weight? That's the thing working all the time in my mind. So I do physical activity more, like walking" "Exercise, whatever you can get, I'm waiting until the weather change so I can walk more. Eating, uh, healthy eating, it's true, but I don't watch my diet. They go together, but most, to me, I don't watch what I eat but I like to go out there and do exercise to lose the weight."
Individual barriers
SED mothers expressed a wide range of barriers to LTPA including a lack of physical skills to be able to participate in sports/exercise, body image issues related to being overweight, a lack of priority for physical activity, as well as guilt for actually taking the time to be physically active (Table 3). Across all focus groups the most common intrapersonal barriers to both utilitarian and LTPA included a lack of motivation and fatigue. Difficulties finding appropriate clothing as an overweight mother (sport bra, swim suits) limited many mothers' participation in sports and other physical activities.
Interpersonal barriers included nonsupportive cultural norms for both utilitarian and LTPA (e.g. cultural acceptance for women to ride a bike either as a mode of transportation or for sport/exercise) and being intimidated by the social environment through negative experiences about body size and shape including weight-related teasing of children of overweight SED mothers (Table 3). A lack of spousal support was a consistent barrier to LTPA, as were family obligations and expectations. Financial barriers to LTPA were consistently mentioned in all focus groups. This included transportation and clothing costs.
Social barriers
Community barriers (program and childcare costs) rendered physical activity opportunities unaffordable to many SED mothers. The lack of culturally appropriate sport and exercise programming for women, a lack of women coaches/leaders, unavailability of recreational facilities (including limited or inconvenient hours of operation for those SED mothers who work shift work), a lack of childcare, and a lack of physical activity resources in languages other than English were also reported. Web-based information was often not updated with currently available recreation and childcare opportunities and was usually available in English only, despite the multilingual characteristics of the population these promotional materials were to serve.
Policy barriers included limited accessibility to subsidies such as recreational facility access cards to those whose family income was below a set amount that defined them as "poor". However the social stigma of proving low income prevented some of these mothers from acquiring these subsidies. Furthermore, some mothers indicated that although their family income was above the cut-off criteria they nevertheless had insufficient financial resources to participate in leisure time programs because childcare costs were still too high for those mothers that had more than one child. These mothers talked about the potential utility of the social status ladder tool that was used to recruit them into this study as preferred accessibility criteria for subsidized programs (Table 3).
Environmental barriers
The location of the recreational facility was a barrier when it was not served by public transportation or was too far removed from the community that it was designed to serve (e.g. numerous bus transfers) ( Table 3). Across all focus groups, accessibility to a safe and secure environment was a key barrier to both utilitarian and LTPA (Table 3). SED mothers identified concern for safety as an environmental barrier to active transport (e.g. walking through unlit areas at night). Mothers were concerned about crime and the lack of safe neighbourhoods and playgrounds for participating in LTPA. Mothers felt that the outdoor parks in downtown neighbourhoods were rundown, unsafe or inaccessible with a stroller. Dirty needles, cigarettes, crack pipes could also be found, further limiting their accessibility for mothers with young children looking for safe places to play together. Cold temperatures and the lack of snow removal further limited access to the parks throughout the winter and was a noted key barrier to utilitarian physical activity. Furthermore, the height of snow banks at intersections was a challenge for mothers walking to and from the bus and grocery stores with children in strollers. Aboriginal and immigrant multicultural mothers indicated that the decrease in utilitarian physical activity in the winter resulted in weight gain, leading to further health conditions including diabetes (Table 3).
Supporting factors for physical activity Individual supports
Perceived health benefits, in particular maintenance of a healthy weight and/or weight loss, were motivators for these women to be physically active in their leisure time and to incorporate daily physical activities such as walking. A support system of family and friends was essential for those SED mothers to consistently maintain a physically active lifestyle that included LTPA. Traditional activities (e.g. traditional dance) were considered an important physical activity for aboriginal mothers and their children to do together. Many mothers identified the influence of "champions" for social networking, outreach and social support (e.g. health advice, information on programs and services, cultural acceptance, sharing child care responsibilities), and information on evaluating physical health as important. These champions took the form of health and recreational professionals as well as peers who led by example (Table 3).
Social and environmental supports
SED mothers indicated that there should be an increase in the availability of subsidies and bursaries for SED mothers to be able to take part in LTPA at recreation centres with childcare provided simultaneously. Mothers also suggested that there should be an increase in the number of hours of affordable childcare within their respective communities. Subsidies, when accessible, supported SED mothers in their pursuit of a more physically active lifestyle in their leisure time. Native and multicultural community recreation centres were identified as secure and trusted environments where physical activities, including traditional activities, could be done. Outreach and networking were important for delivery of structured physical activity programming information within aboriginal and multicultural communities. In addition, the warmer summer weather was identified by many mothers as an environmental support of both utilitarian and LTPA.

Table 3. Major themes and sub-themes of barriers to and supports of physical activity, with representative quotations from focus group discussions with multiethnic SED mothers

Individual factors
Barriers:
- Lack of energy and motivation: "...All your energy (goes) to the kids, only them.."; "...free time from kids; we can read something, read some books, listen to music or watch movies..."
- Body image: "Once I became a teenager I had cramps every time...because I didn't want to mess up my hair, I didn't want to change in front of the girls."; "Here I come and I'm just in big sweater and pants and then I don't feel comfortable next time I go to the group right because I feel like I stick out."
- Physical skills: "some of us don't know how to swim so we let the kids go in the pool and then watch while they're swimming."
- Priority: "Someone has to take initiative so we can get together and discuss how to improve our daily life"
- Guilt: "..I tell the kids I'm going to lose weight (by doing exercise), I never lose weight; they (children) told you, already you are mom, why you going to lose weight? You're not becoming a good mom anymore; you just think about yourself, you're selfish."
Supports:
- Health benefits: "to lose weight is a motivator" (to be physically active); "Walking is a practical sport because we can do lots with it."; "Physical activity helps me to sleep better"; "physical activity helps me...physically and mentally to be happy..."; "Right now I'm a diabetic, my sugars are very high, I'm trying to get my sugars down and my husband is trying to get me back to the gym..."

Social factors
Barriers:
- Family expectations: "Yeah, back home we always provide, as a mom, give, give, give. Expectations is too high, expectations as a mom is very high, so that's how they expect, so if you say I want to go to school, I want to exercise, oh my God there is an issue."; "In our culture, when you become a mom, your life is ended".
- Weight-related teasing: "As the mom ok sometimes you feel, ok your mom is fat, because of you the kids may bully, your mom is chubby, look at your mom. There was a rap song saying: look at your mother, she's fat...the child is six, seven, eight years old, he can get defensive about when somebody sees it's his mom, like for example I have a seven year-old who always says mommy don't come to the school because some of my friends says that you are fat."
- Culture/religion: "...I know back home all my brothers know how to bike, I never learned. Even if we wear pants we don't even know how to ride bikes, because this was wrong"; "In my country, when a girl like me wanted to do sports and she did it, one man said: since when did you lose your 'Indian-ness'?"
- Lack of financial resources: ".. some people they don't have a bus pass; they can't afford to buy it. If your budget is low, how're you going to do that?"; "Clothing, yeah clothing, it's expensive; because we don't wear the swimming suits..."
- Lack of spousal support: "Husbands don't know that kind of information that we need sometimes help, mental help, sometimes I feel I have two kids, oh my god I need some time for me, at least thirty minutes, I want to do something for me"; "Like my husband, he does anything he wants, he has plenty of time after work, he has groups where he exercise, whereas me I'm stuck with the kids all the time. No obligations for the man to look after the kids. So that we do need a lot of support as a mom and the children."
Supports:
- Family activities: "Like if you have kids, you have to teach them, they study at home, they need more activity. Playing together, riding bicycles, walking, dancing, yoga, swimming; shopping, walking the malls, traveling."; "It's something to do with our kids, to do it together, like with the baseball with the kids, it's more fun"
- Traditional activities: "Like traditional dancing, I'd like to learn, I've got my daughter, I'd like to teach her, and I've got me, get myself going again."
- Family support: "I go to my parents' cottage. The kids are at the beach while I do other activities."
- Friends' support: "If you have friends that support you, that want to do it with you, that helps you to do those physical activities"

Environmental factors
Barriers:
- Financial costs: "...when we want to register our kids or even me and my husband, a family you know, it's too expensive to go to the community centre..."
- Stigma with low-income subsidies: "There are some but you have to be below the poverty level to have access."
- Lack of transportation: "It depends also sometimes transportation; you need bus...if it's so far. Yeah, too far or not on a bus route, or four buses/too many buses to get there"
- Unsafe environments: ".. even with the community around here like, the areas are so, you don't trust to walk at night...a lot of the places I wouldn't go out by myself, down here it's just kind of - you don't know"
- Poor climate: "The summer, soccer you can do but the winter you stay at home"; "..sometimes, especially in storm (winter), I stay 2, 3, 4 days at home, I can't go outside..."
- Lack of multicultural resources: "the problem is that I do not speak English, only French and this makes communication very difficult...."
- Lack of childcare: "Because you can participate more when you know that you have child care"; "...they used to have child care and then had exercise on the site and they stopped that and that was a big loss for us so most of the mothers stopped going there."
Supports:
- Professional support: "For me, you need a program, a person that gives you support but support with experience, professional support"
- Availability of subsidies: "The last minute club, it's good, if nobody registers for that particular course then we get it for free..."; "I know YMCA have good subsidy for low income people."
- Presence of cultural community centres: "Everybody; they make you feel welcome, you don't get that sense of clique, intruder feeling when you come in and you're new; everybody's welcome; no judgment.."; "I need the socialization; when we do things here, we play baseball, we go swimming, we do all this stuff..."
- Outreach and networking: "And then if I come across other native people, the single moms, or they don't even have to be single, I'll say oh my god, down at the centre we have the blah blah blah and that's where word of mouth comes in."; "A sense of community, trust...our nativeness...a sense of security...when I say my band, no one asks me what instrument I play..."
Five focus groups were conducted with a minimum of three female HRPs of varying backgrounds and levels of experience in 3 regions of Canada (Table 4). In general, HRPs were confident advising SED mothers about the health and fitness benefits of physical activity. The HRPs were also confident in their abilities to assist SED mothers in overcoming barriers to LTPA and establishing a regular leisure time physical activity program. They also considered themselves to be effective promoters of SED mothers' participation in physical activities (Table 5). The health benefits of physical activity for SED mothers were considered a priority by HRPs. They agreed that 30 minutes of brisk walking most days of the week would suffice for improving health. However, they also considered moderate to vigorous exercise of longer duration as essential for improving health (Table 5).
Perceived barriers of HRPs Individual barriers
Generally, HRPs perceived that SED mothers accrued sufficient daily utilitarian physical activities but that few mothers had LTPA and/or regular exercise programs due to family demands and a lack of supports (i.e. childcare or a women's support group).
"I feel most of them have more activities, they are walking, they don't have a car, or taking the stuff they have to get around to get services, the kind of stuff they do, they probably are on the move more." ". . .coming to work on Wednesday morning, seeing a young mom pushing a double stroller with 2 kids in it in all that snow storm, the street and sidewalks were horrible, it probably took her10 minutes to walk up to the end of the street where she was heading and I thought that this is really physical activity." Table 3 Major themes and sub-themes of barriers to and supports of physical activity with representative quotations arising from focus group discussions with multiethnic SED mothers (Continued) Lack of transportation "It depends also sometimes transportation; you need bus. . .if it' s so far. Yeah, too far or not on a bus route, or four buses/too many buses to get there" Unsafe environments ".. even with the community around here like, the areas are so, you don't trust to walk at night. . .a lot of the places I wouldn't go out by myself, down here it' s just kind of -you don't know" Poor climate "The summer, soccer you can do but the winter you stay at home" "..sometimes, especially in storm (winter), I stay 2,3,4 days at home, I can't go outside. . ..",
Lack of Multicultural Resources
"the problem is that I do not speak English, only French and this makes communication very difficult...." Lack of Childcare "Because you can participate more when you know that you have child care" ". . . they used to have child care and then had exercise on the site and they stopped that and that was a big loss for us so most of the mothers stopped going there." Supports Professional Support "For me, you need a program, a person that gives you support but support with experience, professional support" Availability of subsidies "The last minute club, its good, if nobody registers for that particular course then we get it for free. . ." "I know YMCA have good subsidy for low income people." Presence of Cultural Community Centres "Everybody; they make you feel welcome, you don't get that sense of clique, intruder feeling when you come in and you're new; everybody' s welcome; no judgment.." "I need the socialization; when we do things here, we play baseball, we go swimming, we do all this stuff . . ." Outreach and Networking "And then if I come across other native people, the single moms, or they don't even have to be single, I'll say oh my god, down at the centre we have the blah blah blah and that' s where word of mouth comes in." "A sense of community, trust. . .our nativeness. . .a sense of security. . . when I say my band, no one asks me what instrument I play. . ."
Social and environmental barriers
HRPs discussed the misconceptions and stereotypes surrounding "socio-economically disadvantaged" and the lack of physical activity program leaders who understood the needs of SED mothers who have cultural, religious and ethnic needs that are quite different from the societal norms (Table 6). Across all focus groups, HRPs indicated that there was insufficient funding for physical activity programming for SED mothers and their families, and a lack of partnership between health and recreation professionals to create physical activities that are coordinated, connected, responsive, effective and sustainable ( Table 6). The public health nurses who developed and implemented physical activity strategies targeted to the SED population of women in their community expressed their outrage with the municipal recreation planners whose bottom line mandate is to offer programs that cover all their costs, effectively canceling out programs for those that cannot afford the registration fees. A municipal recreation planner indicated the need for a higher-level directive from Parks and Recreation with respect to programming for the SED community: "..somebody at the top needs to realize because I am driven by money, that my goal is not to make a huge profit, I can run a program but I at least need to recover the instructor's cost." The cold weather environment throughout the winter months was also perceived to be a barrier to physical activity for those new immigrants coming from warmer countries.
Proposed solutions of HRPs
Education of multiethnic SED mothers and their spouses about the importance of LTPA for women's health, increased availability of culturally competent, bilingual and multilingual staff in francophone and multicultural resource centres, and improved partnerships between health and recreation professionals were considered integral components for increasing physical activity levels of SED mothers. Community assessments by HRPs could determine the physical activity needs of SED mothers, their barriers and limitations, and how to best integrate appropriate activities for different cultures, ages and life situations.

Table 6. Barriers perceived by HRPs and their proposed solutions

- Not enough program leaders who understand SED mothers and their needs: develop mentorship programs for participants in Women's Only physical activity/exercise programs (e.g. Women Alive) to become an instructor and teach the program; offer on-going support to the trainees; invite SED mothers to participate in physical activity program development.
- Insufficient financial resources for physical activity programs for SED mothers: generate more funding and foster a greater understanding from municipal parks and recreation groups that physical activity programs for SED mothers will profit the community by being more welcoming to everyone.
- Lack of partnership between health and recreation professionals: individual organizations need to create partner networks to work together to create physical activities that are coordinated, connected, responsive, effective and sustainable. This requires collaboration and coordination between those who have the recreation facilities and those who have the communication and capacity-building in the community. Public health can pay for instructors and supply the in-person support needed for the group. Offer programs at times when there is childcare available and at a convenient time for the mothers, rather than when the recreation centre is not busy or it is not the most profitable time.
- SED mothers feel intimidated by some leaders: set the goal to have fun; build a sense of belonging and security; hire welcoming and nonjudgmental leaders; hire leaders who understand SED mothers, i.e. leaders who have had the experience in your own community, which will result in an immediate bond and better understanding of the immediate issues SED mothers face.
- Not enough mental health workers: more mental health workers to help SED mothers overcome some of the mental health barriers post-partum mothers experience.
- Lack of Francophone physical activity resources in Alberta: integrate Francophone programs into recreation centres; hire bilingual professionals and recruit bilingual volunteers to contribute to physical activity programs; a political request for bilingual services for all Francophone Canadians (not just immigrant Francophones) within the Francophone community in Alberta, potentially using Centre Accueil Nouveaux Arrivants Francophones (CANAF).
Discussion
This study provided insight into the individual, social and environmental factors that influence both utilitarian and leisure time physical activity (LTPA) of multiethnic socioeconomically disadvantaged (SED) mothers in three urban regions of Canada. Using a mixed methods approach that included focus groups, assessment of psychosocial correlates of physical activity, and quantitative assessment of domain specific physical activity levels, our findings highlight, as others have [19][20][21], the importance of considering barriers and supports that influence both utilitarian and LTPA of SED mothers. Our major findings were as follows: 1. (Individual) Achieving a healthy body weight was an important individual factor influencing LTPA habits of SED mothers but as reported by others [7,22,23], domestic activities (household chores and childcare) were significant contributors to daily physical activity levels of SED mothers, leaving them with little time or energy for LTPA. These SED mothers considered LTPA to be an important tool for weight loss and/or maintenance of a healthy weight as others have previously suggested [24,25]. However, our findings suggested that these mothers were unable to maintain healthy weights with their reported levels of physical activity. A second contributing factor which has also been described by others [26,27] was the inability of these SED mothers to set aside time for regular LTPA when they are dealing with home, work or other life stressors.
2. (Social) The presence of a social support network, including spouse, extended family, community champions and HRPs, was a major factor influencing LTPA behaviours of SED mothers, which has also been reported in the US [28]. Similar to other studies [29,30], structured physical activity programs and group activities organized by professionals and/or done in partnerships with other groups within their communities improved the self-efficacy of SED mothers to plan time for physical activity within their days and become more physically active. 3. (Environmental) A key environmental factor limiting physical activity levels and behaviours of these SED mothers was the lack of available, affordable, and accessible LTPA programs with childcare that fit into the mothers' schedules and that responded to their cultural and social needs. Finally, the summer climate promoted both utilitarian and LTPA whereas the winter climate was a major barrier to both.
SED mothers considered LTPA to be an important weight loss tool, similar to findings of the 2005 Canadian Community Health Survey, Cycle 3.1, which showed that those who were active in their leisure time were more likely to have lower levels of overweight and obesity [31]. More than half of SED mothers completing the KPAS tool were overweight or obese and there was no apparent Below are some statements about physical activity. For each statement, please mark your level of agreement.
To improve your health it is essential to do moderate to vigorous exercise for at least 20 minutes, 3 times a week. relationship between LTPA energy expenditure and BMI, suggesting that LTPA programs were not meeting mothers' needs for management of healthy weights. SED mothers' reported mean LTPA energy expenditures of 3.0 (±1.9) kcal/kg/day (KKD), which is a threshold, of physical activity equivalent to walking for ≤ 60 minutes/ day. Although currently considered as "active", the recently recommended ≥ 60 minutes of moderate to vigorous intensity exercise per day [32] may be needed to manage body weight and attenuate age-related weight gain in multi-ethnic SED mothers. Thus LTPA programs may need to focus on increasing opportunities for year round moderate to vigorous physical activity for multiethnic SED mothers. A sense of trust in the community where physical activity programs/efforts were offered was an important motivation for SED mothers to be physically active. Aboriginal women attending private fitness centres did not feel a sense of community trust and support. This made it somewhat uncomfortable and daunting for them to commit to a regular program of physical activity, despite significant health issues that they felt would be better controlled with regular LTPA. Aboriginal mothers felt that the existence of a native friendship centre with structured physical activities for women and their families improved their involvement in LTPA. Being together as a band brought a greater sense of security. The selection of culturally specific champions from among SED clientele with expertise in traditional dance was noted as an important motivator for increasing physical activity levels amongst the SED mothers. SED mothers also noted that LTPA programs involving these "champions" were important for social networking, social support, and for fostering the development of social relationships and community cohesion. As with other research in culturally diverse women [33,34], an absence or lack of women-only exercise and sport programs were described as important deterrents for mothers whose cultures or religions necessitated women only programs. Not enough program leaders who understand SED mothers and their needs Develop mentorship programs for participants in Women's Only physical activity/exercise programs (e.g. Women Alive) to become an instructor and teach the program.
(Table: program-level barriers to LTPA identified by HRPs and SED mothers, with the recommendations offered for each.)
Barrier: Not enough program leaders who understand SED mothers and their needs. Recommendations: Develop mentorship programs for participants in Women's Only physical activity/exercise programs (e.g. Women Alive) to become instructors and teach the program; offer on-going support to the trainees; invite SED mothers to participate in physical activity program development.
Barrier: Insufficient financial resources for physical activity programs for SED mothers. Recommendation: Generate more funding and foster a greater understanding from municipal parks and recreational groups that having physical activity programs for SED mothers will profit the community by being more welcoming to everyone.
Barrier: Lack of partnership between health and recreation professionals. Recommendations: Individual organizations need to create partner networks to work together to create physical activities that are coordinated, connected, responsive, effective and sustainable; this requires collaboration and coordination between those who have the recreation facilities and those who have the communication and capacity-building roles in the community. Public health can pay for instructors and supply the in-person support needed for the group. Offer programs at times when childcare is available and that are convenient for the mothers, rather than only when the recreation centre is not busy or at its least profitable times.
Barrier: SED mothers feel intimidated by some leaders. Recommendations: Set the goal to have fun; build a sense of belonging and security; hire welcoming and non-judgmental leaders; hire leaders who understand SED mothers, since having leaders with experience in the community results in an immediate bond and a better understanding of the issues SED mothers face.
Barrier: Not enough mental health workers. Recommendation: More mental health workers to help SED mothers overcome some of the mental health barriers post-partum mothers experience.
Barrier: Lack of Francophone physical activity resources in Alberta. Recommendations: Integrate Francophone programs into recreation centres; hire bilingual professionals and recruit bilingual volunteers to contribute to physical activity programs; request bilingual services for all Francophone Canadians (not just immigrant Francophones) within the Francophone community in Alberta, potentially using the Centre Accueil Nouveaux Arrivants Francophones (CANAF).

Environmental barriers limited the accessibility of SED mothers to LTPA opportunities within their own communities. HRPs identified the oppositional clash in values that pits the LTPA needs and interests of the SED mothers against the demands for profit making in the recreation sector. SED mothers indicated that LTPA program subsidies designed to include financially disadvantaged women and their families further stigmatized these mothers with an onerous and invasive process of proving poverty in order to qualify. Furthermore, subsidies did not necessarily provide a discount on childcare, effectively excluding low-income SED mothers from participating in programs when they could not bring their children. A recent report by the Canadian Fitness and Lifestyle Research Institute showed that, while the percentage of municipalities that offer programming and scheduling to low-income groups has increased dramatically in Canada, the number that offer fee discounts or subsidies for low-income adults has remained unchanged [35].
This study was limited by its cross-sectional design, which meant that causality could not be attributed. There was also the possibility of selection bias as a result of the purposive sampling strategy used. The small sample size was driven by the small number of SED communities within Canada where HRPs were conducting targeted physical activity programming for SED mothers. A key strength of this study was the use of mixed methods to examine the barriers to and supports for physical activity that SED mothers experience, which could inform future program development efforts.
Implications
The 2010 Toronto Charter for Physical Activity made a call to action to create sustainable opportunities for physically active lifestyles for all Canadians [12]. To do this, organizational programmers and community policy makers must address the social, cultural and physical environment and policies promoting physical activity within their communities, as has been concluded by researchers in the US [36]. An integral component is the involvement of SED mothers of young children in the discussions and development of local physical activity promotion strategies, including increasing partner support, social advocacy, and capacity building, as this has already shown some success in Canada [22] and the US [27].
With the current funding environment it is imperative that HRPs make efficient use of existing resources to maximize participation of SED mothers in LTPA programming within their communities. As part of this project, tools were developed to assist HRPs to assess SED mothers' physical activity needs and to develop and plan for LTPA programs that address the issues of availability, affordability and accessibility of LTPA programs while creating positive physical opportunities for mothers to lead healthy lives for themselves and their children. These tools are available for download in French and English on the CAAWS website "Mothers in Motion" http://www.caaws.ca/mothersinmotion/e/lowstatus/tools.cfm.
Conclusions
This study identified lack of time, childcare, and social and financial support as key barriers to the LTPA of SED mothers. Concerns for safety, nonsupportive cultural and social norms and the environment (seasonal climate) were identified as barriers to both utilitarian activity and LTPA. The presence of social support networks, including spouse, extended family, community champions and/or HRPs, was a key support for SED mothers to incorporate LTPA within their lives. The varying LTPA levels amongst these multi-ethnic SED mothers and the rates of overweight and obesity suggest that current LTPA programs are likely insufficient to maintain healthy body weights. Opportunities for year-round moderate to vigorous physical activity programs that have the capacity to meet the social, cultural and health needs of multi-ethnic SED mothers and their families may be needed to manage and achieve healthy body weights. Integral to this is the involvement of SED mothers in the discussions and development of local physical activity promotion strategies. Furthermore, integration of culturally competent HRPs is an important component for this community development approach. Accessible, available and affordable LTPA opportunities for SED mothers that are culturally responsive give SED mothers the potential to achieve their self-identified health benefits of LTPA. In the future, with more communities having the tools to assess their communities for development of LTPA programming, intervention studies can be better designed to meet SED mothers' needs.
Acknowledgements
The authors thank the study participants and the CAAWS project Advisory Committee, and acknowledge the contributions of the CAAWS project manager, Stephanie Legault Parker, and the project statistician, Steve Doucette.
Exosomal and Plasma Non-Coding RNA Signature Associated with Urinary Albumin Excretion in Hypertension
Non-coding RNA (ncRNA), released into circulation or packaged into exosomes, plays important roles in many biological processes in the kidney. The purpose of the present study is to identify a common ncRNA signature associated with early renal damage and its related molecular pathways. Three individual libraries (plasma and urinary exosomes, and total plasma) were prepared from each hypertensive patient (with or without albuminuria) for ncRNA sequencing analysis. Next, an RNA-based transcriptional regulatory network was constructed. The three RNA biotypes with the greatest number of differentially expressed transcripts were long non-coding RNA (lncRNA), microRNA (miRNA) and piwi-interacting RNA (piRNA). We identified a common 24-ncRNA molecular signature related to hypertension-associated urinary albumin excretion, of which lncRNAs were the most representative. In addition, the transcriptional regulatory network showed that five lncRNAs (LINC02614, BAALC-AS1, FAM230B, LOC100505824 and LINC01484) and miR-301a-3p play a significant role in network organization and target critical pathways regulating filtration barrier integrity and tubule reabsorption. Our study found an ncRNA profile associated with albuminuria, independent of biofluid origin (urine or plasma, circulating or in exosomes), that identifies a handful of potential targets which may be utilized to study mechanisms of albuminuria and cardiovascular damage.
Introduction
Hypertension is a multifactorial disease that affects cardiovascular and renal systems [1,2], and persistently increased urinary albumin excretion (UAE) is a marker of cardiovascular risk progression and renal impairment [3,4]. The mechanisms leading to progression of renal disease and albuminuria are incompletely understood.
Non-coding RNA (ncRNA) species comprise more than 90% of all transcripts and have attracted increasing attention in a broad range of biological processes over the last decade [5][6][7]. NcRNAs can be divided into three categories based on their length: ncRNAs longer than 200 nucleotides (nt), including ribosomal RNA (rRNA) and long non-coding RNA (lncRNA); ncRNAs between 40 nt and 200 nt, such as transfer RNA (tRNA), small nucleolar RNA (snoRNA) and small nuclear RNA (snRNA); and ncRNAs shorter than 40 nt, such as microRNA (miRNA), piwi-interacting RNA (piRNA) and small interfering RNA (siRNA) [8,9]. NcRNA expression is tissue- and cell-type-specific under physiological conditions and plays an important role in many biological processes by regulating gene expression at epigenetic, transcriptional and post-transcriptional levels [10][11][12].
Among these macromolecules, the role of miRNAs in the kidney has been studied extensively, and preliminary evidence indicates that they may regulate progression of glomerular and tubular diseases [13][14][15][16]. LncRNAs have also attracted attention over the last few years in both glomerular and tubulointerstitial kidney diseases, such as diabetic nephropathy [17]. Competitive binding between lncRNAs, target mRNAs and miRNAs is thought to regulate gene expression, forming a wide RNA-based transcriptional regulatory network (lncRNA-miRNA-mRNA) in a broad group of diseases [18][19][20]. Another class of small non-coding regulatory RNAs is piRNAs, a recently discovered class whose biogenesis, relevance to health and disease, and overall gene regulatory mechanisms remain largely elusive [21]. Previous evidence suggests that piRNAs serve as upstream mediators of epigenetic control and may also be involved in transcriptional gene silencing [22,23].
NcRNAs can be released into circulation bound to RNA-binding proteins or packaged into extracellular vesicles (EVs), such as exosomes, and may function as paracrine effectors in the crosstalk between different cell types in the kidney [24,25]. Recent studies have demonstrated the role of exosomal ncRNAs as biomarkers in urological malignancies, chronic kidney disease, cancer or psoriasis [26][27][28]. Our group has recently found that urinary-and plasma-derived exosomes reveal a distinct miRNA signature associated with albuminuria in hypertension, reflecting changes taking place in the kidney [29]. Nevertheless, a study that analyzes the global ncRNA profile associated with early renal damage in hypertension remains largely unknown.
Our aim was to identify a combined signature of various ncRNA biotypes in liquid biopsy, independent of biofluid origin, in urine, plasma or exosomes from hypertensive patients with albuminuria, using high-throughput sequencing analysis, which may more closely reflect the overall biology of underlying early damage than use of single markers. Finally, we constructed an lncRNA-miRNA-mRNA regulatory network with the ncRNA signature combining bioinformatics and correlation analyses associated with development of UAE in hypertension.
Characteristics of Study Patients
The study population included 48 essential hypertensive subjects, 22 subjects with increased UAE and 26 normoalbuminurics (non-UAE). General patient characteristics and antihypertensive medication are shown in Table 1.
Proportions of RNA Types in Each Biological Fraction and Patient Groups
Small RNA-sequencing single-end technology was used to detect RNA types in the three different biofluid fractions (total plasma, urinary and plasma-derived exosomes) from hypertensive patients with or without increased UAE. When we analyzed all mapped reads as a whole, we observed that the most frequent RNA biotypes in proportion were piRNA with 38%, miRNA with 32% and miscellaneous RNA (miscRNA) with 16%. This last group included Y-RNA and Vault-RNA, where Y-RNA represented 99% of mapped reads. LncRNA represented 63% of other mapped read groups (Supplemental Material, Figure S1A). In addition, when all genes included in the analysis were examined and separated by RNA type, those encoding small fragments of RNA showed the highest variety of genes (a total of 10,603), followed by lncRNA (718), piRNA (293) and miRNA (159) ( Figure S1B). We next sought to analyze the proportion of RNA biotypes present in each of the three biological biofluids, finding miRNA to be the predominant biotype with mapped reads in urinary exosome fraction, representing approximately 65% of total mapped reads in both hypertensive patient types (with or without UAE). Nonetheless, piRNAs were also the most representative RNA type in both patient groups, with 40% in plasma exosomes and 51% in plasma ( Figure 1). Of the remaining RNA types identified, mRNA, rRNA, miscRNA and others (where lncRNA represented between 75% to 90% of reads) showed similar percentages in the three biofluids, regardless of the presence of UAE. These data indicate that biofluid origin, mainly if stemming from urine or plasma samples, influences RNA type distribution. In addition, non-significant differences were observed in all biofluids when comparing patient groups with and without UAE.
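A rough sketch of how such biotype proportions can be tabulated from the mapped-read counts is shown below. The count matrix, the per-gene biotype annotation and the sample-to-fraction mapping are assumed inputs; this is illustrative rather than the study's exact code.

```r
# Sketch: percentage of mapped reads per RNA biotype within each biofluid fraction.
# 'counts' (genes x samples), 'biotype' (one label per gene) and 'fraction'
# (one label per sample) are assumed inputs.
reads_by_type  <- rowsum(counts, group = biotype)             # total reads per biotype per sample
totals         <- colSums(reads_by_type)
pct_by_sample  <- sweep(reads_by_type, 2, totals, "/") * 100  # % of mapped reads in each sample
pct_by_fraction <- t(apply(pct_by_sample, 1, function(x) tapply(x, fraction, mean)))
round(pct_by_fraction, 1)  # e.g., miRNA dominating urinary exosomes, piRNA in plasma fractions
```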
Differentially Expressed RNAs in Microalbuminuria in Each Biological Fraction
As shown in the volcano plot ( Figure 2), analysis of RNA subtypes in all patients for each biofluid identified more significant RNAs differentially expressed (DE) (FDR < 0.05) in exosome fraction than in plasma, regardless of whether exosomes came from urine or plasma. Significant RNA showed higher fold change (FC) and FDR in urinary and plasma exosomes than in those circulating in plasma (Figure 2A,B vs. Figure 2C). Analyzing the number of DE transcripts (considering p-value < 0.05), 4336 were found in urinary exosomes (U-Exo), 4645 in plasma exosomes (p-Exo) and 1415 in plasma samples (Supplemental Material, Figure S2A). In the three biofluids, the protein-coding genes correspond to approximately 85% of the total transcripts ( Figure S2B), followed by 7% lncRNA and 2% miRNA. Interestingly, transcript type distributions obtained among the biological compartments showed very limited overlapping between exosomal and plasma fractions, with only 199 out of 10,396 common to all three groups (Supplemental Material, Figure S2A), of which 88% were protein-coding genes (175 transcripts).
(Figure 2 caption: Volcano plots depict significantly altered RNAs found in (A) plasma exosomes, (B) urinary exosomes and (C) plasma. Each dot represents an RNA: black, non-significant FDR (>0.05) and |log2 fold-change| < 2; brown, |log2 fold-change| ≥ 2; blue, significant FDR; green, significant FDR and |log2 fold-change| ≥ 2. The dotted threshold lines mark |log2 fold-change| = 2 and FDR = 0.05.)
Differentially Expressed Non-Coding RNAs by Origin
Analyzing DE ncRNA among hypertensive patients with or without UAE, we observed that the three RNA biotypes with the greatest number of statistically significant transcripts were lncRNA (52% in exosome fractions and 44% in plasma), miRNA (20% in U-Exo, 26% in P-Exo and 13% in plasma) and piRNA (15% in U-Exo and plasma, 10% in P-Exo) ( Figure 3A). Next, the Venn diagram obtained from among the biological compartments showed very limited overlapping between exosomal and plasma fractions, with 24 of these 835 DE ncRNAs common to all three groups. Ten of them were lncRNA (42%), six pseudogenes (25%), four snoRNAs (17%), two miRNAs (8%) and two piRNAs (8%). These 24 ncRNAs represent the molecular signature related with hypertension-associated UAE, independent of biofluid (U-Exo, P-Exo or circulating in plasma) ( Figure 3B).
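As a minimal sketch of how a biofluid-independent signature like this can be extracted, the snippet below intersects three differentially expressed ncRNA lists; the identifiers shown are placeholders rather than the study's actual results.

```r
# Minimal sketch: common signature = intersection of DE ncRNA lists from the three biofluids.
# The identifiers below are placeholders for illustration, not the real DE results.
de_u_exo  <- c("LINC02614", "BAALC-AS1", "hsa-miR-301a-3p", "SNORD116")
de_p_exo  <- c("LINC02614", "BAALC-AS1", "hsa-miR-301a-3p", "FAM230B")
de_plasma <- c("LINC02614", "BAALC-AS1", "hsa-miR-301a-3p", "LINC01484")
common_signature <- Reduce(intersect, list(de_u_exo, de_p_exo, de_plasma))
print(common_signature)  # ncRNAs differentially expressed in all three fractions
```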
Diverging bar charts showed the fold-change expression of the 24 common ncRNAs in each biological fraction ( Figure 3C). The majority of lncRNAs were downregulated in hypertensive patients with UAE in all three biofluids. In both urine and plasma exosome fractions, hsa-piR-32157 was upregulated, whereas it was downregulated in plasma samples; the other piRNA, hsa-piR-33056, was upregulated in P-Exo and plasma but downregulated in U-Exo from patients with UAE. Likewise, both miRNAs (miR-208a and miR-301a) were significantly augmented in U-Exo, despite being highly downregulated in the plasma fraction. All four snoRNAs were downregulated in the U-Exo fraction but upregulated in plasma. Finally, the vast majority of pseudogenes were downregulated in both exosome and plasma fractions from hypertensive patients with UAE ( Figure 3C).
Common Differentially Expressed lncRNA-miRNA-mRNA Network from Hypertensive Patients with Urinary Albumin Excretion
The ten DE lncRNAs and two miRNAs established in the molecular signature were selected and potential predicted target mRNAs were identified, creating the common DE lncRNA-miRNA-mRNA network ( Figure 4). Hub nodes, characterized by their high degree of connectivity to other nodes in the network, can be used to assess the significance of genes in the network. In the present study, five lncRNAs (LINC02614, BAALC-AS1, FAM230B, LOC100505824 and LINC01484) and one miRNA (miR-301a-3p) were observed to be topological hub nodes whose betweenness, network degree and closeness centrality were significantly higher in comparison with other common RNAs (Table 2). In addition, the Over-Representation Analysis using GO annotation showed that clathrin heavy chain binding, store-operated calcium channel activity, mitogen-activated protein kinase (MAPK) binding and extracellular matrix structural constituent were among other pathways that could play an important role in development of albuminuria, but more evidence is necessary. The Over-Representation Analysis further reported that among the most significant pathways were IL-17 signaling, sphingolipid signaling and metabolism, type II diabetes mellitus, endocytosis and vascular endothelial growth factor (VEGF) signaling ( Figure 4B,C). To further clarify the biological roles of the common short fragments of protein-coding DE RNA in the three biofluids from hypertensive patients with UAE, we performed the gene set Over-Representation Analysis (ORA) with the following findings: 144 common transcripts in the three biofluids and four networks related to pathogenesis of hypertension and presence of UAE were identified, showing a central node with more than six edges.
The first network identified is associated with transforming growth factor (TGF-β) signaling and includes the SMAD3, WNT7B, BMP6 and PDGFRB proteins. The second important network is associated with kidney urinary concentration mechanisms, such as kidney water reabsorption, salt reabsorption and the K/Cl cotransporter (YWHAQ, STK24, SLC12P6, CLCNKA and CLCNKB proteins). The third network is linked to modulation of MAPK signaling, including the YWHAQ, PRKAG3, PRKAB2 and TRIB2 proteins. Finally, the last network mediates membrane trafficking, with the YWHAQ, GRIP1, RAB3IL1, HOOK1 and CYTH1 proteins involved ( Figure 5). In addition, the DE mRNA-related GO analysis showed that voltage-gated chloride and anion channel activity, glucocorticoid receptor binding, extracellular matrix (ECM) structural constituent and others could play an important role in development of albuminuria. The pathway analysis further revealed that 12 unique pathways were enriched, including factor-regulated calcium reabsorption, complement and coagulation cascades, tight junction and vasopressin-regulated water reabsorption. Finally, we generated a new interaction network joining the lncRNA-miRNA-mRNA target interaction network ( Figure 4) with the protein-protein interaction network of common DE mRNA ( Figure 5). The new network showed numerous interactions between the nodes of the two sub-networks, mainly at secondary level ( Figure S3). Among the 19 DE genes with node degree >10, we found nine of the ten common lncRNAs, one common miRNA and ten protein-coding genes (Table S1).
Discussion
In the present study, we identified a 24-ncRNA signature associated with albuminuria in hypertension, independent of biofluid, which is common to urinary exosomes, plasma exosomes and circulating in plasma, containing predominantly lncRNA. We also constructed a transcriptional regulatory network (common lncRNA-miRNA-mRNA targets) and predicted the target genes. We found five lncRNAs (LINC02614, BAALC-AS1, FAM230B, LOC100505824 and LINC01484) and one miRNA (miR-301a-3p) with significantly higher node degree and topological network values compared with the other nodes, implying that these hub RNAs are essential in network organization and are potential key regulators controlling UAE development in hypertension-related RNA network. In addition, we used Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis to assess the enriched biological functions regulated by the ncRNA signature, most of them implicated in mechanisms inducing renal damage.
A key feature of the present study was the strategy used. The vast majority of previous studies have identified a unique ncRNA biotype, mainly miRNA, associated with hypertension-related renal damage in a specific biological fraction [30,31], whereas our multicompartment approach aimed to provide a common global ncRNA profile associated with albuminuria, independent of sample origin. By combining the differentially expressed ncRNAs found in total plasma, urinary and plasma-derived exosomes, we sought to find an ncRNA signature representative of albuminuria development in hypertension that could be identified in clinical practice, regardless of the type of patient sample available (urine, plasma or exosomes). We observed that the behavior of ncRNA levels differed between those circulating in plasma and those packaged in exosomes, several of them being downregulated in plasma and upregulated in exosomes and vice-versa. Furthermore, common ncRNA level changes were dependent on urine or plasma origin. Previous studies found selective sorting of specific miRNAs into exosomes compared with the whole circulating miRNA pool in specific biological fractions [32,33]. Our data suggest that an increase in exosomal ncRNA expression levels to the detriment of circulating levels could be due to a controlled, specific process related to a pathological condition. These findings identify a common ncRNA profile in all three fractions, which facilitates its identification in clinical practice independent of sample origin; biofluid origin then needs to be taken into account only when interpreting the direction of albuminuria-related expression changes (up- or down-regulation).
Analyzing DE ncRNA in hypertensive patients with or without UAE, the three most representative ncRNA biotypes were lncRNA, miRNA and piRNA in all three biofluids. Considerable research has been conducted over recent years into the molecular mechanisms of hypertension-associated renal pathology; however, most previous studies have focused mainly on protein-coding genes or miRNAs [30,31,[34][35][36]. For example, our group revealed an exosomal miRNA signature associated with albuminuria in hypertension [29]. The lncRNA group has reached special relevance in the last years in health and diseases, but few studies have reported on the role of lncRNA in renal pathology [12,17]. In recent years, another ncRNA group, piRNAs, have gained prominence as modulators of disease pathogenesis. A number of studies have reported that piRNA dysregulates expression in samples of different diseases, and various potential mechanisms have been proposed [22,37]. The present study contributes significantly to the literature due to our global analysis of ncRNA profile in hypertensive patients to assess an ncRNA signature related to albuminuria.
The common 24-ncRNA profile related to albuminuria in hypertension was composed mostly of lncRNA, followed, in order, by pseudogenes, snoRNAs, miRNAs and piRNAs. Ten DE lncRNAs were identified and in the RNA network constructed, five of these showed the highest node degree and significant role in interacting with ncRNA targets, serving as hub nodes (LINC02614, BAALC-AS1, FAM230B, LOC100505824 and LINC01484) joined to miRNA-301a. The GO and KEGG pathway analyses assessing biological functions enriched in response to albuminuria highlighted several pathways: modulators in the renal epithelial-mesenchymal transition (MAPK binding, TGF-β receptor and IL-17 signaling pathway), renal fibrosis (Wnt-protein binding, sphingolipid signaling and metabolism), endocytosis (clathrin heavy chain binding and store-operated calcium channel (SOCC) activity) and ECM constituent (SOCC activity and laminin binding). Accumulating experimental evidence indicates that these enriched pathways have always been involved in renal impairment. As an example, Chaudhari et al. emphasized SOCC as a crucial regulator of ECM synthesis and deposition by glomerular mesangial cells, its dysregulation being implicated in the pathogenesis of mesangial and interstitial fibrosis in diabetic nephropathy [38][39][40][41]. Furthermore, four snoRNAS related to albuminuria in hypertension were identified in the signature. A specific genome-wide array analysis of SNORD116 cluster showed that this small RNA changed the mRNA expression levels of over 200 genes, which was associated with clinical findings [42]. Finally, two piRNAs also showed altered expression levels in the common ncRNA signature. PiR-33056 has the guanine nucleotide exchange factor VAV3 as potential target gene, which is associated with the nucleotide-free states of Rho GTPases that activate pathways leading to actin cytoskeletal rearrangements [43]. As a result of these above-mentioned analyses, we identified an ncRNA signature and molecular network that could play an important role, by way of these pathways, in the development of hypertension-associated albuminuria.
Additionally, we predicted a common protein-coding gene network for the three biofluids, finding among the nodes with highest degree several transcripts known to be associated with albuminuria development and progression of kidney damage in hypertension, such as: SMAD3, WNT7B, BMP6 and PDGFRB (TGF-β signaling) [44]; YWHAQ, STK24, SLC12P6, CLCNKA and CLCNKB (kidney urinary concentration mechanisms) [45]; YWHAQ, PRKAG3, PRKAB2 and TRIB2 (MAPK regulation); and YWHAQ, GRIP1, RAB3IL1, HOOK1 and CYTH1 (membrane trafficking) [46,47]. Next, the GO terms and enriched KEGG pathways revealed voltage-gated chloride and anion channel activity, vasopressin-regulated water reabsorption, complement and coagulation cascades and glucocorticoid receptor binding as mechanisms related to renal impairment in hypertension. An extended body of evidence supports the involvement of the pathways obtained in this study, some of which occur at the glomerulus (podocytes, endothelial cells) and others in the renal tubules. For example, ion channel (CLCNKA and CLCNKB) and transporter (SLC12P6) alterations, which act in concert to regulate volume and ionic concentration by absorption or secretion of ions into the urine, lead to renal disease [48,49]. Srivastava et al. demonstrated that loss of podocyte glucocorticoid receptors leads to upregulation of Wnt signaling and disruption in fatty acid metabolism, important for glomerular homeostasis [50]. The most striking feature of tubulointerstitial fibrosis is excessive deposition of fibrillar material in the widened interstitial space in fibrotic kidneys, and the condition is characterized by production of fibrosis-promoting factors, such as TGF-β1 and PDGF [41], both identified in our protein-coding network. Finally, previous works have shown that systemic endothelial dysfunction is an initiating step in the development of vascular damage, and albuminuria reflects widespread vascular damage [51], being a prognostic factor for cardiovascular risk in hypertension [4]. Therefore, the plasma ncRNA signature associated with albuminuria could also indirectly reflect the cardiovascular risk progression in albuminuric hypertensive patients.
The major goal of this study was to identify a combined signature of various ncRNAs, such as lncRNAs, miRNAs and piRNAs, independent of biofluid origin, which may more closely reflect the overall biology of underlying early damage in hypertension than use of single markers. Another highlight was to assess the ncRNA targets to construct a regulatory network and identify the hub nodes that play an important role in network organization. These findings provide insights into the mechanisms involved in the architecture of the glomerular filtration barrier and renal tubular reabsorption and provide potential targets for treating hypertension-associated albuminuria. As discussed above, these data are supported by literature evidence in association with renal impairment. Hence, experimental validation of these findings, identifying precise cellular sources and mechanisms underlying common ncRNAs, are expected to set a benchmark for early renal damage research. Finally, further research using larger and independent cohorts is warranted to confirm the ncRNA signature found.
In summary, our study found an ncRNA profile associated with albuminuria, independent of biofluid origin (urine or plasma, in exosomes or circulating), that targets critical pathways of filtration barrier integrity, tubule reabsorption and vascular endothelial function, suggesting an important role for ncRNA signature in hypertension-associated early renal damage and cardiovascular risk progression. Further experimental studies should be performed to demonstrate the utility of these candidates as promising therapeutic targets in albuminuria and widen opportunities in comprehensive renal damage research.
Subjects
This was an observational case-control study which included 21 hypertensive patients with and 22 without persistently elevated urinary albumin excretion (UAE ≥30 mg/g urinary creatinine) [52]. All hypertensive patients received antihypertensive treatment at the time of the study, and the mean duration of disease progression was five years. Hypertensive patients with severe kidney disease, uncontrolled hypertension, resistant hypertension or secondary hypertension were excluded. The samples correspond to small RNA-Seq single-end raw data from three different biofluid fractions (total plasma, urinary and plasma-derived exosomes).
Biological Samples
Fresh first morning urinary samples (100 mL) were collected in sterile containers, and human blood samples were collected in EDTA tubes and centrifuged to separate the plasma fraction. All samples were processed within one hour after reception to isolate the exosomal component, as explained below.
Exosome Isolation and Characterization
Exosomes were isolated from urine (Exo-U) and plasma (Exo-P), using a protocol based on sequential ultracentrifugation. Exosome pellets were characterized by qNano Gold instrument (Izon Science Ltd., Christchurch, New Zealand), transmission electron microscopy and western blot. Detailed protocols are explained in our previous study authored by Perez-Hernandez J et al. [29].
RNA Extraction, Small RNA Library Preparation and Next-Generation Sequencing
Total RNA was extracted from exosomes using the Total Exosome RNA and Protein Isolation kit (Invitrogen, Life Technologies, Carlsbad, CA, USA); RNA was obtained using the miRNeasy mini kit (Qiagen, Hilden, Germany) from plasma samples. Quantification of total RNA, quality and size distribution were analyzed by capillary electrophoresis (Agilent 2100 Bioanalyzer, Agilent Technologies, Santa Clara, CA, USA) with the RNA 6000 Pico chip.
Single-patient libraries were prepared from 2 µL of total RNA from each condition (total plasma, urinary exosomes or plasma exosomes) using CleanTag Small RNA library preparation kit (TriLink Biotechnologies, San Diego, CA, USA), following a small RNA library preparation protocol optimized to very low input samples, as previously described [53]. Libraries were sequenced on the HiSeq 2000 platform (Illumina, San Diego, CA, USA) at 8 pM final concentration with a 50-cycle single-read mode (CNAG, Barcelona, Spain). The raw RNA-Seq dataset is available at the BioProject repository, accession: PRJNA590749.
Small RNA Sequencing Data Analysis
Data quality control of the raw data was conducted with FastQC v0.11.8 [54]. Subsequently, the data were filtered using FASTX-Toolkit v0.013 (http://hannonlab.cshl.edu/fastx_toolkit, accessed on 13 October 2021), removing adapters, low-quality reads and nucleotides. Alignment was performed with STAR v2.7.3a [55], following the recommendations for use with small RNA-Seq data. GENCODE human genome release 38 (GRCh38.p13) was used as the reference genome. SAM files were converted to BAM and sorted with SAMtools v1.10 [56]. Sorted BAM files were imported into R, and the count matrix was obtained using the GenomicFeatures [57], Rsamtools [58] and GenomicAlignments Bioconductor packages. Two gtf annotation files were used: the GENCODE reference annotation for the Human release 38 (comprehensive gene annotation) [59] and the piRNA database Homo sapiens hg38 annotation file v1.7.6 [60]. We therefore followed two pipeline analyses: one for ncRNA and the other for piRNA ( Figure 6 and Supplementary Methods File).
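A minimal sketch of the counting step described above is shown below; the file paths, the assumption that the GTF and sorted BAM files are already in place, and the counting options are illustrative, and the exact settings used in the study may differ.

```r
# Sketch of building a gene-level count matrix from sorted BAMs (single-end reads).
# File names/paths are placeholders; options may differ from those used in the study.
suppressMessages({
  library(GenomicFeatures)
  library(Rsamtools)
  library(GenomicAlignments)
  library(SummarizedExperiment)
})
txdb  <- makeTxDbFromGFF("gencode.v38.annotation.gtf", format = "gtf")
genes <- exonsBy(txdb, by = "gene")
bams  <- BamFileList(list.files("bam_sorted", pattern = "\\.bam$", full.names = TRUE))
se    <- summarizeOverlaps(features = genes, reads = bams,
                           mode = "Union", singleEnd = TRUE, ignore.strand = TRUE)
counts <- assay(se)  # genes x samples count matrix used downstream
```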
Preprocessing, Annotation and Normalization
The count matrix was included in a DGEList object using the edgeR Bioconductor package [61]. The metadata for samples was included, and the genes not expressed in either experimental condition were discarded. Annotation was performed by org.Hs.eg.db Bioconductor package [62], which provides a genome-wide annotation for Human (see Figure 6 and Supplementary Material Methods). A matrix with filtered, normalized and annotated counts per million (CPM) mapped reads was generated for piRNA pipeline and another one for ncRNA pipeline to estimate the abundance of RNA types in each group of samples, summing up the counts of these two matrices (Supplementary Methods file).
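The sketch below illustrates this preprocessing step with edgeR, assuming the count matrix 'counts' from the previous step and a sample metadata data frame 'meta' with a group column; the filtering rule and the gene-ID type are assumptions, not the study's exact code.

```r
# Sketch of DGEList construction, filtering, TMM normalization, annotation and CPM.
# 'counts' and 'meta' are assumed inputs; rownames are assumed to be Ensembl gene IDs.
suppressMessages({
  library(edgeR)
  library(org.Hs.eg.db)
  library(AnnotationDbi)
})
dge  <- DGEList(counts = counts, samples = meta, group = meta$group)
keep <- filterByExpr(dge, group = dge$samples$group)  # drop genes not expressed in either condition
dge  <- dge[keep, , keep.lib.sizes = FALSE]
dge  <- calcNormFactors(dge)                          # TMM normalization factors
dge$genes <- data.frame(
  symbol = mapIds(org.Hs.eg.db,
                  keys    = sub("\\..*", "", rownames(dge)),
                  keytype = "ENSEMBL", column = "SYMBOL")
)
cpm_mat <- cpm(dge)  # normalized counts per million used to summarize RNA-type abundance
```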
Statistical Analysis
The contrasts between hypertensive patients with albuminuria (UAE) and without albuminuria (non-UAE) were determined by fitting a negative binomial generalized log-linear model to the read counts for each gene, adjusted for sex. The p-values were adjusted using the Benjamini-Hochberg method, and p < 0.05 was considered statistically significant. The edgeR Bioconductor package was used for all statistical analyses [61]. The graphs were made using the R packages ggplot2 [63] or VennDiagram [64], as appropriate (Supplementary Methods File).
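A minimal sketch of the sex-adjusted negative binomial contrast described above is given below, continuing from the 'dge' object in the preceding sketch; the column names assumed in the design matrix are illustrative.

```r
# Sketch of the UAE vs non-UAE contrast with edgeR's negative binomial GLM, adjusted for sex.
# Assumes dge$samples has factor columns 'sex' and 'group' (non-UAE as the reference level).
design <- model.matrix(~ sex + group, data = dge$samples)
dge    <- estimateDisp(dge, design)
fit    <- glmFit(dge, design)
lrt    <- glmLRT(fit, coef = ncol(design))  # test the group (UAE) coefficient
res    <- topTags(lrt, n = Inf)$table       # FDR column = Benjamini-Hochberg adjusted p-values
de     <- res[res$PValue < 0.05, ]          # threshold used when counting DE transcripts
```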
Non-Coding RNA Target Predictions
The targets for lncRNAs were predicted using LncRRIsearch, a web server for comprehensive prediction of human and mouse lncRNA-lncRNA and lncRNA-mRNA interaction (http://rtools.cbrc.jp/LncRRIsearch, accessed on 13 October 2021) [65]. The top ten targets for each isoform with an energy threshold ≤−100 kcal/mol were selected for each lncRNA gene.
In the case of miRNA targets, three web-based tools were used: TargetScan (http://www.targetscan.org/vert_72/, accessed on 13 October 2021) [66], miRDB (http://mirdb.org/, accessed on 13 October 2021) [67] and miRTarBase (https://mirtarbase.cuhk.edu.cn/~miRTarBase/miRTarBase_2022/php/search.php, accessed on 13 October 2021) [68]. For TargetScan, the selection criterion was a cumulative weighted context++ score of <−0.5. For miRTarBase, targets were selected if they were supported by more than one paper or by more than one validation method. For miRDB, all targets with a Target Score of 90 or higher were selected. Targets predicted in common by at least two tools were then selected, except for hsa-miR-208a-5p, for which only miRDB-predicted targets (Target Score of 99 or higher) were selected.
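The selection rules above can be expressed roughly as in the sketch below; the three input data frames and their column names are hypothetical stand-ins for exports from TargetScan, miRDB and miRTarBase, not their actual file formats.

```r
# Sketch of the miRNA target selection rules; input data frames and columns are hypothetical.
ts  <- unique(subset(targetscan_df, context_score < -0.5)$gene)         # cumulative weighted context++ score
mdb <- unique(subset(mirdb_df, target_score >= 90)$gene)                # miRDB Target Score
mtb <- unique(subset(mirtarbase_df, n_papers > 1 | n_methods > 1)$gene) # miRTarBase support
votes    <- table(c(ts, mdb, mtb))
selected <- names(votes[votes >= 2])  # targets predicted in common by at least two tools
```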
Molecular Pathways Analyses
Gene set Over-Representation Analysis (ORA) was performed in the WEB-based GEne SeT AnaLysis Toolkit (http://www.webgestalt.org/, accessed on 15 October 2021), using the GO and KEGG databases [69]. The protein-protein interaction network was generated using the STRING database v11.0 [70]. All biological interactions with a confidence score of 0.2 or greater were included. The STRING database provides a confidence score (from 0 to 1), which estimates the likelihood that an annotated interaction between a pair of proteins is biologically meaningful, specific and reproducible. The networks were analyzed and displayed using the yFiles organic layout with Cytoscape v3.8.1 [71]. In these networks, nodes and edges represent biological data directly: each node represents a biological molecule, and each edge represents an interaction between nodes. The ncRNA-target network was generated using STRING to obtain the interactions between targets, following the same methodology as for the protein network. LncRNA-target and miRNA-target interactions were included using Cytoscape, based on predictions by the web-based tools described above (Section 4.9), using a manually generated sif file.
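To complement the network description above, the sketch below shows one way hub metrics like those reported in Table 2 (degree, betweenness, closeness) can be computed with igraph, assuming an edge list exported from the Cytoscape/STRING network; it is not the exact procedure used in the study.

```r
# Sketch of hub-node ranking on the merged network; 'edges' is an assumed two-column
# data frame of interactions exported from the Cytoscape/STRING network.
library(igraph)
g <- graph_from_data_frame(edges, directed = FALSE)
hub_stats <- data.frame(
  node        = V(g)$name,
  degree      = degree(g),
  betweenness = betweenness(g),
  closeness   = closeness(g)
)
hub_stats <- hub_stats[order(-hub_stats$degree), ]
head(hub_stats, 10)  # candidate hub nodes by degree (cf. LINC02614, miR-301a-3p, ...)
```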
Acacetin exerts antioxidant potential against atherosclerosis through Nrf2 pathway in apoE−/− Mice
Oxidative stress has a considerable influence on endothelial cell dysfunction and atherosclerosis. Acacetin, an anti‐inflammatory and antiarrhythmic, is frequently used in the treatment of myocarditis, albeit its role in managing atherosclerosis is currently unclear. Thus, we evaluated the regulatory effects of acacetin in maintaining endothelial cell function and further investigated whether the flavonoid could attenuate atherosclerosis in apolipoprotein E deficiency (apoE−/−) mice. Different concentrations of acacetin were tested on EA.hy926 cells, either induced or non‐induced by human oxidized low‐density lipoprotein (oxLDL), to clarify its influence on cell viability, cellular reactive oxidative stress (ROS) level, apoptotic ratios and other regulatory effects. In vivo, apoE−/− mice were fed either a Western diet or a chow diet. Acacetin pro‐drug (15 mg/kg) was injected subcutaneously two times a day for 12 weeks. The effects of acacetin on the atherosclerotic process, plasma inflammatory factors and lipid metabolism were also investigated. Acacetin significantly increased EA.hy926 cell viability by reducing the ratios of apoptotic and necrotic cells at 3 μmol/L. Moreover, 3 μmol/L acacetin clearly decreased ROS levels and enhanced reductase protein expression through MsrA and Nrf2 pathway through phosphorylation of Nrf2 and degradation of Keap1. In vivo, acacetin treatment remarkably attenuated atherosclerosis by increasing reductase levels in circulation and aortic roots, decreasing plasma inflammatory factor levels as well as accelerating lipid metabolism in Western diet‐fed apoE−/− mice. Our findings demonstrate the anti‐oxidative and anti‐atherosclerotic effects of acacetin, in turn suggesting its potential therapeutic value in atherosclerotic‐related cardiovascular diseases (CVD).
| INTRODUC TI ON
Atherosclerosis-related cardiovascular disease (CVD) is a leading cause of mortality in developed and developing countries.
Atherosclerosis is a chronic inflammatory condition characterized by dyslipidaemia and oxidative stress. 1 Low-density lipoprotein (LDL) is considered to be a key molecule in every stage of atherosclerosis. Specifically, it is the oxidized LDL (oxLDL) particles, as opposed to normal LDL, that have a pathogenic influence, activating the vascular intima and subsequently initiating atherosclerosis. In addition to their lipid-lowering effects in clinical therapies, statins, according to many studies, possess anti-oxidative effects.

Acacetin, a natural flavone widely distributed in plant pigments, has been shown by many studies to have multiple beneficial biological effects in cancers, 9,10 cardiac remodelling, 11 microbial infections, 12 inflammation 13 and oxidative stress. 14 In human umbilical vein endothelial cells (HUVECs), acacetin inhibited E-selectin expression through the p38/MAPK pathway and activation of the nuclear factor NF-κB. 15 Acacetin has also demonstrated an ability to down-regulate inflammatory iNOS and COX-2 gene expression in RAW264.7 cells by inhibiting the activation of NF-κB through interfering with the PI3K/Akt/IKK and MAPK pathways. 16 Moreover, our previous study found that AMPK-mediated Nrf2 activation through acacetin is involved in cardiomyocyte protection against hypoxia/reoxygenation injury by its anti-oxidative, anti-inflammatory and anti-apoptotic effects. 17 Collectively, this evidence clearly illustrates the involvement of acacetin in oxidative stress and inflammation-associated conditions. Methionine (Met), a thiol amino acid, is not only an initiation amino acid but also a sensitive target for oxidants, giving it vital roles in some critical signalling pathways. Met is easily oxidized to MetO, which is reduced back to Met exclusively by the intracellular methionine sulphoxide reductase (Msr) system. Specifically, MsrA, an enzyme involved in this system, reduces methionine-S-sulfoxide (MetSO) to Met. 18 We previously demonstrated that exogenous reconstructed MsrA protein plays a protective role against oxidative stress as well as inflammation in RAW264.7 cells, and attenuates the atherosclerotic process in western diet-fed apoE deficiency (apoE −/− ) mice. 19 However, it is currently still unclear precisely how MsrA exerts protection against oxidative stress.
The goal of this study is to investigate whether acacetin can protect against oxidative stress in humans and attenuate atherosclerosis in apoE −/− mice. Moreover, we will attempt to reveal the mechanisms underlying the role of acacetin in the MsrA-and Nrf2-related pathways, aiming to provide evidence of its potential therapeutic role in atherosclerosis-related CVD.
| Cell culture
Human endothelial cell line EA.hy926 cells were obtained from ATCC.
| Western blot analysis
Harvested cells or frozen tissues were lysed by RIPA with 1% protease and phosphatase inhibitors (Roche) for Western blot assay.
The appropriate amount of protein was loaded and separated by 10% or 12% SDS-PAGE and transferred onto a PVDF membrane.
Protein expression was detected by primary antibodies followed by HRP-conjugated secondary antibodies. Signals were detected using an enhanced chemiluminescence kit (ECL, GE Healthcare) and captured by a chemiluminescence detection system (FlouChem E). The band densitometry was analysed by Image J software (NIH).
| Determination of basic biochemical parameters in Western diet-fed mice
After 12 weeks of experimental procedures, blood samples were collected from mice after overnight fasting by retro-orbital venous plexus puncture. Plasma was immediately separated by centrifugation at 1000× g for 10 minutes at 4°C. Total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), lowdensity lipoprotein cholesterol (LDL-C) and apolipoprotein AI (apoAI) levels were measured by enzymatic colorimetric methods using Mind Bioengineering kits. The remaining plasma was used for determination of IL-6, IL-10, TNFα and MCP-1 levels by ELISA kits according to the manufacturer's instructions. Frozen mouse livers were lysed by RIPA with 1% proteinase inhibitors for Western blot.
| Histochemical and immunohistochemistry of atherosclerotic lesions
Mice were subcutaneously injected with acacetin or normal saline two times every day for 12 weeks. Mice were fasted after the last injection and then sacrificed. The aortic roots were embedded in OCT (Sakura, USA) and quickly frozen horizontally to −20℃. Eight-micrometer serial sections of the aortic root were collected on 10 slides.
For atherosclerosis analysis, the entirety of the aorta was fixed in 4% paraformaldehyde, opened longitudinally, and then analysed en face. The aortic root slides were determined by Oil Red O (ORO) staining and quantification by Image J software as described previously. 19 For plaque component analysis, immunohistochemistry was carried out using Abcam's IHC staining protocol for frozen sections.
Briefly, the aortic root slides were fixed in precooled acetone and then stained with anti-iNOS, anti-CD206, anti-pNrf2 S40 , anti-Nrf2 and anti-MsrA primary antibodies followed by donkey anti-rabbit (Alexa Fluor® 488) or anti-mouse (Alexa Fluor® 647) secondary antibodies, respectively. Images were captured using the Leica SP8 fluorescent microscope.
| Statistical analysis
Data are presented as mean ± SEM. Statistical analyses were performed using one-way ANOVA between groups. Differences were considered significant at P < .05.
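As a minimal sketch of the group comparison described above (mean ± SEM, one-way ANOVA, P < .05), the snippet below uses simulated measurements in three groups; it is illustrative only and not the study's analysis script.

```r
# Illustrative one-way ANOVA across blank, control and acacetin-treated groups.
# 'value' holds simulated measurements; real data would replace them.
set.seed(1)
df <- data.frame(
  group = factor(rep(c("blank", "control", "acacetin"), each = 6)),
  value = c(rnorm(6, 10, 1), rnorm(6, 12, 1), rnorm(6, 9, 1))
)
# Mean and SEM per group
aggregate(value ~ group, df, function(x) c(mean = mean(x), sem = sd(x) / sqrt(length(x))))
# One-way ANOVA between groups; differences considered significant at P < .05
fit <- aov(value ~ group, data = df)
summary(fit)
TukeyHSD(fit)  # optional pairwise follow-up
```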
| Acacetin inhibited high oxLDL-induced cell death and oxidation
The MTT method was used to determine the optimal concentration of high oxLDL for the following studies. The cytotoxicity of high oxLDL was tested on EA.hy926 cells, revealing that high oxLDL, unlike normal oxLDL, had stronger cytotoxicity that induced much more cell death after 24 hours of incubation ( Figure S1A). We chose 5 μg/mL high oxLDL, which caused nearly 28% cell death, as the final study concentration. Pretreatment with acacetin for 4 hours significantly protected against the oxLDL-mediated reduction of cell viability ( Figure S1B). To examine the effects of acacetin on cell death in more detail, cells were treated with the same procedures as in the MTT study.

(FIGURE 2 legend: Regulatory effects of acacetin on anti-oxidative stress-related reductases as well as the abolishment of its protective effects against oxLDL in EA.hy926 cells with the silencing of MsrA or Nrf2. (A-G) EA.hy926 cell MsrA, Nrf2/Keap1-related protein, and SIRT1 expression levels without (control) or with oxLDL stimulation in the absence (oxLDL) or presence of 0.3, 1, or 3 μmol/L acacetin were measured by Western blot. (H-J) Apoptosis of EA.hy926 cells was measured by flow cytometry. Cells were transfected with scrambled siRNA, MsrA siRNA, Nrf2 siRNA or SIRT1 siRNA for 48 h, and then subjected to oxLDL treatment in the absence (control) or presence of 3 μmol/L acacetin. (K-L) Intracellular ROS levels were measured by flow cytometry. Cells were transfected with scrambled siRNA, MsrA siRNA or Nrf2 siRNA for 48 h, and then subjected to oxLDL treatment for 15 min in the absence (control) or presence of 3 μmol/L acacetin for 24 h. n = 5 for each group, P # < .05, P ## < .01 vs control group; P * < .05, P ** < .01, P *** < .001 vs oxLDL-stimulated group.)
In addition to this, we also measured the intracellular ROS levels. Cells were pretreated with acacetin for 24 hours followed by 20 μmol/L DCFH-DA and 50 μg/mL high oxLDL for 15 minutes.
As shown in Figure 1G,H, high oxLDL treatment resulted in a high intracellular ROS level (fluorescence intensity nearly 2.36-fold that of the control group). In contrast, ROS production was markedly reduced in cells pretreated with 3 μmol/L acacetin (decreased to 1.4-fold of control). Moreover, oxidase protein and MDA levels were also reduced by different concentrations of acacetin ( Figure S1C,D).
These results in turn indicate that acacetin could protect EA.hy926 cells from high oxLDL-induced cell death by reducing intracellular ROS levels.
| Acacetin enhanced cellular antioxidative defence through increasing oxidoreductases expression
To investigate how acacetin exerts its anti-oxidative stress effects, EA.hy926 cells were treated with acacetin at different concentrations for 24 hours. Acacetin at 3 μmol/L significantly increased the reductase MsrA, Nrf2, the Nrf2 downstream target HO-1, and CAT protein expression at the cellular basal level ( Figure S1E-S1I). These results suggest that acacetin may enhance basal anti-oxidative defences.
Furthermore, in order to ensure the anti-oxidative effects of acacetin under oxidative stress conditions, cells were pretreated with acacetin at different concentrations for 4 hours followed by 5 μg/ mL high oxLDL stimulation for 20 hours. Surprisingly, we found that MsrA protein expression level was also significantly increased at 3 μmol/L acacetin (Figure 2A,B). Moreover, acacetin also increased Nrf2, downstream HO-1, Trx ( Figure 2C-E) and SIRT1 protein expression levels ( Figure 2F), but did not change pAMPK Thr172 / tAMPK levels ( Figure S1N). Interestingly, we also showed that 3 μmol/L acacetin remarkably decreased oxLDL-induced Keap1 expression level ( Figure 2G). The Nrf2/Keap1 system is a defence mechanism used to preserve cellular homeostasis, and Nrf2 is regarded as a master regulator of the oxidative stress response. 20 As such, these results preliminarily confirmed that acacetin can exert anti-oxidative effects through the Nrf2 pathway but not the AMPK pathway. Acacetin also slightly up-regulated CAT, SOD1 and SOD2 levels (Figure S1J-S1M).
Collectively, these data suggest that acacetin can potentiate cellular anti-oxidative defence through the up-regulation of MsrA, Nrf2, HO-1, Trx and SIRT1 to reduce intracellular ROS levels, which in turn indicates that acacetin may be involved in Nrf2/Keap1 or MsrArelated pathways.
| Acacetin enhanced cellular anti-oxidative effects through the MsrA-Nrf2/Keap1 pathway
In order to confirm whether acacetin exerts its anti-oxidative stress effects through Nrf2/Keap1 or some other pathway, we silenced cellular nrf2, msra and sirt1 gene expression using siRNAs. We observed that, beyond the influence of acacetin on easing apoptosis and necrosis ( Figure 3C), the SIRT1 level was significantly increased after acacetin treatment ( Figure 3D), which had the same tendency as the scrambled group. Interestingly, Keap1 expression in the scrambled group was remarkably contrary to that when Nrf2 expression was reduced ( Figure 3E). Moreover, when MsrA was silenced, there was a lack of MsrA increase after either oxLDL stimulation or pretreatment with acacetin followed by oxLDL ( Figure 3C).
| Acacetin attenuated atherosclerosis in Western diet-fed apoE −/− mice through the activation of Nrf2 and MsrA in the lesions
ApoE −/− mice were subcutaneously injected with normal saline or acacetin and fed a Chow (blank group) or Western diet for 12 weeks.
The body weights (at various time-points) of mice showed no difference between the control and acacetin-treated groups, but spleen/body weight ratios (at the endpoint) were significantly lower in acacetin-treated mice (Table 1). After 12 weeks, plasma TC, TG, HDL-C and LDL-C levels were measured. TC and LDL-C levels were not different between control and acacetin-treated groups, albeit TG levels were significantly higher in the latter. Interestingly, HDL-C and apoAI levels were markedly increased in acacetin-treated mice ( Table 1).

(Table 1 legend: Mice were fed a Western-type diet and injected subcutaneously with normal saline or acacetin for 12 wk. Abbreviations: TC, total cholesterol; TG, triglycerides. # Statistically significant vs blank group.) (Figure legend fragment: Micrographs were captured at ×200 magnification; n = 5-9 of each group; P * < .05, P ** < .01 vs control group.)
The impact of acacetin injection on the development of atherosclerosis in apoE−/− mice was assessed. Representative atherosclerotic lesions in en face images and cross-sections of aortic roots stained with ORO are shown in Figure 4A,B. En face analysis of pinned-out aortas revealed that the atherosclerotic lesion percentage area in acacetin-injected mice (8.19% ± 0.92%) was significantly reduced compared with that in the control group (11.04% ± 1.04%, P < .05, Figure 4D), especially in the arch region (20.25% ± 2.28% vs 25.64% ± 1.42%, P < .05, Figure 4C). In addition, the lipid staining area in the aortic root lesion of acacetin-treated mice (0.13 ± 0.01 mm 2 ) was 26.7% (P < .05) smaller than that in control mice (0.18 ± 0.02 mm 2 , Figure 4E). We found a higher expression level of pNrf2 S40 in both the arterial wall and the plaque, but of Nrf2 mainly in the plaque, of acacetin-injected mice compared with controls, indicating that acacetin could successfully activate Nrf2 in aortic cells (Figure 4F).
We also found that MsrA expression recovered mainly in the arterial wall of acacetin-treated mice ( Figure 4G).
| Acacetin ameliorated oxidative stress and inflammation in apoE−/− mice
Serum amyloid A (SAA) and PON1 are HDL apolipoproteins: whilst the former has pro-atherogenic activities, 22 the latter is considered to be atheroprotective and decreases after inflammatory stimuli. 23,24 We found that the plasma levels of SAA and PON1 were significantly decreased and increased, respectively, in acacetin-treated mice (Figure 5A-C). Furthermore, reverse cholesterol transport (RCT) is a process that facilitates cholesterol transport from peripheral organs back to the liver to regulate excessive systemic cholesterol levels. Cholesterol efflux out of cells is mediated via ABCA1 and ABCG1, whilst HDL-C can be taken up by SR-BI for degradation of HDL. 25 We found that ABCA1, SR-BI and ABCG1 protein levels were up-regulated in acacetin-injected mice (Figure 5D-G), which indicates that acacetin may accelerate RCT in the liver. Meanwhile, the liver CAT protein expression level was also significantly increased in acacetin-injected mice relative to controls (Fig. S2A,E). However, Nrf2, the other oxidoreductases, MsrA, and PON1 expression levels showed no observable differences between the control and acacetin-treated groups (Fig. S2B,D). These data accordingly suggest that acacetin plays an anti-oxidative stress role in the liver. (Figure 5 caption: Acacetin ameliorated oxidative and inflammatory stress in the circulation and liver of apoE−/− mice. (A-C) Plasma SAA and PON1 levels were detected by Western blot. (D-G) ABCA1, ABCG1, and SR-BI levels in the liver were determined by Western blot. n = 5-9 per group, #P < .05, ##P < .01 vs blank group; *P < .05, ***P < .001 vs control group.)
The inflammatory process in the atherosclerotic-burdened artery may lead to increased blood levels of pro-inflammatory cytokines, including but not limited to IL-6 and TNFα. 26 In this study, plasma samples were diluted to the appropriate ratio and measured using the ELISA method. As shown in Table 1
| DISCUSSION
Atherosclerosis is no longer thought to be a chronic inflammatory disease characterized solely by dyslipidaemia, but rather one also driven by oxidative stress. The Nrf2/Keap1 complex is a potent transcriptional activator that plays a central role in the induction of many cytoprotective genes in response to electrophilic and oxidative stress. 37 Induction of HO-1 and Trx in a Nrf2-dependent manner has also been experimentally demonstrated. 38 In the present study, acacetin attenuated atherosclerosis without altering plasma lipid levels. In addition, injection of acacetin also raised mouse plasma levels of PON1 and IL-10 and reduced levels of the pro-inflammatory factors SAA, IL-6 and TNFα. PON1 is an important anti-oxidation enzyme that is synthesized in the liver and secreted into the plasma, wherein it associates with HDL particles. SAA is likewise produced in the liver, and its expression is correspondingly increased in response to IL-6 and TNFα. 24 However, whilst the expression levels of CAT and SOD2 similarly increased in the liver under pro-inflammatory factor stimulation, PON1 levels remained unchanged, possibly due to its secretion into the plasma. All these results suggested that acacetin confers an anti-atherogenic benefit through accelerating lipid metabolism and through anti-oxidative and anti-inflammatory mechanisms in the circulation and liver.
In our results, both the spleen/body weight ratios and the fate of macrophages were changed by acacetin. We speculate that, as the spleen is the most important immune organ, an increased spleen weight may reflect strong immunity, with increasing monocytes and enhanced phagocytosis (an M1 macrophage-like effect). Shih and colleagues found that bufalin increased body weight but reduced liver and spleen weights, and reduced CD3, CD16 and Mac-3 cell markers.
They concluded that bufalin may modulate immune responses not only through increasing the monocyte (CD11b) population and T- and B-cell proliferation, but also by increasing macrophage phagocytosis in leukaemic mice in vivo. 42 Busnelli and colleagues found that fenretinide, a synthetic retinoid derivative, induced abnormal spleen enlargement and markedly increased atherosclerotic lesions at the aortic arch, thoracic and abdominal aorta of fenretinide-treated mice, similar to the results in our control (Western diet) group. 43 In our present study, we found that mice treated with acacetin had a lower spleen/body weight ratio, which may be because acacetin regulates spleen immune function, decreasing monocytes or promoting monocyte-macrophage differentiation towards M2.
Interestingly, acacetin-treated mice had a higher body weight than the control group, a finding in stark contrast to that of Liou et al, who showed that acacetin significantly reduced body weight in high fat diet-fed obese mice. 44 Burke et al recently also reported that citrus flavonoid supplementation to a high fat, cholesterol-containing diet protected against obesity. These effects may be related to reversing existing obesity and changes in adipocyte size and number through enhanced energy expenditure and increased hepatic fatty acid oxidation. 45 Potential reasons for our differing results include variations in mouse type as well as in the administration method.
| CONCLUSION
In conclusion, our present study demonstrated that the natural flavone acacetin not only promotes a significant reduction in cellular apoptosis through anti-oxidative stress effects via the MsrA-Nrf2/Keap1 pathway in vitro, but also halts atherogenesis by accelerating lipid metabolism and enhancing the anti-oxidative and anti-inflammatory capacity of Western diet-fed apoE−/− mice. It may therefore serve as a potential drug candidate for the prevention and treatment of atherosclerosis-related CVD.
ACKNOWLEDGEMENTS
We are grateful to professor Gui-Rong Li, Dr Cui-Lian Dai and associate researcher Dianyu Dong from Xiamen Cardiovascular Hospital of Xiamen University for giving advice and editing the English text of a draft of this manuscript.
CONFLICT OF INTERESTS
The authors have no conflicts of interest to declare.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
The Xiamen University Ethics Committee approved the protocols according to the Helsinki Declaration.
DATA AVAILABILITY STATEMENT
The data and materials in this study are available on request from the authors.
Strategies to facilitate the discovery of novel CNS PET ligands
Positron Emission Tomography (PET), as a non-invasive translatable imaging technology, can be incorporated into various stages of the CNS drug discovery process to provide valuable information for key preclinical and clinical decision-making. Novel CNS PET ligand discovery efforts in the industry setting, however, are facing unique challenges associated with lead design and prioritization, and budget constraints. In this review, three strategies aiming toward improving the central nervous system (CNS) PET ligand discovery process are described: first, early determination of receptor density (Bmax) and bio-distribution to inform PET viability and resource allocation; second, rational design and design prioritization guided by CNS PET design parameters; finally, a cost-effective in vivo specific binding assessment using a liquid chromatography-mass spectrometry (LC-MS/MS) “cold tracer” method. Implementation of these strategies allowed a more focused and rational CNS PET ligand discovery effort to identify high quality PET ligands for neuroimaging.
pre-clinical leads and select the best molecule to advance to clinical trials ( Fig. 1b) (Matthews et al. 2012). Clinically, target occupancy can be used to unequivocally confirm "drug-on-target" (proof of mechanism), (Morgan et al. 2012) guide clinical dose selections, and enable data-driven clinical "Go/No Go" decisions depending on whether the onset of side effects occurs prior to or after desired target occupancy range for efficacy (Wong et al. 2009). Finally, PET imaging, with ligands specific to targets that are associated with a given disease state, can be used as disease state biomarkers for early disease detection, disease progression monitoring and patient phenotyping. Prominent examples are FDA-approved Aβ plaque PET ligands (Ono & Saji 2015) [e.g. florbetapir, (Eli Lilly Pharmaceuticals Press Release 2012) florbetaben (Piramal Imaging 2014) and flutemetamol (GE Healthcare Press Release 2013)] and emerging Tau PET ligands (Choe & Lee 2015) [e.g. [ 18 F]T-807 (Xia et al. 2013) and [ 18 F]THK-5351 (Harada et al. 2016)], which detect Aβ amyloid plaque and neurofibrillary tangles accumulation in brain, two main hallmarks of AD pathology (Fig. 1c).
To enable PET imaging, it is necessary to develop high quality target-specific CNS PET ligands that meet a distinctive set of criteria (Table 1). Structurally, a CNS PET ligand needs to contain a structural moiety amenable to [ 11 C] or [ 18 F] incorporation. Considering the short half-lives of PET radionuclides (20 min for [ 11 C] and 110 min for [ 18 F]), late-stage radiolabel incorporation is necessary to allow rapid synthesis and purification. With regard to pharmacology, a CNS PET ligand should be potent (B max /K d > 10, see section on 'understand the target') and selective towards the target of interest (typically > 30×), and needs to occupy the same binding pocket as the drug molecule to allow competitive blocking and target occupancy measurement. In terms of pharmacokinetic (PK) properties, a CNS PET ligand must be brain penetrant, but should not form brain-permeable radioactive metabolites, which can confound radioactivity measurements. Furthermore, it must demonstrate low non-specific binding (NSB) to brain white matter to achieve a sufficient signal-to-noise ratio for quantification. Finally, similar to drug molecules, a CNS PET ligand must demonstrate safety for clinical dosing. Considering the low clinical doses of PET ligands (typically < 10 μg), a simplified microdosing good laboratory practice (GLP) toxicity study is required for exploratory investigational new drug (eIND) submission, involving acute intravenous (IV) dosing in a single species, typically rats, plus 14-day observation (Wagner & Langer 2011). Compared to CNS drug molecules, a CNS PET ligand may have more stringent criteria regarding potency (single digit nM or sub-nM) and PK, such as minimal brain-permeable radioactive metabolite formation and low NSB.
Considering the importance of PET imaging in CNS drug discovery, it is crucial to establish an efficient process for CNS PET ligand discovery. Specifically in the industry setting, there are often large collections of existing chemical matter (up to thousands of compounds), from drug discovery efforts and literature, with a wealth of potency, selectivity and in vitro absorption, distribution, metabolism and excretion (ADME) data. While this provides an enviable starting point for PET ligand discovery effort, it also presents a distinctive challenge on how to identify the right leads for PET imaging from such large compound pools. In the past, prioritization was carried out in a largely empirical manner, with a primary focus on potency. A number of radiotracer leads, sometimes more than 10, were radiolabeled and screened in PET imaging to identify one viable PET ligand. This "multiple shots on goal" approach (Fig. 2, path a) was resource intensive and not sustainable, considering the significant cost associated with non-human primate (NHP) PET imaging studies. Therefore, it is imperative to identify new strategies to significantly improve the PET ligand discovery process in a more rational way with higher success rate, lower cost, and less resources. In an ideal state, we envision advancing no more than three ligand candidates, preferably one, into PET imaging studies to yield a successful PET ligand for clinical imaging (Fig. 2).
Herein, three strategies aiming to improve the overall success rate and efficiency of the novel CNS PET ligand discovery process (Fig. 2, Path b) are described. First, "understand the target": we aimed to gain an early understanding of the expression level (B max ) and bio-distribution of the specific target to determine PET viability and study design. Second, "design the right molecule": we aimed to define a set of tractable design and selection parameters to guide rational PET ligand design. Third, "implement cost-effective in vivo model": we aimed to explore cost-effective methods to provide an early read on the in vivo specific binding of ligand candidates prior to triggering more resource-extensive non-human primate (NHP) PET imaging studies. It was also critical that all three strategies could be carried out efficiently so that higher success rates would not come at the cost of longer timelines.
Bmax and bio-distribution: understand the target
To assess PET viability, it is important to have a clear read on two specific parameters regarding the biological target: the maximum concentration of target binding sites (B max ) and brain bio-distribution. B max represents the target expression level and is key to inform the level of affinity (K d ) required for a successful radiotracer (B max /K d ≥ 10) (Patel & Gibson 2008). The lower the expression level, the higher the affinity required for a radiotracer to show in vivo specific binding. If a given target has a B max value less than 1 nM, it will be challenging to identify PET ligand leads with sufficient potency and alignment of other properties (e.g. PK, NSB). Early determination of B max would thus allow teams to assess the likelihood of success of a PET strategy for various targets and help allocate resources to more viable targets (e.g. B max > 1 nM). Brain bio-distribution, while not impacting PET doability, is required to inform subsequent PET imaging studies such as specific binding assessment. For example, if a target is only expressed or enriched in certain brain regions [e.g. striatum for phosphodiesterase 10 (PDE10) (Tu et al. 2010)], then a target-free brain region could be used as a "reference region" (e.g. cerebellum for PDE10) to determine specific binding. On the other hand, if a target is expressed throughout the brain [e.g. PDE4 (Pérez-Torres et al. 2000), metabotropic glutamate receptor type 5 (mGluR5) (Romano et al. 1995)], it would be necessary to carry out both baseline and blocking or equivalent (e.g. knock-out animal) studies to determine specific binding. For any novel CNS target that lacks pre-existing knowledge on B max and bio-distribution, these parameters should therefore be determined early to establish PET viability and guide study design.

PET ligand design parameters: design the right molecule

Effective PET ligand discovery calls for facile prioritization of the best leads from a large pool of existing chemical matter and focused structure-activity relationship (SAR) efforts to rapidly design and identify suitable candidates for PET imaging. In an effort to understand the preferred property space for CNS PET ligands, our group compiled a PET ligand database consisting of 62 clinically validated CNS PET ligands and 15 unsuccessful radioligands as negative controls. A systematic analysis was then carried out, in which key differences between the two categories in terms of physicochemical properties and in vitro ADME properties were identified. As shown in Fig. 3, for in vitro ADME properties, fraction unbound in brain (Fu_b) (Di et al. 2011) emerged as a pronounced differentiator between the two categories. A Fu_b value of greater than 0.05 should be targeted to minimize the risk of NSB, as it captured the majority of the successful ligands (67 %), while only 13 % of the failed ligands were in this range. Passive permeability as measured by the Ralph Russ canine kidney (RRCK) apparent permeability coefficient (P app ) apical-to-basolateral (AB), and efflux risk as measured by multidrug resistance protein 1 (MDR1) basolateral-to-apical/apical-to-basolateral (BA/AB) ratios, were identified as two important parameters to predict brain permeability. Specifically, RRCK P app AB values > 5 × 10 −6 cm/s and MDR1 BA/AB ratios ≤ 2.5 should be targeted to increase the probability of the ligand leads getting into the brain. For the analysis of physicochemical properties, we used a novel tool, CNS PET multi-parameter optimization (MPO) (scores ranging from 0 to 6).
This multi-parameter optimization tool allowed us to track all six physicochemical properties commonly considered in compound design, including cLogP, cLogD, MW (molecular weight), tPSA (topological polar surface area), HBD (number of hydrogen bond donors), and pKa (ionization constant of the most basic center). Compared to individual physicochemical property parameters, CNS PET MPO provided much improved differentiation between the two categories, with CNS PET MPO > 3 preferred. Only a low percentage (33 %) of the failed ligands and the vast majority (79 %) of the successful ligands resided in this range. Further analysis also revealed that by targeting CNS PET MPO > 3, there was a higher probability of aligning all three key ADME parameters in one molecule. Importantly, our group has developed high performing in silico models for RRCK, MDR and Fu_b based on experimental data for a diverse set of >100,000 compounds, which allows calculation of in vitro ADME properties for assessment prior to synthesis. These in vitro ADME and physicochemical property criteria, together with the previously described potency criterion (B max /K d > 10), should be considered for lead prioritization and rational ligand design. The prospective use of these PET design parameters was exemplified by the development of a highly selective PDE 2A PET ligand, 4-(3-[ 18 F]fluoroazetidin-1-yl)-7-methyl-5-(1-methyl-5-(4-(trifluoromethyl)phenyl)-1H-pyrazol-4-yl)imidazo[1,5-f][1,2,4]triazine ([ 18 F]PF-05270430). As illustrated in Fig. 3, starting from over one thousand existing chemical compounds, upon filtering by the defined criteria of CNS PET MPO, RRCK/MDR, and Fu_b, and further prioritization by potency, we quickly identified pyrazolopyrimidine 1 as a promising PET ligand lead. PET assessment of [ 11 C]1 in NHP revealed preferential binding to striatum, consistent with PDE 2A enzyme expression. However, the in vivo binding potential (BP) of 0.6 was sub-optimal for practical utility in receptor occupancy quantification. To address this issue, we carried out a PET-specific structure-activity relationship (SAR) effort with the goal of further improving potency while maintaining the other properties. A total of six close-in analogs were targeted for synthesis, with calculated properties in the desired ranges. From this cohort, the imidazotriazine PF-05270430 was identified as the best lead to advance for PET assessment (Fig. 4). Compared to compound 1, PF-05270430 achieved ~4× improvement in potency (IC 50 = 0.5 nM) without compromising selectivity and other properties. As predicted by the in silico models, PF-05270430 fit our defined preferred PET parameters nicely, including high CNS PET MPO (4.86), high RRCK P app AB (21.0 × 10 −6 cm/s) and low MDR BA/AB (1.71), suggesting good brain permeability, and favorable Fu_b (0.08) with a low risk of non-specific binding. In addition, the fluorine on the terminal azetidine provided a synthetic handle for [ 18 F] labeling. In the subsequent PET imaging studies, the improvement in potency translated nicely into higher in vivo BP ND (1.8 in putamen, 1.4 in caudate). Importantly, the signal of [ 18 F]PF-05270430 in striatum was significantly blocked by the selective PDE 2A inhibitor PF-05180999 in a dose-responsive manner, confirming its in vivo specificity for the PDE 2A enzyme. These results suggested that [ 18 F]PF-05270430 could be used as a promising ligand for PDE 2A PET imaging.
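As a rough illustration of the lead-prioritization filter described above, the following sketch applies the stated thresholds (Fu_b > 0.05, RRCK Papp AB > 5 × 10−6 cm/s, MDR1 BA/AB ≤ 2.5, CNS PET MPO > 3, Bmax/Kd ≥ 10) to a hypothetical compound pool. The field names and example records are assumptions for illustration only, and the actual CNS PET MPO score is computed from piecewise desirability functions not reproduced here.

# Minimal sketch of the criteria-based filter; field names and records are hypothetical.
def passes_pet_criteria(cpd, bmax_nm):
    """Return True if a compound record meets the CNS PET ligand criteria quoted in the text."""
    return (cpd["rrck_ab"] > 5.0              # RRCK Papp AB, in 1e-6 cm/s
            and cpd["mdr_ratio"] <= 2.5       # MDR1 BA/AB efflux ratio
            and cpd["fu_b"] > 0.05            # fraction unbound in brain
            and cpd["cns_pet_mpo"] > 3.0      # multi-parameter optimization score (0-6)
            and bmax_nm / cpd["kd_nm"] >= 10) # Bmax/Kd potency criterion

pool = [
    {"name": "cpd-1", "rrck_ab": 21.0, "mdr_ratio": 1.7, "fu_b": 0.08, "cns_pet_mpo": 4.9, "kd_nm": 0.5},
    {"name": "cpd-2", "rrck_ab": 3.2, "mdr_ratio": 4.0, "fu_b": 0.02, "cns_pet_mpo": 2.1, "kd_nm": 12.0},
]
leads = [c["name"] for c in pool if passes_pet_criteria(c, bmax_nm=20.0)]
print(leads)  # -> ['cpd-1']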
The PET design parameters have also been applied to the discovery of tritiated in vivo radiotracers. This was illustrated by our effort in the discovery of a nociceptin opioid receptor (NOP) radiotracer (Zhang et al. 2014). Past efforts in NOP radiotracer discovery were unsuccessful largely due to high in vivo NSB, likely a result of the highly lipophilic nature of previous ligand leads. To identify better tracer leads, we started off by mining the data for existing chemical matter, prioritized by the CNS PET design parameters. From this effort, a spirocyclic compound 5 caught our attention, as it demonstrated the desired sub-nM affinity to the NOP receptor (K i = 0.59 nM) with well-aligned properties including CNS PET MPO of 3.3 (>3), RRCK P app AB of 13.9 × 10 −6 cm/s (>5), MDR BA/AB of 1.76 (≤2.5) and Fu_b of 0.07 (>0.05). However, it lacked sufficient selectivity over the mu-opioid receptor (10×). Further in-depth SAR analysis revealed that mu-opioid receptor selectivity could be modulated by reversing the amide connectivity at the right-hand N-alkyl portion, as exemplified by compound 6 (105× selectivity). Combining structural features of compound 5 with the reversed amide selectivity handle from compound 6 yielded a much improved radiotracer lead, PF-7191. As shown in Fig. 4, PF-7191 maintained all desired CNS PET properties with significantly improved potency (K i = 0.1 nM) and selectivity over the mu-opioid receptor (1036×). The bio-distribution of [ 3 H]PF-7191 in rat brain was determined through an ex vivo binding study. Four brain regions (cortex, hippocampus, striatum and cerebellum) were examined. The NSB was determined using a high dose of a selective NOP receptor antagonist (PF-04926965, 10 μM). As shown in Fig. 3, the distribution of [ 3 H]PF-7191 binding was consistent with known NOP receptor expression: high in cortex and hippocampus and low in striatum and cerebellum. Robust specific binding was observed in all four brain regions tested, with cortex showing the highest specific binding, with a specific/non-specific binding ratio of ~18. In the subsequent in vivo receptor occupancy study, [ 3 H]PF-7191 demonstrated rapid brain uptake, and a high percentage (~80 %) of the total binding in rat cortex was determined to be receptor specific. The binding of [ 3 H]PF-7191 was inhibited by PF-04926965, a selective NOP receptor antagonist, in a dose-responsive manner, yielding an ED 50 of 0.9 mg/kg. This overall favorable profile indicated that [ 3 H]PF-7191 is a robust radiotracer to support preclinical in vivo receptor occupancy measurements and a promising lead for C-11 labeling and further PET assessment in higher species.
LC-MS/MS "cold tracer" method: implement cost-effective in vivo specific binding assessment

In vivo specific binding assessment studies are necessary to confirm the viability of a potential PET ligand in preclinical species prior to advancing into human studies. For in vivo specific binding assessment, our earlier efforts employed the traditional in vivo radiotracer target occupancy (TO) protocol in rodents and used it as a pre-screen prior to advancing potential ligand leads to the more costly NHP PET imaging studies. In these efforts, mice were treated with either vehicle (baseline) or a high dose of a blocking compound (blocking), followed by administration of a tritiated version of a PET ligand lead. After tissue dissection and sample preparation, the radioligand binding in brain regions of interest was quantified by scintillation spectroscopy. The specific binding was determined by comparing a vehicle group with a blocking group pre-treated with a high dose of a target-selective compound, or a wild-type (WT) group with a target knock-out (KO) group mimicking complete target occupancy. While this method could provide valuable specific binding information, it had significant drawbacks. First of all, there was considerable cost and time associated with precursor preparation and tritiation of suitable brain penetrant ligand leads. Furthermore, the substrate scope was limited as not all ligand leads had sites for tritiation. For example, leads that only had a [ 18 F] labeling handle would not be amenable to this method.
To address these limitations, we explored a previously reported liquid chromatography-mass spectrometry (LC-MS/MS) "cold tracer" protocol for in vivo specific binding assessment (Fig. 6) (Chernet et al. 2005). Recent advances in high sensitivity LC-MS/MS have enabled accurate quantification of low compound concentrations at tracer levels. Therefore, rather than administration of a radioactive "hot" ligand, a ligand candidate in a non-radiolabeled "cold" form at a low tracer dose (typically ≤ 10 μg/kg) was injected into test animals. The distribution of the "cold" tracer in various brain regions was then quantified by high sensitivity LC-MS/MS in place of scintillation spectroscopy. These modifications offered significant improvements over the traditional radioactive TO method on multiple fronts. First, the "cold tracer" method bypassed the tritiation step, saving time and cost, and importantly this method was applicable to all PET ligand chemotypes. Second, it provided exposure measurements for both the radioligand lead and the blocking compound, yielding TO and PK information in a single experiment. Finally, it allowed concurrent use of multiple "cold" tracers in the same animals to enable sophisticated pharmacology studies that would not be possible otherwise. All these advantages amounted to a faster and more cost-effective way to test new PET ligand leads in vivo. It is important to point out, however, that in certain situations wherein there is a significant rodent/human species disconnect, either in target expression (B max ) or in the binding affinity of a PET ligand lead, a study in rodents would no longer predict PET ligand performance in human. In situations like this, one could consider testing the ligand leads directly in higher species to gain an accurate read on ligand viability.
The usage of LC-MS/MS "cold tracer" method was nicely demonstrated by the discovery of a novel kappa opioid receptor (KOR) antagonist PET ligand [ 11 C]-3chloro-4-[4-[[(2S)-2-(pyridine-3-yl)pyrrolidin-1-yl]methyl]phenoxy]benzamide ([ 11 C]LY2795050) (Fig. 7a) (Mitch et al. 2011). SAR efforts around a novel aminobenzyloxyarylamide chemical series yielded four KOR antagonists that were selected for in vivo specific binding evaluation in rats by the LC-MS/MS. The level of specific binding was determined by comparing brain uptake in the KOR-enriched striatum region, representing total binding, with the KOR-free cerebellum region, representing NSB. Among four compounds assessed, compound 2 demonstrated highest specific to NSB ratio (2.2). However its brain permeability was poor. Of the remaining three compounds, compound 3 (LY2795050) emerged as the best lead with appropriate brain kinetics and a specific to NSB ratio (1.2) comparable to (−)-4-methoxycarbonyl-2-[(1-pyrrolidinylmethyl]-1-[(3,4dichlorophenyl)acetyl]-piperidine (GR103545) (1.4), a known KOR agonist PET ligand. The striatal brain uptake of LY2795050 was dose-dependently blocked by a KOR antagonist compound 4 [0.1-30 mg/kg, per oral (PO)]. LY2795050 was also evaluated in WT and KOR-KO mice following the same LC-MS/MS protocol. Consistent with the findings in rats, LY2795050 showed higher concentrations in the KOR-rich striatum region, with a striatum-to-cerebellum ratio of 3.3 at 60 min after dosing. LY2795050 was subsequently radiolabeled by [ 11 C] and evaluated in a NHP PET imaging study (Zheng et al. 2013). As predicted by rodent LC-MS/MS "cold tracer" studies, [ 11 C]LY2795050 showed favorable metabolic profile and reasonable KOR-specific binding signals. The level of specific binding as measured by BP ND was 0.63 in cingulate cortex and 0.66 in putamen.
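As a rough numerical illustration of the read-out described above (not the authors' code), specific binding can be estimated by comparing the cold-tracer concentration in a target-rich region (e.g. striatum for KOR, total binding) with a reference region (e.g. cerebellum, non-specific binding), and occupancy follows from the loss of specific binding after pre-treatment with a blocking dose. The function names and the example concentration values are assumptions.

# Sketch of the LC-MS/MS "cold tracer" specific-binding/occupancy calculation; numbers are illustrative.
def specific_to_nsb_ratio(c_target_region, c_reference_region):
    """(total - non-specific) / non-specific, from region concentrations of the cold tracer."""
    return (c_target_region - c_reference_region) / c_reference_region

def target_occupancy(ratio_vehicle, ratio_blocked):
    """Fractional occupancy from the reduction in specific binding after a blocking dose."""
    return 1.0 - ratio_blocked / ratio_vehicle

ratio_veh = specific_to_nsb_ratio(3.3, 1.0)   # e.g. striatum vs cerebellum, vehicle group
ratio_blk = specific_to_nsb_ratio(1.4, 1.0)   # same regions after a high blocking dose
print(round(target_occupancy(ratio_veh, ratio_blk), 2))  # ~0.83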
A map matching algorithm based on modified hidden Markov model considering time series dependency over larger time span
With the advancement of geopositioning systems and mobile devices, much research with geopositioning data are currently ongoing. Along with the research applications, map matching is a technology that infers the actual position of error-prone trajectory data. It is a core preprocessing technique for trajectory data. Among various map matching algorithms, map matching using Hidden Markov Model (HMM) has gained high attention. However, the HMM model simplifies the dependency of time series data excessively, which leads to inferring incorrect matching results for various situations. For example, complex road relationships or movement patterns, such as in urban areas, or serious observation errors and sampling intervals make matching more difficult. In this research, we propose a new algorithm called trendHMM map matching, which complements the assumptions of HMM. This algorithm considers a wider range of dependencies of geopositioning data by incorporating the movements of neighboring data into the matching process. For this purpose, the concept of the window containing adjacent geopositioning data is introduced. Thus trendHMM can utilize relationships among continuous geopositioning data and showed considerable enhancement over HMM-based algorithm. Through experiments, we demonstrated that trendHMM map matching provides more accurate results than the existing HMM map matching for various environments and geopositioning data sets. Our trendHMM algorithm shows up to 17.58% of performance enhancement compared to HMM based one in terms of Route Mismatch Fraction.
Introduction
Due to advancements in wireless communication and geopositioning technologies, it has become possible to collect a large amount of geolocation data from various devices such as smartphones and vehicles in different geopositioning systems. The continuous movements of objects can be represented in periodic time series form. However, issues such as low measurement frequency and measurement errors can lead to discrepancies between the actual position and the collected data. Therefore, proper data preprocessing is necessary. During the data preprocessing process, map matching technology is frequently used to infer the actual path taken by the subject. This approach involves depicting the connectivity between roads as a graph and inferring which edge, or road, the data at a specific point in time corresponds to. Among various map matching algorithms, those using the Hidden Markov Model (HMM) have gained attention. The HMM-based map matching algorithm, first introduced by Newson and Krumm [1], has been widely utilized in research due to its performance and robustness. In particular, it is known for its excellent performance with data having sampling intervals of less than 30 seconds. However, the existing HMM-based map matching approach oversimplifies the problem. One fundamental issue is that it considers the data at time t to be dependent only on the data at time t − 1. Therefore, the existing HMM-based map matching approach yields inaccurate results for data with large errors or sampling intervals, as well as for areas with complex road networks such as urban areas. To overcome these limitations, there have been studies utilizing high-order HMMs that consider time points earlier than t − 1, such as [2,3]. However, the computational cost significantly increases as the dimensionality increases, preventing the extension of the model beyond second order. Therefore, instead of simply increasing dimensions, we considered the dependencies among various data by grouping multiple data points to account for the movement trend in the dataset, without explicitly expanding the dimensionality. In this research, we propose the trendHMM map matching algorithm, which complements the existing HMM-based map matching algorithm by considering a wider range of dependencies. This algorithm takes into account the "trend," which represents broader movements, by grouping the matching data and neighboring data. By considering the trend, we were able to address the issues of dependency and dimensionality increase that the conventional approach had. Through experiments using various geopositioning data, we confirm that the trendHMM map matching algorithm yields more accurate results than the existing HMM map matching approach. Furthermore, we performed a detailed comparison between trendHMM and the well-known HMM-based map matching algorithm, which is a representative map matching algorithm, both with and without preprocessing. This allowed us to make a precise comparison between trendHMM and HMM. The results of this comparison can be found in subsection 4.3.
The key contributions of this research are as follows: • It expands the dependency between data by reflecting the movement trend of data.
• It demonstrates good performance without the need for data preprocessing, unlike conventional HMM-based algorithms.
• It exhibits similar time and space complexity as conventional HMM-based algorithms, but achieves better performance.
The contents of this research paper are as follows: Section 2 discusses research results related to our topic. In Section 3, the new algorithm developed in this paper, trendHMM, is explained; we also discuss the existing HMM-based map matching algorithm, which forms the basis of our algorithm, along with the terminology used in the paper. Section 4 shows the experimental results, both for the HMM-based algorithm and for the trendHMM algorithm, as well as a comparison of the results of both algorithms. The final Section 5 concludes this research and discusses possible future research.
Related works
According to the map matching survey research [4], map matching algorithms can be broadly classified into four categories based on the applied techniques: Similarity Model, Candidate-Evolving Model, Scoring Model, and State-Transition Model.The Similarity Model infers the closest road geometrically and topologically.In other words, it matches the trajectory data to the road that is closest in terms of shape.Since the moving object always travels on the road network and cannot leap from one segment to another, the measured geolocation series closely resembles the actual path on the map.This model generally demonstrates high efficiency, but it exhibits lower accuracy when dealing with data with large sampling intervals or errors, or in complex road scenarios.
There exists another related survey to our study [5].In this survey, the authors classified and reviewed existing map matching algorithms.The previously mentioned Similarity Model, Candidate-Evolving Model, Scoring Model, State-Transition Model, etc., are also included, and our research falls within the State-Transition Model category.In addition to this, various research results related to map matching include the following.
In a research [6], the Similarity Model is further classified into two subcategories: point-to-curve matching, where the model is applied to each point of the trajectory data, and curve-to-curve matching, where the model is applied to segments formed by grouping the trajectory data.This classification is also mentioned in research results such as [7] and [8].In point-to-curve matching, each point of the trajectory data is matched with the nearest edge of the road network.On the other hand, in curve-to-curve matching, the trajectory data is grouped to form trajectory (Tr) segments, and then matched with the closest edge.Various models have been proposed based on different definitions of proximity, and notable research in this area include [9] and [10].
The Candidate-Evolving Model maintains a candidate set (also known as particles or hypotheses) during the map matching process.The candidate set is initialized by the first trajectory (Tr) sample, and only the candidates that are closest to the latest observation are kept from the existing candidates.By iterating the algorithm, the number of times each candidate is included in the candidate combination can be calculated.The most frequently included candidates are then used to compute segments and determine the matching path.Prominent models in this category include methods that combine Monte Carlo sampling techniques and Bayesian inference, such as Particle Filter-based approaches [11,12], and methods that utilize the Multiple Hypothesis Technique (MHT) [13].These approaches effectively utilize the candidate set to infer the most likely matching path.
The Scoring Model, as described in research such as [14] and [15], does not rely on a specific model.Instead, it searches for candidate points that maximize a pre-defined scoring function for each Tr segment.In particular, a research shown in [15] achieved lane-level map matching performance using this approach.The road network is partitioned into grid cells.For each timestamp, the candidate grid cells corresponding to the observed values are identified, and the candidate with the highest scoring function is selected.The scoring function is determined using four linear features, such as the proximity between the grid cell and the trajectory sample.By utilizing the scoring function, the Scoring Model evaluates and selects the most suitable candidates for each Tr segment, providing a flexible and effective approach for map matching tasks.
The State-Transition Model constructs a weighted topological graph consisting of all possible paths that the subject can take.In this graph, nodes represent the possible states of the subject at specific moments, and edges represent transitions between states at different timestamps.The model infers the optimal path by determining the path that maximizes the weights.Notable approaches in this category include the conditional random field (CRF) model [16], the weighted graph transition (WGT) model [17,18], and methods that utilize Hidden Markov Model (HMM).While these algorithms share similarities, they differ in how they calculate the weights.We focus on the method that utilizes HMM, which is a popular choice for map matching tasks.
HMM (Hidden Markov Model) is one of the most widely used methods in the State-Transition Model for map matching.HMM focuses on cases where the states in a Markov chain cannot be directly observed but can be inferred from the observed measurements.This assumption aligns well with map matching problems, where each point in the trajectory (Tr) is treated as an observed measurement, and the actual position of the subject is considered as an unobserved state.In HMM, the roads near the measured values are considered potential locations (states) of the subject due to observation errors in the trajectory.The probability of the measured values being observed when the subject is actually located on a road is expressed as emission probability.The probability of the subject transitioning from one candidate location to the next candidate location consecutively is represented as the transition probability.By finding the combination of candidate locations with the highest probability, HMM determines the final matching path.
HMM-based map matching can be broadly classified into online and offline models.The offline model uses the entire trajectory (Tr) to understand the overall relationships within it.It demonstrates robust performance against variations in sampling intervals and observation errors but can be computationally inefficient.The online model, first proposed in [19], performs map matching by incorporating the data collected in real-time and constructing segments.It is utilized in real-time navigation and similar online services, and most online models employ sliding window techniques for the matching process.The algorithm we will present belongs to offline algorithm.
Recently, several algorithms have been proposed to enhance traditional HMM-based map matching by incorporating a range of factors such as speed, angle, preference, and more, aiming to achieve higher accuracy. For example, research such as [2, 20-22] includes factors such as speed limits, road levels, and the difference between the vehicle's heading change and the road segments' heading change in the probability calculations. Additionally, research such as [2,23,24] takes into account the driver's travel preference information obtained through heuristics or learning. However, these models assume a perfect road network representation. They assume that the road network includes all roads existing at the time when the trajectory was measured and does not account for any hidden or missing roads. As more factors are considered in map matching, there is a tendency to interpret the data according to the road network as much as possible. Therefore, these models, as shown in [25], may not be suitable for solving the map inference problem of discovering hidden roads when the network itself is incorrect. Indeed, there have been studies that consider a larger temporal dependency to overcome the Markov property in HMM. Since the movement of the target is a continuous time series, there exist complex spatio-temporal relationships between the current state and previous states. In other words, the Markov property overly simplifies the map matching problem. To address this, [2] proposed a second-order HMM-based map matching algorithm, which showed better performance than the first-order approach. However, [2] did not extend the algorithm beyond two dimensions due to computational efficiency issues. Our aim is to provide an algorithm which maintains the same time complexity as traditional HMM-based map matching but incorporates a larger temporal dependency compared to second-order HMM.
There are research results that share similar objectives with our study [26]. That work proposes methods to minimize errors in GPS observation data, but its approach to model construction differs from ours, and its focus on online algorithms sets it apart from our study. There are also research outcomes that utilize additional equipment on top of map matching algorithms. The research in [27] demonstrates cases where map matching technology is applied; it introduced radar/INS integration combined with map matching techniques to achieve the high level of accuracy required for autonomous navigation.
We propose a statistical model called trendHMM map matching, which complements traditional HMM map matching. We primarily focused on comparing against the most common map matching algorithm, Hidden Markov Model (HMM) map matching, rather than the other map matching algorithms, as shown in subsection 4.3. To consider a larger temporal dependency beyond just the previous timestamp, trendHMM map matching constructs a window by grouping several data points around the point of interest. Representative points within the window are selected to form a new trajectory that captures broader movements. During the map matching process, the weights generated from this new trajectory are additionally considered, reflecting the movements and trends over a wider range. The size of the window in trendHMM can be adjusted to control the temporal dependency, and even with varying window sizes, it provides the same time complexity as the HMM approach. Furthermore, as trendHMM is a purely statistical algorithm without external factors, it is expected to achieve better performance in map inference problems.
An enhanced version of HMM based map matching algorithm
In this section we first define the terminology used in this paper. Then, we explain the basics of the existing HMM-based map matching algorithm as the basis of our algorithm, together with its limitations and the points to be improved. The core of this section is subsection 3.4, which explains the trendHMM algorithm on the basis of the previous subsections.
Terminology
Movement data refers to measured geopositioning coordinates, and a trajectory represents the temporal sequence of movement data. Each geopositioning coordinate includes longitude, latitude, and a timestamp, and there may be discrepancies between the measured coordinates and the actual positions due to inherent geopositioning system properties and/or environmental problems. GPS error is described in [28].
Map matching is a crucial technology for preprocessing trajectories.It involves representing the connectivity between roads in the form of a graph and assigning each movement data point to a specific road segment, thereby inferring the actual location of the object.The concepts used in this manuscript are as follows.
• Geolocation data: refers to a single geopositioning point. The t-th movement data point in the trajectory Tr is denoted as g(t). The number of geolocation data points in Tr, i.e., the cardinality of Tr, is n.
• Road Network: represented as a graph depicting the connectivity relationship between roads, denoted as G(V, E). Here, V represents the set of nodes and E represents the set of edges.
• Route: a time series composed of edges from the road network that match the trajectory.
Basics on map matching based on HMM
The purpose of map matching is to match each movement data point in the trajectory, denoted as Tr, with a specific edge in the road network. The HMM proceeds by modeling the edge where the object is actually located at that point as a hidden state, and modeling the measured location point as an observed value.
First, based on a given mobility dataset, roads within a radius r of the data point are selected as candidates. C_g(t) ∈ CDS[t] represents a candidate for the data point g(t). The emission probabilities and transition probabilities are calculated using the shortest distance between the data point and the candidate edges, as well as the projection points of g(t) and g(t+1) onto those candidate edges (the points on the edges located at the shortest distance). The emission probability represents the probability of observing a measurement given the actual road position. The emission probability for the candidate C_g(t) is defined by Equation (1):

ep(C_g(t)) = (1 / (sqrt(2π) · σ)) · exp(−δ_t² / (2σ²))   (1)

In Equation (1), δ_t represents the distance between g(t) and its projection on C_g(t), and the emission probability follows a Gaussian distribution with zero mean and standard deviation σ for δ_t.
The transition probability represents the probability of actually transitioning from C_g(t) to C_g(t+1). Fig. 1(a) shows an example of a candidate projection point. As shown in Fig. 1(b), let d_t be the straight-line distance between g(t) and g(t+1), and let r_t be the shortest distance traveled along the road between the projection points of C_g(t) and C_g(t+1). The transition probability is calculated as Equation (2) and follows an exponential distribution with respect to the difference between d_t and r_t:

tp(C_g(t), C_g(t+1)) = (1 / β) · exp(−|d_t − r_t| / β)   (2)
The actual path inferred by the algorithm is the combination of candidates that maximizes Equation (4), i.e., the sum of the weights of Equation (3) over the whole trajectory Tr. Equation (3) represents the weight computed from the emission and transition probabilities for each candidate of the data. The weights are larger for candidates that are closer to the data and have smaller differences between d_t and r_t. This can be computed efficiently using the Viterbi algorithm [29]. The algorithm based on the Viterbi algorithm is described in Algorithm 1.
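As a small illustration of the probabilities reconstructed in Equations (1)-(3) (a sketch, not the authors' implementation; σ and β default to the values used later in the experiments, and the distances passed in are hypothetical):

import math

def emission_prob(dist_to_candidate, sigma=10.0):
    # Equation (1): Gaussian over the distance between g(t) and its projection on the candidate edge.
    return math.exp(-0.5 * (dist_to_candidate / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def transition_prob(straight_dist, route_dist, beta=10.0):
    # Equation (2): exponential over the gap between straight-line and along-road distance.
    return math.exp(-abs(straight_dist - route_dist) / beta) / beta

def w_hmm(ep_next, tp_curr_next):
    # Equation (3): log-sum of emission and transition probabilities.
    return math.log(ep_next) + math.log(tp_curr_next)

print(w_hmm(emission_prob(8.0), transition_prob(55.0, 62.0)))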
Algorithm 1 An algorithm of HMM map matching with Viterbi (pseudocode listing with forward and backward functions).
In Algorithm 1, the Viterbi algorithm is divided into two stages: forward and backward. The input parameter CDS is an array of candidate sets for each geolocation data point in Tr; in other words, CDS[t] represents the set of candidate objects C_g(t) for g(t). Each candidate object C_g(t) has instance variables for the road it points to, the accumulated score (prob), and a back-pointer to the best previous candidate. C_g(t).prob stores the maximum value of Equation (4) accumulated from g(1) to g(t) in Tr. Therefore, for a given C_g(t), the following equation (Equation (5)) holds:

C_g(t).prob = max over C_g(t−1) ∈ CDS[t−1] of ( C_g(t−1).prob + W_hmm(C_g(t−1), C_g(t)) )   (5)

The back-pointer of C_g(t) refers to the t−1 candidate object that maximizes the value in Equation (5).
W_hmm(C_g(t), C_g(t+1)) = log ep(C_g(t+1)) + log tp(C_g(t), C_g(t+1))   (3)
In Algorithm 1, the forward step is a function that computes the probabilities of all candidate objects. After executing the forward step, the candidate object in CDS[n − 1] with the highest prob value corresponds to the maximum value of Equation (4). The backward step is a function that returns the matched path. Starting from the candidate object with the highest prob value in CDS[n − 1], it follows the back-pointers to return the matching result of the path.
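The following is a minimal re-implementation of the forward/backward procedure just described (a sketch under stated assumptions: the candidate class and its attribute names are reconstructed rather than the authors' original code, the first time step is initialised with a score of zero, and w_hmm is assumed to be a function implementing Equation (3)):

# Minimal sketch of Algorithm 1: Viterbi forward pass over candidate sets, then a backward trace.
class Candidate:
    def __init__(self, road):
        self.road, self.prob, self.ptr = road, float("-inf"), None

def viterbi(cds, w_hmm):
    for cand in cds[0]:
        cand.prob = 0.0                        # initialise the first time step (simplified)
    for t in range(1, len(cds)):               # forward: best accumulated score per candidate
        for cand in cds[t]:
            for prev in cds[t - 1]:
                score = prev.prob + w_hmm(prev, cand)
                if score > cand.prob:
                    cand.prob, cand.ptr = score, prev
    best = max(cds[-1], key=lambda c: c.prob)  # backward: follow back-pointers to recover the route
    route = []
    while best is not None:
        route.append(best.road)
        best = best.ptr
    return list(reversed(route))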
Limitation of map matching based on HMM
HMM assumes that the observation at time t depends only on the previous time step t − 1, which is known as the Markov property. However, this assumption oversimplifies the map matching process and can lead to incorrect results. Fig. 2 illustrates incorrect matchings. The area inside the red circle shows incorrect matching, where the path deviates from the correct route. Fig. 2(a) represents a case with large sampling intervals and significant data errors. While considering multiple time steps in the map matching process could help mitigate the impact of errors, HMM only considers the immediately previous time step, leading to ambiguous results. Fig. 2(b) depicts a situation that can occur in densely populated urban areas; similarly, due to data errors, a reverse movement can occur.
Proposed algorithm: trendHMM map matching algorithm
The main concepts of trendHMM are as follows. The movement data g(t) at time t depends on a wider range of movement data, not only on time t − 1. In particular, neighboring data points such as g(t − 2), g(t − 1), g(t + 1) and g(t + 2) have a closer relationship with g(t) than other data points. Although these neighboring data points may exhibit different characteristics in terms of direction, speed, and other movement-related features, they share a common underlying movement pattern. These shared movement patterns exhibited by neighboring data points are defined as the movement trend. Fig. 3 provides an example of a movement trend: Fig. 3(a) represents a movement trend of a vehicle and Fig. 3(b) a movement trend of a pedestrian. The measured geopositioning coordinates (blue dots) exhibit variations due to observation errors and have different movement-related characteristics, yet they share a common movement trend (red line). The actual location of g(t) is significantly influenced not only by the immediately previous data point g(t − 1) but also by the wider movement trend. To address this, we propose the trendHMM map matching algorithm, which supplements the traditional HMM model with movement trends. When calculating the map matching score, we introduce a weight W_trend that considers the movement trend in addition to the existing weight W_hmm of Equation (4). In other words, trendHMM aims to infer the path by maximizing the following Equation (6), which takes both the traditional HMM weights and the movement trend weights into account:

trend_score(Tr) = Σ_t [ W_hmm(C_g(t), C_g(t+1)) + W_trend(C_g(t+1)) ]   (6)
W_trend assigns a higher value to candidates that fit the movement trend. The specific method for calculating W_trend for a candidate C_g(t) is as follows. First, create a window by grouping neighboring data points around g(t) as the reference; this window is constructed to capture the relevant movement trend, as illustrated in Fig. 4(a). Within the window, including g(t), calculate the centroid of the data points from the previous time steps. After obtaining the centroid for the data points preceding g(t), the same procedure is applied to the subsequent data points to obtain their centroid. Then, as illustrated in Fig. 4(b), we connect g(s), the preceding centroid, g(t), and the following centroid to create a new trajectory Tr_t. Here g(s) represents the data point outside the window, with s determined as max(0, t − w + 1), where w is the window size. By considering the centroids, Tr_t incorporates the movement trend more robustly, even in the presence of errors. In Fig. 4, Tr_t represents a trajectory constructed from the original trajectory up to g(t). Finally, the generated Tr_t and Tr are used to calculate W_trend(C_g(t)) according to Equation (7).
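As a rough sketch of the window construction described above (names, index handling and the (lon, lat) coordinate representation are assumptions, since the original symbols were lost in extraction):

# Sketch: build the short trend trajectory Tr_t for index t with window size w.
def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def trend_trajectory(traj, t, w):
    s = max(0, t - w + 1)                      # first index associated with the window
    before = traj[s:t] or [traj[t]]            # neighbours preceding g(t)
    after = traj[t + 1:t + w] or [traj[t]]     # neighbours following g(t)
    return [traj[s], centroid(before), traj[t], centroid(after)]

# Example: a four-point trend trajectory around index 5 of a dummy track.
track = [(126.970 + 0.001 * i, 37.560 + 0.0005 * i) for i in range(10)]
print(trend_trajectory(track, t=5, w=3))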
Equation (7) represents the maximum value obtained from conducting trendHMM map matching on Tr and performing traditional HMM map matching on Tr_t while fixing the candidate for g(t) as a specific C_g(t). This can be computed efficiently using the Viterbi algorithm. It is important to note that Equation (7) incorporates the influence of the original Tr: the trend score trend_score(Tr) is calculated by the Viterbi algorithm before determining W_trend(C_g(t)). The normalization parameter N is used to ensure that the magnitudes of the W_hmm and W_trend terms in Equation (6) are comparable. The value of N can be determined from the equations: the number of weights contributing to trend_score is twice the number of transitions (from Equation (6)), and the number of weights for Tr_t is 3 (from Equation (5)); N is set accordingly for the calculation of Equation (7). The trendHMM algorithm using the Viterbi algorithm is described in Algorithm 2. In Algorithm 2, trendForward modifies the forward step to align with trendHMM, and the function addTrendWeight incorporates the trend into the calculation according to the trendHMM algorithm. Afterwards, the backward step of Algorithm 1 is performed to return the final matching path.
Algorithm 2 An algorithm of trendHMM map matching with Viterbi (input: CDS, the array of candidate object sets for all geolocations; the function addTrendWeight builds the candidate array for g(s), the two centroids and g(t) as shown in Fig. 4, and calculates W_trend(C_g(t)) using Equation (7)).

The algorithm proposed in this research has the same time complexity as traditional HMM-based map matching. Let us denote the average number of candidate points for each geolocation coordinate as m, and the number of geolocation coordinates in Tr as n. The time complexity of traditional HMM-based map matching, determined by the Viterbi algorithm, is O(n·m²). For trendHMM, each geolocation requires an additional map matching of Tr_t, which consists of 4 data points; this means that 3 additional computations are performed for each geolocation compared to the traditional HMM. However, since m and n remain the same, the overall time complexity of trendHMM remains O((3n + n)·m²) = O(4n·m²) = O(n·m²). Therefore, we can conclude that the time complexity of both algorithms is the same.
For the spatial complexity, both HMM and trendHMM have the same spatial complexity, O(n·m), where m represents the number of candidate objects kept per step and n is the length of the trajectory. Regardless of the window size, only two additional representative points are generated for matching each point, and Algorithm 2 generates two more candidate sets during the process. Thus, the space complexity of trendHMM is O((n + 2)·m) = O(n·m), the same as HMM.
Data and experiment setup
The dataset used in the experiments is a large-scale real-world dataset comprising 100 geopositioning tracks spanning various locations worldwide. The dataset is publicly available in [30]. Each track is provided with a map and a route that accurately matches the map. Additionally, some tracks are labeled with features that may pose challenges for map matching algorithms:
• u-turns: the vehicle turned 180° and reversed the direction of travel
• hives: large numbers of points packed in a small area
• loops: the vehicle was traveling in circles
• gaps: temporal gaps existing in the track
• severe congruence issues: situations where the map and the track are incongruent or dissimilar

In addition, the data set contains 19 tracks formed by high-quality geopositioning data points without connected segments. The length of the tracks ranges from 5 to 100 kilometers, and the dataset contains a total of 247,251 points with a sampling rate of 1 Hz. The average length and duration of the 100 tracks are 26.8 km and 4950.7 seconds, respectively. More detailed information can be found in the dataset documentation [30].
We subsampled the original tracks to create five datasets with coarser sampling intervals than the original one-second interval: 10, 20, 30, 60, and 120 seconds. Additionally, we generated a preprocessed version of each dataset using the Douglas-Peucker algorithm [31].
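For illustration, the following is a minimal sketch of this preparation step, assuming each track is a list of (timestamp, lat, lon) tuples sorted by time; the helper names and the plain recursive Douglas-Peucker implementation are assumptions, not the authors' code.

```python
def subsample(track, interval_s):
    """Keep only points at least `interval_s` seconds apart."""
    kept, last_t = [], None
    for t, lat, lon in track:
        if last_t is None or t - last_t >= interval_s:
            kept.append((t, lat, lon))
            last_t = t
    return kept

def douglas_peucker(points, eps):
    """Classic recursive Douglas-Peucker simplification on (x, y) points."""
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # perpendicular distance from p to the chord (x1, y1)-(x2, y2)
        x0, y0 = p
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5 or 1e-12
        return num / den

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > eps:
        left = douglas_peucker(points[: idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]
```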
The matching accuracy is measured by the Route Mismatch Fraction (RMF) proposed by Newson and Krumm [1]. The measured values include the total length of false positive road segments, denoted d+, and the total length of false negative road segments, denoted d−. With d0 denoting the length of the correct route, the RMF quantifies the map matching error as the ratio (d+ + d−)/d0. A smaller RMF value indicates that the map matching result is more similar to the actual path.
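A minimal sketch of this metric, assuming the matched and ground-truth routes are given as sets of road-segment IDs together with a length lookup; the names are illustrative only.

```python
def route_mismatch_fraction(matched, truth, seg_len):
    """RMF = (d_plus + d_minus) / d_0 (Newson & Krumm).

    matched, truth: sets of road segment IDs for the matched and correct routes.
    seg_len:        dict mapping a segment ID to its length in meters.
    """
    d_plus = sum(seg_len[s] for s in matched - truth)   # falsely added length
    d_minus = sum(seg_len[s] for s in truth - matched)  # falsely missed length
    d_zero = sum(seg_len[s] for s in truth)             # length of the correct route
    return (d_plus + d_minus) / d_zero
```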
Next, we evaluate the efficiency of trendHMM for geolocation data with various sampling intervals. To compare trendHMM with the traditional HMM-based method, we derive the RMF values for each approach. Throughout the experiments, the geopositioning measurement standard deviation in Equation (1) is set to 10 for both methods. The scaling factor in Equation (2), used for estimating transition probabilities, is also set to 10; the qualitative results are maintained within the range reported in [23]. For efficiency, candidates are constructed by searching for road segments within a radius of 200 meters from each geopositioning point. All methods are implemented in Python, and the experiments are conducted on a machine with two Intel Xeon E5-2630 v2 2.60 GHz CPUs and 64 GB RAM.
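For context, the usual HMM map-matching probabilities referenced by Equations (1) and (2) follow Newson and Krumm: a Gaussian emission in the point-to-segment distance and an exponential transition in the difference between the straight-line and route distances. The sketch below is a generic rendering of those standard formulas; the parameter names (`sigma`, `beta`) are chosen here and are not guaranteed to match the paper's notation.

```python
import math

def emission_prob(dist_to_segment, sigma=10.0):
    """Gaussian emission probability in the distance from the point to the candidate segment."""
    return math.exp(-0.5 * (dist_to_segment / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def transition_prob(great_circle_dist, route_dist, beta=10.0):
    """Exponential transition probability in |route distance - straight-line distance|."""
    dt = abs(great_circle_dist - route_dist)
    return math.exp(-dt / beta) / beta
```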
Parameter estimation
The trendHMM algorithm proposed in this research requires the additional estimation of the window size, denoted as W. According to the algorithm in Section 3.4, as the value of W increases, the distance and time interval between the considered movement data points also increase; in other words, the range of temporal dependencies expands.
In this experiment, we varied the value of W as shown in Fig. 5 to select an appropriate value for the data generated with different sampling intervals (10, 20, 30, 60, and 120 seconds). Overall, when the sampling interval is small, a higher W tends to yield lower RMF values; when the sampling interval is large, a higher W leads to lower performance. To estimate the precise location of a movement data point, a narrow movement trend composed of data points with close temporal and spatial proximity is required. With larger sampling intervals, approximate location estimation is still possible, but the lack of precision compared to smaller sampling intervals results in lower performance as W increases.
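The selection procedure above amounts to a small grid search. A minimal sketch, assuming a matching function and an error function (e.g., the trendHMM matcher and the RMF metric) are supplied as callables; the names are illustrative.

```python
def pick_window_size(tracks, truths, match_fn, score_fn, candidates=(3, 4, 5, 6, 7, 8)):
    """Return the window size W with the lowest mean error over the given tracks.

    match_fn(track, w) -> matched route; score_fn(matched, truth) -> error value.
    """
    def mean_error(w):
        scores = [score_fn(match_fn(trk, w), gt) for trk, gt in zip(tracks, truths)]
        return sum(scores) / len(scores)
    return min(candidates, key=mean_error)
```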
Experimental results
The experiments were conducted along two main axes: trendHMM vs. HMM, and the presence or absence of data preprocessing with the Douglas-Peucker algorithm [31]. In all experiments, the parameter of the Douglas-Peucker algorithm was fixed at 0.001. The radius used to construct the trend data was set to 0.002, and the number of neighboring data points to be grouped, denoted as W, was set to one of {3, 4, 5, 6, 7, 8}.
• The parameter of the Douglas-Peucker algorithm is 0.001.
• The radius for the trend data is 0.002.
• The number of neighboring data points to be grouped, W, is a value from the integer set {3, 4, 5, 6, 7, 8}.
Firstly, we compared trendHMM and HMM with and without data preprocessing. Each algorithm was applied in the same manner, the only difference being the inclusion or exclusion of the preprocessing step. Fig. 6(a) shows that trendHMM performs better without preprocessing: the red line representing "without_preprocessing" generally exhibits lower RMF values than the brown line representing "with_preprocessing." In contrast, Fig. 6(b) shows that for HMM the "with_preprocessing" results have lower RMF values than "without_preprocessing." In other words, trendHMM generally performs better without preprocessing, whereas HMM performs better with preprocessing. Next, we compared the non-preprocessed trendHMM with each HMM variant: Fig. 7 shows the comparison with HMM without preprocessing, and Fig. 8 the comparison with HMM with preprocessing. Overall, trendHMM without preprocessing shows superior performance to HMM both with and without preprocessing.
However, at a sampling interval of 20 seconds and W values of 3, 4, and 5, trendHMM without preprocessing showed lower performance than HMM with preprocessing. To investigate the cause, we plotted some of the low-performing data on the map. Fig. 9 shows that these correspond to cases where the trajectory stays in one location for a long time; Fig. 9(a) shows the map with the highest error and Fig. 9(b) the map with the second-highest error. This suggests that the lower performance in these situations may be due to factors such as data preprocessing and variations caused by the window size W, so some inaccuracies occurred under these specific conditions. However, trendHMM with preprocessing at the same sampling interval still showed higher performance than HMM in each case.
According to Table 1, among all the models, trendHMM without preprocessing exhibits the highest performance. Additionally, when comparing trendHMM and HMM in Table 2, trendHMM shows improved performance irrespective of preprocessing. In general, the best RMF value for trendHMM was obtained without preprocessing, and the best RMF value for HMM was obtained with preprocessing. Traditional HMM-based map matching considers only very short-term dependencies by using the immediately preceding data. In contrast, trendHMM can adjust the temporal dependencies more flexibly through the window size W. Additionally, trendHMM achieves high performance even without data preprocessing, while HMM requires preprocessing as a crucial step. Therefore, trendHMM reduces the costs associated with preprocessing, making it more efficient than HMM.
Conclusion and future research
We proposed the trendHMM map matching method as an enhancement to the existing HMM-based map matching.The trendHMM overcomes the limitations of HMM by considering a wider range of movement patterns, while maintaining the same time complexity as traditional methods.Through experiments with diverse sampling intervals, trendHMM demonstrated superior performance.Notably, trendHMM achieved high accuracy without the need for data preprocessing, whereas HMM required preprocessing as a crucial step.Therefore, it can be concluded that trendHMM can reduce the preprocessing costs associated with traditional HMM approaches.
The trendHMM relies purely on statistical modeling without considering external factors such as speed, heading, or road preferences. It is therefore expected to be well-suited for the map inference problem of identifying unmapped roads, which we plan to explore in future research. In trendHMM, a fixed window size W was used for extracting movement trends. If W is too small, the movement trend may not be captured properly; if it is too large, extensive movement trends unrelated to the current motion may be incorporated. Therefore, a method for dynamically determining an appropriate W is needed. In future research, we plan to explore self-adaptive window size determination using factors such as the angle and speed of the movement data.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Typical Operation of Map Matching Algorithm based on HMM.
Fig. 2. Example of Erroneous Matching by HMM based Map Matching Algorithm.
Fig. 5. RMF Values with respect to Window Size W.
Fig. 6. The RMF values of trendHMM and HMM with and without Data Preprocessing.
Algorithm 1 (backward step): start from the candidate object in the last step with the highest score; while the current candidate is not null, insert it into the path and move to its predecessor; finally, reverse the path's order and return it.
Table 1. Comparison of RMF Values w.r.t. Various Sampling Intervals.
Table 2. RMF values comparison: trendHMM versus HMM with their best RMF Values.
Maximizing Happiness in Graphs of Bounded Clique-Width
Clique-width is one of the most important parameters that describe the structural complexity of a graph. Probably, only treewidth is a more studied graph width parameter. In this paper we study how clique-width influences the complexity of the Maximum Happy Vertices (MHV) and Maximum Happy Edges (MHE) problems. We answer a question of Choudhari and Reddy '18 about parameterization by the distance to threshold graphs by showing that MHE is NP-complete on threshold graphs. Hence, it is not even in XP when parameterized by clique-width, since threshold graphs have clique-width at most two. As a complement to this result we provide a $n^{\mathcal{O}(\ell \cdot \operatorname{cw})}$ algorithm for MHE, where $\ell$ is the number of colors and $\operatorname{cw}$ is the clique-width of the input graph. We also construct an FPT algorithm for MHV with running time $\mathcal{O}^*((\ell+1)^{\mathcal{O}(\operatorname{cw})})$, where $\ell$ is the number of colors in the input. Additionally, we show an $\mathcal{O}(\ell n^2)$ algorithm for MHV on interval graphs.
Introduction
Clique-width is one of the most important parameters that describe the structural complexity of a graph; probably only treewidth is a more studied graph width parameter. One can treat clique-width as a generalization of treewidth, since graphs of bounded treewidth have bounded clique-width. Hence, the existence of an FPT algorithm parameterized by clique-width is a stronger result than the existence of an FPT algorithm parameterized by treewidth. The complexity of many problems has been studied under the clique-width parameterization, including Max-Cut [12], Edge Dominating Set [12], Hamiltonian Path [11], Graph k-Colorability [13,18], computation of the Tutte polynomial [15], Dominating Set [18,19], computation of the chromatic polynomial [25], and Target Set Selection [16]. In this paper, we continue this line of research and investigate the computational and parameterized complexity of the Maximum Happy Vertices and Maximum Happy Edges problems parameterized by the clique-width of the input graph.
Before defining Maximum Happy Vertices and Maximum Happy Edges, we need to define what a happy vertex or a happy edge is. Definition 1. Let G be a graph and let c : V(G) → [ℓ] be a coloring of its vertices. We say that an edge uv ∈ E(G) is happy with respect to c (or simply happy, if c is clear from the context) if its endpoints share the same color, i.e. c(u) = c(v). We say that a vertex v ∈ V(G) is happy with respect to c if all its neighbours have the same color as v, i.e. c(v) = c(u) for each neighbour u of v in G.
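As a quick illustration of Definition 1, here is a minimal sketch that counts happy vertices and happy edges for a given full coloring; the graph representation (adjacency lists in a dict with comparable vertex names) and the names are illustrative only.

```python
def count_happy(adj, coloring):
    """adj: dict vertex -> set of neighbours (both directions stored);
    coloring: dict vertex -> color.
    Returns (number of happy vertices, number of happy edges)."""
    happy_vertices = sum(
        1 for v, nbrs in adj.items()
        if all(coloring[u] == coloring[v] for u in nbrs)
    )
    happy_edges = sum(
        1 for v, nbrs in adj.items() for u in nbrs
        if u < v and coloring[u] == coloring[v]   # count each undirected edge once
    )
    return happy_vertices, happy_edges
```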
We now give the formal definition of both problems.
Maximum Happy Vertices (MHV)
Input: A graph G, a partial coloring of vertices p : S → [ℓ] for some S ⊆ V(G), and an integer k.
Question: Is there a coloring c : V(G) → [ℓ] extending the partial coloring p such that the number of happy vertices with respect to c is at least k?

Maximum Happy Edges (MHE)
Input: A graph G, a partial coloring of vertices p : S → [ℓ] for some S ⊆ V(G), and an integer k.
Question: Is there a coloring c : V(G) → [ℓ] extending the partial coloring p such that the number of happy edges with respect to c is at least k?
Maximum Happy Vertices and Maximum Happy Edges were introduced by Zhang and Li in 2015 [29], motivated by their study of algorithmic aspects of the homophyly law in large networks. These problems have recently attracted a lot of attention from different lines of research. From the parameterized point of view, the problems were studied in [1,2,3,6,26,4,5]. Works [29,30,28,27] are devoted to approximation algorithms for MHV and MHE. Finally, Lewis et al. [21] study the problems from an experimental perspective.
Before stating our results, we mention some previously known results under different parameterizations. Aravind et al. [3] constructed O*(ℓ^tw) and O*(2^nd) algorithms for both MHV and MHE, where tw is the treewidth and nd is the neighbourhood diversity of the input graph. Misra and Reddy [26] constructed O*(vc^O(vc)) algorithms for both problems, where vc is the vertex cover number of the input graph.
Our results: Below, cw is the clique-width of the input graph, ℓ is the number of colors in the input precoloring, and n is the number of vertices in the input graph. In this paper we prove the following results for the Maximum Happy Edges problem:
- MHE admits an XP-algorithm with running time n^{O(ℓ·cw)}, if a cw-expression of the input graph is given;
- MHE does not admit an XP-algorithm parameterized by clique-width alone, unless P = NP (we show that MHE is NP-complete on threshold graphs).
Note that the question of the complexity of MHE on the class of threshold graphs was asked explicitly by Choudhari and Reddy in [6].
For the Maximum Happy Vertices problem we establish the following results:
- MHV admits an FPT algorithm with O*((ℓ + 1)^{O(cw)}) running time, if a cw-expression of the input graph is given (note that MHV parameterized by clique-width alone is W[2]-hard [5]);
- additionally, MHV is solvable on the class of interval graphs in time O(ℓn²).
Our work shows that clique-width is a parameter under which the computational complexities of MHV and MHE differ most significantly. On graphs of bounded clique-width, MHV admits an FPT algorithm with running time O*((ℓ + 1)^{O(cw)}), while MHE is NP-complete on graphs of clique-width two and does not admit even an XP-algorithm when parameterized by cw alone; however, we show that there is an XP-algorithm for the extended parameter cw + ℓ. Note that when parameterized by treewidth, neighbourhood diversity, or vertex cover, the problems are known to have similar complexity. We believe that the FPT algorithm for MHV parameterized by cw + ℓ is the most interesting result of this paper.
After establishing the existence of polynomial algorithms on graphs of bounded clique-width, it is natural to investigate the complexity of the problems on minimal hereditary classes of unbounded clique-width. Unit interval graphs form one such class [22]. We show that MHV is polynomially solvable on the class of interval graphs, which is a wider graph class, so this result nicely complements our understanding of the computational complexity of MHV parameterized by clique-width. We note that interval graphs also separate MHV and MHE, as MHE is NP-complete on threshold graphs, which are a subclass of interval graphs.
Preliminaries
Basic notation. We denote the set of positive integers by N. For each positive integer k, by [k] we denote the set of all positive integers not exceeding k, i.e. {1, 2, . . . , k}. We use ⊔ for the disjoint union operator, i.e. A ⊔ B equals A ∪ B with the additional constraint that A and B are disjoint.

We use the traditional O-notation for asymptotic upper bounds, and additionally the O*-notation that hides polynomial factors. We investigate MHV and MHE mostly from the parameterized point of view; for a detailed survey of parameterized algorithms we refer to the book of Cygan et al. [9]. Throughout the paper, we use standard graph notation and terminology, following the book of Diestel [10]. All graphs in our work are undirected simple graphs.

Graph colorings. When dealing with instances of MHV or MHE, we use the notion of colorings. A coloring of a graph G is a function that maps the vertices of the graph to a set of colors; if this function is partial, we call such a coloring partial. If not stated otherwise, we use ℓ for the number of distinct colors and assume that the colors are integers in [ℓ]. A partial coloring p is always given as part of the input for both problems, along with the graph G. We also call p a precoloring of the graph G, and use (G, p) to denote the graph along with the precoloring. The goal of both problems is to extend this partial coloring to a specific coloring c that maps each vertex to a color. We call c a full coloring (or simply, a coloring) of G that extends p. We may also say that c is a coloring of (G, p). For a full coloring c of a graph G, by H(G, c) we denote the set of all vertices in G that are happy with respect to c.
Clique-width. In order to define clique-width we follow the definitions presented by Lackner et al. in their work on Multicut parameterized by clique-width [20].

To define clique-width, we need to define k-expressions first. For any k ∈ N, a k-expression Φ describes a graph G_Φ whose vertices are labeled with integers in [k]. k-expressions and their corresponding graphs are defined recursively. Depending on its topmost operator, a k-expression Φ can be of the four following types.

1. Introducing a vertex. Φ = i(v). G_Φ is the graph consisting of the single new vertex v, which receives label i.
2. Disjoint union. Φ = Φ' ⊕ Φ''. G_Φ is the disjoint union of G_Φ' and G_Φ''. The labels of the vertices remain the same.
3. Renaming labels. Φ = ρ_{i→j}(Φ'). The structure of G_Φ remains the same as the structure of G_Φ', but each vertex with label i receives label j.
4. Introducing edges. Φ = η_{i,j}(Φ'). G_Φ is obtained from G_Φ' by connecting each vertex with label i with each vertex with label j.
Clique-width of a graph G is defined as the smallest value of k needed to describe G with a k-expression and is denoted as cw(G), or simply cw.
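To make the four operations concrete, the following is a minimal sketch of an evaluator that builds the labeled graph G_Φ bottom-up from a k-expression encoded as a nested tuple; the tuple encoding and names are illustrative, not a standard format.

```python
def eval_k_expression(expr):
    """Return (edges, labels) of the graph described by a k-expression.

    expr is a nested tuple:
      ("v", name, i)       -- introduce vertex `name` with label i
      ("union", e1, e2)    -- disjoint union of two sub-expressions
      ("rho", i, j, e)     -- relabel every label-i vertex to label j
      ("eta", i, j, e)     -- add all edges between label-i and label-j vertices
    """
    op = expr[0]
    if op == "v":
        _, name, i = expr
        return set(), {name: i}
    if op == "union":
        e1, l1 = eval_k_expression(expr[1])
        e2, l2 = eval_k_expression(expr[2])
        return e1 | e2, {**l1, **l2}          # vertex names assumed distinct
    if op == "rho":
        _, i, j, sub = expr
        edges, labels = eval_k_expression(sub)
        return edges, {v: (j if lab == i else lab) for v, lab in labels.items()}
    if op == "eta":
        _, i, j, sub = expr
        edges, labels = eval_k_expression(sub)
        new = {frozenset((u, v)) for u in labels for v in labels
               if u != v and labels[u] == i and labels[v] == j}
        return edges | new, labels
    raise ValueError("unknown operator")
```

For instance, ("eta", 1, 2, ("union", ("v", "a", 1), ("v", "b", 2))) describes the two-vertex graph with the single edge ab.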
There is still no known FPT algorithm for computing a k-expression of a given graph G. However, there is an FPT algorithm that either decides that cw(G) > k or outputs a (2^{3k+2} − 1)-expression of G. For more details on clique-width we refer to [17].
Maximum Happy Edges
This section is dedicated to the Maximum Happy Edges problem parameterized by clique-width. We start by showing that Maximum Happy Edges is NP-complete on graphs of clique-width at most two.
In [6], Choudhari and Reddy proved that MHV is polynomially solvable on the class of threshold graphs (which have clique-width at most two [23]) and asked about the complexity of MHE on the same graph class. We answer their question by showing that Maximum Happy Edges is NP-complete on threshold graphs. To prove this, we require the following useful characterization of threshold graphs.

Lemma 1 ([24]). Threshold graphs are exactly the graphs whose vertex set can be partitioned into a clique K = {u_1, u_2, . . . , u_k} and an independent set I such that N(u_1) ∩ I ⊆ N(u_2) ∩ I ⊆ · · · ⊆ N(u_k) ∩ I.

We now prove the abovementioned hardness of MHE.

Theorem 1. Maximum Happy Edges is NP-complete on the class of threshold graphs.
Proof. We reduce from SAT, a classical NP-complete problem. Let F be a boolean formula on n variables in conjunctive normal form, F = C_1 ∧ C_2 ∧ · · · ∧ C_m. Each C_i is a clause, i.e. a disjunction of distinct literals, so it can be represented as C_i = {l_{i,1}, l_{i,2}, . . . , l_{i,k_i}}. We show how, given F, to construct an instance (G, p, k) of Maximum Happy Edges such that F is satisfiable if and only if (G, p, k) is a yes-instance of MHE. Moreover, G is a threshold graph and the construction can be done in polynomial time.
Let F be a boolean formula on n variables in CNF, consisting of m clauses. We construct (G, p, k) as follows.
G will be a threshold graph, so it will consist of two parts: a clique K and an independent set I. Firstly, we introduce the clique vertices of G. For each clause C_i of F we introduce a new vertex c_i in G. For each variable x_j of F we introduce m² new vertices v_{j,1}, v_{j,2}, . . . , v_{j,m²} in G. We introduce all possible edges between these m + nm² vertices in G, so these vertices form the clique K in the partition of G.
Before we proceed, let us give an intuition of the further construction. Each color we use in p corresponds to a literal of F, i.e. to an element of L = {x_1, x_2, . . . , x_n, ¬x_1, ¬x_2, . . . , ¬x_n}. Thus, we use 2n colors in p. For convenience, we use the corresponding literals to denote colors instead of the numbers in [2n]. We want each clause vertex c_i to be colored with a color corresponding to one of its literals, i.e. one of the colors l_{i,1}, l_{i,2}, . . . , l_{i,k_i}, in any optimal coloring. Similarly, we want each variable vertex v_{j,t} corresponding to the variable x_j to be colored with one of the colors corresponding to the literals of x_j, i.e. either x_j or ¬x_j. For each vertex u ∈ K, we denote the set of required colors by L(u), i.e. L(c_i) = C_i = {l_{i,1}, l_{i,2}, . . . , l_{i,k_i}} for clause vertices, and L(v_{j,t}) = {x_j, ¬x_j} for variable vertices. The purpose of the remaining independent set of G is exactly to ensure that the vertices of the clique are colored with the required colors.
Our graph is a threshold graph: it is possible to order the clique vertices so that the condition of Lemma 1 is satisfied. The order we use is the following: u_i = c_i for every i ∈ [m], and u_{m+jm²+t} = v_{j+1,t} for each j ∈ {0, 1, . . . , n − 1} and each t ∈ [m²]. The condition of Lemma 1 is satisfied because we add vertices to I step by step. The i-th step corresponds to the vertex u_i ∈ K; at this step we introduce all neighbours of u_i in I and fix their colors in the precoloring p. For convenience we denote N(u_i) ∩ I by P_i. At first, we construct P_1 in the following way. For each l ∈ L(u_1), add exactly m + nm² vertices to P_1 and color them with the color l. No more vertices are added to P_1, so |P_1| = |L(u_1)| · (m + nm²). Then, for each i ∈ [2, m + nm²], we construct P_i by adding new vertices to P_{i−1} and precoloring them; by doing so we satisfy the nesting condition of Lemma 1. The process of this construction is described below and illustrated in Fig. 1.
Let N_p(P_i, l) be the number of vertices in P_i that are precolored with the color l, i.e. N_p(P_i, l) = |{u ∈ P_i | p(u) = l}|. For each i we require that the vertices in P_i are precolored mostly with the required colors of u_i, that is, the colors in the set L(u_i). Formally, we require N_p(P_i, l) = i(m + nm²) for every l ∈ L(u_i), and N_p(P_i, l) ≤ (i − 1)(m + nm²) for every other l ∈ L (*). Note that P_1 satisfies (*). Now let P_{i−1} be constructed and satisfy (*). We construct P_i that also satisfies the constraint. We start with P_i = P_{i−1}. Then for each l ∈ L(u_i) we introduce i(m + nm²) − N_p(P_{i−1}, l) new vertices precolored with color l into P_i. For every l ∉ L(u_i), N_p(P_i, l) = N_p(P_{i−1}, l). Hence, P_i also satisfies (*).
The construction of G is finished. Let us remark again that K forms a clique in G and I = P_{m+nm²} forms an independent set in G. By construction, N(u_1) ∩ I ⊆ N(u_2) ∩ I ⊆ · · · ⊆ N(u_{m+nm²}) ∩ I, where K = {u_1, u_2, . . . , u_{m+nm²}}. Thus, by Lemma 1, G is a threshold graph. Moreover, the construction of G is done in polynomial time.
We finally set the number of required happy edges to k = (m + nm²) · (m + nm²)(m + nm² + 1)/2 + n · m²(m² − 1)/2 + m³, and argue that F is satisfiable if and only if (G, p, k) is a yes-instance of MHE. Let F be satisfiable, that is, F has a satisfying assignment σ : x_j → {0, 1}. We construct a coloring c of G extending p that yields at least k happy edges as follows.
For each j ∈ [n] and t ∈ [m²], color the vertex v_{j,t} corresponding to the variable x_j with the color corresponding to the literal of x_j that evaluates to true under σ. Since σ is a satisfying assignment, for each clause C_i there is at least one variable satisfying C_i; in other words, there exists j ∈ [n] such that either x_j ∈ C_i and σ(x_j) = 1, or ¬x_j ∈ C_i and σ(x_j) = 0. Choose any such j and color the corresponding clause vertex c_i with the color corresponding to the literal of x_j that evaluates to true. No uncolored vertex is left, so the construction of c is finished.
Claim 1. There are at least k happy edges in G with respect to c.
Proof of the claim. Consider the edges between K and I in G. Observe that for each u_i ∈ K, c(u_i) ∈ L(u_i); that is, each variable vertex is colored with a color corresponding to one of its literals, and each clause vertex is colored with a color corresponding to one of the literals it contains. Each u_i ∈ K is incident to exactly N_p(P_i, c(u_i)) happy edges that have their other endpoint in I. By the construction of G, every P_i satisfies (*) and c(u_i) ∈ L(u_i), hence there are exactly i(m + nm²) happy edges between u_i and I in G with respect to c.

In total, there are exactly (m + nm²) · (1 + 2 + · · · + (m + nm²)) = (m + nm²) · (m + nm²)(m + nm² + 1)/2 happy edges between K and I with respect to c.
For each j ∈ [n], the variable vertices {v_{j,1}, v_{j,2}, . . . , v_{j,m²}} share the same color and form a clique in G. Thus, for each j ∈ [n], there are m²(m² − 1)/2 happy edges between vertices of type v_{j,t}. In total over all variable vertices, there are n · m²(m² − 1)/2 such happy edges with respect to c.
Consider now the edges between clause vertices and variable vertices. For each i ∈ [m], the clause vertex c_i is colored with the color corresponding to a literal that evaluates to 1 with respect to σ. This literal corresponds to some variable, say x_j. All variable vertices corresponding to x_j are also colored with the literal of x_j that evaluates to 1 with respect to σ. Hence, c(c_i) = c(v_{j,t}) for every t ∈ [m²]. On the other hand, c(v_{j′,t}) ≠ c(c_i) for any j′ ≠ j, since c(v_{j′,t}) corresponds to a literal of the variable x_{j′}. Thus, there are exactly m² happy edges between c_i and the variable vertices in G with respect to c. In total over all clause vertices, there are exactly m³ such happy edges.
Considered types of edges are distinct and cover all edges of G. The number of happy edges among them sums up to k.
Hence, we showed that if F is satisfiable, then (G, p, k) is a yes-instance of MHE. We now give a proof in the other direction.
Let c be a coloring of G extending p such that at least k edges of G are happy with respect to c. We assume that c is optimal, i.e. it yields the maximum number of happy edges in G. We make the following claims and then show how to construct a satisfying assignment σ of F .
Claim 2.
In any optimal coloring c of G extending p, c(u_i) ∈ L(u_i) for every u_i ∈ K.

Proof of the claim. Suppose that c is an optimal coloring of G, but c(u_i) ∉ L(u_i) for some u_i ∈ K. There are exactly N_p(P_i, c(u_i)) happy edges between u_i and I. On the other hand, |K| = m + nm², hence u_i is adjacent to at most m + nm² − 1 vertices of color c(u_i) in K.
But if one picks any color l ∈ L(u i ) and puts c(u i ) = l, u i becomes adjacent to at least N p (P i , l) = i(m + nm 2 ) happy edges, and the total number of happy edges in G with respect to c increases. This contradicts the optimality of c.
Claim 3. In any optimal coloring c of G extending p, all variable vertices corresponding to the same variable are colored with the same color. Formally, c(v_{j,t1}) = c(v_{j,t2}) for every j ∈ [n] and all t1, t2 ∈ [m²].

Proof of the claim. Suppose that c is an optimal coloring extending p, but c(v_{j,t1}) ≠ c(v_{j,t2}) for some j ∈ [n] and t1, t2 ∈ [m²]. By (*), each of these two vertices is incident to the same number of happy edges going into I regardless of which of its two required colors it receives.
Let h 1 and h 2 be the number of vertices in K that are colored with colors c(v j,t1 ) and c(v j,t2 ), respectively. Thus, v j,t1 and v j,t2 are incident to exactly h 1 − 1 and h 2 − 1 happy edges in G[K], respectively. Note that the edge between v j,t1 and v j,t2 is not happy.
Without loss of generality, h 1 ≥ h 2 . Change the color of v j,t2 in c to c(v j,t1 ). Since c(v j,t2 ) is still a literal of x j , hence c(v j,t2 ) ∈ L(v j,t2 ), the number of happy edges connecting v j,t2 and I does not change, even though the set of such happy edges becomes different. Consider edges in G [K]. v j,t2 is now adjacent to h 1 neighbours of the same color, as the edge between v j,t1 and v j,t2 also becomes happy. Since h 1 > h 2 − 1, we have increased the total number of happy edges in G with respect to c. This contradicts the optimality of c.
We now use the above claims to construct σ from an optimal coloring c yielding at least k happy edges. By Claim 2, there are exactly (m + nm²) · (1 + 2 + · · · + (m + nm²)) = (m + nm²) · (m + nm²)(m + nm² + 1)/2 happy edges between K and I with respect to c. By Claim 3, there are exactly n · m²(m² − 1)/2 happy edges between all variable vertices. There are exactly m clause vertices in G, hence there are at most m(m − 1)/2 happy edges between all clause vertices. The only edges left are the edges between clause and variable vertices, hence there are at least m³ − m(m − 1)/2 happy edges between clause and variable vertices. Construct σ according to the colors of the variable vertices, so that the literal corresponding to c(v_{j,t}) evaluates to 1 with respect to σ. Formally, for each j ∈ [n], set σ(x_j) = 1 if c(v_{j,1}) = x_j, and σ(x_j) = 0 if c(v_{j,1}) = ¬x_j. We now argue that each clause C_i ∈ F contains a literal that evaluates to 1 with respect to σ, and that this literal is c(c_i).
Suppose that this is not true, and there is a clause C_i such that c(c_i) is a literal that evaluates to 0 with respect to σ. By the construction of σ, there are then no happy edges between c_i and the variable vertices: c_i corresponds to a literal evaluating to 0, but all colors of variable vertices are literals that evaluate to 1 with respect to σ. Moreover, any other clause vertex c_{i′} is adjacent to either 0 or m² variable vertices of color c(c_{i′}): for each literal, there are either 0 or m² variable vertices colored correspondingly to this literal.
There are exactly m − 1 clause vertices apart from c_i, hence at most (m − 1) · m² edges between clause vertices and variable vertices are happy with respect to c. However, (m − 1) · m² = m³ − m² < m³ − m(m − 1)/2 for any m > 0, a contradiction. Thus, each clause C_i contains a literal that evaluates to 1 with respect to σ, i.e. σ is a satisfying assignment of F. We proved that if (G, p, k) is a yes-instance of MHE, then F is satisfiable. The proof is complete.
There is no XP-algorithm for Maximum Happy Edges parameterized by clique-width, unless P = NP.
Proof. Suppose there is an XP-algorithm for Maximum Happy Edges parameterized by clique-width, i.e. an algorithm with running time n^{f(cw)} for some function f. Threshold graphs are a subclass of cographs [23], that is, of graphs of clique-width at most two [8]. Hence, MHE on threshold graphs could be solved in n^{f(2)} = n^{O(1)} time. Then, by Theorem 1, an NP-hard problem would be solvable in polynomial time, hence P = NP.
We have shown that MHE parameterized by clique-width alone is hard. Following the known results on the existence of O*(ℓ^{O(pw)}) and O*(ℓ^{O(tw)}) running time algorithms for both MHV and MHE parameterized by pathwidth or treewidth combined with the number of colors [1,26,2], it is reasonable to ask for the complexity of MHE parameterized by cw + ℓ. We now show that MHE parameterized by cw + ℓ admits an XP-algorithm.
Theorem 2.
There is an algorithm for Maximum Happy Edges with n^{O(ℓ·cw)} running time, if a cw-expression of G is given.
Proof. The algorithm is a standard dynamic programming over a given w-expression Ψ of G. We assume that Ψ is a nice w-expression of G, i.e. no edge is introduced twice in Ψ. For each subexpression Φ of Ψ, OPT(Φ, n_{1,1}, n_{1,2}, . . . , n_{1,ℓ}, n_{2,1}, . . . , n_{w,ℓ}) denotes the maximum number of happy edges that can be obtained in G_Φ simultaneously with respect to a coloring such that the number of vertices with label i in G_Φ that are colored with color a is exactly n_{i,a}. Formally, OPT(Φ, n_{1,1}, . . . , n_{w,ℓ}) is the maximum of |E(G_Φ, c)| over all such colorings c, where E(G_Φ, c) is the set of edges that are happy in G_Φ with respect to c. If there are no colorings corresponding to a cell OPT(Φ, n_{1,1}, . . . , n_{w,ℓ}), we put its value equal to −∞.
The algorithm computes the values of OP T in a bottom-up approach, starting from the simplest subexpressions of Ψ up to Ψ itself. Thus, when the algorithm starts computing the values of OP T (Φ, ·) for a subexpression Φ of Ψ , it has all values of OP T computed for each subexpression of Φ. There are four possible cases of computing values of OP T (Φ, ·) depending on the topmost operator in Φ.
4. Φ = η_{i,j}(Φ'). This is the only case where edges are introduced. Any coloring c of G_Φ is a coloring of G_Φ'. Moreover, if c corresponds to OPT(Φ, n_{1,1}, . . . , n_{w,ℓ}), then clearly c corresponds to OPT(Φ', n_{1,1}, . . . , n_{w,ℓ}) as well. Thus, one only has to compute the number of newly-introduced edges that are happy with respect to c. As Ψ is a nice w-expression, each edge between vertices with label i and vertices with label j is newly-introduced. Each such happy edge connects a vertex with label i and a vertex with label j that are colored with the same color a for some a ∈ [ℓ], and the number of such edges for a fixed a is n_{i,a} · n_{j,a}. Hence, OPT(Φ, n_{1,1}, . . . , n_{w,ℓ}) = OPT(Φ', n_{1,1}, . . . , n_{w,ℓ}) + Σ_{a=1}^{ℓ} n_{i,a} · n_{j,a}.
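As an illustration of this recurrence, here is a minimal sketch of the edge-introduction update for a single DP cell, assuming the cell is indexed by a matrix of per-label, per-color vertex counts; the representation is illustrative, not the paper's.

```python
def eta_update(opt_prev, counts, i, j, num_colors):
    """Update one DP cell for the operation eta_{i,j}.

    opt_prev:  value OPT(Phi', counts) of the subexpression.
    counts:    counts[label][color] = number of vertices with that label and color.
    i, j:      the two labels (0-based here) between which all edges are introduced.
    Returns OPT(Phi, counts) = opt_prev + sum_a counts[i][a] * counts[j][a].
    """
    new_happy = sum(counts[i][a] * counts[j][a] for a in range(num_colors))
    return opt_prev + new_happy
```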
The description of all possible cases for Φ and the corresponding recurrence relations is finished. Note that there are at most |Ψ| · n^{ℓ·w} cells in the OPT table, and each of them is computed in O(n^{ℓ·w}) time (the disjoint union and relabelling cases take the most time). Thus, the whole computation of OPT takes O(|Ψ| · n^{2ℓ·w}) running time. Clearly, the maximum number of happy edges that can be obtained in G simultaneously equals the maximum of OPT(Ψ, n_{1,1}, . . . , n_{w,ℓ}) over all choices of n_{1,1}, . . . , n_{w,ℓ}, which is found in O(n^{ℓ·w}) time. This finishes the proof. The fixed-parameter tractability of MHE with respect to cw + ℓ remains unknown, though. We note that Theorem 2 does not imply that no FPT-algorithm exists for MHE parameterized by cw + ℓ under P ≠ NP, but it at least implies that no algorithm with running time O*(poly(ℓ)^{f(cw)}) exists for MHE, unless P = NP. We leave the FPT-membership of MHE parameterized by cw + ℓ as an open question.
Maximum Happy Vertices
We start this section by settling the complexity of Maximum Happy Vertices parameterized by cw + ℓ. We note that MHV is W[2]-hard when parameterized by the clique-width of the input graph alone [5]. In contrast to this, we show that MHV is in FPT if the clique-width parameter is extended by the number of colors ℓ.

Theorem 3. There is an algorithm for Maximum Happy Vertices with O*((ℓ + 1)^{O(cw)}) running time, if a cw-expression of G is given.

Proof. Given a w-expression Ψ of G, we solve (G, p, k) with the following dynamic programming.
The dynamic programming table has cells OPT(Φ, col_1, col_2, . . . , col_w, out_1, out_2, . . . , out_w), where col_i, out_i ∈ [ℓ] ∪ {0} and Φ is a subexpression of Ψ. For convenience, we also refer to this value as OPT(Φ, col, out), where col = (col_1, col_2, . . . , col_w), out = (out_1, out_2, . . . , out_w), and col, out ∈ ([ℓ] ∪ {0})^w. Then OPT(Φ, col, out) denotes the number of special (to be formally defined below) happy vertices in V(G_Φ), maximized over all colorings c of (G, p) such that: 1. c is a full coloring of G extending p; 2. for each i, if col_i = 0, then either V_i is empty or |c(V_i)| ≥ 2 (here and below, V_i denotes the set of vertices of G_Φ with label i). We denote the set of all colorings that satisfy the conditions above by C(Φ, col, out). To explain the purpose of the out values, we suggest the following useful observation.
Observation 1. For each subexpression Φ of Ψ and any two vertices u and v of G_Φ that have the same label, N_G(u) \ V(G_Φ) = N_G(v) \ V(G_Φ). In other words, vertices with the same label have the same set of neighbours apart from their neighbours in G_Φ.
With this observation, it is easy to see that, if specified, out i = 0 denotes the color of the neighbours of the vertices with label i, apart from their neighbours in G Φ . In OP T (Φ, col, out) all happy vertices are counted, except for vertices in V i such that out i = 0 and N G (V i ) = N G Φ (V i ) (that is, vertices with label i having at least one neighbour outside G Φ ). That is, , and V i (Φ, out i ) = ∅ otherwise. We recall that by H(G, c) we denote the set of all vertices in G that are happy with respect to c. For compactness, denote Note that the conditions impose that if V i is empty, then col i = 0; and if N G (V i ) = N G Φ (V i ), then out i = 0. Therefore, for some values of Φ, col and out, there may be no corresponding colorings c of (G, p), i.e. C(Φ, col, out) = ∅. For such cases, we put OP T (Φ, col, out) = −∞. Technically, −∞ is a special value with a property that x + (−∞) = (−∞) + x = −∞ and max{x, −∞} = max{−∞, x} = x for all possible values of x.
To avoid trivial cases when C(Φ, col, out) = ∅, we introduce the notion of good triples. It has a property that if a triple (Φ, col, out) is not a good triple, then C(Φ, col, out) = ∅. This does not work in the other direction though, and C(Φ, col, out) = ∅ may hold for a good triple (Φ, col, out).
Definition 2.
We say that a triple (Φ, col, out) is a good triple, if it satisfies the following conditions: The first two conditions for a good triple were discussed slightly above. If (Φ, col, out) does not satisfy these two conditions, then C(Φ, col, out) = ∅. The third condition of a good triple handles the case when vertices with label j are neighbours of the vertices with label i in G, but not in G Φ yet. If the color of the outer neighbourhood of the vertices with label i is specified, i.e. out i = 0, then all vertices with label j in G Φ should share the same color out i , i.e. col j = out i . Obviously, if a triple does not satisfy this condition, there is no colorings corresponding to that triple. The fourth condition ensures that for any two labels sharing an outer neighbour, if the colors of outer neighbours are fixed for both labels, these fixed colors should be the same. So we have: From now on, we work just with good triples. That is, we do not necessarily exclude all triples (Φ, col, out) with C(Φ, col, out) = ∅, but just some of them. More importantly, we do not exclude any triple with C(Φ, col, out) = ∅ from our consideration.
We also note that for each fixed subexpression Φ and each coloring c of (G, p), there is at least one correct choice of the corresponding values of col and out, so that c ∈ C(Φ, col, out).
Obviously, the maximum number of happy vertices that can be obtained in (G, p) can be found as a maximum value of OP T (Ψ, col, out) over all possible values of col and out. We now show how we calculate the value of OP T (Φ, col, out) for each possible choice of the subexpression Φ of Ψ , col and out. We do that in a bottom-up manner, starting with the smallest subexpressions of Φ. In fact, when we are to calculate the values of OP T for a subexpression Φ, we have all values of OP T for all proper subexpressions of Φ calculated. Now fix Φ, col and out, for which (Φ, col, out) is a good triple, and consider the last operator in Φ. Note that if (Φ, col, out) is not a good triple, then OP T (Φ, col, out) just equals −∞.
Φ = i(v).
Then G Φ is a subgraph of G consisting of a single vertex v. Take any c ∈ C(Φ, col, out). Note that col i = 0, as V i = {v} and |c(V i )| = 1, so Hence, if v is precolored and col i = p(v), then C(Φ, col, out) = ∅, and we put OP T (Φ, col, out) = −∞.
For each j = i, V j = ∅, so col j = out j = 0 for all such j.
. Thus, if out i = 0, then either N G (v) is empty and v is isolated and happy; or v should not be counted in OP T (Φ, col, out), even if it is happy. If out i = 0, v is not isolated in G and all its neighbours are colored with the color out i by c, hence v is happy if and only if in the same way as for V (G Φ ). In particular, We shall now formulate the main lemma for this case. Note that this lemma does not immediately follow from the definition of P (col). The maximum number of happy vertices OP T (Φ , col , out ) in V (G Φ ) is achieved with some coloring c ∈ C(Φ , col , out ), and the maximum number of happy vertices in V (G Φ ) is achieved with some coloring c ∈ C(Φ , col , out ). But in general, c and c are different colorings of (G, p). Therefore, we need a mechanism to combine two different colorings that agree with (Φ, col, out) in one coloring that preserves all happy vertices in both V(Φ , out ) and V(Φ , out ). We proceed with the following claim, which implies Lemma 2.
Claim 6. Let out i = out i if V i = ∅, and out i = 0 otherwise. Define out i in the same way for V i . Let (col , col ) ∈ P (col), and let c ∈ C(Φ , col , out ), c ∈ C(Φ , col , out ). Let c be a coloring of (G, p) defined as Proof of the claim. out )). That is, v is a neighbour of a vertex u ∈ V j (Φ , out ) for some j. Hence, by definition of V j , out j = 0, and c (v) = out j . Here we have v ∈ N G (V j )\N G Φ (V j ). Note that the edge between u and v is outside of Since out j = out j , out j = 0, and as (Φ, col, out) is a good triple, col i = out j by the third condition in the definition of good triples. Then c (v) = col i , a contradiction. If the unsatisfied restrictions are imposed by out , for some i, out i = 0 and c (v) = out i . If v ∈ V j for some j, then col j = col j = out i = out i by the third condition of a good triple, and we obtain a contradiction like in the previous case. and (Φ, col, out) is a good triple, by the fourth condition of a good triple, out i = out j . Hence, out i = out j = c (v), a contradiction. We finally obtain that c ∈ C(Φ , col , out ) ∩ C(Φ , col , out ). (ii) Straightforwardly follows from Claim 5 and (i).
(iii) By definition of c, all vertices in V (G Φ ) that are happy (and counted in OP T (Φ , col , out )) with respect to c in G, are happy with respect to c as well. Thus, H(G, c) That is, c preserves colors of all happy vertices in V(Φ , col ) and all their neighbours in G, with respect to c . Suppose that it is not true, and there is a vertex , out )) and c (v) = c (v). Using the fact that (Φ, col, out) is a good triple, we can obtain a contradiction. We skip this case analysis, as it is very similar to the case analysis shown above for proving that c ∈ C(Φ , col , out ) ∩ C(Φ , col , out ).
Claim 6 shows that two colorings c ∈ C(Φ , col , out ) and c ∈ C(Φ , col , out ) that agree with (Φ, col, out) always can be merged in a single coloring c ∈ C(Φ, col, out) of G Φ preserving happy vertices in both V(Φ , out ) and V(Φ , out ), so Lemma 2 holds. With Lemma 2, it is easy to compute the value of OP T (Φ, col, out) in ( + 1) 2w · n O(1) time, as |P i (col i )| ≤ ( + 1) 2 . 3. Φ = ρ i→j Φ . Take a coloring c ∈ C(Φ, col, out). Note that necessarily col i = out i = 0, as V i = ∅. We want to find the values of col and out , so that c ∈ C(Φ , col , out ) and V(Φ, out) = V(Φ , out ). As only labels i and j are touched by the topmost operator in Φ, col k = col k and out k = out k for any k not equal to i or j. We assume that neither V i nor V j are empty, otherwise finding col and out is trivial.
then necessarily at least one of col i and col j is equal to 0 (so |c(V i )| ≥ 2 or |c(V j )| ≥ 2) or col i = col j (so |c(V i ) ∪ c(V j )| = 2). If col j = 0, then col i = col j = col j obviously. Now consider handling out i and out j . If out j = 0 and N G (V j ) = N G Φ (V j ), then the vertices with label j in G Φ are not counted in OP T (Φ, col, out), so we can skip counting them in OP T (Φ , col , out ) by putting out i = out j = 0.
out j , so out i = out j = out j necessarily. Hence, in either case out i = out j = out j , so out = out. Thus, we obtain where the values of col i and col j are iterated over a few options (if col j = 0, they are both equal to col j , otherwise there are at most ( + 1) 2 − options as discussed above). For each k not equal to i or j, col k = col k . 4. Φ = η i,j Φ . Again, take a coloring c ∈ C(Φ, col, out) and consider finding appropriate col and out so that c ∈ C(Φ, col , out ) and OP T (Φ, col, out) = OP T (Φ, col , out ). Trivially, col = col as V i = V i for each i ∈ [w]. For each k not equal to i or j, it is enough to put out k = out k , as Consider now the value of out i . If V j is empty (hence, col j = 0), G Φ equals G Φ , so out i = out i . If out i = 0, then we should put out i = 0 so V i (Φ, out) = V i (Φ , out ). Suppose that out i = 0 and V j is not empty. If col j = 0 or col j = out i , then all vertices in V i are unhappy with respect to c, as V j Thus, we would like not to count the vertices in V i happy in G Φ , so we can put out i = 0. In case when col j = out i , all vertices with label i that are happy in G Φ with respect to c are happy in G Φ with respect to c as well. Hence, one should put out i = out i in this case. The value of out j is handled in the same way.
We finally obtain that OP T (Φ, col, out) = OP T (Φ , col, out ), where out k = out k for each k not equal to i or j, and out j = 0, if out j = 0 and col i = out j , out j , otherwise. This exhausts the list of possible cases.
It is easy to see that the computation of a value OPT(Φ, col, out) requires polynomial time for each case, except for the case of the disjoint union operator. In that case, at most (ℓ + 1)^{2w} · n^{O(1)} operations are required for the computation of a single cell of OPT. Since OPT consists of at most |Ψ| · (ℓ + 1)^{2w} cells, the computation of all values of OPT takes at most (ℓ + 1)^{4w} · n^{O(1)} running time.
It remains to answer the initial problem question using the computed values of OPT. Note that each full coloring of G extending p is contained in C(Ψ, col, 0) for some choice of col. Furthermore, V(Ψ, 0) = V(G), so OPT(Ψ, col, 0) is the maximum number of happy vertices that can be obtained in G with respect to colorings in C(Ψ, col, 0). Thus, the maximum number of happy vertices that can be obtained in G equals the maximum of OPT(Ψ, col, 0) over all col, and this can be found in O((ℓ + 1)^w) running time once all values of OPT are computed. This finishes the description of the algorithm.

In the rest of this section we show that Maximum Happy Vertices is polynomially solvable on the class of interval graphs, which is related to clique-width in the following sense: interval graphs have unbounded clique-width, and moreover, unit interval graphs are a minimal hereditary graph class of unbounded clique-width [22]. Since threshold graphs are a subclass of interval graphs, this result also covers the result of Choudhari and Reddy [6], who showed that MHV is polynomially solvable on the class of threshold graphs. We also note that MHE, in contrast to MHV, is NP-hard on the class of interval graphs, which is a corollary of Theorem 1.
We start with the following convenient characterization of interval graphs.
Theorem 4 ([14]).
A graph is an interval graph if and only if its maximal cliques can be linearly ordered in such a way that for every vertex in the graph the maximal cliques to which it belongs occur consecutively in the linear order.
The sequence of the maximal cliques of an interval graph G in the correct ordering from Theorem 4 can be found in O(|V (G)| 2 ) time using the LBFS algorithm of Corneil, Olariu and Stewart [7]. The following lemma is a folklore technical result, so it is given without a proof.
Lemma 3. Let G be an interval graph, n = |V (G)|, m = |E(G)|. There is a sequence S 0 , S 1 , . . . , S 2n of subsets of V (G), such that Moreover, this sequence can be found in O(n 2 ) time.
We shall now prove a very useful property of this sequence.
Moreover, this ordering can be found in O(n) time.
Proof. We claim that for each S i an appropriate order is the order v 1 , v 2 , . . . , v |Si| so that the values of l vi go in the increasing order. As the values of l v are found in O(n) time, it is easy to find such ordering in O(n) time as well, using additional O(n) memory. It is easy to prove that this ordering is sufficient by induction on i. The base case i = 0 is trivial since S i = ∅. Let now i > 0 be an integer and the claim hold for i−1. There are two possible cases: either S i = S i−1 \{v} or S i = S i−1 ∪{v}. In the former case, G i−1 and G i are the same, and the ordering of the vertices S i is just a subsequence of the ordering of the vertices of S i−1 , so the claim holds true for i. In the latter case, G i differs from G i−1 in the vertex v and edges connecting each vertex in As v has the largest value of l v among all vertices in S i , it stands the last in the ordering for S i , and its neighbourhood is contained in the neighbourhood of each other vertex in S i . The other vertices in the ordering of S i are no different from these of S i−1 , so the claim holds true for i as well. The proof is finished.
We are now ready to present a polynomial time algorithm for the Maximum Happy Vertices problem on the class of interval graphs.

Theorem 5. Maximum Happy Vertices can be solved in O(ℓn²) time on the class of interval graphs.

Proof. We present an algorithm solving Maximum Happy Vertices on the class of interval graphs. Let (G, p, k) be an instance of MHV given to the algorithm, where G is an interval graph, n = |V(G)|, m = |E(G)|.
Firstly, the algorithm finds a sequence S 0 , S 1 , S 2 , . . . , S 2n from Lemma 3 in O(n 2 ) time. Then it employs a dynamic programming over the sequence. Denote by G i the graph induced by the union of the first i + 1 subsets in the sequence, i.e.
(3) Also denote by C(i, h, a, u) the set of all colorings of G i corresponding to the right part of equation 3 for OP T (i, h, a, u), so For each choice of (i, h, a, u) such that there is no appropriate coloring c for this choice, i.e. C(i, h, a, u) = ∅, we put OP T (i, h, a, u) = −∞. Strictly speaking, OP T (i, h, a, u) denotes the maximum number of vertices that can be happy simultaneously in G i with respect to colorings c such that there are exactly h happy vertices in S i , the vertex v ∈ S i with the largest value of r v is colored with the color a. And u denotes the vertex that is colored with a color different from a with the largest value of r u . The intuition behind this DP is the following. Since each S i induces a clique in G, S i can contain a happy vertex only if all its vertices are colored with the same color. Thus, we do not really need to store the colors of all vertices in S i as parameters of OP T , since happy vertices can be produced only when there is exactly one color in S i . The value of u allows to understand whether all vertices in S i are colored with the same color. Moreover, the value of a clearly determines this color. Due to the interval structure of G, the values of h, a and u can be easily maintained while making transitions from i to i + 1. The following claim shows that the set of happy vertices inside S i is determined uniquely by the value of h.
Proof of the claim. By definition of C(i, h, a, u), c) also. Thus, H(G i , c) ∩ S i consists of the last h vertices from the ordering.
Clearly, OP T (0, 0, −1, −1) = 0 and every other value of OP T (0, ·, ·, ·) equals −∞, as S 0 = ∅. Now the algorithm has all values of OP T computed correctly for i = 0. Then, the algorithm iterates over all values of i from 0 to 2n−1. Having the value of i fixed, our algorithm initializes all values of OP T (i + 1, ·, ·, ·) with −∞. Then it iterates over each state of the dynamic programming OP T (i, h, a, u) with OP T (i, h, a, u) = −∞. Consider now a coloring c ∈ C(i, h, a, u) of G i . There are two possible options of how G i+1 and S i+1 differs from G i and S i .
The following claim formalizes dynamic programming transitions that should be made in this case. and and and Proof of the claim.
Recall that a denotes the color of the vertex w ∈ S i with the largest value of r w , and a denotes such a color in S i+1 = S i \ {v}. Since v is the vertex with the smallest value of r v in S i (r v = i + 1), a should change only if S i = {v}. The same can be said about u and u .
Basically, Claim 8 states that C(i, h, a, u) = C(i + 1, h , a , u ), where h , a and u depend only on the values of i, h, a and u. Note that in order to compute the value of h , our algorithm firstly finds H(G i , c) ∩ S i as stated in Claim 7. To do this, the algorithm needs the ordering of the vertices of S i from Lemma 4. This ordering is found once for a fixed value of i in O(n) time. When the ordering for a fixed i is known, then, by Claim 7, the condition v ∈ H(G i , c) (equivalently, v ∈ H(G i , c) ∩ S i ) from equation 4 can be checked in O(1) time using the value of h. Thus, for each fixed values of i, h, a and u, the algorithm computes h , a and u and updates current value of OP T (i + 1, h , a , u ) (initially all values of OP T (i + 1, ·, ·, ·) are equal to −∞) with the value of OP T (i, h, a, u): OP T (i + 1, h , a , u ) := max{OP T (i + 1, h , a , u ), OP T (i, h, a, u)}.
Since each full coloring of G i+1 is a full coloring of G i and all values OP T (i, ·, ·, ·) are computed correctly, all values of OP T (i + 1, ·, ·, ·) are computed correctly as well. Note that if h > 0 and u = −1, then necessarily OP T (i, h, a, u) = −∞, as there may be no happy vertices inside S i , if S i contains two vertices colored with distinct colors. The algorithm does not iterate over such values of OP T (i, h, a, u), so for a fixed value of i it iterates over at most We now describe transitions for the second case, when S i+1 = S i ∪ {v}, so G i+1 differs from G i by a vertex v and all edges between v and S i . Now, for a coloring c ∈ C(i, h, a, u) we consider each extension of c onto v agreeing with p. That is, if v is a vertex precolored by p, we consider only one extension of c. Otherwise, we consider all colors of v in an extension of c. When the color of v in an extension of c, say c , is specified, then c ∈ C(i + 1, h , a , u ) for some values of h , a and u . The following claim shows how to determine these values. and Proof of the claim. Consider proving the equation for h . Recall that N Gi+1 (v) = S i and h = |H(G i+1 , c ) ∩ S i+1 | = |H(G i+1 , c ) ∩ (S i ∪ {v})|. Note that if v is happy with respect to c , then all vertices in S i are colored with the color b or S i = ∅. Clearly, S i = ∅ is equivalent to a = −1. Note that u = −1 is equivalent |c(S i )| ≤ 1. Thus, u = −1 and a = b is equivalent to that S i is not empty and all vertices in S i are colored with the color b by c. Thus, if v is happy with respect to c in G i+1 , then H(G i+1 , c ) = H(G i , c) ∪ {v}, so h = h + 1. If, otherwise, v / ∈ H(G i+1 , c ), then no vertex in S i (hence no vertex in S i+1 ), is happy with respect to c in G i+1 , so h = 0. The equality |H(G i+1 , c )| = |H(G i , c)| + h − h follows from the discussion above and the fact that H(G i+1 , c ) \ S i+1 = H(G i , c) \ S i .
To prove the equation for a , recall that precolored vertex, there is still the only case c (v) = p(v)). Hence, for the values of i for which r v < r w , this change in the algorithm achieves the desired O( n) running time bound. Consider now the other case, r v > r w . In this case, the value of a equals to the color of v in c , but does not depend on a. It only matters whether a = b or not for correct computation of h and u . Thus, in this case algorithm iterates over possible values of h, u and c (v), but not fixing the value of a . To handle the case c (v) = a, it is enough to find the values of h and u accordingly to Claim 9 and make the transition OP T (i, h, a, u)+h −h} is enough to be made. Thus, it is enough to compute max a =c (v) OP T (i, h, a, u) faster for any given value of c (v). To achieve that, for a fixed triple of values i, h and u compute a sequence p 0 , p 1 , . . . , p , where p 0 = −∞ and p j = max{p j−1 , OP T (i, h, j, u)} for each j ∈ [ ], in O( ) time straightforwardly. p j is essentially the maximum value among the values of OP T (i, h, ·, u) taken for the first j colors. Analogously compute a sequence s +1 , s , . . . , s 1 , where s +1 = −∞ and s j = max{s j+1 , OP T (i, h, j, u)} for each j ∈ [ ], in O( ) running time. In other words, s j is the maximum value of OP T (i, h, ·, u) among the colors from j to . When these two sequence are computed, then, clearly, max a =c (v) OP T (i, h, a, u) can be found as max{p c Clearly, the maximum number of happy vertices that can be obtained in (G, p) equals OP T (2n, 0, −1, −1), as G 2n = G and C(2n, 0, −1, −1) equals the set of all colorings of G extending p. Hence, the algorithm finally checks that OP T (2n, 0, −1, −1) is at least k to determine whether (G, p, k) is a yes-instance of MHV. This finishes the proof.
An Externally-Validated Dynamic Nomogram Based on Clinicopathological Characteristics for Evaluating the Risk of Lymph Node Metastasis in Small-Size Non-small Cell Lung Cancer
Background: Lymph node metastasis (LNM) status is of key importance for decision-making on treatment and for survival prediction. There is no reliable method to precisely evaluate the risk of LNM in NSCLC patients. This study aims to develop and validate a dynamic nomogram to evaluate the risk of LNM in small-size NSCLC. Methods: NSCLC ≤ 2 cm patients who underwent initial pulmonary surgery were retrospectively reviewed and randomly divided into a training cohort and a validation cohort at a ratio of 7:3. The training cohort was used for least absolute shrinkage and selection operator (LASSO) regression to select optimal variables. Based on the variables selected, logistic regression models were developed and compared by areas under the receiver operating characteristic curve (AUCs) and decision curve analysis (DCA). The optimal model was used to plot a dynamic nomogram for calculating the risk of LNM and was internally and externally well-validated by calibration curves. Results: LNM was observed in 12.0% (83/774) of the training cohort and 10.1% (33/328) of the validation cohort (P = 0.743). The optimal model was used to plot a nomogram with six variables incorporated, including tumor size, carcinoembryonic antigen, imaging density, pathological type (adenocarcinoma or non-adenocarcinoma), lymphovascular invasion, and pleural invasion. The nomogram model showed excellent discrimination (AUC = 0.895 vs. 0.931) and great calibration in both the training and validation cohorts. At the threshold probability of 0–0.8, our nomogram adds more net benefit than the treat-none and treat-all lines in the decision curve. Conclusions: This study developed the first cost-efficient dynamic nomogram to precisely and expediently evaluate the risk of LNM in small-size NSCLC, which would be helpful for clinicians in decision-making.
INTRODUCTION
Lung cancer is the most common cause of cancer-related death worldwide in recent years (1). As computed tomography (CT) has become the main means of screening for high-risk populations, the detection rate of small-size lung cancer has been increasing (2). The standard treatment for early-stage lung cancer is lobectomy with systematic lymph node dissection (LND) (3), but whether lobectomy with systematic LND is necessary for all these patients remains unclear. Sublobar resection (wedge resection and segmentectomy) has been used for early-stage NSCLC, especially for patients with impaired pulmonary function reserve (4). Moreover, compared with those who underwent systematic LND, patients who received selective LND also presented a lower incidence of perioperative complications and similar survival (5,6). However, these sublobar surgical procedures are more likely to leave residual tumor, since occult LNM is not rare even in small-size NSCLC (7)(8)(9)(10)(11). Of possibly greater concern is the occurrence of micrometastases in histologically negative lymph nodes from early-stage NSCLC patients (10). Thus, developing reliable methods for evaluating the risk of LNM is of great importance and would be helpful for decision-making in medical management.
CT and positron emission tomography/computed tomography (PET/CT) are widely used for noninvasive nodal staging, but these methods have limited accuracy. Invasive node staging approaches, such as mediastinoscopy and endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA), may not be cost-efficient for small-size NSCLC patients. However, there is still no generally-accepted method for precisely evaluating the risk of LNM in early-stage NSCLC. The clinicopathological characteristics and risk factors associated with LNM remain unclear. We aimed to analyze the clinicopathological characteristics related to LNM and to develop and validate a cost-efficient dynamic nomogram for evaluating the risk of LNM in patients with small-size NSCLC.
Patient Enrollment
From January 2013 to June 2019, NSCLC patients in Peking Union Medical College Hospital (PUMCH) were retrospectively reviewed (Figure 1). The eligibility criteria were: (1) single cancer lesion; (2) ≤ 2 cm in maximal diameter on CT; (3) receiving lung resection (lobectomy or sublobar resection) with systematic LND; (4) complete pathological information; (5) not receiving neoadjuvant chemotherapy or radiotherapy before surgery. This study was approved by the Ethics Committee of Peking Union Medical College Hospital. All patients signed informed consent forms before operation.
Clinicopathological Characteristics
All clinical information, including gender, age, smoking status, and serum carcinoembryonic antigen (CEA), was collected before the operation. CT was performed within 60 days before surgery, and the imaging features involved maximal tumor size, imaging density, and specific signs such as spiculation, vessel convergence, lobulation, pleural indentation, and calcification. The CT images were reviewed by two thoracic surgeons and one radiologist independently. The final conclusion was reached by consensus reading if disagreement occurred between them. The tumor imaging density was grouped into pure ground glass nodule (pGGN), mixed GGN (mGGN), and solid lesions. mGGN was defined as the presence of a solid component within the nodule at the mediastinal window level of CT. The mGGNs were further divided into two groups according to the ratio of the maximal diameter of the solid component to the maximal diameter of the tumor area (the cutoff value was set as 50%). The pathological findings were recorded from paraffin-embedded surgery specimens by pathological experts from PUMCH. Patients' pathological N status was confirmed according to the 8th edition TNM Classification for lung cancer (12). The presence of lymphovascular invasion (LVI) and pleural invasion was also recorded.
Surgical Procedure
For patients with a tumor size <8 mm, lung resection was considered for those with high risk of malignancy after follow-up or at the demand of patients. Sublobar resection (wedge resection or segmentectomy) was considered for patients with a peripheral pGGN tumor or those with poor lung function reserve or comorbidities. For others, standard lobectomy with systematic LND was recommended firstly. In addition to N 1 nodes (#10, #11, #12, #13, and #14), systematic LND included 2R, 4R, 3A, 3P, #7, #8, and #9 for tumors located in the right lung and 4L, #5, #6, #7, #8, and #9 for tumors located in the left lung, if possible.
Statistical Analysis
Patients were randomly divided into a training cohort and a validation cohort at a ratio of 7:3. Using the training cohort, least absolute shrinkage and selection operator (LASSO) regression was performed to select the optimal predictive variables for LNM. Logistic regression models were then constructed by incorporating these variables. The models' performance in both the training cohort and the validation cohort was assessed in terms of discrimination and calibration. Discrimination is the predictive accuracy in distinguishing patients with LNM from those without LNM and can be measured by the area under the receiver operating characteristic (ROC) curve (AUC). Calibration curves were plotted using 1,000 bootstrap resamples, reflecting the model's agreement between the predicted probability and the actual probability. Decision curve analysis (DCA) was used to assess the model's clinical usefulness. Based on the logistic regression model, a dynamic nomogram was developed, presenting a specific system for calculating the risk of LNM. Continuous data were expressed as median with interquartile range (IQR). Statistical analysis was performed using IBM SPSS 25.0 (SPSS Inc; Chicago, IL, USA) and R software version 3.6.3. Pearson's chi-square test was used for categorical data analysis (Fisher's exact test was used when necessary), while the Mann-Whitney U-test was used to compare quantitative parameters. P < 0.05 was considered statistically significant. All statistical analyses were two-sided.
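To make the modeling pipeline concrete, the following is a minimal sketch in Python (the authors' analysis used SPSS and R; the DataFrame `df` and its column names are hypothetical): an L1-penalized logistic regression with cross-validated penalty selection stands in for the LASSO-based variable selection, followed by AUC evaluation on the held-out cohort.

```python
# Minimal sketch of the variable-selection and model-evaluation pipeline,
# assuming a pandas DataFrame `df` with a binary outcome column "LNM"
# and candidate predictor columns (hypothetical names).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

predictors = ["tumor_size", "CEA", "imaging_density", "adenocarcinoma",
              "lymphovascular_invasion", "pleural_invasion"]  # hypothetical columns
X, y = df[predictors], df["LNM"]

# 7:3 split into training and validation cohorts
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                           stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
# The L1 (LASSO-type) penalty shrinks uninformative coefficients to zero;
# the penalty strength is chosen by cross-validation on the training cohort.
model = LogisticRegressionCV(penalty="l1", solver="liblinear",
                             Cs=20, cv=10, scoring="roc_auc")
model.fit(scaler.transform(X_tr), y_tr)

selected = [p for p, coef in zip(predictors, model.coef_[0]) if coef != 0]
print("Selected variables:", selected)

# Discrimination on both cohorts (area under the ROC curve)
for name, Xc, yc in [("training", X_tr, y_tr), ("validation", X_va, y_va)]:
    auc = roc_auc_score(yc, model.predict_proba(scaler.transform(Xc))[:, 1])
    print(f"AUC ({name}): {auc:.3f}")
```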
Patient Characteristics
A total of 1,102 patients met the inclusion criteria, including 403 males and 699 females. Among them, 116 patients (10.5%) were node-positive (pN + ), and 986 (89.5%) had no lymph node metastasis (pN 0 ) by final histopathology. The median age was 58 (IQR: 51-65) years, and the median tumor size on CT was 1.3 (IQR: 1.0-1.7) cm. Lobectomy was performed for 870 patients and sublobar resection for 232 patients. The two cohorts' data of demographic characteristics and variables are shown in Table 1.
The training cohort consisted of 774 (71.6%) patients, while the validation cohort included 328 (28.4%) patients. The LNM rate was 12.0% (83/774) in the training cohort vs. 10.1% (33/328) in the validation cohort (P = 0.743). The two cohorts were similar in most characteristics, except for smoking history (P = 0.049) and imaging density (P = 0.039), whose distributions showed marginally significant differences.
Variable Selection for Constructing Models
Using LASSO regression in the training cohort, the 15 variables in Table 1 were reduced to 6 under the 1 standard error (1-SE) criterion, and to 11 under the minimum error criterion (Figure 2). Two logistic regression models, denoted Model1 and Model2, were then developed in the training cohort by incorporating the 6 and 11 variables, respectively. The details of the selected variables are shown in Table 2.
Model Validation
To assess the predictive ability of the models, ROC curves were plotted for both the training cohort and the validation cohort (Figures 3A,B). In addition, the decision curve showed that the two models presented similar net benefits over the entire range of threshold probabilities, performing better than the two extreme lines (treat-none and treat-all) when the threshold probability was 0–0.8 (Figure 4). Thus, with fewer variables incorporated, Model1 was selected as the better model and used to develop a nomogram for calculating the risk of LNM (Figure 5). Calibration curves were then plotted for internal and external validation, showing the great calibration of the nomogram model in both the training and validation cohorts (Figure 6). Additionally, a dynamic nomogram application (https://nomogramwyj.shinyapps.io/27978155ff984a9f9616812a465b802c) was developed, which is conveniently available to clinicians and patients worldwide. Using the application, a patient's risk probability of LNM, with its 95% CI, can be obtained immediately when inputting the patient's values for the six variables we identified (shown in Supplement Figure 1). The R code and data for the application are provided in Supplement Data Sheet 1.
Based on the nomogram model, each patient's risk probability of LNM was calculated. The optimal cutoff point to distinguish between LNM(−) and LNM(+) was 0.092. All patients' predicted risk probabilities from the nomogram were standardized using the following formula: (risk probability − 0.092)/standard deviation. The standardized risk of each patient is shown in Figure 7: the x-axis represents each patient, while the y-axis represents the standardized risk probabilities from the nomogram.
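A small sketch of this cutoff-based classification and standardization is given below; it is illustrative only (the array `risk` is hypothetical, and it assumes the standard deviation in the formula is taken over the cohort's predicted risks).

```python
# Sketch of the cutoff-based classification and standardization described above,
# assuming `risk` is a NumPy array of nomogram-derived probabilities.
import numpy as np

CUTOFF = 0.092  # reported optimal threshold separating LNM(-) from LNM(+)

def classify_and_standardize(risk: np.ndarray):
    predicted_positive = risk >= CUTOFF
    standardized = (risk - CUTOFF) / risk.std(ddof=1)
    return predicted_positive, standardized

if __name__ == "__main__":
    example = np.array([0.03, 0.10, 0.45, 0.08])
    labels, z = classify_and_standardize(example)
    print(labels)        # [False  True  True False]
    print(z.round(2))    # standardized risks, positive above the cutoff
```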
DISCUSSION
In this study, patients with single NSCLC ≤ 2 cm were retrospectively reviewed for the development and validation of a dynamic nomogram for evaluating the risk of LNM in small-size NSCLC. To our knowledge, this study was the first to develop a dynamic nomogram for the evaluation of LNM in lung cancer. For patients whose tumors were staged as T1N0M0, the standard therapy has been considered to be lobectomy with systematic LND (3). In recent years, with the rapid development and employment of radiographical screening methods, increasing numbers of small-size lung carcinomas have been discovered. Several studies have reported no significant difference in survival between patients with stage I NSCLC who underwent sublobar resection (wedge resection and segmentectomy) and standard lobectomy, especially for NSCLC ≤ 2 cm (4,13,14). SABR (15) and RFA (16), which have also become alternative treatments for small-size NSCLC, might be more appropriate for elderly or inoperable patients than surgery. However, compared to standard lobectomy, sublobar resection and other alternatives might lead to a higher risk of recurrence and worse survival (17); one reason for this difference might be occult LNM and micrometastasis, which are not rare in small-size NSCLC (7)(8)(9)(10)(11). According to studies from Shi et al. (18) and Yu et al. (19), the incidence rates of LNM in NSCLC ≤ 2 cm were up to 14.1 and 10.2%, respectively, similar to the result in our study (10.5%, 116/1,102). Therefore, it is necessary to evaluate the risk of LNM in patients with small-size NSCLC, and reliable predictive methods are highly required. Unlike conventional univariate analysis, the LASSO regression that we used selects variables for logistic regression while avoiding overfitting. Two models were then developed with different numbers of variables. Both models presented very high AUCs in the ROC curves (Figure 3) and similar clinical usefulness in the decision curve (Figure 4). Model1, with fewer variables incorporated, was used to plot the nomogram. The nomogram is a novel cost-efficient tool for precisely calculating the risk of LNM (Figure 5). It has a user-friendly interface and is very convenient to apply when making clinical decisions. Our nomogram model was also well-calibrated in both internal and external validation (Figure 6), with mean absolute errors of 0.01 and 0.02, respectively. Thus, based on the nomogram, each patient's risk of LNM can be accurately calculated from the six variables: tumor size, CEA level, imaging density, pathological type of NSCLC (adenocarcinoma or non-adenocarcinoma), lymphovascular invasion and pleural invasion.
Previously, our (20) and other studies (19,(21)(22)(23) have found that several preoperative factors might be associated with the occurrence of LNM in small-size NSCLC, including serum CEA, tumor size and imaging features such as imaging density. Tumor size was identified as an important predictor for LNM. The incidence rate of LNM increased as tumor size increased. In our nomogram model, the larger tumor size (OR: 3.889, 95%CI: 1.878-8.052, P < 0.001) was an independent risk factor for LNM. Okada et al. (24) thought of tumor size as a predictor for sublobar resection, but in our study, only those patients with tumors ≤ 0.5 cm were not found to have positive nodes. It is not advisable to choose surgical procedure and postoperative management solely by tumor size. Other factors should be considered. Similar to previous studies (19,22), a higher serum CEA (OR: 1.247, 95%CI: 1.110-1.402, P < 0.001) was also associated with a higher LNM rate in our study. Thus, the serum CEA level might be helpful for LNM prediction and should be listed as a routine test for NSCLC patients. In addition, tumor imaging density was also considered to be an important risk factor for LNM since none of the patients with a pGGN tumor were found to be node-positive in our study. The solid component on CT has been confirmed as having a more invasive ability for lung tumors and indicated a significantly higher LNM rate than pGGN (21), which was further consolidated by our study. Therefore, pGGN can be a strongly effective predictor for node negativity in small-size NSCLC.
Moreover, this study also indicated that pathology had a strong association with LNM in patients with small-size NSCLC. Non-adenocarcinomas (OR: 3.583, 95%CI: 1.466-8.757, P = 0.005) were more likely to have LNM than adenocarcinomas, although the number of such cases was limited. Furthermore, the presence of lymphovascular invasion (OR: 11.979, 95%CI: 4.479-32.039, P < 0.001) and pleural invasion (OR: 2.406, 95%CI: 1.223-4.731, P < 0.001) was significantly associated with LNM. Therefore, when lymphovascular invasion or pleural invasion is present, even if the histological results show that the dissected lymph nodes are negative, surgeons should be more alert to the occurrence of LNM. If possible, pathologists might assess these characteristics intraoperatively and help surgeons decide on the surgical procedure, especially for those patients for whom the choice between radical and sublobar resection is difficult.
Our previous study developed machine learning-based models for the preoperative prediction of LNM (20), but the LNM status should be further evaluated after surgery, taking histology into account, especially for patients with negative nodes. Current surgical methods cannot guarantee the removal of all tumor cell-invaded lymph nodes. Additionally, with the increasing application of sublobar resection, undetected occult LNM and micrometastasis might occur more frequently, likely causing recurrence and worse survival. Furthermore, advanced immunotherapy for NSCLC patients may require an accurate N stage, since a discrepant programmed death-ligand 1 (PD-L1) expression was observed between primary tumors and nodal metastases of NSCLC (25). Small-size NSCLC patients with nodal metastases are a potential population that may benefit from adjuvant immunotherapy, and the nomogram we developed can be helpful for identifying optimal candidates for immunotherapy. Thus, based on our nomogram model, patients with a high risk of LNM could be precisely selected for closer postoperative follow-up and timely medical intervention, which might improve prognosis.
In addition, some methodological innovations were introduced in our study. The nomogram has rarely been used to predict LNM in lung cancer. Jiang et al. (26) developed a nomogram for predicting occult N2 LNM in squamous cell lung cancer, but the number of cases was very limited and only squamous carcinomas were involved. Some previous studies have developed nomogram models using radiomics for predicting LNM (27,28). However, identifying radiomics features requires special techniques, and radiomics models are hard to apply clinically. The dynamic nomogram we developed was based on routinely recorded clinicopathological characteristics and is easy for clinicians to use. To our knowledge, this study presents the first nomogram model for evaluating the risk of LNM in NSCLC based on relevant clinicopathological characteristics, and we also developed a dynamic nomogram application for clinicians and patients to use worldwide. Moreover, unlike conventional univariate analysis, LASSO regression was used to select the optimal predictive variables, which performed well in reducing the dimension of the data (29,30). Finally, besides internal validation, external validation was also used to assess the model's calibration, since the number of patients was large enough.
Furthermore, the potential limitations of our study should be noted. First, this study cohort only consisted of patients from a single center, which may not be representative of patients in other hospitals. A more accurate nomogram should be developed using data from multiple centers in the future. Second, in spite of undergoing systematic LND, a small group of patients received sublobar resection, which might lead to an underestimate of the N1-positive rate. We will conduct further research on this difference. Thirdly, our study did not involve all clinicopathological characteristics. In our previous study (20) and other studies (31), the maximal standardized uptake value (SUVmax) was shown to be an important predictive factor for LNM, but it was not included, because PET scans were not routinely performed for small-size NSCLC and a variable with many missing values is not suitable for regression modeling. Future studies may consider as many clinicopathological characteristics as possible to develop a globally applicable nomogram.
CONCLUSIONS
Based on clinicopathological characteristics, this study developed the first cost-efficient dynamic nomogram for precisely calculating the risk of LNM in small-size NSCLC, which will be helpful for clinicians' decision-making.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics Committee of Peking Union Medical College Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Extraperitoneal versus transperitoneal laparoscopic radical cystectomy for selected elderly bladder cancer patients: a single center experience
ABSTRACT Objective: This study reports the initial experience of extraperitoneal laparoscopic radical cystectomy (ELRC) compared with transperitoneal laparoscopic radical cystectomy (TLRC) in the treatment of selected elderly bladder cancer patients. Patients and Methods: A total of forty male bladder cancer patients who underwent ELRC (n=19) or TLRC (n=21) with ureterocutaneostomy were investigated. Demographic parameters, perioperative variables, oncological outcomes and follow-up data were retrospectively analyzed. Results: A significantly shorter time to exsufflation (1.5±0.7 vs 2.1±1.1 d; p=0.026) and to liquid intake (1.8±0.9 vs 2.8±1.9 d; p=0.035) was observed in the ELRC group compared with the TLRC group. The incidence of postoperative ileus in the ELRC group was lower than in the TLRC group (0 vs 9.5%); however, the difference was not statistically significant (p>0.05). The number of removed lymph nodes in the ELRC group was significantly lower than in the TLRC group (p<0.001). No significant differences were observed between the two groups in the overall and cancer-free survival rates (p>0.05). Conclusions: ELRC seems to be a safe and feasible surgical strategy for selected elderly bladder cancer patients with ≤T2 disease. The surgical and oncological efficacy of ELRC is similar to that of TLRC, but with faster intestinal function recovery. Further studies with larger series including different urinary diversions are needed to confirm our results and to better evaluate the benefit of ELRC in bladder cancer patients.
INTRODUCTION
Bladder cancer is one of the most common urologic malignancies in men, with an especially high incidence in elderly patients (1). Radical cystectomy (RC) with urinary diversion is a standard surgical measure in urology and constitutes the gold-standard choice for muscle-invasive bladder cancer (MIBC). With the rapid advances in urological laparoscopy over the past few decades, laparoscopic radical cystectomy (LRC) has been widely used for MIBC as a minimally invasive treatment to reduce morbidity. However, in elderly patients LRC is still a challenge, owing to the associated severe comorbidities and the uncertainty as to whether these patients tolerate longer operation times, pneumoperitoneum and the peculiar surgical position as well as younger patients do (2). Although the role of LRC in elderly patients is still debated (3,4), some reports have shown that LRC may be performed safely in well-selected elderly patients (2,5).
As is well known, LRC is generally performed via the traditional transperitoneal approach, and the operative steps of transperitoneal laparoscopic radical cystectomy (TLRC) are basically duplicated from the open techniques. To the best of our knowledge, there has been no report of LRC performed with an extraperitoneal approach to date. However, with the accumulated experience of EORC and LRC, extraperitoneal laparoscopic radical cystectomy (ELRC) becomes feasible. In the present study, we describe our initial experience of ELRC and compare its variables with those of TLRC performed by the same surgeon in our institution.
Patient Selection
From January 2012 to March 2015, a retrospective study of elderly male patients with MIBC or high-risk NMIBC who underwent LRC was conducted in our institution. All cases were evaluated by common preoperative examinations including routine laboratory tests, abdominal ultrasonography, chest radiography, echocardiography, lung function tests, and computerized tomography or magnetic resonance imaging. The indication for LRC was histologically diagnosed MIBC by transurethral resection, or biopsy-confirmed recurrent multifocal high-grade NMIBC or bladder cancer in situ refractory to repeated transurethral resection with intravesical therapy. The exclusion criteria were a Body Mass Index (BMI) >30kg/m2, American Society of Anesthesiology (ASA) score >3, tumor stage >T2 and inability to provide written informed consent. Since patients undergoing conduit diversion need the transperitoneal approach anyhow, we chose patients who underwent ureterocutaneostomy diversion to assess the safety and feasibility of ELRC. The indications for ureterocutaneostomy diversion were the inability to use intestinal segments due to related problems, or the patient's decision to undergo ureterocutaneostomy because of decreased life expectancy with associated comorbidities. All patients had discussed the risks and benefits of the two LRC procedures and of all kinds of urinary diversions before making their decisions. If the patient decided to undergo LRC, the possibility of ELRC was proposed.
Study Design
Nineteen patients submitted to ELRC with ureterocutaneostomy were enrolled in the present study. For comparison purposes, twenty-one demographics-matched patients with bladder cancer of comparable tumor stage who underwent TLRC with ureterocutaneostomy were also enrolled. The two procedures were performed by a single surgeon who was proficient in both techniques. All patients gave written informed consent. The study protocol was approved by the Institutional Review Board of our hospital and was conducted in compliance with the Declaration of Helsinki.
The demographic parameters, operative variables, perioperative outcomes and oncological outcomes were recorded and analyzed. Comorbidities and complications were also recorded. One day before the operation, patients were required to fast; mechanical bowel preparation with polyethylene glycol electrolyte powder, intravenous hydration and perioperative antibiotics were administered.
Statistical analysis
The continuous parametric data were compared using the independent-samples t-test. The categorical data were compared using Pearson's χ2 test, and Fisher's exact test was used when appropriate. The survival data were compared using Kaplan-Meier survival analysis and the log-rank test. Differences with P values <0.05 were considered significant.
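The comparisons described above can be reproduced in a few lines. The following is only an illustrative sketch, not the authors' code; the DataFrame `df` and its column names are hypothetical. It uses scipy for the group comparisons and lifelines for the log-rank test.

```python
# Illustrative sketch of the statistical comparisons described above, assuming
# a pandas DataFrame `df` with a group column ("ELRC"/"TLRC"), a continuous
# variable, a binary complication, and follow-up time/event columns.
import pandas as pd
from scipy import stats
from lifelines.statistics import logrank_test

elrc = df[df["group"] == "ELRC"]
tlrc = df[df["group"] == "TLRC"]

# Continuous parametric data: independent-samples t-test
t_stat, p_t = stats.ttest_ind(elrc["time_to_exsufflation"],
                              tlrc["time_to_exsufflation"])

# Categorical data: chi-square, or Fisher's exact test for sparse 2x2 tables
table = pd.crosstab(df["group"], df["postoperative_ileus"])
if (table.values < 5).any():
    _, p_cat = stats.fisher_exact(table)
else:
    _, p_cat, _, _ = stats.chi2_contingency(table)

# Survival data: Kaplan-Meier comparison via the log-rank test
lr = logrank_test(elrc["followup_months"], tlrc["followup_months"],
                  event_observed_A=elrc["death"],
                  event_observed_B=tlrc["death"])
print(p_t, p_cat, lr.p_value)
```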
Surgical technique
The procedure of TLRC was performed according to the procedures described by Matin and Gill (6). Bilateral pelvic lymphadenectomy was performed in the area of the common, external and internal iliac arteries and the obturator fossa. In the ELRC cohort, the surgical position was similar to that of TLRC. First, a 2cm longitudinal incision under the navel was made and an extraperitoneal space was created with fingers behind the rectus abdominis muscle and below the arcuate line. An artificial gasbag was placed into the space with air inflation of 800 to 1000mL. The inflation was maintained for 5 minutes. The first 12mm trocar was placed into the incision above for the 30-degree laparoscope. The other 4 trocars were placed under vision just as in TLRC. The subsequent operation steps were performed with reference to the procedures of the antegrade extraperitoneal approach to radical cystectomy described by Serel et al. (7). First, the spermatic cord on the left side was identified and severed after ligature. The whole pelvic peritoneum was gently pushed cephalad at the level of the vasa deferentia on either side to visualize the common iliac vessels. The ureter on the left side was identified and mobilized to the ureterovesical junction. The transection of the left ureter was performed after dissociating the ureterovesical junction. The same method was used to deal with the spermatic cord and ureter on the right side. The peritoneal reflection was identified using the bilateral peritoneal margin as a landmark. The peritoneum was separated from the anterior wall and apex of the bladder. The urachus was cut at the level of the umbilicus. Mobilization of the posterior wall of the bladder was performed and the attachment of Denonvilliers' fascia to the rectum was released, maintaining all of its layers on the seminal vesicles. The subsequent procedures of dealing with the verumontanum, seminiferous ducts, bladder collateral ligament and prostate were similar to those of TLRC. Bilateral pelvic lymphadenectomy was carried out for pathological examination (Figure-1). According to the patient's decision, ureterocutaneostomy was performed in both groups.
Patient Characteristics
There was no conversion to open surgery. The patients in the ELRC and TLRC groups had comparable baseline characteristics. Data is shown in Table-1.
Operative outcomes
The operative and postoperative characteristics are shown in Table-2. The ELRC group required a significantly shorter time to exsufflation (1.5±0.7 versus 2.1±1.1d for TLRC; p=0.026) and time to liquid intake (1.8±0.9 versus 2.8±1.9d for TLRC; p=0.035). There were no significant differences in the other operative characteristics. The incidence of postoperative ileus in the ELRC group was lower than in the TLRC group (0 versus 9.5%); however, the difference was not statistically significant (p>0.05). There were no significant differences in the other postoperative complications (p>0.05). The number of removed lymph nodes in the ELRC group was significantly lower than in the TLRC group (9.4±2.6 versus 13.4±3.4, p<0.001). Positive lymph nodes were observed in 1 patient in the ELRC group and 2 patients in the TLRC group (Table-3). All three patients underwent postoperative adjuvant chemotherapy.
The median follow-up was 13.8±8.0 months for the ELRC group and 18.2±10.0 months for the TLRC group. At the last follow-up, 18 patients from the ELRC group and 19 from the TLRC group were alive. One patient died of pneumonia in the ELRC group and two patients died of heart attack in the TLRC group. Cancer recurrence was observed in 2 and 1 patients in the ELRC group and the TLRC group, respectively. The Kaplan-Meier survival curves showed no significant differences between the ELRC and the TLRC group in terms of the overall and cancer-free survival rates (p>0.05; data shown in Figure-2).
Data presented as mean±standard deviation or n (%). ASA = American Society of Anesthesiologists; BMI = body mass index; Hb = hemoglobin; Scr = serum creatinine; ELRC = extraperitoneal laparoscopic radical cystectomy; TLRC = transperitoneal laparoscopic radical cystectomy.
DISCUSSION
In the present study, it was observed that the ELRC group had a shorter time to exsufflation and to liquid intake. These results indicate that the preservation of a peritonealized pelvis in the ELRC group was beneficial for the functional recovery of the bowel. In transperitoneal radical cystectomy, the peritoneum covering is left on the bladder to allow for a wide perivesical dissection. Surgery-induced inflammatory reactions arising between the small bowel and the deperitonealized pelvic wall can lead to small bowel palsy, obstruction, ileus, or constipation (8). The results of Zhao et al. (9) also showed that the existence of a nonperitonealized pelvis in the TLRC group adversely affected the functional recovery of the bowel, which is similar to our observations. Keeping the integrity of the peritoneal cavity can prevent the inflammatory reactions induced between the deperitonealized pelvic wall and the small bowel (10).
No postoperative ileus occurred in the ELRC group in our study, which is a better outcome than in the TLRC group, although the sample size in the present study was too small to achieve statistical significance. A shorter time to exsufflation helps patients resume oral intake as early as possible. Maintaining balanced nutrition early after surgery can also reduce the possibility of delayed recovery, which helps shorten the hospital stay. In our study, the hospital stay in the ELRC group was also shorter than in the TLRC group, although the difference had no statistical significance (p=0.097). For the ELRC procedure, the first step is to create an adequate retroperitoneal operative space. The experience of extraperitoneal laparoscopic radical prostatectomy (11) and extraperitoneal laparoscopic partial cystectomy (12) has already proved the availability of this retroperitoneal operative space. The other difficult step is to mobilize the peritoneum covering the postero-superior surface of the bladder. Sometimes the peritoneum should be removed with the bladder wall when the peritoneal reflection is hard to identify, and the peritoneum is then closed. Zhu et al. had the peritoneal covering of the bladder detached ex vivo after RC; suspicious peritoneal lesions were sampled and random biopsies were taken. The authors found that patients with pathological stage T1-T2 bladder cancer had a very low probability of peritoneal involvement (13). Therefore, in our study, the peritoneum covering the surface of the bladder could be kept intact. However, when the lesions were around the bladder apex or over the posterior bladder wall, we still recommend that the peritoneum be removed with the bladder wall to ensure the oncologic adequacy of the procedure.
In the present study, the number of lymph nodes removed in the ELRC group was significantly lower than in the TLRC group. The extent of pelvic lymph node dissection (PLND) in the ELRC group is unlikely to reach the same level as in the TLRC group because of the presence of the peritoneum, which is a limitation of this technique. Although there is evidence indicating that more extended PLND is associated with a survival benefit (14), Jensen et al. found that the prognosis after RC and extended PLND in patients with T1-T2 disease was not significantly better than that following RC and limited PLND (15). A meta-analysis also indicated that, compared with non-extended PLND, extended PLND was associated with a better RFS rate for patients with pT3-pT4 disease, but not for patients with ≤pT2 disease (16). The beneficial effect of PLND also differed among patients of different age and comorbidity status. Larcher et al. (17) found that RC with PLND is associated with improved cancer-specific survival relative to RC alone in younger and healthier RC candidates but not in older and sicker patients. In our study, although the number of dissected lymph nodes was lower in the ELRC group, the lymph node status and the survival rate were similar in the two groups. Therefore, the observed benefit of PLND may not be universally applicable to all RC patients. However, we must admit that the debate over extended PLND in radical cystectomy continues, and for selected elderly bladder cancer patients with ≤T2 disease, ELRC with PLND might not necessarily be an oncologically unacceptable approach. Moreover, we propose avoiding ELRC in >pT2 cases, which carry a significant risk of peritoneal infiltration and lymph node metastases.
There were some limitations in this study. First, the retrospective nature of the study made it impossible to avoid selection bias and attrition bias. Secondly, the sample size was small and all cases were performed in male patients with only ureterocutaneostomy. The feasibility of this method in female patients is unknown, since the pelvic gynecologic organs within the peritoneum may interfere with the ELRC procedure. Moreover, ureterocutaneostomy diversion is not a procedure applicable to the majority of patients; mostly an ileal conduit or neobladder is performed. However, for some elderly patients whose operation should be rapidly terminated due to a deteriorated health state, and for those with decreased life expectancy due to associated comorbidities or inability to use intestinal segments owing to related problems, it is a less invasive and rational option (18). Furthermore, a randomized, prospective study with a larger sample and different kinds of urinary diversions would better assess the feasibility of ELRC for selected elderly bladder cancer patients.
CONCLUSIONS
ELRC seems to be a safe and feasible surgical strategy for the selected elderly bladder cancer patients with ≤T2 disease. The surgical and oncological efficacy of the ELRC is similar to that of the TLRC, but with faster intestinal function recovery. Further studies with a large series including different urinary diversions are needed to confirm our results and to better evaluate the benefit of ELRC in bladder cancer patients.
Spatially resolved electrochemiluminescence through a chemical lens†
Electrochemiluminescence (ECL) microscopy is an emerging technique with a wide range of imaging applications and unique properties in terms of high spatial resolution, surface confinement and favourable signal-to-noise ratio. Despite its successful analytical applications, tuning the depth of field (i.e., thickness of the ECL-emitting layer) is a crucial issue. Indeed, the control of the thickness of this ECL region, which can be considered as an “evanescent” reaction layer, limits the development of cell microscopy as well as bioassays. Here we report an original strategy based on chemical lens effects to tune the ECL-emitting layer in the model [Ru(bpy)3]2+/tri-n-propylamine (TPrA) system. It consists of microbeads decorated with [Ru(bpy)3]2+ labels, classically used in bioassays, and TPrA as the sacrificial coreactant. In particular we exploit the buffer capacity of the solution to modify the rate of the reactions involved in the ECL generation. For the first time, a precise control of the ECL light distribution is demonstrated by mapping the luminescence reactivity at the level of single micrometric bead. The resulting ECL image is the luminescent signature of the concentration profiles of diffusing TPrA radicals, which define the ECL layer. Therefore, our findings provide insights into the ECL mechanism and open new avenues for ECL microscopy and bioassays. Indeed, the reported approach based on a chemical lens controls the spatial extension of the “evanescent” ECL-emitting layer and is conceptually similar to evanescent wave microscopy. Thus, it should allow the exploration and imaging of different heights in substrates or in cells.
Introduction
Electrochemiluminescence (ECL) is the light emission induced by an initial electrochemical reaction at the electrode surface. It has been widely investigated and has found successful applications in various fields ranging from fundamental studies on highly exergonic electron-transfer reactions, biosensing, and environmental chemistry to microscopy. [1][2][3][4][5] Nowadays, ECL is a leading transduction technique with important applications in the early diagnosis of many diseases thanks to its unique signal-to-noise ratio. [6][7][8][9][10][11] Besides the regular analytical applications, the combination of ECL with microscopy (ECLM) is an emerging technique that provides high lateral resolution, typical of microscopy, with surface-confined processes since the emission is restricted within the ECL reaction layer (i.e., µm range near the electrode surface). [12][13][14] Recently, we reported the first example of ECLM for the visualization of single cells and detection of cancer biomarkers on the cellular membrane. [15][16][17] Different approaches have also been proposed to image cells by ECLM. 12,13,[18][19][20][21][22] Furthermore, the surface-confined emission allows ECLM to quantify different analytes simultaneously [23][24][25][26] as well as permitting high resolution visualization of nanomaterials such as nanoparticles and nanorods. [27][28][29][30][31] In ECL, the emission layer is typically confined in the near proximity of the electrode surface 32 and it is limited by the lifetime of the coreactant radicals (vide infra) and cannot be easily controlled. 33 Different strategies have been proposed to extend the ECL-emitting layer including the application of the so-called Faraday cage 34 and the choice of different coreactants. 35 However, most of these approaches are limited and a versatile method is still an open challenge. This issue restricts the development of ECLM as well as the sensitivity of bioassays because it does not allow extension of the ECL layer and imaging of objects or subcellular structures located a few microns away from the electrode surface. Here, we report a novel concept based on a combination of ECLM and a chemical lens for the control of the spatial extension of the ECL-emitting layer (Scheme 1).
In a pioneering study, Heinze and co-workers introduced the concept of a chemical lens to control the concentration profiles of electrogenerated species. 36,37 A chemical lens was proposed to increase the lateral resolution of scanning electrochemical microscopy during electrodeposition in order to downsize the features of a patterned surface. 38 This approach involved the use of an additional species that does not take part in the main reaction but rather scavenges one of its by-products. By changing the concentration of the scavenger, the reaction layer of electrogenerated species can be shrunk to achieve higher spatial resolution.
In the ECL mechanism, the by-product that does not take part directly in the luminescent reaction is the proton (eqn (1)-(5)). This species is released during the deprotonation step of the oxidized form of TPrA (i.e., TPrA•+), which generates the neutral TPrA• radical (eqn (2)). The chemical confinement of H+ can in principle be controlled by the buffer capacity of the supporting electrolyte. 39,40 The overall effect will result in the modulation of the concentration profiles of both TPrA radicals, i.e., a change of the ECL-active region. Indeed, the spatial distribution of ECL is the luminescent signature of the concentration profiles of both diffusing TPrA radicals, which react with the immobilized [Ru(bpy)3]2+ and thus control the extension of the ECL-emitting layer.
Herein, the distribution of the ECL-emitting layer has been investigated by ECLM at the level of single microbeads with different dimensions (8, 12 and 14 µm). The model ECL [Ru(bpy)3]2+ label was conjugated to the beads (named Ru@bead), resembling bead-based immunoassays 17,41-43 (see the ESI for details†). ECL emission was generated in a configuration named surface generation-bead emission. 14 This approach could be used as a model system for the ECLM of real samples such as cells, and also to mimic the analytical approach of commercial ECL-based immunoassay systems. 41 The radicals are generated by coreactant oxidation at the electrode surface and freely diffuse in solution, while the ECL emission comes from the labelled microbeads. In this context, the microbead acts as a probe to investigate the reactivity of the coreactant radicals, which allows mapping the active ECL-emitting layer as a function of the proton availability. 44
Results and discussion
In the present work, we investigated the model ECL system tris(2,2′-bipyridine)ruthenium(II) ([Ru(bpy)3]2+) as the light-emitting species and TPrA as the sacrificial coreactant, which follows an "oxidative-reduction" path. It provides high ECL efficiency in heterogeneous as well as homogeneous formats and it is the most widely exploited ECL system to detect various biomolecules (e.g., proteins, peptides, ligands and oligonucleotides) and DNA, for immunological assays and ECL microscopy. [11][12][13][14][15][16][17][18][19][20][21][22] The heterogeneous ECL mechanism of the [Ru(bpy)3]2+/TPrA system is an active area of investigation. 14,32,33,43,44 In such a configuration, the ECL luminophore is not free to diffuse and cannot be directly oxidised at the electrode. 45,46 In the current system under investigation, the [Ru(bpy)3]2+ label is covalently bound to the bead as in bead-based ECL immunoassays. Since the bead is made of an insulating material, only an infinitesimal fraction of the [Ru(bpy)3]2+ complex, within electron tunnelling distance from the electrode surface, could be directly oxidised. 47 Considering the respective micrometric and nanometric dimensions of the bead and of the tunnelling distance, ECL resulting from the direct oxidation of [Ru(bpy)3]2+ can be neglected. 14,33,45,48 The ECL is induced exclusively by the direct oxidation of the freely diffusing TPrA (eqn (1)) that, upon oxidation, partially undergoes a deprotonation reaction (eqn (2)). Thus, it forms a highly energetic radical species 49 able to reduce the ECL luminophore [Ru(bpy)3]2+ to [Ru(bpy)3]+ (eqn (3)). On the other hand, the pristine oxidized coreactant is continuously produced at the electrode surface, and it can react with [Ru(bpy)3]+ to generate the excited state [Ru(bpy)3]2+* (eqn (4)). 32,44 Finally, [Ru(bpy)3]2+* relaxes to the ground state generating the ECL signal (eqn (5)). The general equation scheme is as follows:
TPrA − e− → TPrA•+ (1)
TPrA•+ → TPrA• + H+ (2)
[Ru(bpy)3]2+ + TPrA• → [Ru(bpy)3]+ + P1 (3)
[Ru(bpy)3]+ + TPrA•+ → [Ru(bpy)3]2+* + TPrA (4)
[Ru(bpy)3]2+* → [Ru(bpy)3]2+ + hν (5)
where P1 is the product of the homogeneous TPrA• oxidation. Eqn (2) has recently been proposed by Amatore and co-workers as a pseudo-first-order reaction at equilibrium. 44 Due to the limited lifetime of TPrA•+, this mechanism is active at very short distances from the electrode surface, and is the only pathway able to generate ECL when the ruthenium label is immobilized on magnetic beads, on cells or in sandwich immunoassays. 46 Many research groups, including ours, mapped the ECL reactivity and the emission spatial distribution in such a context. 14,15,33,43 The maximum of ECL emission occurs in the micrometric region where the concentrations of TPrA• and TPrA•+ radicals are locally the highest. 43,44 We also demonstrated, by analysing the side-view ECL image of the beads, that only luminophores located within the 3 µm region close to the electrode contribute to the signal. 33 The diffusion profiles of TPrA• and TPrA•+ radicals are determined by the lifetime of TPrA•+, in turn imposing the spatial region where the ECL emission is obtained.
Scheme 1 Schematic description of the chemical lens strategy applied to the ECL mechanism of a microbead (Ru@bead) functionalized with the ECL luminophore ([Ru(bpy)3]2+), here denoted as Ru2+. The tri-n-propylamine (TPrA) coreactant is oxidized at the electrode generating the cation radical (TPrA•+), which deprotonates to give the neutral radical (TPrA•).
Noteworthily, from the general equation scheme (eqn (1)-(5)), we recognize the essential role of the proton-accepting base in defining the concentration gradient ratio [TPrA•]/[TPrA•+], as described by the pseudo-first-order deprotonation reaction (eqn (2)). 14,32,44,50 Therefore, we analysed the ECL emission from Ru@bead in a top-view configuration (Scheme 1) with different phosphate buffer (PB) concentrations (Fig. 1 and S1†). We demonstrated that the ECL signal from Ru@bead increased with lower PB concentrations. The measurement of the ECL emission from microbeads of different dimensions and at different concentrations of PB confirms that eqn (2) is an equilibrium process (eqn (2′)) whose position is critically determined by the availability of proton-accepting phosphate ions.
TPrA•+ + HPO4 2− ⇌ TPrA• + H2PO4 − (2′)
Fig. 1 shows that the profile of ECL emission in a top-view configuration from 8 µm, 12 µm, and/or 14 µm microbeads shrinks significantly upon increasing the PB concentration (0.01, 0.1 and 1 M; Fig. S1†), in line with the proton-scavenging ability of phosphate ions. This effect would make the radical cation less available for promoting ECL through eqn (4). The full width at half-maximum (FWHM) of the ECL intensity profiles measured on the labelled beads decreases by ≈43% and ≈27%, for 12-14 µm and 8 µm microbeads, respectively, reaching the same value of about 6-5.5 µm for 1 M PB (Fig. S2†).
The shrinking of the ECL profiles is also reflected in the total ECL emission from the microbeads (Fig. S3†). The ECL integral decreases as the PB concentration increases (Fig. 2). We observed a decrease of ≈76% and ≈71% for 12-14 µm and 8 µm beads, respectively. In contrast, the anodic current due to TPrA oxidation shows the opposite behaviour, demonstrating that different phosphate buffer concentrations do not hinder TPrA oxidation (Fig. S4†).
The ECL images recorded in the top-view configuration do not provide very precise information on the extension of the ECL-emitting layer since the detected ECL light has to pass through the bead, which modifies the optical paths. Indeed, the bead may act as an optical lens. 51 In addition, as pointed out by Amatore et al., the effect of intrinsic distortions, ascribed to optical imaging of the confocal volume of ECL generation, results in a widening of the recorded image, 52 and the broadening of the ECL image is affected by the focal plane of sampling. To further investigate the effects of the PB concentration on the thickness of the ECL-emitting layer, we changed the angle of observation from the top-view to a side-view configuration (Scheme 1). This optical configuration supplements the top-view mapping with a 2D imaging. Fig. 3 shows the photoluminescence (PL) and ECL images of the same single bead modified with the [Ru(bpy)3]2+ labels. The PL imaging shows the real bead (upper part of the image) and its mirror image (lower part); the latter is formed by the PL reflection on the electrode surface. For the sake of clarity, the image obtained by reflection is materialized with the hatched zone. The position of the bead and also its interface with the electrode are precisely defined using the PL image. The x and y axes are parallel and normal to the electrode surface, respectively. We set the origins of both x and y axes at the electrode surface (y = 0) where the bead is in contact with the electrode (x = 0). In other words, at x = 0, the y-axis is the symmetry axis passing through the middle of the bead. As reported previously, 33 two micrometric ECL-emitting zones are visible in Fig. 3b: (i) the one close to the electrode reflects the ECL reactivity while (ii) the one located at the top of the bead corresponds to the focusing effects of the bead which acts as an optical lens. The real chemical information is contained only in the ECL region located close to the electrode surface. From Fig. 3b, we extracted the ECL intensity profiles along both y and x axes (Fig. 3c and d, respectively). As described in a previous report, 33 the ECL-emitting region extends only over 3-4 µm along the vertical axis, which is normal to the electrode surface (Fig. 3c). The FWHM of this ECL intensity profile is 3.1 µm. In Fig. 3c, the small ECL peak located at y ≈ 12 µm corresponds to the already mentioned optical lens effects of the bead, which is based on a completely different physical phenomenon from the chemical lens effects that we aim to study in this work. The ECL intensity profile along the x-axis is displayed in Fig. 3d. The shape is symmetric around the fixed point O.
To further analyse the chemical lens effects of the PB concentration on the volumic extension of the ECL-emitting region, we used the side-view configuration to record the ECL images of the [Ru(bpy)3]2+-decorated beads and extracted the FWHM values from the ECL intensity profiles along both axes (Fig. 4). Increasing the PB concentration from 0.01 M to 1 M causes a decrease in the thickness of the ECL-emitting layer. This is evidenced by the average values of the FWHM along the y-axis, which decrease from 3.1 µm to 2.4 µm (Fig. 4, red squares). The same trend is also noticed along the x-axis with a progressive decrease of the FWHM from 8.9 µm to 7 µm (Fig. 4, blue dots). This evolution is consistent with the behaviour observed in the top-view configuration and with previously reported simulations. 44 The whole body of experimental evidence is in agreement with a diffusion profile of TPrA•+ constricted nearer to the electrode surface with increasing PB concentration. This is a consequence of the ready availability of phosphate ions for buffering the hydrogen ions released by TPrA•+ deprotonation, resulting in a shortened TPrA•+ diffusion length. This is reflected in a narrower emission profile, associated with a lower ECL emission, since a smaller area of the microbead is reached by the diffusing TPrA•+ which triggers the reactions in Scheme 1 (eqn (1)-(5)).
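As a back-of-the-envelope illustration of this chemical lens effect (not the simulations of ref. 44), one can estimate how the steady-state diffusion length λ = (D/k)^1/2 of an electrogenerated radical shrinks when the pseudo-first-order scavenging rate k grows with the buffer concentration. All numerical values in the sketch below are assumptions chosen only to show the trend, not measured constants for the TPrA radicals.

```python
# Illustrative, order-of-magnitude sketch of the chemical lens effect:
# a species produced at the electrode (x = 0) and consumed in solution with a
# pseudo-first-order rate constant k decays at steady state as exp(-x/lambda),
# with lambda = sqrt(D / k). All numbers are assumptions for illustration only.
import math

D = 5e-10        # diffusion coefficient of the radical, m^2/s (assumed)
k_base = 50.0    # effective scavenging rate at the lowest buffer level, 1/s (assumed)

for label, factor in (("0.01 M PB", 1), ("0.1 M PB", 10), ("1 M PB", 100)):
    k_eff = k_base * factor               # assume scavenging scales with buffer level
    lam_um = math.sqrt(D / k_eff) * 1e6   # diffusion length in micrometres
    print(f"{label:>9}: lambda ~ {lam_um:.1f} um")
```

With these assumed values the diffusion length drops from a few micrometres to a few hundred nanometres over the same range of buffer concentrations, mirroring the shrinking of the ECL-emitting layer observed experimentally.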
Conclusions
In summary, we have presented here a simple method to modify the ECL layer and to provide insights into the ECL mechanism. Our strategy, which is compatible with bioanalytical detection and experiments on cells, has a direct impact on the ECL spatial distribution. Indeed, by changing the buffer capacity, we were able to modify the thickness of the ECL-emitting layer. Overall, our report paves the way for imaging different heights in substrates or in single cells in an approach conceptually similar to total internal reflection fluorescence microscopy (TIRFM), where we can control the "evanescent" ECL-emitting layer with a chemical lens.
Conflicts of interest
There are no conflicts to declare.
Fig. 4 Effect of the PB concentration on the FWHM values measured along the x-axis (blue dots) and the y-axis (red squares) of the bead as defined in Fig. 3a. The values were extracted from the side-view images of single 12 µm [Ru(bpy)3]2+-decorated polystyrene beads. Error bars show the standard deviation (n = 9). Lines are only a guide to the eye.
Association between smoking and in-hospital mortality in patients with acute myocardial infarction: results from a prospective, multicentre, observational study in China
Introduction Smoking is a well-established risk factor for cardiovascular disease. However, the effect of smoking on in-hospital mortality in patients with acute myocardial infarction (AMI) who are managed by contemporary treatment is still unclear. Methods A cohort study was conducted using data from the China AMI registry between 2013 and 2016. Eligible patients were diagnosed with AMI in accordance with the third universal definition of MI. Propensity score (PS) matching and multivariable logistic regression were used to control for confounders. Subgroup analysis was performed to examine whether the association between smoking and in-hospital mortality varies according to baseline characteristics. Results A total of 37 614 patients were included. Smokers were younger and more frequently men with fewer comorbidities than non-smokers. After PS matching and multivariable logistic regression analysis were performed, the difference in in-hospital mortality between current smokers and non-smokers was reduced, but it was still significant (5.1% vs 6.1%, p=0.0045; adjusted OR 0.78, 95% CI 0.69 to 0.88, p<0.001). Among all subgroups, there was a trend towards lower in-hospital mortality in current or ex-smokers compared with non-smokers. Conclusions Smoking is associated with lower in-hospital mortality in patients with AMI, even after multiple analyses to control for potential confounders. This 'smoker's paradox' cannot be fully explained by confounding alone. Trial registration number NCT01874691.
Introduction

Smoking is a well-established risk factor for cardiovascular disease. 1 2 However, some previous studies have shown that smokers have a better outcome than do non-smokers following acute myocardial infarction (AMI). This phenomenon is referred to as the 'smoker's paradox'. It was first described in the 1970s, when Helmers found that smokers had a lower risk of mortality than did non-smokers. 3 Some subsequent studies also showed a smoker's paradox in patients with acute coronary syndrome. 4 This paradox may be explained by differences in baseline characteristics between smokers and non-smokers. 5 Additionally, the antiplatelet response may differ according to smoking status because of the effect of smoking on the pharmacodynamics of clopidogrel therapy. 6 Notably, most studies regarding the smoker's paradox were conducted in the era of thrombolysis, while the association between smoking and in-hospital mortality in patients who are treated with percutaneous intervention (PCI) remains controversial. Some studies have reported that the difference in in-hospital mortality was not significant between smokers and non-smokers after accounting for age and other baseline characteristics. [7][8][9][10][11][12][13] Other studies reported that smokers had a lower in-hospital mortality rate compared with non-smokers, even after adjustment for potential confounders (smoker's paradox). [14][15][16][17][18] Examining the true effect of smoking on outcome among contemporary patients with AMI is important. On one hand, the phenomenon of the smoker's paradox has a negative effect on quitting smoking from a public health perspective. On the other hand, if the smoker's paradox still exists in the contemporary era of PCI, the biochemical basis for this phenomenon should be investigated. This investigation may promote the development of novel therapy for myocardial protection. This study aimed to assess how smoking affects the in-hospital mortality of patients receiving contemporary management of AMI.

Strengths and limitations of the study
► This study used data from a large-scale multicentre registry in a contemporary era of percutaneous intervention.
► We used propensity score matching and a multivariable logistic regression model to adjust for confounders, which ensured the robustness of our conclusion.
► The current study did not include data on patients who died before hospitalisation, which may have caused index event bias (a type of selection bias).
► The current study did not adjust for unmeasured confounders.
Methods

Data source
A cohort study was conducted using data from the China AMI (CAMI) registry between 01 January 2013 and 31 January 2016. A detailed description of the registry design was published previously. 19 Briefly, the CAMI registry was a prospective, multicentre, observational registry. The project included Chinese patients with AMI and data were collected on patients' characteristics, treatments and outcomes. A total of 108 hospitals covering a broad geographic region participated in the project. This assured a good representation of all of the patients with AMI in China and reduced selection bias. 19 Written informed consent was obtained from each patient who was included in the study. If the patient was not able to communicate, informed consent was obtained from a family member. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki.
Study population
We included the study population from the CAMI registry. Eligible patients were diagnosed with AMI and within 7 days of ischaemic symptoms. Diagnostic criteria of AMI were in accordance with the third universal definition of MI. 20 We excluded patients who were aged <18 or >100 years, and those with missing or invalid data on sex, admission diagnosis and smoking status.
Data were extracted by trained researchers using standard definitions to reduce measurement and reporting bias. These data included age, sex, height, weight, clinical presentation (symptoms, ST-segment elevation, anterior wall MI, blood pressure, heart rate, heart failure, cardiac shock, fatal arrhythmia, cardiac arrest and Killip classification), risk factors (hypertension, hyperlipidaemia and diabetes), comorbidities (heart failure, peripheral vascular disease (PVD), stroke, chronic kidney disease and chronic obstructive pulmonary disease (COPD)), medical history (family history of premature coronary artery disease (CAD), prior angina or MI, prior coronary intervention and prior coronary artery bypass grafting (CABG)), initial reperfusion strategy (primary PCI, thrombolysis and conservative therapy), laboratory results (creatinine, haemoglobin and left ventricular ejection fraction (LVEF)) and in-hospital outcome.
Patient and public involvement
We did not involve patients or the public in our work.
Definition of variables
All patients were divided into three groups according to smoking status. Current smokers were defined as those who smoked within 1 month before registration. Ex-smokers were defined as those who had quit smoking for at least 1 month. Non-smokers were defined as those who never smoked. Standard definitions of the medical history and physical examination elements were well described in the American College of Cardiology and American Heart Association (ACC/AHA) Task Force on Clinical Data Standards and the NCDR-ACTION-GWTG element dictionary. [21][22][23] ECGs and echocardiograms were interpreted locally.
The primary endpoint was all-cause in-hospital mortality, which was defined as all-cause death during hospitalisation.
Statistical analysis
Baseline continuous data are presented as mean±SD or median (25th-75th percentiles) and were compared using one-way analysis of variance. This was followed by the Bonferroni t-test with a corrected p value of 0.05/3. Categorical data are presented as counts and frequencies and were compared using the χ 2 test. Propensity score (PS) matching was used to control for baseline differences. We performed PS matching between current and non-smokers, and between ex-smokers and non-smokers. We used a multivariable logistic regression model to estimate PSs, with smoking as the dependent variable and the following factors as covariates: age, sex, body mass index (BMI), systolic blood pressure, heart rate, admission diagnosis, cardiac arrest, chest pain, ST elevation, anterior wall MI, Killip classification, risk factor (medical history of diabetes, hypertension, hyperlipidaemia, premature family CAD history, heart failure, renal failure and COPD), medical history (previous angina, PCI and CABG), creatinine levels, haemoglobin levels, Global Registry of Acute Coronary Events risk score and primary PCI. These variables were chosen as covariates because the difference in these baseline characteristics reached statistical significance or these variables were previously reported to be associated with patients' outcome.
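The registry's analysis code is not part of the paper; as a rough illustration of the propensity model described above, the sketch below fits a multivariable logistic regression for current smoking and returns each patient's propensity score. The DataFrame and column names are hypothetical placeholders, not variables from the CAMI dataset.

```python
# Minimal sketch (not the authors' code): estimate propensity scores for current
# smoking with a multivariable logistic regression, as described in the text.
import pandas as pd
import statsmodels.api as sm

def estimate_propensity(df: pd.DataFrame, covariates: list) -> pd.Series:
    """Return P(current smoker | covariates) for every patient in df."""
    X = sm.add_constant(df[covariates].astype(float))   # design matrix with intercept
    fit = sm.Logit(df["current_smoker"].astype(float), X).fit(disp=0)
    return fit.predict(X)                                # propensity score per row

# Hypothetical usage with a few of the covariates listed above:
# df["ps"] = estimate_propensity(df, ["age", "male", "bmi", "sbp", "heart_rate",
#                                     "killip_class", "diabetes", "hypertension"])
```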
Matching was performed using the greedy nearest-neighbour matching algorithm in a 1:1 fashion. The calliper width was equal to 0.01 of the standardised difference of the score. Paired t-tests and McNemar's tests were used to compare continuous and categorical variables, respectively, between the two groups after matching. For each variable in the PS model, we computed the standardised difference between the two groups, with a standardised difference less than 0.1 indicating good balance.
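To make the matching step concrete, the following is a simplified stand-in (not the authors' implementation) for 1:1 greedy nearest-neighbour matching on the propensity score with a calliper, together with the standardised difference used to check balance after matching.

```python
import numpy as np

def greedy_match(ps_treated, ps_control, caliper):
    """1:1 greedy nearest-neighbour matching on the propensity score.

    Each treated unit is paired with the closest still-unmatched control;
    pairs farther apart than the calliper are discarded.
    """
    ps_treated = np.asarray(ps_treated, dtype=float)
    ps_control = np.asarray(ps_control, dtype=float)
    used, pairs = set(), []
    for i in range(len(ps_treated)):
        dist = np.abs(ps_control - ps_treated[i])
        dist[list(used)] = np.inf            # each control can be used only once
        j = int(np.argmin(dist))
        if dist[j] <= caliper:
            pairs.append((i, j))
            used.add(j)
    return pairs

def standardized_difference(x_treated, x_control):
    """Standardised mean difference; |d| < 0.1 is taken as good balance."""
    pooled_sd = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2)
    return (np.mean(x_treated) - np.mean(x_control)) / pooled_sd
```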
The stepwise selection method was used to compare in-hospital mortality across different groups. Baseline characteristics that significantly differed across the groups and those of clinical importance were included in the model. These variables were the same as those used for propensity matching. A p value<0.1 was used as the entry criterion and <0.05 was used as the removal criterion.
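The stepwise procedure is a standard algorithm rather than bespoke code; the sketch below shows a forward-selection version for a logistic model (the backward-removal step at p ≥ 0.05 described above is omitted for brevity). The outcome and candidate variable names are placeholders.

```python
import statsmodels.formula.api as smf

def forward_select_logit(df, outcome, candidates, p_enter=0.10):
    """Simplified forward selection: add, one at a time, the candidate with the
    smallest Wald p value, as long as that p value is below p_enter."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        best_p, best_var = 1.0, None
        for var in remaining:
            formula = f"{outcome} ~ " + " + ".join(selected + [var])
            fit = smf.logit(formula, data=df).fit(disp=0)
            if fit.pvalues[var] < best_p:
                best_p, best_var = fit.pvalues[var], var
        if best_var is None or best_p >= p_enter:
            return selected
        selected.append(best_var)
```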
To determine whether the association between smoking and in-hospital mortality varied according to baseline patients' characteristics, we performed the same multivariable logistic analysis in subgroups that were stratified by age, sex, BMI, presence or absence of hypertension, diabetes, hyperlipidaemia, heart failure, prior angina, MI or coronary intervention and admission diagnosis. A two-sided p value<0.05 was considered significant. For the interaction test, a p value<0.1 was considered significant. For all variables included in our study, less than 2% of the data were missing. We used complete-case analysis to deal with missing data. 24 Patients with missing data were excluded from the analysis. We presented data as 'counts/total numbers available (frequencies)' for categorical variables.
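The subgroup interaction tests amount to adding a product term to the adjusted model. The self-contained sketch below, run on synthetic data (so the numbers mean nothing clinically), shows the complete-case step and one such interaction test; it is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({                                  # synthetic stand-in for registry data
    "age": rng.normal(62, 12, n),
    "male": rng.integers(0, 2, n),
    "current_smoker": rng.integers(0, 2, n),
})
p_death = 1 / (1 + np.exp(-(-6 + 0.07 * df["age"] - 0.3 * df["current_smoker"])))
df["death"] = rng.binomial(1, p_death)

df_cc = df.dropna()                                  # complete-case analysis (no imputation)

# Interaction between smoking status and sex, analogous to the subgroup tests:
fit = smf.logit("death ~ current_smoker * male + age", data=df_cc).fit(disp=0)
print(fit.pvalues["current_smoker:male"])            # p value for the interaction term
```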
Results

Baseline characteristics
From 01 January 2013 to 31 January 2016, a total of 41 590 consecutive patients were registered in the CAMI registry. We excluded 1178 patients aged <18 or >100 years, and those with missing or invalid data on sex (n=18), admission diagnosis (n=1237) and detailed smoking status (n=1543). The final cohort included 37 614 patients (figure 1). Baseline characteristics before matching are presented in table 1. A total of 16 664 (44.3%) patients were current smokers, 843 (2.2%) quit smoking before or at 1 year, 3410 (9.1%) quit smoking after 1 year and 16 697 (44.4%) were non-smokers. Current smokers were younger (57.99±11.81 vs 66.59±11.82 years) and had a higher BMI (24.39±2.87 vs 23.98±2.95 kg/m2) compared with non-smokers. The proportion of men (93.7% vs 49.8%) and Killip class I (80.4% vs 71.9%) was higher in current smokers compared with non-smokers. Compared with non-smokers, current smokers were less likely to have hypertension, diabetes, heart failure, stroke or chronic kidney disease, but more likely to have hyperlipidaemia. Among ex-smokers, the proportions of male sex, hyperlipidaemia, heart failure, PVD and stroke were higher than those of current smokers. Ex-smokers were also older and had lower proportions of hypertension and diabetes than current smokers, but these differences were less pronounced than those between current smokers and non-smokers.
In-hospital outcomes
Overall, 2370 patients died before discharge. There were 614 (3.7%) deaths in the current smoker group, 306 (7.2%) deaths in the ex-smoker group and 1450 (8.7%) deaths in the non-smoker group. Causes of mortality are presented in online supplementary table 1. The unadjusted OR for in-hospital mortality was 0.4 (95% CI 0.37 to 0.44, p<0.0001) in current smokers and 0.82 (95% CI 0.72 to 0.93, p=0.0018) in ex-smokers relative to non-smokers (table 2). After adjustment for potential confounders, current smoking status was significantly associated with lower in-hospital mortality relative to non-smokers (adjusted OR 0.78, 95% CI 0.69 to 0.88, p<0.001) (table 2). No difference in in-hospital mortality was detected between ex-smokers and non-smokers (OR 0.89, 95% CI 0.77 to 1.04, p=0.1443).
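As a consistency check (not part of the published analysis), the unadjusted odds ratio for current versus non-smokers can be reproduced directly from the counts reported above:

```python
import math

d_cur, n_cur = 614, 16664        # deaths / total, current smokers
d_non, n_non = 1450, 16697       # deaths / total, non-smokers

or_unadj = (d_cur / (n_cur - d_cur)) / (d_non / (n_non - d_non))    # ~0.40
se_log_or = math.sqrt(1/d_cur + 1/(n_cur - d_cur) + 1/d_non + 1/(n_non - d_non))
ci_low = math.exp(math.log(or_unadj) - 1.96 * se_log_or)             # ~0.37
ci_high = math.exp(math.log(or_unadj) + 1.96 * se_log_or)            # ~0.44
print(round(or_unadj, 2), round(ci_low, 2), round(ci_high, 2))
```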
PS matching
Before PS matching, there were differences in almost all baseline variables among the different groups (table 1). To control for potential confounding, we matched 8552 current smokers with 8552 non-smokers, as well as 4142 ex-smokers and 4142 non-smokers (online supplementary table 2). The standardised differences were less than 10.0% for all variables after matching, which indicated a good match between two groups. After PS matching, current smokers still had lower in-hospital mortality than did non-smokers (5.1% vs 6.1%, p=0.0045), but the difference in in-hospital mortality was not significant between ex-smokers and non-smokers (7.0% vs 7.4%, p=0.5198) (online supplementary table 3).
Subgroup analysis

Subgroup analysis indicated significant interactions between smoking status and age (p interaction = 0.0986), sex (p interaction = 0.0163), LVEF (p interaction = 0.0149), previous MI (p interaction = 0.0557) and previous heart failure (p interaction = 0.0086) for in-hospital mortality (table 3). However, there was a trend towards lower in-hospital mortality in the current or ex-smoker group compared with the non-smoker group.
Discussion
Our study used data from the CAMI registry, which is the largest contemporary registry of patients with AMI in East Asia. Our major finding was that in patients with AMI, current smokers had lower in-hospital mortality than did non-smokers in the whole population and almost all subgroups, after adjusting for potential confounders using PS matching.
Comparison with previous studies
Most previous studies were conducted in the thrombolytic era and we only identified four studies that enrolled patients in the current primary PCI era. 13 18 25 26 Of these four studies, three studies used multivariate regression analysis to control for confounders. Our study results are consistent with those from another large-scale study. 18 This previous study also showed that among patients with ST-elevation MI who received primary PCI, smokers (including current and ex-smokers) had a lower adjusted in-hospital mortality risk than did non-smokers. In our study, we further separated current and ex-smokers, and used PS matching to comprehensively control for potential confounders. Several mechanisms have been proposed to explain this paradox phenomenon. First, some studies showed that a suppressive effect of clopidogrel on platelets was greater in smokers than in non-smokers. [27][28][29] A potential explanation for this finding is that smoking can enhance in-vivo bioactivation of clopidogrel via increasing induction of cytochrome P450 (CYP1A2 and CYP2B6) and increased active metabolite concentrations of clopidogrel. 30 31 Therefore, smokers may respond better to clopidogrel therapy and consequently have a lower in-hospital mortality rate than non-smokers. Second, smoking was unexpectedly associated with a lower risk of adverse left ventricular remodelling postinfarction. Symons et al performed cardiac MRI at 4 days and 4 months after MI. They found that smokers had an improved LVEF, which was attributable to a decrease in the end-diastolic volume index, but not an increase in the systolic volume index. 32 However, our results are not consistent with two studies, which found an absence of the smoker paradox after baseline risk adjustment. 13 26 This difference may be related to the selection of the study population and sample size. One previous study enrolled patients with symptomatic CAD, including those who presented with stable or unstable angina, 9 while we included patients with AMI. Patients with stable angina represent a relatively lower risk group. Therefore, enrolment of this patient subset may affect the association between smoking and mortality. The other study had a small sample size (n=382), and it may not have had sufficient statistical power to detect a difference in mortality between smokers and non-smokers.
Interpretation of our results
Our results should be interpreted with caution. Although we adjusted for many common confounders, our study was still subject to selection bias as discussed below in the Strengths and limitations of the study box. Our results should not be interpreted as encouraging patients to smoke. Smoking is well-established as an independent risk factor for mortality and recurrent MI, 33 as well as for subacute stent thrombosis 34 in the long-term, and patients with coronary heart disease can benefit from cessation of smoking. 35 Therefore, we still recommend that patients stop smoking. Our results indicated potential mechanisms underlying the protective effect of smoking. Future studies should investigate novel therapies to protect the myocardium by targeting the relevant pathways. Smoking might lead to a chronic ischaemic state (ischaemic preconditioning) 36 ; therefore, smokers might have better tolerance for an acute ischaemic event, such as a heart attack. The phenomenon could be investigated by examining whether preconditioning therapy or a brief period of reversible ischaemia can protect the myocardium and improve outcome.
Our subgroup analysis showed a significant interaction between smoking status and age, sex, LVEF, previous MI and previous heart failure. However, currently, we cannot reach the conclusion that these baseline characteristics had a significant effect on the relationship between smoking and in-hospital mortality. This is because there was a similar trend among all subgroups that current smokers and ex-smokers had a lower in-hospital mortality risk compared with non-smokers. A significant p value may be attributed to a different OR value between subgroups of smokers and non-smokers, as well as a large sample size of some of the subgroups.
Limitations
Our study may have been subject to selection bias. The CAMI registry did not collect data on patients who died before hospitalisation. Failing to account for prehospital deaths may have led to selection bias. The distribution of risk factors was significantly different between smokers and non-smokers. Although we adjusted for known and measured variables, there are likely to be other unmeasured variables leading to selection bias. The CAMI registry was a multicentre, large-scale study that involved more than 100 hospitals. Although a standardised data collection procedure was emphasised, the accuracy of data still greatly depends on the expertise of local investigators. The CAMI registry did not collect detailed data regarding the smoking status. Smoking status might be modified after the onset of MI. However, we asked the patients about their smoking status before the onset of AMI and all patients were enrolled within 7 days of symptom onset. We only assessed the association between smoking and short-term outcome. Future studies are required to investigate this association in the long-term.
Conclusions
Our study showed that the in-hospital mortality rate was lower in smokers compared with non-smokers in a large-scale, contemporary cohort representing patients with AMI in China. Our findings indicate that future studies should be performed to further explore the potential biological mechanisms that may explain this phenomenon.
Morphology of Pyramidal Neurons in the Rat Prefrontal Cortex: Lateralized Dendritic Remodeling by Chronic Stress
The prefrontal cortex (PFC) plays an important role in the stress response. We filled pyramidal neurons in PFC layer III with neurobiotin and analyzed dendrites in rats submitted to chronic restraint stress and in controls. In the right prelimbic cortex (PL) of controls, apical and distal dendrites were longer than in the left PL. Stress reduced the total length of apical dendrites in right PL and abolished the hemispheric difference. In right infralimbic cortex (IL) of controls, proximal apical dendrites were longer than in left IL, and stress eliminated this hemispheric difference. No hemispheric difference was detected in anterior cingulate cortex (ACx) of controls, but stress reduced apical dendritic length in left ACx. These data demonstrate interhemispheric differences in the morphology of pyramidal neurons in PL and IL of control rats and selective effects of stress on the right hemisphere. In contrast, stress reduced dendritic length in the left ACx.
INTRODUCTION
The prefrontal cortex (PFC) exhibits a hemispheric specialization with respect to its functional role in the integration of affective states, suggesting that the right PFC is important in eliciting stress responses (see Sullivan [1]). Uncontrollable foot shock (Carlson et al. [2]) or novelty stress (Berridge et al. [3]) resulted in a higher dopamine turnover selectively in the right PFC. The PFC has been subdivided into three main cytoarchitectonic subareas: infralimbic (IL), prelimbic (PL), and anterior cingulate cortex (ACx) (Krettek and Price [4]; Ray and Price [5]). Each of these subareas has specific cortical and subcortical connections (Vertes [6]) and distinct physiological functions. Lesion studies have shown that after acute stress, ventral (IL/PL) (Sullivan and Gratton [7]) and dorsal PFC (PL/ACx) (Diorio et al. [8]) regulate the release of corticosterone and ACTH in an opposite way. Specific behavioral responses such as diminished fear reactivity (Lacroix et al. [9]) were observed after bilateral lesions in the IL (Frysztak and Neafsey [10]), and increased fear reactivity was detected when the region PL/ACx was lesioned (Morgan and LeDoux [11]). Anxiety-like responses were observed after lidocaine infusion into the IL (Wall et al. [12]) or lesioning the right IL (Sullivan and Gratton [13]).
Recent studies in rats showed morphological changes in pyramidal neurons in the PFC following chronic restraint stress (Radley et al. [14,15]; Cook and Wellman [16]) or after chronic corticosterone treatment (Wellman [17]). Chronic exposure to corticosterone also reduced the volume of layer II in all PFC subareas (Cerqueira et al. [18]). Chronic restraint stress for 21 days decreased the number and the length of apical dendrites in Cg1-Cg3 (corresponding to the region PL/ACx) (Cook and Wellman [16]; Radley et al. [14]), an effect accompanied by reduced spine density in the proximal portions of the apical dendrites (Radley et al. [15]). However, these studies did not investigate regional or possible hemispheric differences.
In the present study, we investigated whether pyramidal neurons in the three PFC subareas have a hemisphere-specific morphology, and whether their specific dendritic architecture would be remodeled in a lateralized manner in response to chronic stress. As reference for the exact localization of the neurons prior to their morphological reconstruction, we first identified the boundaries between the PFC subareas using antibodies against parvalbumin, the neurofilament protein SMI-32, and neuronal nuclear antigen (NeuN). To reconstruct the morphology of individual pyramidal neurons in layer III, which is known to have reciprocal connections with the mediodorsal thalamic nucleus (Groenewegen [19]), we filled cells with neurobiotin using a whole-cell patch-clamp technique. Intracellular neurobiotin staining is a highly sensitive method (Pyapali et al. [20]) for visualizing neuronal processes that are not obscured by more intensely stained portions of the neurons (Hill and Oliver [21]; Oliver et al. [22]). We investigated the morphological characteristics of pyramidal cells in the three PFC subareas, paying particular attention to hemispheric differences in dendritic morphology following three weeks of daily restraint stress.
Animals
Adult male Sprague Dawley rats (Harlan-Winkelmann, Borchen, Germany) were housed in groups of three animals per cage with food and water ad libitum. Animals were maintained in temperature-controlled rooms (21 ± 1 °C) with a light/dark cycle of 12 hours on, 12 hours off (lights on at 07:00). All animal experiments were performed in accordance with the European Communities Council Directive of November 24, 1986 (86/EEC), the US National Institutes of Health Guide for the Care and Use of Laboratory Animals, and were approved by the Government of Lower Saxony, Germany. We used the minimum number of animals required to obtain consistent data.
Perfusion and tissue preparation for PFC boundary identification
Male rats (n=5, weighing 220-250 g) were killed by intraperitoneal administration of an overdose of ketamine (50 mg/kg body weight; Ketavet, Pharmacia & Upjohn, Erlangen, Germany), xylazine (10 mg/kg body weight; Rompun, Bayer Leverkusen, Germany), and atropine (0.1 mg/kg body weight; WDT, Hannover, Germany). The descending aorta was clamped and the animals were transcardially perfused with cold 0.9% NaCl for five minutes, followed by cold 4% paraformaldehyde in 0.1 M phosphate buffer (PB) at pH 7.2 for 20 minutes. Post-perfusion artifacts were prevented by postfixing heads in fresh fixative at 4 °C (Cammermeyer [23]). The following day, the brains were gently removed and stored overnight in 0.1 M PB at 4 °C. Brains were cryoprotected by immersion in 2% DMSO and 20% glycerol in 0.125 M phosphate-buffered saline (PBS) at 4 °C. A small hole in the left striatum was made with a thin needle to differentiate the left from the right hemisphere. The brains were then cut into blocks containing the entire PFC, frozen on dry ice, and stored at −80 °C before serial cryosectioning at a section thickness of 50 µm. Eight to ten complete series of coronal sections were collected and stored in 0.1 M PBS for immunocytochemistry. A stereotaxic atlas of the rat brain (Paxinos and Watson [24]) was used during the cryosectioning procedure.
Following incubation, sections were thoroughly washed with 0.1 M PBS and incubated with biotinylated goat antimouse antibody (DAKO) diluted 1 : 200 in 0.1 M PBS with 3% NGS and 0.5% Triton X-100, for 1.5 hours, followed by washing in 0.1 M PBS. The sections were then incubated with 1 : 200 horseradish peroxidase-conjugated streptavidin (DAKO) in 0.1 M PBS with 3% NGS and 0.5% Triton X-100 for 1.5 hours.
After washing, sections were stained with a DAB kit (Vector Laboratories, Burlingame, Calif, USA), which contains 3,3′-diaminobenzidine (DAB) as chromogen. Staining time in DAB was 8-10 minutes for all sections; the reaction was stopped by washing the sections in 0.1 M PBS. Sections were mounted on glass slides in 0.1% gelatin and dried overnight at 37 °C, after which they were cleared in xylene for 30 minutes and finally coverslipped with Eukitt (Kindler, Freiburg, Germany). A series of adjacent coronal sections was also mounted on glass slides, dried overnight at room temperature, and stained with cresyl violet to obtain a clear comparison with the immunocytochemical images.
Analysis of immunocytochemically stained sections
Areal and laminar staining patterns were examined microscopically. Coronal sections were analyzed and photographed using a Zeiss Axiophot II photomicroscope (Carl Zeiss, Germany) at magnifications of 2.5x, 10x, and 20x. The prefrontal cortical areas were identified and their boundaries or transition zones were outlined on photomicrographs of the sections, and a contour pattern (delineating IL, PL, and ACx subareas) was drawn and stored as a CorelDRAW file. Localization of intracellularly filled cells (see below) was then corroborated by overlapping a picture of a filled cell with a picture of a boundary contour pattern closest to the same region (anterior or posterior; see Results). SMI-32-, parvalbumin-, and NeuN-stained sections were compared to ensure that the defined areas coincided, and were treated identically for the methods and measurements described below. Stereotaxic coordinates of the PFC were identified with the rat brain atlas (Paxinos and Watson [24]) and cortical layers in the subfields were identified using the accompanying text book (Zilles and Wree [25]).
Chronic restraint stress
Male Sprague Dawley rats initially weighing 150-170 g were housed in groups of three animals with ad libitum access to food and tap water. The first experimental phase (habituation) lasted for 14 days, during which body weight was recorded daily. Animals were randomly assigned to the experimental (stress) and control groups. The second phase of the experiment (restraint stress) lasted for 21 days, during which the animals of the stress group (n = 16) were submitted to daily restraint stress for six hours per day (09:00-15:00). The restraint procedure was carried out according to an established paradigm (Magariños and McEwen [26]). Briefly, rats were placed in plastic tubes in their home cages and had no access to food or water. Control rats (n = 16) were not subjected to any type of stress but were handled daily. At the end of the experiment, 24 hours after the last stress exposure, animals were weighed, deeply anesthetized with a mixture of 50 mg/mL ketamine, 10 mg/mL xylazine, and 0.1 mg/mL atropine by intraperitoneal injection, and decapitated. Brains were rapidly removed and processed for slice preparation (see below). Increased adrenal and decreased thymus weights are indicators of sustained stress. These organs were therefore dissected immediately after decapitation and weighed. The data are expressed in milligrams per 100 grams body weight.
Slice preparation
After dissecting the PFC from the brain, a sagittal cut was made in the left temporal cortex with a razor blade to further differentiate the hemispheres. The blocks containing the left and the right PFC were rapidly submerged in ice-cold oxygenated artificial cerebrospinal fluid (ACSF) of the following composition (in mM): NaCl 125.0; KCl 2.5; L-ascorbic acid 1.0; MgSO4 2.0; Na2HPO4 1.25; NaHCO3 26.0; D-glucose 14.0; CaCl2 1.5 (all chemicals from Merck, Darmstadt, Germany). The PFC was glued to the stage of a vibratome (Vibracut 2, FTB, Bensheim, Germany) and cut into coronal, 400 µm thick slices. The slices were allowed to recover for at least one hour in ACSF bubbled with 95% O2, 5% CO2 at pH 7.3, 33 °C, and then kept at room temperature for up to seven hours.
PFC slices were transferred to a submerged recording chamber, continuously oxygenated with ACSF (flow rate: 1-2 mL/min), and maintained at 33 °C. Cell bodies were visualized by infrared-differential interference contrast (IR-DIC) video microscopy using an upright microscope (Axioskop 2 FS, Carl Zeiss, Germany) equipped with a 40x/0.80 W objective (Zeiss IR-Acroplan). Negative pressure was used to obtain tight seals (2-10 GΩ) onto identified pyramidal neurons. The membrane was disrupted with additional suction to form the whole-cell configuration. Pyramidal neurons with membrane potentials below −55 mV were excluded from the analysis. Cells were held at −70 mV for about 20 minutes.
Pyramidal cells are readily identified by their specific morphology, and only pyramidal-shaped somata located in layer III of IL, PL, and ACx (readily identified under IR-DIC video: Dodt and Zieglgänsberger [28]) were used for neurobiotin filling. Layer III pyramidal somata, visible by transillumination, tend to be smaller than layer V somata. The border between layers II and III was difficult to identify; however, cells in layer III were mainly found at a depth of about 400 µm from the pial surface (Gabbott and Bacon [29]). Observation of labeled neurons in relation to the PFC boundaries (see Results) verified their location.
Neurobiotin injection lasted for about 20 minutes. Thereafter, the patch pipette was carefully withdrawn from the membrane and the slice was fixed in 0.1 M PB with 4% paraformaldehyde (pH 7.4) and stored at 4 °C for at least 24 hours. Whole slices were processed free floating, first by blocking endogenous peroxidase activity in a 0.1 M PB solution containing 1% H2O2. After washing, nonspecific binding of antibodies was prevented by incubating the sections for one hour with 5% NGS (DAKO) in 0.1 M PBS and 0.3% Triton X-100. Subsequently, slices were incubated with avidin-biotin peroxidase (diluted 1 : 100; ABC, Vector Laboratories) in 1% NGS (DAKO) and 0.3% Triton X-100 overnight at 4 °C. On the following day, slices were washed and left overnight in PBS. The following day, slices were equilibrated by washing them in TBS (pH 7.6) and the staining reaction was completed by incubation in a solution containing 0.04% NiCl2, 0.5 mg/mL DAB, and 0.01% H2O2 (Vector Laboratories) in TBS until a dark brown color appeared, typically in less than 10 minutes. The reaction was terminated by several washes in fresh 0.1 M PBS and finally in double-distilled water. Tissue sections were then dehydrated in an ascending series of ethanols (30%-100%), cleared with two 10-minute incubations in xylene and flat-embedded in Eukitt (Kindler) on glass slides. Slices from at least one stressed and one control animal were always processed simultaneously.
Neuronal reconstruction and morphometric analysis
Labeled cells were examined by light microscopy to ensure that they fulfilled the following criteria for the three-dimensional reconstruction: (1) a clearly visible and completely stained apical dendritic tree; (2) at least three main basilar dendritic branches, each branching at least to the third-degree branch order; (3) soma location in layer III of an identified PFC subarea; and (4) visibility of the most distal apical dendrites with dense labeling of the processes (Kole et al. [27]; Radley et al. [15]). To ensure that the analysis was performed blind, each slide was coded by an independent observer prior to neuronal reconstruction, and the code was not broken until all analyses were completed. In a few cases, cell coupling was observed (< 1%); such cells were omitted from the analysis, because the dendrites could not be assigned unequivocally to a single neuron. Somata of intracellularly labeled cells were located at 60-70 µm depth from the slice surface, allowing reconstruction of almost all their main dendritic branches. Compromised cells that had truncated main apical or first-order basilar branches were omitted from the analysis. In each animal, 12 neurons were filled with neurobiotin, six in the left and six in the right hemisphere, randomly distributed among the three areas of interest. Complete and optimally labeled pyramidal neurons meeting the above criteria were reconstructed and morphological parameters were quantified using NeuroLucida software (MicroBrightField, Inc., Colchester, Vt, USA) in combination with an automated stage and focus control connected to the microscope (Zeiss III RS). Data were collected as line drawings consisting of X, Y, and Z coordinates. Dendritic length was measured by tracing dendrites using a 40x (N.A. 0.75) objective, giving a final magnification of 40 000x on the monitor. The step size of the circular cursor was 0.16 µm, sufficiently below the limits of light microscopy resolution (about 0.25 µm). Numerical analysis and graphical processing of the neurons were performed with NeuroExplorer (MicroBrightField). Sholl plots (Sholl [30]) were constructed by plotting the dendritic length as a function of distance (corresponding to the radius) from the soma center, which was automatically set to zero. The length of the dendrites within each subsequent radial bin at 10 µm increments was summed. Ethanol dehydration and xylene clearance are known to cause tissue shrinkage (Pyapali et al. [20]). However, previous analyses from our laboratory suggested that the linear shrinkage correction has no direct effect on data used for morphological comparative analysis (Kole et al. [27]). Therefore, we did not apply any correction factor.
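The Sholl profiles were generated with NeuroLucida/NeuroExplorer; purely as a schematic of the binning step described above, the following sketch sums dendritic segment lengths into concentric 10 µm shells around the soma (each segment is assigned to the shell containing its midpoint, a simplification of the exact intersection-length calculation).

```python
import numpy as np

def sholl_profile(seg_mid_xyz, seg_len, soma_xyz, bin_width=10.0, max_radius=500.0):
    """Sum dendritic length into concentric radial bins around the soma.

    seg_mid_xyz : (N, 3) array of dendritic segment midpoints, in µm
    seg_len     : (N,) array of segment lengths, in µm
    soma_xyz    : (3,) soma centre, taken as radius zero
    """
    r = np.linalg.norm(np.asarray(seg_mid_xyz) - np.asarray(soma_xyz), axis=1)
    edges = np.arange(0.0, max_radius + bin_width, bin_width)   # 0, 10, 20, ... µm
    length_per_bin, _ = np.histogram(r, bins=edges, weights=np.asarray(seg_len))
    return edges[:-1], length_per_bin
```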
Statistical analysis
Body weight (BW) and relative organ weight (in milligrams per 100 grams of BW) of control and stress animals at the end of the experiment were compared using the unpaired t-test.
The total number of labeled neurons that fulfilled the above criteria to be analyzed was 69 in the control and 70 in the stress group. Since these labeled cells were not evenly distributed among the animals, we calculated the means of the morphometric data for each hemisphere/animal. These mean data served as analysis unit for the statistical evaluation and are indicated as "n" in the tables. Data for the total length of dendrites, the total number of branching points, and the total number of branches were evaluated by two-way ANOVA (factors: hemisphere × group) (Statistica software package, Release 6.0 StatSoft Inc., Tulsa, Okla, USA). Numbers of branches per branch order were evaluated using three-way ANOVA (factors: branch order × hemisphere × group). Sholl analysis data were evaluated with three-way repeated measures ANOVA (factors: hemisphere × group × radius) (SPSS version 12.0, SPSS Inc., Chicago, Ill, USA). Bonferroni's post hoc test was used in all cases. Because the morphology of the pyramidal cells shows complex differences along the dendritic trees, we restricted our post hoc analyses to distinct radii (10 µm, 20 µm, 30 µm, etc.) and single branch orders (1st, 2nd, 3rd order, etc.). Data are presented as mean ± SEM (standard error of the mean). Differences were considered statistically significant at P < .05.
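For readers who want to reproduce this kind of factorial design outside Statistica/SPSS, the sketch below runs a two-way ANOVA (hemisphere × group) on invented per-animal means; the numbers are placeholders, not data from this study.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical per-animal means of total apical dendritic length (µm).
data = pd.DataFrame({
    "length":     [2100, 2250, 2400, 2350, 1900, 2050, 2000, 2150],
    "hemisphere": ["left", "left", "right", "right", "left", "left", "right", "right"],
    "group":      ["control", "control", "control", "control",
                   "stress", "stress", "stress", "stress"],
})

model = smf.ols("length ~ C(hemisphere) * C(group)", data=data).fit()
print(anova_lm(model, typ=2))   # main effects and the hemisphere x group interaction

# Bonferroni-style post hoc testing: compare each planned comparison's raw p value
# against alpha / k, where k is the number of comparisons (e.g. 0.05 / k).
```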
Prefrontal cortex boundaries definition
According to previous descriptions, the rat PFC can be divided into three subareas: IL, PL, and ACx. As a basis for the reliable localization of neurobiotin labeled pyramidal neurons in the present study, we visualized the boundaries of these subareas using specific antibodies. The three subareas that were reliably found at the same location in all investigated brains were defined as showing differential staining patterns with at least two staining methods.
Immunocytochemical staining with SMI-32 antibody gave a staining pattern that differentiates PL from ACx, and ACx from the premotor cortex in dorsal regions of the PFC (Figure 1(a)). In the PL, the SMI-32 antibody labeled layers III and V. This pattern became lighter and narrower in the ACx, where layer III was lightly stained whereas layer V was darker and broader. ACx could be distinguished from the premotor cortex because in the latter, the deep layers were intensely labeled by the SMI-32 antibody (Figure 1(a), lower panel).
Parvalbumin proved to be a good marker to distinguish all PFC subareas and their respective layers. In IL, layer II was only lightly stained, layer III was slightly darker and layer V showed a pronounced staining. In the PL, layer II was distinctly stained by the PV antibody and layer III appeared wider than in the IL. The strong staining of layer V observed in the IL gradually disappeared in the PL. In the ACx, all layers had more parvalbumin-immunoreactive cells compared to PL. Layer II in ACx showed darker staining compared to the PL (Figure 1(b)).
Immunoreactivity for NeuN provided a boundary between IL and PL, and a clearly layered pattern in all PFC subareas with pronounced staining of layer II (Figure 1(c)). The IL was distinguished by a wide layer I and by densely packed cells in layer II. Compared to IL, the PL had a lighter layer III and a broader layer V. In ACx, layer V was again broader than in the PL (Figure 1(c)). Using Nissl dyes, layer I can be clearly distinguished; however, it is difficult to distinguish the other cortical layers and to detect borders between PFC subareas (Figure 1(d)). By comparing the location of each neurobiotin-filled layer III neuron (see below) with the boundary patterns described above, we were able to accurately define its subarea-specific location.
Intracellular labeling with neurobiotin and dendritic reconstruction
Intracellular neurobiotin labeling provides a reliable and sensitive method to study dendritic morphology (Pyapali et al. [20]). In all experimental groups, there was complete staining of the dendritic branches, with distal dendrites being reliably visualized (Figure 2). The fact that the cells were alive and healthy during dye injection, together with the use of relatively thick slices (400 µm), increased the probability of reconstructing complete dendritic arbors without compromised branches. In both groups, control and stress, we filled a total of 384 cells, of which 36% (139 cells) fulfilled the criteria for complete staining suitable for a quantitative analysis of dendritic morphology. Since these labeled cells were not evenly distributed among the animals, and to avoid any bias, we calculated means per hemisphere per animal. These means served as analysis units for the statistical evaluation (see below).
Sholl analysis (left versus right)
For a close inspection of the dendritic trees in the left and the right hemisphere, Sholl analyses were performed (Figure 3).
For the basilar dendrites in the IL, three-way ANOVA performed with data from both groups, control and stress, (factors: hemisphere × group × radius) revealed significant effects of hemisphere (F (1,572) = 6.12, P < .05) and radius (F (29,572) = 22.55, P < .001). Bonferroni's post hoc test indicated a significant interhemispheric difference for basilar dendrites in controls at 10 µm from soma (df = 31, P < .05) (Figure 3(a), left panel). Also for apical dendrites in the IL three-way ANOVA revealed a significant effect of hemisphere (F (1,1086) = 24.18, P < .001) but no reliable effect of radius. Bonferroni's post hoc test showed that in the control animals, apical dendrites in the right hemisphere were longer than in the left hemisphere. The sites where these right-left differences occurred were proximal to the soma, at 10, 20, and 60 µm (df = 30, P < .05 in all cases) (Figure 3(a), left panel).
For the basilar dendrites in the ACx, three-way ANOVA indicated no reliable interhemispheric difference but only an effect of radius (F (29,399) = 15.55, P < .001). For the apical dendrites in ACx, ANOVA revealed a positive effect of the hemisphere (F (1,841) = 7.81, P < .01) and an effect of radius (F (51,841) = 3.11, P < .001). However, the post hoc test showed no interhemispheric difference with respect to apical dendrites in the ACx of controls (Figure 3(c), left panel).
These data demonstrate a lateralized morphology of apical dendrites on pyramidal neurons in IL and PL but not in the ACx of control rats.
Total dendritic length (left versus right)
The total length of basilar and apical dendrites in control rats showed no significant interhemispheric differences although apical dendrites in the right tended to be longer than in the left hemisphere (Table 1).
Dendritic branches (left versus right)
The complexity of the apical and basilar dendritic trees in the two hemispheres was determined by analyzing numbers of branching points and branches (Table 2).
For basilar dendrites, two-way ANOVA indicated no reliable interhemispheric differences for the total number of branching points and branches in any of the subareas (Table 2). Numbers of branches of distinct branch orders were evaluated by three-way ANOVA (factors: hemisphere × group × branch order) (Table 2). Bonferroni's post hoc test showed reliable interhemispheric differences for the number of branches of the orders 4, 6, and 11 (df = 24, P < .05 in all cases) and of branch order 12 (df = 24, P < .01). Dendritic branches of the orders 11 and 12 were not observed in the left IL; they were present only in the right IL (Table 2).
For basilar dendrites in the PL, three-way ANOVA depicted no effect of the hemisphere but only an effect of the branch order (F (6,168) = 68.23, P < .001). Also for apical dendrites in the PL there was no effect of the hemisphere but only an effect of the branch order (F (10,253) = 9.86, P < .001). The post hoc test showed no significant interhemispheric difference for any branch order in the PL (Table 2).
In the ACx of control rats, there were no significant interhemispheric differences with respect to the total number of apical and basilar branches and branching points. Three-way ANOVA depicted only effects of the branch order (basal: F (5,118) = 36.11, P < .001; apical: F (10,220) = 9.92, P < .001) (Table 2).
Total dendritic length and Sholl analysis (stress effects)
Stress reduced the total length of apical dendrites on pyramidal neurons in the PL selectively in the right hemisphere (Table 1). No other stress effect on the total length of dendrites was observed.
Dendrites of pyramidal neurons in stressed rats are shown in Figure 3 (right panel) and in Figure 4. For basilar dendrites in the IL, three-way ANOVA depicted no reliable effect of stress. These dendrites also displayed no significant left-right difference in stressed animals (Figure 3(a), right panel).
In the PL of stressed rats, no reliable interhemispheric differences with respect to apical dendrites on pyramidal neurons were observed (Figure 3(b), right panel). Three-way ANOVA for apical branches in the PL, performed with data from all groups, showed an interaction hemisphere × group (F (1,1018) = 17.40, P < .001). The post hoc test indicated that in the right PL, stress reduced the length of apical dendrites at 160, 170, 190, 420 µm (df = 25, P < .05) and at 180 µm (df = 25, P < .01) (Figure 4(b)).
Dendritic branches (stress effects)
The effect of stress on the total number of branching points and the total number of dendritic branches was also analyzed ( Table 2).
In the IL, neither basilar nor apical dendrites showed an effect of group or an interaction. However, the post hoc test depicted a significant stress effect on numbers of branches of the order 12 in the right IL (df = 24, P < .01). These dendritic branches could not be observed in stressed animals (Table 2).
In the PL, the branching pattern of basilar dendrites was apparently not affected by stress. For the total number of apical branching points, two-way ANOVA revealed an interaction group × hemisphere (F (1,31) = 3.28, P < .05) represented by a reliable stress induced decrease in the total number of branching points in the right hemisphere (df = 29, P < .05). Stress also reduced the number of branches of the order 3 selectively in the right hemisphere of the PL (interaction: group × hemisphere; F (1,253) = 7.61, P < .01; post hoc test: df = 23, P < .05) ( Table 2).
For basilar dendrites in the ACx, three-way ANOVA showed an interaction hemisphere × group (F (1,118) = 5.32, P < .05). For apical dendrites, there was a reliable effect of group (F (1,220) = 5.46, P < .05), represented by a stress-induced increase in the number of branches of the order 3 in the right hemisphere (df = 20, P < .05). Moreover, there was a significant left-right difference with respect to order 3 branches in the ACx; no dendritic branches of this order could be observed in stressed rats (df = 20, P < .05) (Table 2). These data show that chronic restraint stress affects dendrites in the right hemisphere of IL and PL. In contrast, in the ACx, it is the left hemisphere that is affected by stress.
Effects of chronic restraint stress on body and organ weights
To assess the physiological effects of chronic stress, we measured body weight (BW) and weights of thymus and adrenal glands. In rats subjected to the restraint stress for 21 days, body weight at the end of the experiment was significantly lower than in controls (control: 328. [17]).
DISCUSSION
In the first part of this study, we identified the boundaries between the three PFC subareas. The border between PL and ACx could be visualized with the SMI-32 antibody, which labels neurofilaments (Sternberger and Sternberger [32]). The parvalbumin antibody, which stains a subpopulation of cortical interneurons (Gabbott and Bacon [29]), strongly stained layer V and was suitable for recognizing the boundaries between IL and PL. The antibody against NeuN, a selective marker for neurons (Mullen et al. [33]), proved to be better than conventional Nissl staining at defining cortical layers II and III. Delineation of the subarea boundaries and of cortical layers was a prerequisite for the exact localization of pyramidal neurons within the rat PFC. Because projections from the mediodorsal thalamic nucleus principally target layers III and V of the PFC (Uylings et al. [34]; Gabbott et al. [35]; Krettek and Price [4]), we analyzed pyramidal cells exclusively in layer III. The neurons that we investigated showed apical dendrites extending to the brain surface, and the distance of their somata was up to 550 µm from the pia mater. Therefore, their location agreed with a previous description of the PFC layers (Gabbott and Bacon [29]). In contrast, Brown et al. [36] reported that the somata of the pyramidal neurons they examined were located closer to the cortical surface (distance of 250-280 µm), probably in layer II. It is important to mention that intracellular neurobiotin labeling allows a better staining of distal dendritic branches compared to conventional methods such as Golgi staining (Pyapali et al. [20]).
Lateralized pyramidal neuron morphology in the PFC of control rats
We found intrinsic morphological asymmetries in the prefronto-cortical pyramidal cells of control animals. In PL and IL, there were interhemispheric differences in the length of apical dendrites at certain distances from the soma. These are new findings, because previous studies addressing the morphology of pyramidal neurons in the PFC did not discriminate between the hemispheres (Wellman [17]; Cook and Wellman [16]; Radley et al. [14]). Intrinsic asymmetries have been observed before in several regions of the human cerebral cortex, for example, in entorhinal, auditory- and language-associated cortices (Hayes and Lewis [37]; Hutsler [38]). Simic et al. [39] found larger pyramidal neurons in the human left entorhinal cortex compared to the right, and hypothesized that this asymmetry reflects a more extended dendritic arborization and enlarged neuropil volume in the left hemisphere. The present data show that in PFC subareas PL and IL of the rat, the length of apical dendrites at certain distances from soma differs between the hemispheres.
Hemispheric remodeling of dendrites by chronic stress
In PL and IL, chronic stress abolished the interhemispheric differences in the length of dendrites observed at certain distances from the soma. In the right PL, stress also reduced the total length of the apical dendrites. The stress-induced factors that lead to these changes are not yet known; however, one may speculate that dopamine, which is known to decrease excitability of PFC pyramidal neurons, plays a role (Jedema and Moghaddam [40]; Gulledge and Jaffe [41]). Electrophysiological recordings have shown that dopamine enhances the spatiotemporal spread of activity in the rat PFC, at least in part via layer III pyramidal neurons (Bandyopadhyay et al. [42]). As mentioned above, stress can increase dopamine turnover in the right PFC (Carlson et al. [2]; Berridge et al. [3]; Slopsema et al. [43]; Thiel and Schwarting [44]) and chronic stress reduces the spontaneous activity of dopaminergic neurons in the ventral tegmental area (VTA) which project to the PFC (Moore et al. [45]; Benes et al. [46]). Consistent with this, it was found that repeated stress reduces dopamine (Mizoguchi et al. [47]), norepinephrine (Kitayama et al. [48]) and serotonin in the PFC (Mangiavacchi et al. [49]). Although it is not known whether the chronic restraint stress in the present study induced a deficit in dopamine and/or other monoamines, it may be hypothesized that the changes in dendrites observed here are related to maladaptive neurochemical processes. In contrast to PL and IL, pyramidal neuron dendrites in the ACx of control rats showed no interhemispheric differences, but the stress induced a left-right difference. Previous reports described a stress-induced decrease in apical dendritic length using a similar (Cook and Wellman [16]; Brown et al. [36]) or the same stress protocol (Radley et al. [14]). While we investigated only layer III pyramidal neurons, the former studies focused on cells more widely distributed over layers II-III (Cook and Wellman [16]; Radley et al. [14]). Dendritic architecture is crucial for connectivity within neuronal networks. Sensory input or environmental enrichment has been shown to promote the formation of spines on proximal dendrites (Turner and Lewis [50]). In hippocampal pyramidal neurons, extensive dendritic sprouting and enhanced spine density were observed when axonal afferents were increased (Kossel et al. [51]), whereas the loss of afferents can lead to dendritic atrophy (Valverde [52]; Benes et al. [53]; Deitch and Rubel [54]). One may speculate that the stress-induced dendritic changes observed in the present study are related to alterations in axonal input. The observation that the dendritic alterations emerged at certain distances from soma may be related to the fact that axonal input to the PFC is site-specific and depends on the cortical layer.
Potential consequences of dendritic remodeling and clinical implication
The stress-induced reduction in the total length of dendrites in the right PL is in line with previous findings showing lateralization of the PFC-mediated stress response. Right-side lesions of the IL/PL reduced the peak stress-induced corticosterone response (Sullivan and Gratton [7]) and decreased anxiety (Sullivan and Gratton [13]), suggesting that compromised right PFC activity results in a lack of control over the physiological and behavioral responses to stress. The prelimbic and anterior cingulate cortex in particular are important for reacting to environmental stimuli (Cardinal et al. [55]). Therefore, one may assume that alterations in the morphology of PFC pyramidal neurons have an impact on the stress response. Stress-related processes similar to those in the PFC have been observed in the amygdala. Chronic restraint stress increased the length of apical dendrites of pyramidal cells in the basolateral amygdala (Vyas et al. [56]), which sends projections to and receives input from PL and ACx (Vertes [6]). As in the PFC, the activity of the basolateral amygdala appears to be lateralized under stress conditions (Adamec et al. [57]). Under normal conditions, the PFC inhibits the basolateral amygdala (Rosenkranz and Grace [58]); but under stress, this inhibition might be impaired, thus contributing to an overreactivity of this nucleus. It is possible that the morphological remodeling of the pyramidal neurons in the rat PFC that we describe in the present study is related to a presumptive stress-induced change in information transfer between PFC and amygdala.
Lateralization appears to be characteristic for normal PFC functioning. Studies in humans indicated that reduced lateralization correlates with pathological conditions or with aging processes as fronto-cortical activity during cognitive performance tends to be less lateralized in old compared to young adults (Dolcos et al. [59]). A recent investigation on the human dorsolateral PFC demonstrated a hemispheric asymmetry in pyramidal cell density in normal subjects (higher density left compared to right), which was reversed in schizophrenic patients (Cullen et al. [60]). Rotenberg [61] suggested that in depressed patients, the right PFC hemisphere is over-activated, and subjects with major depression displayed a reduced size of neurons in layer III of the right orbitofrontal cortex (Cotter et al. [62]). However, studies in depressed patients that did not focus on hemispheric differences reported on decreased activity in the PFC area ventral to the genu of the corpus callosum (Drevets et al. [63]), and on reduced cortical thickness, glial size, and glial densities in supragranular layers of the orbitofrontal cortex (Rajkowska et al. [64]). Our study in rats shows that chronic restraint stress has a strong effect on the morphology of pyramidal neurons in the right hemisphere, at least in PL and IL.
The neurophysiological consequences of the dendritic alterations are not yet known. A shortening of even a few dendrites on CA1 hippocampal pyramidal neurons enhanced the back-propagation of action potentials (Golding et al. [65]; Schaefer et al. [66]). Moreover, experiments in our laboratory revealed that a stress-induced decrease in the length of apical dendrites of CA3 pyramidal neurons in the rat hippocampus correlates with reduced onset latency of excitatory postsynaptic potentials (Kole et al. [27]). However, functional studies are required to assess the physiological implications of such morphological remodeling in the rat PFC.
CONCLUSIONS
This is the first study showing intrinsic hemispheric differences in the dendritic morphology of pyramidal neurons in subareas of the rat PFC. In PL and IL of control rats, interhemispheric differences in the length of apical dendrites at certain distances from the soma were observed. Chronic stress abolished these right-left differences and reduced the total length of apical dendrites in the right PL. In contrast, in the ACx, there was no hemispheric difference in controls but stress induced a left-right difference. These chronic stress-induced regional changes may be correlated with the specialized functions of PFC subareas in stress-related pathologies, and provide additional support for previous studies of stress-dependent activation of the right PFC.
How Did Language Evolve? Biological, Psychological, and Linguistic Perspectives
The topic of language origin and evolution has been considered for a long time as a difficult question to address scientifically because of poverty of empirical data and limitations in methodology (Müller, 1861). These considerations have led to the well-known edicts by the Société de Linguistique de Paris in 1866 and the Philological Society of London in 1872 that forbade all members from presenting speeches on the topic.
Introduction
The topic of language origin and evolution has been considered for a long time as a difficult question to address scientifically because of poverty of empirical data and limitations in methodology (Müller, 1861). These considerations have led to the well-known edicts by the Société de Linguistique de Paris in 1866 and the Philological Society of London in 1872 that forbade all members from presenting speeches on the topic. Today the question is approached from biological, psychological, and linguistic perspectives, and this introduction surveys these lines of research together with some classical examples illustrating how the results coming from them can shed light on the phylogenetic roots of our own typical way of communicating.
Biological Predispositions for Language: The Evolution of the Vocal Tract
Language largely relies on our biology (e.g. Lenneberg, 1967; Caplan, Roch Lecours, & Smith, 1984; Fitch, 2012). The study of the biological predispositions of language can be addressed from different points of view, since stating that language depends on our biology means saying, at a general level, that it relies on our bodies, genes and brains. Therefore, anatomy, genetics and the neurosciences can all contribute to identifying the main biological features necessary for language to evolve. Here we focus on one specific anatomical feature that for a long time was thought to be crucial for the evolution of speech: the shape of the vocal tract. Research exploring the morphology and physiology of the vocal tract benefits from empirical findings in comparative anatomy and aims "to obtain the evidence of presence or absence of critical conformations associated with unique human behaviors like speech, and assumes that unique behaviors like language must be determined by unique anatomical or physiological arrangements" (Gong et al., 2018, p. 121).
Investigations exploring the evolution of the vocal tract start from a fundamental observation: the human supralaryngeal vocal tract (SVT) is anatomically different from that of other living primates (Lieberman, 2007). The human SVT is divided into two parts: a horizontal portion in the oral cavity, including the mouth and oropharynx, and a vertical portion in the throat, i.e., the pharynx, which is located behind the tongue and above the larynx, extending from the palate down to the vocal cords. In modern adult human beings, these two portions form a right angle to one another and are approximately equal in length. This anatomical configuration of the vocal tract is crucial for speech, as "the supralaryngeal vocal tract acts as an acoustic filter that determines the phonetic quality of the sounds" (Lieberman, 2007, p. 40). By contrast, in nonhuman primates the larynx is located high in the throat (near the base of the mandible) and the tongue is long and largely restricted to the oral cavity. The result is a disproportionate shape of the SVT in nonhuman primates. For this reason, the range of vocal sounds available to these animals is greatly constrained.
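To make the idea of the SVT as an acoustic filter concrete, a standard textbook idealization (not taken from the article itself) treats a neutral vocal tract as a uniform tube closed at the glottis and open at the lips; its resonances, which correspond to formant frequencies, fall at odd multiples of c/4L. The sketch below uses assumed illustrative values: a 17 cm tract and a speed of sound of 350 m/s.

# Illustrative sketch: formants of an idealized uniform vocal tract
# (closed at the glottis, open at the lips): F_n = (2n - 1) * c / (4 * L).
# The 17 cm length and 350 m/s speed of sound are textbook approximations,
# not values taken from the article.
def tube_formants(length_m=0.17, speed_of_sound=350.0, n_formants=3):
    return [(2 * n - 1) * speed_of_sound / (4.0 * length_m)
            for n in range(1, n_formants + 1)]

print(tube_formants())      # roughly [515, 1544, 2574] Hz for an adult-like 17 cm tract
print(tube_formants(0.12))  # a shorter tract shifts every formant upward

Shortening or lengthening the tube shifts all the resonances, which is one simple way of seeing why tract geometry constrains the range of speech-like sounds an animal can produce.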
From the point of view of language evolution, the questions to be addressed are the following: when did the fully human vocal tract appear? Did our hominin ancestors possess the anatomical configuration necessary for speech? To answer these questions, it is necessary to reconstruct the anatomy of the SVT of a fossil and, particularly, the position of the larynx in the throat. The problem is that the SVT is soft tissue, and bones are all that remain in the fossil record. However, there are some indirect clues that can be used to infer the possible location of the larynx in extinct hominins (cf. Fitch, 2010). Among these are the basicranial angle (the angle formed where the base of the skull meets the vertebral column, which in human beings is about 90 degrees) and the hyoid bone (the bone supporting the root of the tongue). The first important investigation on this topic was that of Lieberman and Crelin (1971). The authors tried to determine the probable vocal tract of fossil hominins by establishing correlations between the basicranial angle and the vocal tract in living nonhuman primates and then making inferences based on this angle in a fossil. They analyzed the basicranial flexion of a Neanderthal fossil and suggested that it was similar to that observed in modern chimpanzees rather than to that of modern adult human beings. Starting from these observations, the authors proposed that the larynx of Homo neanderthalensis was located high in the throat and that, because of this, the species was unable to produce fully articulated language. According to the authors, in fact, fully human speech emerged relatively late, about 50,000 years ago, in Homo sapiens (who first appeared as a species 200,000 years ago) (Lieberman, 2007, p. 59).
The hypothesis advanced by Lieberman and Crelin has been disputed over the years, and the idea that the SVT of Homo neanderthalensis was like that of the chimpanzee is no longer supported (cf. Mithen, 2005). An important study in this respect was the analysis of the hyoid bone of a Neanderthal fossil found in Kebara (Israel) (Arensburg et al., 1990). As mentioned above, the hyoid bone, which provides an anchoring structure for the tongue as well as most of the other muscles of the vocal tract, can be used as an indirect clue to infer the position of the larynx (the larynx hangs below the hyoid bone). Researchers who analyzed the hyoid bone of the Kebara fossil observed that its shape was similar to that of modern human beings. This observation led to the conjecture that the larynx was located low in the throat as well, which opened the way to the hypothesis that Neanderthals might have had a vocal apparatus similar, albeit not identical, to that of Homo sapiens. Further findings corroborated such a view. Martinez and colleagues (2008) analyzed two hyoid bones from the middle Pleistocene site of the Sima de los Huesos (Spain), dating back at least 530,000 years. Human fossils found at this site have been assigned to the species Homo heidelbergensis, the last common ancestor of Homo neanderthalensis and Homo sapiens, representing the ancestral European population that evolved into the Neanderthals. The two hyoid bones analyzed were similar to those of modern human beings and Neanderthals in both their morphology and dimensions, and different from the hyoid bones of chimpanzees and more ancient hominins (i.e., Australopithecus afarensis). Thus, as the authors highlighted, "the genus Homo has been characterized by a modern-humanlike anatomy of the hyoid bone since at least 530 ka" (Martinez et al., 2008, p. 123).
However, it should be noted that prominent researchers in the field of language evolution (e.g. Fitch, 2010, 2017) have suggested that it is the neural control of the vocal tract, rather than the morphology of the SVT itself, that differs significantly between humans and other primates and thus likely contributes to the observed differences in the propensity for speech. In this regard, one of the fossil clues brought into play to reconstruct the neural vocal control of our ancestors is the enlargement of the thoracic canal in Homo sapiens relative to earlier hominins and other primates (MacLarnon & Hewitt, 1999). The relevance of this fossil clue for the evolution of the neural basis of speech lies in the fact that the thoracic spinal cord contains neurons that control muscles involved in breathing. As these muscles are implicated in the fine control of pulmonary pressure during vocal production, it has been hypothesized that enhanced breathing control (made possible by an enlargement of the thoracic canal) represented an adaptation for fine vocal control and speech. Starting from these observations, MacLarnon and Hewitt (1999) analyzed the thoracic canal diameter in extant primates, modern Homo sapiens, and extinct hominins such as Australopithecus afarensis and Homo ergaster. Their analysis revealed that the thoracic vertebral canal of these hominins was of similar relative size to that of extant nonhuman primates, and substantially smaller than that of modern humans. Interestingly, the study also showed that Neanderthals had a thoracic vertebral canal of similar relative dimensions to that of modern Homo sapiens. Although the fossil evidence for the evolution of thoracic canal size is limited, according to Fitch (2010) the few existing data are plausible and well supported. In his opinion, indeed, "The solid data come from living primates and modern humans, from the 'Turkana Boy' Homo ergaster skeleton, and from several Neanderthal specimens, and indicate that thoracic expansion occurred sometime in the million-year period of evolution after Homo ergaster but before Neanderthals (e.g. later Homo erectus or H. heidelbergensis)" (Fitch, 2010, p. 335). Overall, these findings, in conjunction with a broader range of evidence coming from other disciplines (for a discussion: Dediu & Levinson, 2013), have contributed to reassessing the antiquity of speech from 50,000-100,000 years to half a million years. Indeed, according to Dediu and Levinson (2013, p. 6), "the number and diversity of clues, taken together, clearly point in the direction of a modern capacity for speech in the common ancestor of Neanderthals and modern humans."
Psychological Prerequisites for Language: The Role of Mindreading
In addition to investigations centered on the evolution of speech, research on language evolution is also characterized by theoretical models and empirical studies aiming to unveil the cognitive capacities underlying the evolution of human communication (Corballis, 2011; Scott-Phillips, 2015; Ferretti, 2016; Ferretti et al., 2017). Attention to comparative cognitive research reflects a growing understanding that knowledge of how cognition forms in evolution, of which cognitive abilities are required for language (and of their evolutionary history), and of the relationship between cognition and communication in various species is crucial for understanding the complexity of language evolution. As can be expected, this kind of research cannot rely on direct evidence either: like speech, cognition does not fossilize. However, it is possible to attempt to reconstruct the cognitive evolution of our extinct ancestors by exploiting data from primatology and cognitive ethology: the minds of our closest relatives, the great apes, with whom we share a common ancestry dating back six or seven million years, can be used to develop models of the cognitive capacities of the last common ancestor with chimpanzees and to make inferences about the psychological equipment of extinct hominins. Additionally, comparative data from non-primate species allow researchers to model the formation of analogous cognitive and communicative traits, which likely contributed to language evolution in humans. Research in various species of vertebrates, ranging from birds to mammals, contributes to understanding which processes and environmental pressures (e.g. intense competition for mating, social environment, altricial offspring) influenced the development of specific features of cognition and communicative systems.
Against this background, investigating the origin and evolution of language amounts to examining the cognitive capacities underlying language processing in order to clarify to what extent these capacities are shared with the great apes. Therefore, the starting point of the perspectives embracing this view is the analysis of the cognitive foundations of human communication. The interpretive framework best able to account for this aspect is the ostensive-inferential model of communication, or relevance theory (RT; Sperber & Wilson, 1986/1995). Following Grice's intuition, according to which an essential feature of most human communication, both verbal and non-verbal, is the expression and recognition of intentions (Grice, 1957, 1969), RT views communication as an inferential pragmatic process in which the generation and the detection of communicators' intentions are central. Indeed, according to RT, a linguistic interaction is characterized by the speaker's meaning, a complex communicative intention aimed at achieving a certain effect on the hearer's mind by means of the hearer's recognition of the intention to achieve this effect. Scott-Phillips (2015, p. 64) highlighted that "when we communicate with others, we must know something about their minds in order to understand their intended meanings, and indeed to tailor our own utterances to them". At a general level, identification of others' intentions is made possible by a specific cognitive system termed "theory of mind" or "mindreading". These terms are used to describe the ability to attribute mental states such as beliefs, intentions, and feelings to others and to explain and predict the actions that derive from them (e.g. Premack & Woodruff, 1978; Baron-Cohen, 1995; Westra & Carruthers, 2018). Theory of mind allows us to entertain "metarepresentations": we not only mentally represent others' mental states but also process multiple levels of mental representation. In other words, we can think about what you know about what she thinks, and so on (Scott-Phillips, 2015). The classical experimental paradigm for evaluating such ability in humans is the so-called "false belief task" (Wimmer & Perner, 1983; Baron-Cohen, Leslie, & Frith, 1985), which requires the understanding that others may hold and act on false beliefs.
According to several authors, the mindreading system is not only involved in language processing but had a crucial role in its origins (Sperber, 1995; Dunbar, 1996; Origgi & Sperber, 2000; Scott-Phillips, 2015). Sperber (1995) proposed that human communication is a by-product of human metarepresentational capacities: "The ability to perform sophisticated inferences about each other's states of mind evolved in our ancestors as a means of understanding and predicting each other's behavior. This in turn gave rise to the possibility of acting openly so as to reveal one's thoughts to others. As a consequence, the conditions were created for the evolution of language. Language made inferential communication immensely more effective. It did not change its character. All human communication, linguistic or non-linguistic, is essentially inferential. Whether we give evidence of our thoughts by picking berries, by mimicry, by speaking, or by writing - as I have just done -, we rely first and foremost on our audience's ability to infer our meaning" (p. 199). Stating that human metarepresentational capacities are at the foundation of language implies that theory of mind has a logical and temporal priority over language. Therefore, to corroborate this view, it should be demonstrated that the ability to mentally represent others' mental states is also present in our closest relatives. Do the great apes have a theory of mind? This question is at the center of a lively debate (Dennett, 1978; Byrne & Whiten, 1988; Heyes, 1998; Hare, Call, & Tomasello, 2000; Andrews, 2017; Krupenye et al., 2016), which started at the end of the 1970s with the classical paper by Premack and Woodruff (1978), "Does the chimpanzee have a theory of mind?". The authors answered the question affirmatively, suggesting that chimpanzees are able to interpret others' behavior (namely, human behavior) by attributing mental states. However, despite the positive answer offered by the authors, determining whether non-human primates have a capacity of this kind has remained controversial. For example, years later Premack (1988), referring to new studies, suggested that "if the chimpanzee have a theory of mind, it will be weaker than the human one. (…) the states of mind the chimpanzee is most likely to instantiate are the sensory ones - seeing, wanting, expecting" (p. 175). A similar conclusion was also endorsed by Call and Tomasello (2008): "there is solid evidence from several different experimental paradigms that chimpanzees understand the goals and intentions of others, as well as the perception and knowledge of others. Nevertheless, despite several seemingly valid attempts, there is currently no evidence that chimpanzees understand false beliefs. Our conclusion for the moment is, thus, that chimpanzees understand others in terms of a perception-goal psychology, as opposed to a full-fledged, human-like belief-desire psychology" (p. 187). However, over recent years the situation has profoundly changed. In this respect, a crucial study introduced a new experimental paradigm (the anticipatory looking test) that made it possible to show that chimpanzees, bonobos, and orangutans recognize that others' actions are driven not by reality but by beliefs about reality, even when those beliefs are false (Krupenye et al., 2016). In other words, this study demonstrated that apes can ascribe a false belief to an agent, challenging the view that this ability is uniquely human (see also Buttelmann et al., 2017).
If great apes can mentally represent the mental states of others (at least to some extent), it can be expected that they would also be able to exploit this ability for communication. Is there evidence of that? In this respect, the results obtained by Crockford and colleagues (2012) on chimpanzees' vocal communication are particularly interesting. The authors used an alarm-call-based field experiment, observing the response of members of a group of wild chimpanzees to a snake model, a viper, positioned on their path of travel. The results showed that chimpanzees were more likely to give alarm calls in response to the snake in the presence of unaware group members than in the presence of aware group members. According to the authors, "chimpanzees keep track of information available to receivers and intentionally inform those who lack certain knowledge (…).
[They] communicate missing information that is relevant and beneficial to receivers" (Crockford et al., 2012, p. 145). In other words, chimpanzees are able to monitor the information available to others: they recognize knowledge and ignorance in others and control vocal production to selectively inform them. They inform ignorant group members of danger with reasoning along the lines of "I know something that you don't know, and I know that this information is useful to you" (for a discussion: Adornetti, 2015). Overall, these results (see also Schel et al., 2013) support the hypothesis mentioned above, according to which some level of mindreading ability might be a prerequisite for the evolution of human communication and might indeed represent a homologous trait found in modern-day primates. From this point of view, theory of mind represented a crucial psychological prerequisite for language to evolve.
Linguistic Perspectives on Language Evolution: The Case of Syntax
One of the most contentious and controversial debates in language evolution research concerns the origin and evolution of an important linguistic dimension: syntax (Bickerton, 1990; Pinker & Bloom, 1990; Jackendoff, 1999; Hauser, Chomsky, & Fitch, 2002; Heyne & Kuteva, 2007; Berwick, 2011; Hurford, 2012; Boeckx & Benítez-Burraco, 2014a; Tallerman, 2014). Syntax can be described as "the rule-governed combination of small meaningful units (morphemes) into hierarchical structure (phrases and sentences) whose meanings are some complex function of those structures and morphemes" (Fitch, 2010, p. 104). At a general level, researchers investigating the evolutionary roots of syntax can be positioned along two major views. On the one hand, some authors have suggested that its emergence happened abruptly, for example via a mutation affecting the Homo sapiens brain that gave rise to modern language (e.g. Bickerton, 1990; Chomsky, 2010; Berwick & Chomsky, 2016). On the other hand, other scholars have proposed that syntax evolved slowly and gradually, with a smooth improvement in linguistic ability (e.g. Pinker & Bloom, 1990; Jackendoff, 1999; Tallerman, 2014). In turn, authors embracing this gradualist view can be ascribed to two lines of thought: those who explain the development of syntax by referring to Darwinian biological evolution (e.g. Pinker & Bloom, 1990; Pinker & Jackendoff, 2005) and those who, on the contrary, invoke socio-cultural evolution (e.g. Heyne & Kuteva, 2007; Christiansen & Chater, 2008). The best-known exponent of the view according to which syntax emerged suddenly in Homo sapiens is Noam Chomsky. Although for a long time Chomsky considered investigations of language evolution "a complete waste of time" (Chomsky, 1988, p. 183), over the last twenty years he has taken part in the debate (Hauser, Chomsky, & Fitch, 2002; Fitch, Hauser, & Chomsky, 2005; Hauser et al., 2014), suggesting that the faculty of language, namely Universal Grammar (UG), an innate computational system in the brain specific for language processing, "seems to have crystallized fairly recently among a small group in East Africa of whom we are all descendants, distinguishing contemporary humans sharply from other animals, with enormous consequences for the whole of the biological world. It is commonly and plausibly assumed that the emergence of language was a core element in this sudden and dramatic transformation" (Berwick & Chomsky, 2011, p. 20, our emphasis). According to this perspective, UG appeared quite recently, some 70,000-100,000 years ago in Homo sapiens, and does not appear to have undergone modification since then (Bolhuis et al., 2014). An important claim of this perspective is that communication (the element of externalization) is a secondary (if not irrelevant) aspect of language, not its key function. For example, according to Chomsky, language serves primarily as an internal instrument of thought. Therefore, "[T]he earliest stage of language would have been a language of thought, available for use internally" (Chomsky, 2010, p. 55).
To corroborate their scenario, proponents of the abrupt emergence of UG usually refer to putative proxies for language in the fossil and archaeological records (cf. Chomsky, 2010; Hauser et al., 2014) (note 1). Specifically, they mention the first signs of symbolic material culture dating back about 100,000 years (Tattersall, 2008, 2018) or even later, around 50,000 years ago, the period associated with the notion of the Upper Paleolithic Revolution (Bar-Yosef, 2002). According to the paleoanthropologist Ian Tattersall (2008), in fact, it is only after about 100,000 years ago that we find undoubted evidence of symbolic behavior patterns among populations of Homo sapiens. These patterns include small ochre plaques bearing distinct geometrical designs (Henshilwood et al., 2003) and body ornaments (small shells pierced to be worn as a necklace) (Henshilwood et al., 2004). Then, even if the symbol-ready brain was acquired some 200,000 years ago (when our biological species emerged), it was not used "until it was recruited by what had necessarily to have been a cultural or behavioral stimulus. (…) given the suddenness with which the new capacity emerged, the most plausible candidate is without question the invention of language" (Tattersall, 2018, p. 294) (note 2). The symbolic revolution model has been refuted by numerous findings (see Henshilwood & Marean, 2003; McBrearty & Brooks, 2000; Wurz, 2010), which showed that many patterns of behavior considered typical of the symbolic revolution were more ancient and present in species other than Homo sapiens. Specifically, they first appeared (even if in discontinuous and rudimentary ways) during the Middle Stone Age, the period that began around 280,000 years ago and ended around 50,000 years ago. As these findings support a new scenario according to which symbolic thought emerged gradually and was not unique to Homo sapiens, they contradict the symbolic revolution hypothesis. In turn, this new scenario also undermines the idea of a sudden emergence of syntax some 70,000-100,000 years ago, given that the proponents of the abrupt emergence of UG usually consider the symbolic revolution as a proxy for language in the archaeological records (cf. Tallerman, 2014).
Note 1: The idea that the archaeological record can shed light on the emergence of the language faculty has been disputed by some scholars. See, for example, Botha (2008, 2016) and Bouchard (2013).
Note 2: When he speaks of language, Tattersall has in mind Universal Grammar: "the possession of articulate language [underpinned by Universal Grammar within the framework elaborated by Hinzen (2012)] is the most immediately striking attribute of Homo sapiens today" (Tattersall, 2018, p. 298).
As mentioned above, against the idea of an abrupt origin, other scholars have proposed that the development of syntax can be explained through gradualist processes. The first attempt in this regard was that of Pinker and Bloom (1990). Adhering to the neo-Darwinian research program of evolutionary psychology (Barkow, Cosmides, & Tooby, 1992) and starting from the assumption that communication is the main function of the language faculty, the authors maintained that UG has to be considered a biological adaptation for communication shaped by natural selection. They wrote: "For universal grammar to have evolved by Darwinian natural selection, it is not enough that it be useful in some general sense. There must have been genetic variation among individuals in their grammatical competence. There must have been a series of steps leading from no language at all to language as we now find it, each step small enough to have been produced by a random mutation or recombination, and each intermediate grammar useful to its possessor. Every detail of grammatical competence that we wish to ascribe to selection must have conferred a reproductive advantage on its speakers, and this advantage must be large enough to have become fixed in the ancestral population. And there must be enough evolutionary time and genomic space separating our species from nonlinguistic primate ancestors" (Pinker & Bloom, 1990, p. 721). Over the years, the Pinker and Bloom model has been challenged (e.g. Botha, 2002) both by authors working within the UG paradigm (e.g. Boeckx & Benítez-Burraco, 2014b) and by scholars who do not embrace Chomsky's model of language (Christiansen & Chater, 2008; Kirby, Cornish, & Smith, 2008; Kirby, Griffiths, & Smith, 2014; Kirby, 2017). In recent years, indeed, a growing body of work has begun to show that many aspects of language structure are the result of cultural transmission, rather than being genetically encoded biological traits, as the UG model assumes (cf. Thomas & Kirby, 2018). For example, Christiansen and Chater (2008) advanced the idea that language evolution has to be considered a process of cultural change, in which syntactic structures are shaped through repeated cycles of learning and use by domain-general mechanisms (instead of a domain-specific innate computational system in the brain for language processing). They state: "We propose that language has adapted through gradual processes of cultural evolution to be easy to learn to produce and understand. Thus, the structure of human language must inevitably be shaped around human learning and processing biases deriving from the structure of our thought processes, perceptuomotor factors, cognitive limitations, and pragmatic constraints. Language is easy for us to learn and use, not because our brains embody knowledge of language, but because language has adapted to our brains" (Christiansen & Chater, 2008, p. 490). This view of language evolution is at the center of numerous empirical studies aimed at clarifying how language is passed on via social-cultural transmission, using, for example, formal modelling and the iterated learning paradigm (for a discussion: Smith, 2012; Kirby, 2012). What these studies have been revealing is that the structure of language emerges from the process of cultural evolution that, in turn, affects the fitness of the learners acquiring that language.
In other words, language seems to be the result of complex co-evolutionary dynamics, the characteristics of which are the subject of current research.
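The iterated learning paradigm mentioned above can be illustrated with a minimal toy simulation; this is offered only as an illustration and does not reproduce any specific published model. A mapping from meanings to signals is passed along a chain of learners, each of whom observes only a subset of the previous generation's output (a transmission bottleneck) and must generalize to unseen meanings; the learner below is given a simple compositional bias by assumption.

import random

MEANINGS = [(shape, color) for shape in "ABC" for color in "xyz"]

def random_language():
    # Holistic starting point: an arbitrary, unstructured signal for each meaning.
    alphabet = "defghk"
    return {m: "".join(random.choice(alphabet) for _ in range(4)) for m in MEANINGS}

def learn(observations):
    # A deliberately simple learner with a built-in compositional bias:
    # it keeps one signal fragment per shape value and one per color value
    # (the first it happens to observe), then expresses any meaning by
    # concatenating the two fragments.
    shape_part, color_part = {}, {}
    for (shape, color), signal in observations.items():
        shape_part.setdefault(shape, signal[:2])
        color_part.setdefault(color, signal[2:])
    return {
        (shape, color): shape_part.get(shape, "??") + color_part.get(color, "??")
        for (shape, color) in MEANINGS
    }

def iterate(generations=8, bottleneck=5):
    # Transmission chain: each generation learns from a random subset
    # (the bottleneck) of the previous generation's meaning-signal pairs.
    language = random_language()
    for _ in range(generations):
        observed = dict(random.sample(sorted(language.items()), bottleneck))
        language = learn(observed)
    return language

print(iterate())

Because this learner generalizes by recombining fragments, the initially arbitrary, holistic signals quickly become compositional and then remain stable across the bottleneck; in published models the learner's bias is weaker and structure emerges more gradually over many generations.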
The Present Issue
The issue opens with Domenica Bruni's article, which is devoted to the presentation of the research program of evolutionary psychology (EP). EP is a neo-Darwinian theoretical approach to psychology that explains human cognitive traits as mental adaptations shaped by natural selection. As mentioned above, EP's main assumptions also inspired the model of language evolution advanced by Pinker and Bloom. Bruni discusses the case of emotions, showing that they are biological adaptations that evolved to solve specific ancestral problems faced by our ancestors.
The contribution by Alessandra Chiera lies within the context of a psychological investigation, as it is interested in identifying the cognitive prerequisites for human language to evolve. The paper focuses specifically on the issue of language evolution from a protoconversational perspective. Starting from the assumption that face-to-face communication represents the most natural setting for language, Chiera argues that conversation has to be recognized as the central unit of analysis in an evolutionary perspective as well. Against this background, a set of low-level mechanisms of alignment are acknowledged as critical for linguistic communication to evolve in the absence of a full-fledged code. The focus on this kind of mechanism frames the discussion within a sensorimotor account of language evolution. Alessandra Falzone analyses the peripheral and central structures of vocal articulation in the framework of the Evo-Devo (evolutionary developmental biology) perspective, discussing the main biological constraints that might have acted as necessary "mechanical triggers" upon which the language function could have evolved. The biological framework also characterizes the contribution by Piera Filippi, who focuses on the role of emotional communication in the emergence of language. The author suggests that emotional modulation of the voice may have prompted the emergence of language abilities and that, following co-evolutionary dynamics, these abilities retro-acted on each other, pushing the evolution of language forward.
Marek Placiński and Monika Boruta-Żywiczyńska examine language evolution from the perspective of linguistics. They present empirical research aimed at investigating the topic of natural word order by means of the silent gesture paradigm developed by Goldin-Meadow et al. (2008). That study revealed that participants tended to produce Subject-Object-Verb (SOV) word order when describing a transitive event, regardless of the syntax of their native language. Placiński and Boruta-Żywiczyńska obtained different results compared with this previous finding and discuss possible interpretations of them. A linguistic approach is also the framework of Katarzyna Rogalska-Chodecka's article, which is centred on the presentation and comparison of the results of some experiments on the transmission of linguistic structures conducted with the use of the iterated learning methodology. Taken together, the results of these studies suggest that the common-sense intuition that communication might constitute a key factor in language evolution should be approached with caution.
Olga Vasileva discusses the longstanding debate prevailing in language evolution and comparative psychological research, namely the problem of continuity and discontinuity in animal and human communication. This debate remains an important meta-theoretical assumption in the field of language evolution. The paper first provides a brief overview of the debate by discussing examples of prominent research work in comparative communication. It further discusses how the problem of continuity can be approached in light of more general evolutionary thinking. Finally, it is suggested that the problem of continuity can be partly resolved by focusing on cognitive and behavioural trait distribution both between and within species. Specifically, it is proposed that conceptualising given traits (e.g. pointing gesture) as habitual, rather than human-unique, is informative for modelling the process of language evolution in humans.
Valentina Deriu provides a discussion of a book that has recently been at the center of a lively debate within language evolution research: "Speaking Our Minds" by Thom Scott-Phillips (2015). In the book, Scott-Phillips embraces the model of language advanced by Sperber and Wilson, who, as we have seen, consider human communication an exercise in mindreading. Deriu gives an overview of the book's major claims and ideas, along with a discussion of the debate that followed its publication.
Przemysław Żywiczyński presents a review of the book "From Bacteria to Bach and Back" by Daniel Dennett (2017). As is often the case with Dennett's works, the book deals with major philosophical problems addressed in a Darwinian perspective. Żywiczyński discusses some of the main points, such as the emphasis on eliminativism and the new way of conceiving evolution with regard to the concept of Darwinian Spaces, highlighting the strengths and shortcomings of Dennett's explanations. A particularly debated issue is that of language evolution, which is framed within a meme-centric perspective.
Current management of pediatric soft tissue sarcomas
Pediatric soft tissue sarcomas are a group of malignant neoplasms arising within embryonic mesenchymal tissues during the process of differentiation into muscle, fascia and fat. The tumors have a biphasic peak for age of incidence. Rhabdomyosarcoma (RMS) is diagnosed more frequently in younger children, whereas adult-type non-RMS soft tissue sarcoma is predominately observed in adolescents. The latter group comprises a variety of rare tumors for which diagnosis can be difficult and typically requires special studies, including immunohistochemistry and molecular genetic analysis. Current management for the majority of pediatric sarcomas is based on the data from large multi-institutional trials, which has led to great improvements in outcomes over recent decades. Although surgery remains the mainstay of treatment, the curative aim cannot be achieved without adjuvant treatment. Pre-treatment staging and risk classification are of prime importance in selecting an effective treatment protocol. Tumor resectability, the response to induction chemotherapy, and radiation generally determine the risk-group, and these factors are functions of tumor site, size and biology. Surgery provides the best choice of local control of small resectable tumors in a favorable site. Radiation therapy is added when surgery leaves residual disease or there is evidence of regional spread. Chemotherapy aims to reduce the risk of relapse and improve overall survival. In addition, upfront chemotherapy reduces the aggressiveness of the required surgery and helps preserve organ function in a number of cases. Long-term survival in low-risk sarcomas is feasible, and the intensity of treatment can be reduced. In high-risk sarcoma, current research is allowing more effective disease control.
Key words: Pediatric tumor; Rhabdomyosarcoma; Soft tissue sarcoma; Non-rhabdomyosarcoma pediatric soft tissue tumor
INTRODUCTION
Pediatric soft tissue sarcomas are part of a heterogeneous group of tumors originating from embryonic mesodermal tissues during the process of differentiation into various mesenchymal tissue components of the human body. These tumors constitute 6% to 8% of all cancers in children less than 15 years of age [1-5]. Age-standardized incidence rates in Western countries are slightly increased compared with Asian countries [5]. Of all soft tissue sarcomas in this age group, approximately 50% to 60% are rhabdomyosarcoma (RMS), whereas the remainder are non-RMS soft tissue sarcomas (NRSTS), a designation that includes a variety of rarer soft tissue tumors including fibrosarcomas, synovial sarcomas, the extraosseous Ewing's family of tumors, malignant peripheral nerve sheath tumors (MPNSTs) and inflammatory myofibroblastic tumors (IMT) [6,7]. According to the International Classification of Childhood Cancers, version 3, Kaposi sarcoma is also categorized as an NRSTS tumor [8]. Approximately two thirds of RMSs are diagnosed before 6 years of age, and the incidence decreases with age [7,9]. In contrast, NRSTSs occur in older children, increasing in incidence throughout the adolescent years [6]. In African countries wherein the human immunodeficiency virus is endemic, an exceptionally increased incidence of Kaposi sarcoma has been reported [2]. Although most soft tissue sarcomas occur sporadically, these lesions are associated with cancer predisposition syndromes in some patients (e.g., Li-Fraumeni syndrome, which is linked to p53 germline mutations). Neurofibrosarcomas typically develop in individuals affected with neurofibromatosis type 1, an autosomal dominant disorder caused by mutations in the neurofibromatosis 1 gene (NF1). Individuals harboring germline mutations of NF1 are also prone to the development of embryonal RMS [10]. At the somatic level, specific chromosomal translocations and the expression of chimeric transcription factors are molecular signatures of a number of pediatric sarcomas. In RMS, the PAX-FOXO1 fusion is characteristic of the unfavorable-histology, or alveolar, RMS. Such specific molecular patterns help differentiate sarcoma subtypes in which accurate pathological diagnosis may be difficult at the histopathological level.
The outcomes of pediatric soft tissue sarcomas have improved significantly during the past 3 decades [11]. The prognosis of pediatric soft tissue sarcoma, particularly RMS in younger children, is far better than that for sarcomas in adults. With modern evidence-based medicine, a multidisciplinary therapeutic approach not only increases survival rates but also provides a better chance to preserve the affected organ, particularly in the extremities and genitourinary organs. This article reviews current management practices for pediatric soft tissue sarcomas, with an emphasis on RMS and some soft tissue tumors that are more commonly found in the pediatric age group.
SARCOMA
As soft tissue sarcomas are derived from primitive mesenchymal cells during their development into various mature mesenchymal tissue types (including muscle, fascia and fat), these tumors can be located in any part of the human body. The most common sites of primary RMSs are the head and neck, the genitourinary system and the limbs. The classic presentation is a growing lump that may or may not affect the function of nearby organs. RMS in some organ systems may cause specific symptoms. For example, frequent urination can be an initial presentation of an RMS that arises within the urinary bladder. Obstructive jaundice is one manifestation of bile duct RMS. Multiple plexiform neurofibromas are benign tumors that may precede neurofibrosarcoma in an individual with neurofibromatosis. Localization of the tumor site and its relation to the surrounding organs is typically accomplished by an imaging study, preferably magnetic resonance imaging (MRI) and/or computerized tomography (CT) [12]. From a surgical standpoint, the location, proximity to vascular structures, and potential morbidity caused by surgical resection determine the "resectability" of a sarcoma. To date, no serum markers are available for the diagnosis of soft tissue sarcomas. Image-guided core needle biopsy typically, but not always, provides a definitive diagnosis [13]. During a biopsy, extra tissue can be collected for further studies, i.e., electron microscopy and molecular diagnosis. Repeat biopsy using an open technique is performed when a histopathological diagnosis cannot be made upon examining the small strip of tissue obtained from a needle core. Suspected lymph node metastasis should be confirmed by histopathology, particularly in sarcoma of the limbs and the paratesticular area.
Pretreatment clinical staging aims to categorize the disease according to the tumor site, size, local invasion, regional lymph node involvement and distant metastasis. The metastatic workup includes bone marrow aspiration/biopsy, bone scintigraphy, and axial imaging studies of the brain, lung and liver (CT or MRI).
A spinal tap for cerebrospinal fluid is indicated in cases of suspected parameningeal tumor. A recent systematic review suggested the potential benefit of a functional imaging study, such as positron emission tomography (PET-CT), for increasing the accuracy of pretreatment staging, particularly in the evaluation of nodal status and distant metastasis [14,15]. Sentinel lymph node biopsy using a radiotracer exhibits feasibility and good concordance with PET-CT results in pediatric soft tissue sarcomas [16,17].
RMS
RMS is a malignant mesenchymal tumor originating from immature striated muscle. Approximately 40% of RMSs occur in the head and neck region, 20% at genitourinary sites, 20% in the extremities, and 20% in other locations [9,12] (Figure 1). On hematoxylin and eosin histology, the tumor is characterized by the presence of spindle-shaped or small round-cell rhabdomyofibroblasts with eosinophilic cytoplasm. Cross-striations can be observed in some cases with relatively high tumor differentiation. Immunohistochemical studies that support the diagnosis of RMS include actin, desmin, myoglobin, myogenin and MyoD. Pediatric RMS cases are generally categorized into 2 types: embryonal RMS (80%) and alveolar RMS (15%-20%) [12]. The botryoid subtype is a variant of embryonal RMS commonly located in the genitourinary tract, vagina, and biliary and nasopharyngeal sites. The spindle cell subtype is another subtype of embryonal RMS found in paratesticular locations. Alveolar RMS is observed in older children and generally has a more unfavorable histology.
After the diagnosis is made, pretreatment staging is performed according to a standard classification, such as the Intergroup Rhabdomyosarcoma Pretreatment Staging Classification (Table 1). The value of pretreatment staging involves determining disease prognosis. In addition to the stage, the completeness of tumor removal defines the "clinical group" of RMS. Management plans for RMS can be divided into local control and systemic therapy and generally rely on risk categorization as determined by the Intergroup Rhabdomyosarcoma Studies (IRS) stage together with the clinical group (Table 2). Surgery therefore has an integral role in the initial stages of decision-making on multimodality treatment in pediatric RMS.
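To make the logic of this risk assignment easier to follow, the following is a minimal illustrative sketch, not a clinical tool and not a full implementation of the IRS/COG criteria, of how histology, pretreatment stage and clinical group might be combined into a risk category; the simplified rules encoded here are an approximation of the scheme summarized in this review.

# Illustrative sketch only: a simplified mapping from histology, pretreatment
# stage and IRS clinical group to a risk category, approximating the scheme
# summarized in this review. It is not a complete or clinical implementation
# of the IRS/COG criteria.
def rms_risk_group(histology: str, stage: int, clinical_group: str) -> str:
    histology = histology.lower()
    group = clinical_group.upper()
    if stage == 4 or group == "IV":              # metastatic disease
        return "high"
    if histology == "alveolar":                  # non-metastatic alveolar RMS
        return "intermediate"
    if histology == "embryonal":
        if stage in (2, 3) and group == "III":   # gross residual disease
            return "intermediate"
        return "low"                             # e.g., grossly resected, favorable stage
    return "unclassified"

print(rms_risk_group("embryonal", 1, "I"))    # low
print(rms_risk_group("alveolar", 2, "III"))   # intermediate
print(rms_risk_group("embryonal", 4, "IV"))   # high

In practice the assignment is made from the complete classification in Table 1 and Table 2 rather than from a simplified rule set like this one.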
Local control
Surgery has been the most effective method to eliminate pathology. Surgery should be conducted in a manner that maintains function and cosmesis. In primary surgery, the extent of the initial surgery is generally subject to the judgment of the surgical team. In principle, resectability means that a tumor and its tumor-free surrounding tissue can be removed without operative risk or unacceptable postoperative morbidity. In cases in which an excisional biopsy is performed without awareness of an adequate surgical margin, re-excision of the tumor bed should be considered [18,19].
[Table 1 Intergroup Rhabdomyosarcoma Study pretreatment staging and clinical grouping classification. Clinical groups: Ⅰ: localized disease, completely resected; Ⅱ: grossly resected tumor with evidence of regional spread (ⅡA: grossly resected tumor with microscopic residual disease; ⅡB: involved regional nodes completely resected with no microscopic residual disease; ⅡC: involved regional nodes grossly resected with evidence of microscopic residual disease); Ⅲ: incomplete resection with gross residual disease after biopsy or after gross or major resection of the primary tumor; Ⅳ: distant metastatic disease present at diagnosis. Staging notation: T1: confined to the anatomic origin; T2: extension and/or fixation to surrounding tissue; size a: ≤ 5 cm, b: > 5 cm; N0: regional nodes not clinically involved; N1: regional nodes clinically involved; Nx: clinical status of regional nodes unknown; M0: no distant metastasis; M1: distant metastasis present (includes positive cytology in pleural, peritoneal or cerebrospinal fluid).]
[Table 2 Risk group stratification for rhabdomyosarcoma according to the International Rhabdomyosarcoma Study. The intermediate-risk group includes embryonal histology with pretreatment stage 2-3 and clinical group Ⅲ, and alveolar histology with pretreatment stage 1-3 and clinical group Ⅰ-Ⅲ; the high-risk group is any histology with pretreatment stage 4 and clinical group Ⅳ.]
In general, factors determining resectability include anatomical characteristics, such as site, size and vital structure involvement. However, in cases in which primary definitive surgery is not likely to provide complete resection without significant morbidity, delayed primary resection after upfront chemotherapy should be considered, with the aim of organ salvage without compromising the long-term survival outcome. When delayed primary resection after neoadjuvant treatment is planned, compliance should also be considered. Intractable symptoms from the tumor and psychosocial factors may impact therapeutic compliance. Symptom-control surgery during induction therapy (i.e., temporary urinary diversion in urinary bladder RMS) might be indicated [20]. Radical surgery is indicated in patients who are unable to tolerate intensive chemoradiation.
A recent multi-institutional data review demonstrated that approximately 90% of clinical group Ⅲ embryonal RMS patients experienced a volume reduction of 33% or greater after induction chemotherapy [21]. Although the effect of the chemotherapeutic response on event-free survival (EFS) remains unclear, the study found that cases with at least a partial response experienced significantly enhanced overall survival (OS), particularly in head and neck RMS [21]. These results were consistent with a recent report using the functional imaging tool 2-fluoro-2-deoxy-D-glucose positron emission tomography from the Memorial Sloan Kettering Cancer Center suggesting that the response after induction chemotherapy significantly predicted both EFS and OS [22]. An earlier report from the Intergroup Rhabdomyosarcoma Study Ⅳ (IRS-Ⅳ) revealed an 81% response to chemotherapy in group Ⅲ RMS cases, with no significant difference in the response rate between embryonal and alveolar RMS, and the size of the initial tumor had no influence on the response [23]. According to both studies, parameningeal RMS appeared to have a poorer response rate even when chemotherapy was administered with radiation [24]. Although the number of cases was lower, genitourinary tract sites (except bladder and prostate) exhibited better response rates [21]. The delayed primary resection strategy has reduced the extent of surgery in pelvic RMS. Pelvic exenteration, a historical standard in bladder and vaginal RMS, is rarely practiced today.
Second-look exploration aims to confirm the clinical/radiological response and to achieve oncologic resection when possible. Imaging evaluation may underestimate the degree of the response. According to IRS-Ⅲ data, 46% of patients who achieved partial remission were found to be in complete remission at surgical exploration, and an additional 28% could be converted to complete remission. In addition, 30% of patients who had clinically stable disease after induction chemotherapy exhibited pathological complete remission, and an additional 43% could be converted to complete remission [23]. To achieve oncologic resection, radical organ removal must be performed in some situations. In urinary bladder RMS that arises at the base of the bladder or prostate, a partial cystectomy is not sufficient given the high risk of local failure. Total cystectomy with a urologic conduit is a surgical option that potentially leads to long-term, disease-free survival with an acceptable quality of life. Bladder-preserving surgery is reserved for cases with a good response to induction chemoradiation therapy. In addition, the tumor location must allow a 2- to 3-cm tumor-free margin, and at least two-thirds of the bladder must be retained [20,25]. In vaginal RMS, residual tumor after chemotherapy is an indication for total hysterectomy with gonadal preservation [26]. Pancreaticoduodenectomy is the operation of choice in cases in which the RMS involves the distal common bile duct [27].
Lymph node management is of prime importance in the pretreatment staging and clinical grouping of RMS, both of which determine the risk category. Radical lymph node dissection does not impact outcome. Enlarged nodes detected clinically or by radiologic evidence should be excised for histopathological examination (Figure 2). Regardless of radiologic evidence, lymph node sampling is indicated in extremity RMS and for children older than 10 years with paratesticular tumor [9,11,28]. When an adjacent node is positive, more distant nodes should be searched for and biopsied. To reduce the morbidity caused by extensive lymph node sampling, the concept of sentinel lymph node sampling, which is the current standard in melanoma and breast cancer, has also been adapted for pediatric soft tissue sarcoma. Trials in pediatric sarcomas have had relatively small numbers in each series [16,17,29]. Most studies used the lymphoscintigraphy technique and reported that the technique was feasible in pediatric sarcomas; however, specific data regarding identification rates and false-negative rates in RMS remain inconclusive.
Radiation therapy is unnecessary for embryonal RMS in clinical group Ⅰ (completely resected), whereas it provides better failure-free survival in alveolar RMS. Radiation enhances local control in cases with residual disease after definitive surgery, positive locoregional lymph nodes, and unresectable RMS after tumor biopsy. Radiation doses to microscopic residual tumors (total 36 Gy) are typically less than those for gross residual or primary unresectable tumors (50.4 Gy) [11]. Orbital tumors are an exception, as clinical group Ⅲ orbital tumors require 45 Gy.
Data from the German trial CWS-91 indicated that hyperfractionated accelerated radiotherapy may reduce the total radiation dose in RMS (32 Gy in low-risk and 48 Gy in high-risk patients) without compromising treatment outcomes [30]. Alternative radiation therapy techniques, such as intensity-modulated radiation therapy, brachytherapy, and proton beam therapy, are used in some centers with the aim of reducing locoregional side effects [31].
Systemic therapy
Chemotherapy is an essential component of the multimodality treatment of RMS. The standard regimen in non-metastatic RMS is a combination of vincristine, actinomycin D and cyclophosphamide (VAC) [32]. Omitting cyclophosphamide from the regimen has been attempted in low-risk RMS cases to reduce the cumulative dose of cyclophosphamide (IRS D9602 protocol). Although VA produced an excellent outcome in a subset of low-risk RMS cases, including group Ⅰ-Ⅱ, stage 1-2 and group Ⅲ orbital tumors (subset 1, Table 2), the data suggested that cyclophosphamide should be retained with vincristine and actinomycin D in the other subset of low-risk RMS cases (subset 2: group Ⅰ-Ⅱ, stage 3 and group Ⅲ, stage 1 except orbital tumor), because failure-free survival was poorer than that of comparable patients in the IRS-Ⅳ study who received the triple-drug regimen [33]. IRS-Ⅳ data also demonstrated that substitution of cyclophosphamide with ifosfamide (VAI), or substitution of actinomycin D/cyclophosphamide with ifosfamide and etoposide, did not improve failure-free survival in non-metastatic RMS [34]. A subsequent study from the Children's Oncology Group (COG), directed toward a shorter duration of VA and a dose reduction of cyclophosphamide (ARST0331) in low-risk RMS patients, was recently published. Although the study reported an increased incidence of local failure with use of the shorter therapy, it recommended its use in low-risk RMS cases given the reduced toxicity [35]. For intermediate- and high-risk patients, successive COG trials have attempted to improve the survival outcome by incorporating novel agents, such as doxorubicin, ifosfamide and etoposide (VDC/IE) and irinotecan (VAC/VI), with the aim of reducing the cumulative cyclophosphamide dose [36]. Various molecular targeting drugs are being explored in high-risk RMS cases, including agents directed at vascular endothelial growth factor (bevacizumab), mTOR (temsirolimus) and IGF-1R (temozolomide). A phase Ⅱ trial of temozolomide has demonstrated its safety and feasibility; however, the preliminary response rate was not impressive [37]. For metastatic RMS, another study found that the incorporation of VDC/IE or VI with VAC therapy resulted in improved outcomes in embryonal RMS [38].
Outcome of current multimodality management in RMS
Since the establishment of the International Rhabdomyosarcoma Study Group in 1972 (currently the Cooperative Soft Tissue Sarcoma Study Group), the survival of pediatric RMS patients has been steadily improving. Before the era of multimodality treatment, surgery alone resulted in survival rates of less than 20% [11]. With the new available treatments, the five-year OS increased from 55% in IRS-Ⅰ to 63% in IRS-Ⅱ and to 71% in IRS-Ⅲ and IRS-Ⅳ. Data from IRS-Ⅱ to IRS-Ⅳ revealed an 88% 3-year failure-free survival in low-risk embryonal RMS. Intermediate-risk embryonal RMS had a 4-year failure-free survival rate of approximately 68% to 78%; however, survival in high-risk patients remains poor at less than 25% [39].
NON-RMS SOFT TISSUE SARCOMAS
NRSTSs are a heterogeneous group of rare mesenchymal tumors that exhibit a wide variety of histopathologies and biologies. The majority of NRSTSs occur more frequently in adult patients, and the prognosis is generally poorer than for pediatric sarcomas. Given their heterogeneity, ambiguity in pathological diagnosis is common, and care should be taken when obtaining tissue samples [40]. A multidisciplinary conference before the initiation of treatment for any individual case allows the team to arrive at a consensus and understand the role of each discipline in the treatment process. Surgery has a primary role in the treatment of resectable NRSTSs, whereas adjuvant treatment relies on a Children's Oncology Group risk stratification guideline [41] (Table 3).
Basically, radiation therapy is administered to patients whose resection margins are close to the tumor (except for very low-risk tumors). Chemotherapy provides a poorer response than in pediatric RMS and is advocated in select types of NRSTS. Ifosfamide and doxorubicin are backbones recommended as postoperative adjuvant therapy for localized resectable STSs [42,43]. A recent systematic review found that autologous hematopoietic stem cell transplantation following high-dose chemotherapy in locally advanced or metastatic NRSTS did not result in better OS than standard-dose chemotherapy [44].
Extraosseous Ewing's sarcoma family of tumors
Extraosseous primitive neuroectodermal tumors, namely, Ewing's sarcoma and Askin tumor of the chest wall, are grouped together as the Ewing's family of tumors because they share a chromosomal translocation, t(11;22)(q24;q12), leading to a chimeric fusion, EWSR1-FLI1 [45]. Extraosseous Ewing's sarcoma presents predominately in the second decade of life.
The tumor comprises 15% to 20% of all Ewing's sarcomas [46,47]. The tumor can occur anywhere in the body but commonly presents on the extremities, chest and pelvis. Histologically, the tumor belongs to the small round blue cell group and demonstrates positive immunoreactivity to the surface glycoprotein CD99. Poor prognostic indicators include axial site tumors, particularly in the pelvic region; large tumor size; late stage; poor response to induction chemotherapy; advanced age; and high serum lactate dehydrogenase levels [46-49]. The definitive treatment for extraosseous EFSTs is surgical removal. Complete resection is the best option for cure, and the likelihood of achieving negative surgical margins is increased when induction chemotherapy is administered [50]. Although these tumors are relatively sensitive to radiation, radiation is reserved for cases with positive surgical margins or incompletely resected tumor because late effects of radiation, such as a second malignancy, are of concern. Postoperative chemotherapy aims to improve OS and reduce the likelihood of local recurrence. The therapeutic regimen in extraosseous EFST follows that used in either NRSTS or Ewing's sarcoma of the bone and typically comprises ifosfamide/etoposide with or without carboplatin (ICE) alternated with a combination of vincristine, doxorubicin and cyclophosphamide (VDC) [47]. The results from the EICESS-92 study and the successive trial Euro-Ewing 99-R1 from the European InterGroup Cooperative Ewing's Sarcoma Study concluded that ifosfamide can be substituted with cyclophosphamide in the consolidation phase in standard-risk EFST (localized tumor with either good histological response after induction chemotherapy, small tumor resected at diagnosis, or receiving radiotherapy alone as a local treatment) [48,51]. A report from the French Society of Pediatric Oncology (SFOP EW93) suggested that induction with cyclophosphamide and doxorubicin followed by histopathological response-based chemotherapy (VAC or VAC/VIE or IE + high-dose busulfan/melphalan) provided superior outcomes to those of an ifosfamide-based regimen (VAI) for all cases [52]. Other studies reported a five-year OS of between 60% and 70% in nonmetastatic extraosseous EFSTs [46,52,53], whereas another study reported that metastatic EFST exhibited a 5-year OS of approximately 25% [53].
MPNST
MPNSTs, malignant schwannomas, neurofibrosarcomas and neurogenic sarcomas account for approximately 6% of all NRSTSs [54], and approximately half of these cases are associated with neurofibromatosis type 1 syndrome [55]. An individual with the NF1 mutation has a cumulative 8% to 13% lifetime risk of developing MPNST [56]. MPNST develops within benign neurofibromas in NF1 patients [57]. In the pediatric age group, the incidence increases with age, with more than 80% of cases diagnosed at age 10 years or older [58]. Among pediatric NRSTSs, the tumor has the worst prognosis, with a 5-year OS of 43% to 59% [59]. Complete surgical removal is the only chance for cure. Unfortunately, a number of MPNSTs involve the nerve root, preventing complete removal (Figure 3). Radiation therapy is recommended in cases with residual tumor after surgery; however, no evidence indicates that this improves survival [60]. Studies have reported that adjuvant chemotherapy exhibits only minimal benefit [58,61,62].
Synovial sarcoma
Synovial sarcoma (SS) is an aggressive spindle cell tumor that accounts for approximately 10% of all STSs [63]. Although the tumor is principally located in the lower extremities, primary SS at other sites (including the head and neck, hands, retroperitoneum, digestive system and mediastinum) has been reported [64-66]. Histologically, SS contains spindle cells with a varying degree of epithelial differentiation [67]. On immunohistochemical study, SS is marked with both mesenchymal and epithelial markers. The cytogenetic signature of SS is a reciprocal translocation t(X;18)(p11.2;q11.2) that leads to a chimeric fusion between SS18 from chromosome 18 and one of the SSXs (SSX1, SSX2 or SSX4) from chromosome X. The SS18-SSX2 fusion protein activates canonical Wnt/beta-catenin signaling, which suggests a future therapeutic target in a subset of SS [68,69]. The current management of SS is based on risk categorization, and risk determinants include the clinical group (as in RMS), size (5 cm cut-off) and site [70]. Low-risk tumors include group I SSs that are less than 5 cm in size. Axial site tumors (head and neck, trunk, lungs and pleura) are considered high risk [71]. According to the European Pediatric Soft Tissue Sarcoma Study Group Trial (EpSSG NRSTS2005), low-risk SSs are best treated with surgery alone, with 91.7% experiencing 3-year EFS and 100% OS [71]. In that study, the surgical strategy recommended in most low-risk cases was conservative surgery. Survival in intermediate-risk SSs (group I, size > 5 cm and group II) after surgery followed by chemotherapy (ifosfamide and doxorubicin) with or without radiation is comparable with that of the low-risk group. Chemotherapy is the mainstay treatment in high-risk (group III or axial SS) patients. The chemotherapy response rate in group III SS was 55%, and OS was 74% [71].
Congenital infantile fibrosarcoma
Unlike tumors in the adult-type NRSTS group that are typically found in teenagers and adolescents, congenital infantile fibrosarcoma (CIF) can be noted during the first month of life and is often misdiagnosed as a hemangioma or vascular malformation. A rapid growth rate and ulceration are clinical clues necessitating a biopsy [72]. Histologically, CIF is densely packed with spindle cells arranged in bundles and fascicles. Tumor cells typically exhibit positive immunoreactivity with the mesenchymal marker vimentin but are negative for desmin and S100 protein. A chromosomal translocation t(12;15)(p13;q25), which leads to a fusion ETV6-NTRK3, has been reported. The tumor is locally aggressive, and distant metastasis is rarely reported. Destruction of adjacent bony structures can be observed (Figure 4). Surgical removal of the lesion is the recommended primary treatment. Adjuvant treatment is generally unnecessary except when the mass is very large and involves vital structures. In such instances, neoadjuvant chemotherapy may help downsize the tumor [73,74]. Prognostic factors include the site and extent of the lesion at diagnosis. Extremity tumors have a more favorable outcome than do axial tumors. In addition, pediatric CIF has a better outcome than adult fibrosarcoma. The five-year OS is approximately 90% [75,76].
Desmoplastic small round cell tumors
Desmoplastic small round cell tumor (DSRCT) is a rare, highly aggressive mesenchymal tumor originating on the peritoneal surface, typically in an adolescent [77]. The tumor can also be found at other sites, such as the head and neck, pleura, kidneys, ovaries and testes [78-81], and was first described in 1989 in a pathological case report by Gerald and Rosai [82]. Histologically, DSRCT exhibits small round cells arranged in nests within abundant desmoplastic stroma [77]. Central necrosis and trabecular or Indian-file arrangements are also observed [83]. The tumor expresses polyphenotypic differentiation with coexpression of epithelial, mesenchymal and neuronal markers [77]. In addition, nuclear staining of the WT1 protein has been reported [83]. The tumor is highly aggressive, and approximately 60% of patients die of the disease within 2 years [84]. Complete resection is not possible in the majority of cases. Surgical debulking of the primary tumor followed by radiation therapy is recommended [85,86]. The tumor appears to respond to multi-agent chemotherapy consisting of cyclophosphamide, doxorubicin, vincristine, ifosfamide and etoposide; however, recurrent disease is common [85,87]. The use of alternative therapies, including molecular targeting therapy and intraperitoneal infusion of chemotherapy, is reported infrequently [88-90]. In one study, the 3-year OS was reported at 44%, with a 5-year OS of 15% [85].
IMT
Inflammatory myofibroblastic tumor (IMT, also known as inflammatory pseudotumor or plasma cell granuloma) is a rare benign tumor with recurrence potential that most often occurs in children and young adults. The lung is the most common site of IMT. Other reported sites include the urinary bladder, intestine and mesentery, spleen, liver and kidney [91-94]. The etiology of IMT may include certain infections, such as Epstein-Barr virus. Whether the tumor is a true neoplasm or an inflammatory response remains controversial. However, recurrence after surgery is common and malignant transformation has been reported [95,96]. Studies have demonstrated that a number of IMTs involve fusion between the ALK gene on chromosome 2 (2p23) and various fusion partners, including TPM3, TPM4, CLTC, CARS, RANBP2, ATIC, SEC31L1 and PPFIBP1 [97-103]. CT typically reveals a coin lesion that is difficult to differentiate from other causes of similar lesions. The diagnosis is typically made by tissue biopsy, which often exhibits spindle-shaped myofibroblast-like cells and chronic inflammation comprising plasma cells, lymphocytes and histiocytes [104]. Surgical resection is the only treatment option. Radiation and chemotherapy have no role in IMT (Figure 5).
STSs in the pediatric age group are a heterogeneous group of rare mesenchymal tumors. Survival outcome in pediatric STS has improved since cooperative studies were initiated by various international organizations, particularly the International Rhabdomyosarcoma Study Group. Treatment of these tumors relies on knowledge of their natural history and tumor biology, as this information is used to categorize STSs according to their risk. Although surgery has been the main treatment in localized low-risk tumors, good outcomes are not achieved without adjuvant radiation and chemotherapy. Future studies in the treatment of STS are directed toward the use of molecular diagnosis as an integral part of tumor classification. While novel modalities for the treatment of advanced stage tumors are under investigation, trials should be conducted on the reduction of treatment intensity in low-risk patients.
Review of DC-DC Partial Power Converter Configurations and Topologies
The Partial Power Processing (PPP) concept has garnered attention as it enables the down-sizing of converter and component ratings. Unlike conventional power processing, PPP addresses a portion of the transferred power, leading to a reduction in conversion losses. Throughout this paper, the state of the art of isolated and non-isolated DC-DC converter topologies will be revised. Partial Power Converter (PPC) systems represent one of the main streams of PPP, which, based on isolation requirements and converter connections, can further be divided into isolated converters, such as Input-Parallel-Output-Series (IPOS), Input-Series-Output-Parallel (ISOP), and Input-Series-Output-Series (ISOS), or non-isolated converters. This work intends to evaluate and differentiate the characteristics of each type of topology while developing analytically possible connections that may require further research and reviewing metrics that help in fair comparisons of different PPC arrangements, operating under different conditions. A thorough revision is provided for DC-DC converter topologies due to their increased importance in present-day applications, such as energy storage
Introduction
Recent trends in electrical power, such as the rising electrification across sectors and the expanding integration of renewable resources and energy storage applications, have ignited heightened interest in power electronics and power converters [1-3]. The demand in the power electronics field has led to the development of new concepts as well as the revisiting of concepts that are already in use, which brought the PPP field into existence. One of the major examples of PPP related to renewable energy generation is the Doubly-Fed Induction Generator (DFIG) [4]. The DFIG was introduced to the wind energy generation field as an advancement to overcome the disadvantages of the Adjustable-Speed Generator (ASG) [5]. The main advantage of DFIG over ASG is that it enables the use of a partial power converter. This leads to a reduction in the total cost of the system due to reducing the size of the inverter as well as the filter's passive components. In PPP, as the name indicates, a power converter is used to process only a part of the whole power, thus reducing the losses and permitting a reduction in component size [6].
The majority of the applications of PPP are based on DC-DC converters due to the nature of the current flow, and the fact that several applications have varying input or output voltage, which is a common situation for PVs and battery applications, while it is not common to encounter the same in AC, where the input and output voltages are usually well defined.Nevertheless, PPC can be integrated into the DC side of DC-AC converters to obtain a PPP feature for AC applications.Furthermore, pure DC-AC topologies also exist, such as in the work of [7], where they propose handling power conversion by higher rating IGBT master units, and a SiC MOSFET slave unit is coupled by line-frequency transformer to deal with partial power voltage regulation and harmonic compensation.Another interesting application for AC-DC appears as an ancillary feature in the work of [8], where PPP is implemented to provide hold-up time compensation.
Another application of DC-AC partial power topologies was demonstrated by [9], where the inputs of two DC-DC converters were connected in parallel to the same PV module, and hence a differential-mode sinusoidal output was achieved directly. This paper will follow the nomenclature suggested by Anzola et al. [4], which segregates PPP into three broad families, as demonstrated in Figure 1. The first group in Figure 1 is the Differential Power Processing (DPP) topology. It deals mainly with current differences in series-connected elements; this is also referred to as Parallel Current Regulator (PCR) in the work of Santos, Zientarski, and Martins [2]. The Photo-Voltaic (PV) optimizer is one salient example of the Differential Power Converter (DPC). Active battery cell balancing topologies also belong to this category. Such devices deal with the mismatch current between elements connected in series [1,4,10]. This is a desirable feature that extends the capability of PV arrays or battery cells, where those systems are usually composed of series-connected sources, with the performance of the whole group limited by the weakest link.
The second branch of the PPP family tree is PPC (referred to as Series Voltage Regulator (SVR) by [2]), which can further be distinguished into two groups based on isolation requirements; however, it is to be noted that whether isolation is required for the converter does not imply that the overall system will not have inherited galvanic isolation between its input and its output. The main advantage of PPC operation is its ability to interface a varying voltage on one side (either source or load) to a fixed voltage on the other side, which is a valuable feature for Maximum Power Point Tracking (MPPT) systems and battery charging applications [6]. The sub-group of the isolated topologies further splits into sub-groups: Input-Series-Output-Parallel (ISOP) (also referred to as Series-Input-Parallel-Output (SIPO) in some studies), Input-Parallel-Output-Series (IPOS) (or Parallel-Input-Series-Output (PISO)), Input-Series-Output-Series (ISOS), and Input-Parallel-Output-Parallel (IPOP). Various possible arrangements are revised in the upcoming Section 3. The sub-group of non-isolated PPP deals solely with the Fractional Power Converter (FPC), and will be revised in Section 4.1.
The last group seen in Figure 1 is the mixed strategies, where topologies belonging to this group mix the two previous designs (i.e., the DPC and PPC) in order to obtain the advantages of both groups while avoiding their shortfalls [4].
Section 2 will revise the present-day parameters used to benchmark designs and the performance of proposed PPP topologies and comment on the grouping and categorizing of the various PPP families.Section 3 analyzes the fundamental current and voltage relations of the isolated PPP and uses the parameters developed in Section 2 to derive theoretical operating limits.An assorted collection of designs is also reviewed in Section 3.2 (for IPOS), Section 3.4 (for ISOP), and Section 3.6 (for ISOS).Section 4.1 will review possible FPC architectures, and some examples from the literature will be revised in Section 4.2.Finally, Section 5 includes the conclusions of this work.
Comparison Metrics
Several aspects and merits are implemented to enable comparison between various topologies. This section will revisit and define such attributes, starting with one of the main features of any power conversion system, efficiency (η). Given the fact that the PPC treats only a fraction of the whole transferred power, this will give rise to two different dimensionless parameters [4]: system efficiency η_sys, which is defined as the ratio of load power (P_Load) to the source power (P_Source), and further in terms of Source/Load currents (I_Source, I_Load) and Source/Load voltages (V_Source, V_Load), as stated in (1) [11-13].
Another efficiency is directly related to PPC operation, which is the converter efficiency (η_conv), defined in (2) [4], where (V_in, V_out) are the input and output voltages of the converter, respectively, and (I_in) is the current entering the converter, while (I_out) is the current leaving the converter. The sign convention indicates power flow through the converter; that is, current entering through the positive terminal indicates power flow into the converter, while current flowing out of a positive terminal indicates power flowing out of the converter.
The nature of PPC converter operation requires the definition of another attribute, which is the processed power ratio K_pr, presented in Equation (3). K_pr defines the ratio of the power processed by the converter (P_conv) to the overall power drawn from the source (P_Source) [4,13,14].
In addition to the previous equations, the static voltage gain (G_V) is also a key parameter in defining PPC operation, given in Equation (4) [6,15-17].
It is worth mentioning that this work considers G_V to be always positive, i.e., G_V ≥ 0, since negative values in a given topology (indicating a reversed source or load) will be equal in magnitude (i.e., taken as absolute values) or refer to another topology, as will be seen in Section 3.
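Since the bodies of Equations (1)-(4) did not survive extraction, the following is a reconstruction that is only assumed to match the originals; it is written to be consistent with the verbal definitions given above (load-to-source power ratio, converter port quantities, processed power ratio, and static voltage gain):

η_sys = P_Load / P_Source = (V_Load · I_Load) / (V_Source · I_Source)    (1)
η_conv = (V_out · I_out) / (V_in · I_in)    (2)
K_pr = P_conv / P_Source    (3)
G_V = V_Load / V_Source    (4)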
The stress factor coefficients provide a quantitative approach to evaluate and compare converter designs (topologies) [15,18-20], independently of their power ratings. The three main components of any power converter are semiconductor switches, magnetic windings, and capacitors. The stress factor calculations can be simplified assuming a lossless converter (i.e., η_conv = 100%) and further assuming a large enough inductor to suppress any ripple current [15]. Equations (5)-(7) relate to the Semiconductor Stress Factor (SCSF), Winding Stress Factor (WSF), and Capacitor Stress Factor (CSF), respectively [21,22].
where W_j is the weight factor of the jth component and can be considered 1 as a starting point [18], and V_max,SC and V_max,C are the maximum voltages seen or blocked by the semiconductor switch and the capacitor, respectively. I_rms,SC, I_rms,L, and I_rms,C stand for the root-mean-square currents passing through the semiconductor switch, the inductor, and the capacitor, respectively. P_in is the input power to the converter and D_i is the duty of the ith cycle. V_max,L is the maximum voltage seen by the magnetic component (i.e., inductor). |V_i| is the absolute value of the winding voltage in the ith operating state [18]. After developing the stress factor for each single component, the global stress factors for all semiconductors, capacitors, and inductors can be summed as in (9)-(11).
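The stress-factor equations themselves were also lost in extraction. As a rough sketch only, the semiconductor term and the global sums are assumed here to follow the component-stress-factor formulation commonly cited in this literature; the winding and capacitor factors would follow the same pattern with the corresponding voltages and currents:

SCSF_j = (W_j / Σ_i W_i) · (V_max,SC,j)^2 · (I_rms,SC,j)^2 / P_in^2    (assumed form of (5))
SCSF = Σ_j SCSF_j,  WSF = Σ_j WSF_j,  CSF = Σ_j CSF_j    (assumed form of (9)-(11))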
Among the reviewed literature, several papers have implemented the component stress factor to evaluate their topologies and differentiate their performance in various operations. Chao [23] has demonstrated an inverse relation between stress factors and the turns ratio in their IPOS and ISOP PPC converters. Values well below 0.01 were reported in [15] for all SCSFs, WSFs, and CSFs. For perspective, ref. [24] reported figures larger by orders of magnitude for the component stress factors.
The work of Zientarski et al. [25] illustrates the component stress factors over a range of G_V for two proposed PPCs: the Full-Bridge Series-Connected Partial Power Processor (FBSPPC) and the Full-Bridge Push-Pull Series-Connected Partial Power Processor (FBPPSPPC). Both topologies show a reduction in component stress factors as G_V approaches unity.
Load power P_Load also has a direct impact on the component stress factors, where it is observed in [20] that increasing P_Load will lead to larger stress on components in the case of a full-power rated converter compared to a PPC.
Lastly, non-active power (N) is the energy stored in the reactive element (capacitor or inductor) and not transferred from the input to the output of a DC-DC converter operating in the steady state [2,20], defined also in Institute of Electrical and Electronics Engineers (IEEE) standard 1459 [26] and measured in Volt-Ampere Reactive (VAR). Inductor non-active power (N_L) and capacitor non-active power (N_C) are evaluated by (12) and (13), where E_L and E_C are the energy stored in the inductor and capacitor in joules, D is the dimensionless duty cycle of each switching period T_s, and v(t) and i(t) are the instantaneous voltage and current.
Equation (14) represents the Fryze power factor (F), which defines the ratio of non-active to active power [2]. It also contributes to the converter power losses [27] and requires over-sizing the components, although it does not contribute to the transfer of real power. Isolated topologies show a direct influence of the transformer turns ratio (n) on the Fryze power factor. For a given topology, G_V will have a direct impact.
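Following the verbal definition just given (the body of Equation (14) was likewise lost), the Fryze power factor would be expected to take the form of the total non-active power divided by the active (transferred) power, for example:

F = (N_L + N_C) / P    (assumed form of (14))

where summing the inductor and capacitor contributions is an assumption made here for illustration; a converter with F close to zero stores and recirculates little energy per switching period relative to the power it actually transfers.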
Figure 2 indicates how the Fryze power factor can be affected by different topologies and operation modes.Both the Symmetrical Half-Bridge Current-Fed Power Processor (SHBCFPP) and Full-Bridge Phase-Shift Current-Fed Power Processor (FBPSCFPP) appear in the work of [2] as prototypes rated for 2200 W. The FBSPPC and FBPPSPPC are 225 W and 112.5 W, respectively, presented by Zientarski et al. in [25].
Isolated PPP Architectures
The required isolation refers solely to the converter topology, since a galvanic path will exist between the system's input and output. Non-isolated converters cannot be used in these topologies due to two main constraints: the inherent risk of short-circuit and the fact that a non-isolated converter will end up processing full power [28]. A workaround to overcome those shortfalls will be revised in Section 4.1.
As commented earlier, PPC deals with a new way to connect power converters. At first glance, such connections might be misleading in that they influence different topologies. Based on the work of [28], it is proposed to use the concept of a dummy converter as a systematic approach for segregating and evaluating all possible architectures. Figure 3 illustrates the three main connection groups, i.e., PISO, SIPO, and ISOS. IPOP is left out as it represents a specific case study that will be commented on in Section 3.7.
The dashed connection boxes in the same figures indicate the possibility of a variety of connections, which will be examined in the subsequent sections. In accordance with the assigned notations, the power is transferred from the source side into the load side; hence, the DC current flow is fixed in Figure 3 to indicate leaving the source and entering the load.
Input-Parallel-Output-Series Topology (IPOS)
By consulting Figure 3a, and ensuring that V_in is equal to V_Source, four generic connections can be derived. Figure 4 presents the three possible connections while highlighting the series connection between the two converters (in red) and the parallel connection of V_Source to V_in (in blue). The fourth connection is shaded out as it is not realizable, as will be discussed in the upcoming part. Writing the equations of V_Source and V_Load as functions of V_in and V_out can further simplify the interactions between the different topologies. The equations are tabulated in Table 1, showing that V_in is held to V_Source and the three possibilities of V_Load.
Revisiting the voltage equations in Table 1, another feature can be deduced. The V_Load equation in Figure 4a indicates that the system will have overall step-up operation, although the converter itself can be either step-up or step-down; hence, several literature sources refer to this topology as step-up [2,4,29]. Following the same analysis for the V_Load equation in Figure 4b, it shows that it requires a step-up converter to prevent a negative V_Load; if the converter is step-down, it will lead to negative load voltage. The third case, i.e., Figure 4c, needs a step-down converter to maintain a positive V_Load; otherwise, it will lead to negative voltage on the load.
Analyzing the last case by applying Kirchhoff's Voltage Law (KVL) shows V_Load = −V_in − V_out, or in other words, a negative G_V. By trying to apply the same approach to circuit (d) in Figure 3, if the designated I_Load is kept in the direction to flow into V_Load, it has to leave the partial converter from the negative terminal. Following the path of I_Load, it will leave the negative terminal of the load and enter the positive terminal of the dummy converter's output. Flowing directly through the dummy converter, I_Load will leave from the input side's positive terminal, which means it has to join I_Source, and both of them enter the partial converter. This case implies positive power flow (consumption) by both ends of the partial converter, which is unrealizable.
The above features can be illustrated by deriving the relations between the different voltages in terms of K_pr and G_V [11,15,30]. Equations (15)-(17) refer to the topologies of Figure 4a,b,c, respectively. Equation (18), however, is developed to further provide a mathematical proof of the unviability of the circuit in Figure 4d, since it will develop a negative power processing ratio.
The above equations can be further visualized in Figure 5. Referring back to the IPOS (a) V Load formula in Table 1, it can be seen that V Load will always be bigger than V Source ; hence, G V is always ≥1, and this achieves overall step-up operation, and IPOS (a) cannot operate in scenarios where V Load is set lower than V Source .IPOS (b), on the other hand, can work throughout the entire range of G V , which can be translated as having negative V Load (i.e., swapped).However, negative K pr values appear when operating −1 < G V < 0. This can be interpreted as having power flowing in the reverse direction through the converter.Another feature that can be concluded about IPOS (b) is that the majority of power will be processed by the converter, and the converter will never process less than 50% of the total input power.
To realize IPOS (c), V_Load will always be lower than V_Source, as seen in Table 1, and it can also have negative values (reverse connected). This operation mode produces an overall step-down operation for positive values of K_pr.
The last case of IPOS (d) is only plotted for integrity, but it is rendered inapplicable, as seen earlier.
Figure 5. IPOS K_pr(G_V): IPOS (a) from (15) with the part of G_V < 1 shaded out. IPOS (b) from (16). IPOS (c) from (17) with G_V > 1 shaded out. IPOS (d) was plotted to maintain the review integrity.
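To make the IPOS (a) trend concrete, the short numerical sketch below (not part of the original paper) evaluates K_pr directly from the port quantities defined in Section 2, assuming an ideal (lossless) converter, and compares it with the closed-form relation K_pr = 1 − 1/G_V that is commonly reported for this step-up connection. The voltage and power values are arbitrary illustration choices.

```python
# Numerical sketch of the IPOS (a) connection (Figure 4a), ideal converter assumed.
# K_pr is computed from the port quantities and checked against 1 - 1/G_V.

def ipos_a_kpr(v_source: float, g_v: float, p_load: float) -> float:
    """Processed power ratio for IPOS (a), lossless converter, G_V >= 1."""
    v_load = g_v * v_source        # series output: V_Load = V_Source + V_out
    v_out = v_load - v_source      # converter output voltage
    i_load = p_load / v_load       # load current = converter output current
    p_conv = v_out * i_load        # power handled by the converter
    i_source = p_load / v_source   # lossless system: P_Source = P_Load
    p_source = v_source * i_source
    return p_conv / p_source       # definition of K_pr

if __name__ == "__main__":
    for g_v in (1.0, 1.25, 1.5, 2.0, 4.0):
        k_num = ipos_a_kpr(v_source=400.0, g_v=g_v, p_load=1000.0)
        k_closed = 1.0 - 1.0 / g_v
        print(f"G_V={g_v:.2f}  K_pr(numeric)={k_num:.3f}  1-1/G_V={k_closed:.3f}")
```

As expected, the converter processes none of the power when G_V = 1 and approaches full power processing only as G_V grows large, which is the behavior described for the IPOS (a) curve above.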
IPOS-Based Converters
An example of Figure 4a is the step-down Dual Active Bridge (DAB) operation seen in the work proposed by Mishra et al. [31], a battery emulator based on a DAB converter with step-up IPOS topology. Although the DAB was utilized, the authors commented that a common-mode circulating current will be flowing between the input and the output of the converter systems, which requires further study. Analytical work performed by the authors shows that η_sys is always higher than η_conv.
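The observation that η_sys exceeds η_conv follows from the PPP principle itself. Under the assumption that conversion losses arise only in the fraction of power routed through the converter (a simplification, not a result quoted from [31]), a commonly used first-order relation is:

η_sys ≈ 1 − K_pr · (1 − η_conv)

so, for example, a converter with η_conv = 0.95 that handles K_pr = 20% of the transferred power yields η_sys ≈ 0.99.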
Omar et al. [14] utilized a current-fed dual-inductor push-pull converter in step-up formation, also coinciding with Figure 4a. The authors state that the main advantage is soft switching. The proposed design of the converter also permits reverse power flow; however, this operation mode was not analyzed in their work.
Zapata et al. [11] used a single flyback converter, which is claimed to have reduced current ripple at the input and divided the individual converters' power ratings. Table 2 summarizes the salient features of the reviewed systems, while Figure 6 illustrates some designs from the literature. The DAB converter in Figure 6a has its input connected to V_Source, while its output is connected in series to V_Source, and both are connected to V_Load to give a straightforward example of the IPOS (a) case.
The outstanding feature of Figure 6b is that the authors reversed the V_Load. Carrying out KVL around the circuit yields V_Load + V_out − V_in = 0, while if the load was wired as designated, it would yield IPOS (b).
Figure 6c demonstrates another attribute, which is the reversed V_out. The right-hand side of the converter was flipped so that the negative of the converter's output is connected to the positive side of the load, thus fulfilling the IPOS (c) topology.
As can be seen from [16,20,29,33], for example, the PPC efficiency holds high values throughout a wide range of operating conditions, contrary to the full-power converter, which generally achieves high efficiency only at a specific operating point.
Input-Series-Output-Parallel Topology (ISOP)
Applying the same systematic approach of the previous Section 3.1 to the ISOP topology demonstrated in Figure 3b, four connections can be derived, as demonstrated in Figure 7. Table 3 contains a summary of the equations describing the behavior of each topology. By utilizing the definitions of η_sys and G_V, Equations (19)-(21) are developed for the topologies in Figure 7a,b,c, respectively.
By plotting Equations (19)-(22), Figure 8 can visualize the behavior of each ISOP topology. Starting with the ISOP (a) curve, it is seen that G_V is bounded between 1 and 0, since V_Source will always be bigger than V_Load, so G_V < 0 is not realizable. The ISOP (b) and (c) topologies can operate throughout the whole range of G_V (including negative source voltage) with linear characteristics. ISOP (d) is also plotted in Figure 8 for review integrity, but it will not be practically achievable. One of the advantages of ISOP topology is that it reduces stress on semiconductor switches in high voltage applications [36].
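As a sketch of how these limits follow from the Section 2 definitions, and assuming the ideal-converter ISOP (a) relations V_Source = V_in + V_Load, V_Load = V_out, and I_in = I_Source (consistent with the description of Table 3 above), the processed power ratio reduces to a simple function of the gain:

K_pr = (V_in · I_in) / (V_Source · I_Source) = V_in / V_Source = (V_Source − V_Load) / V_Source = 1 − G_V, for 0 < G_V < 1

so an ISOP (a) stage regulating a load at 90% of the source voltage would, under these assumptions, process only about 10% of the transferred power.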
ISOP-Based Converters
The work of Tao, Wang, and Zhuo [13] demonstrated using a CLLC converter to achieve four-quadrant operation. Their work is based on ISOP (a), providing bi-directional power flow between a DC bus and a battery of either a higher or lower voltage.
Renaudineau et al. [37] implemented ISOP (b) to generate rectified sinusoidal DC from a PV string input. In their simulation, they mitigated the harmonics content by relieving the inverter from high-frequency switching and dedicating it to unfolding only.
In [38], Anzola, Aizpuru, and Arruti proposed ISOP (a) for EV fast charging applications. Their simulation shows a steep drop of N_L and N_C as the State of Charge (SOC) builds up. A down-scaled prototype of the PPP demonstrates a reduction of 65% in the size of the magnetic components, i.e., the transformer and inductor, when compared to the full power converter.
Table 4 displays a comparison between the reviewed ISOP systems, while Figure 9 presents examples of ISOP topologies. Figure 9a represents a DAB-based ISOP (a) topology, where V_Load is connected in parallel to V_out of the converter, while the same (i.e., V_Load) is connected in series to V_in, and then the sum of both voltages is connected to V_Source. ISOP (b) topology is demonstrated in Figure 9b, where the flyback converter's output is connected in parallel to V_Load, while its input voltage is connected in series between its output voltage and the source.
The full-bridge converter displayed in Figure 9c displays an inverted left-hand side, where the negative side of V_in is connected to the positive side of V_Source [37]; note the inverted V_in.
Input-Series-Output-Series Topology (ISOS)
One more configuration can be deduced in the isolated topology group, which is Input-Series-Output-Series (ISOS). Figure 10 displays all four combinations. By examining Figure 10a, the series connection (marked in red) can be seen on both sides of the input and the output.
By developing the power balance of the converter (for an ideal converter) in (23) with the aid of the current equations in Table 5, it can be seen that topologies (b) and (c) in Figure 10 are not achievable.
However, since I_in = −I_out, I_out has a reversed direction in comparison to the designated direction in Figure 10b. This negative current reflected in (24) leads to a net positive power pouring into the converter, turning the converter into one that sinks power instead of transferring it.
A similar case can be deduced in the topology of ISOS (c); however, this time V_out has a reversed sign but still leads to the same conclusion of the converter ending up sinking power. These observations halt any further study of those two arrangements.
On the other hand, in Figure 10d, the current will flow out of the positive terminals of the Load and Source as well, which is not a valid power transfer mode. An attempt to flip the Load terminals in Figure 10d will end up yielding an identical topology to Figure 10a.
Maintaining the analysis method used in Sections 3.1 and 3.3, the ISOS equations can be derived and summarized in Table 5.
Deriving K_pr in terms of G_V produces an extra term in Equation (25). This term is represented by the ratio of V_in to V_Load, and its effect will further be commented on in the upcoming Section 3.6.
By plotting Equation (25) for several values of V_in/V_Load, as in Figure 11, the generic trends and the influence of the V_in/V_Load ratio can be seen.
The trends in Figure 11 imply the necessity to hold V_in/V_Load at a steady value to maintain proper and predictable operation of the system.
ISOS-Based Converters
Within the surveyed and reviewed literature, the ISOS topology was the least encountered.The work seen in [41] contains an intermediate ISOS, utilized as a Half DC Bus Boost Converter, but no further details about its performance characteristics were developed.Ref. [42] described two converters connected in ISOS formation, with one of them having a 1:1 ratio, which can be thought of as the dummy converter stated earlier; however, it is introduced to achieve full galvanic isolation.
The work of Lopusina and Grbovic [43] illustrates explicitly an ISOS-topology converter; nevertheless, it is based on a non-isolated converter and hence will be seen in the upcoming Section 4.2.
Input-Parallel-Output-Parallel Topology (IPOP)
This arrangement represents a special case, since two out of its four variants will lead to short-circuiting the source with the load, leaving these scenarios out of the analysis. In the other two cases, V_Source will always be connected in parallel to V_Load, which leads to unity G_V, as expressed in (26).
Furthermore, |V_in| = |V_out| can be seen in Figure 12, where cases (b) and (d) are also greyed-out short-circuits. Although cases (a) and (c) in Figure 12 theoretically exist, their practical applications might not be of much interest due to the fact that V_Load is held steady to V_Source. Such fundamental restrictions limited the interest in further research on this type of topology, and therefore no related literature was found.
Non-Isolated PPP Architectures
Revisiting the topologies developed in the previous parts (Sections 3.1 and 3.3), it can be seen that some topologies actually will not suffer from the short-circuit mentioned at the beginning of Section 3 when using non-isolated converters.Ref. [21] analytically developed all possible combinations for such arrangements, and based on that work, further analysis of operation is carried out in the following subsection.For simplicity purposes, this work will assume the use of a DC-DC boost converter, such as the one shown in Figure 13.Nevertheless, the analysis remains the same for any other type of non-isolated converter being used.
Fractional Topology
Looking back into the IPOS topologies demonstrated in Figure 4, it can be seen that for configuration IPOS (b), the negative terminal on each of the ports of the converter is actually connected to the same node. The same observation can also be seen in configuration ISOP (b) in Figure 7, as was already demonstrated for ISOS in Section 3.6. Such a remark means that a non-isolated converter can be connected directly in those topologies, where Figure 14 demonstrates the possible combinations of such non-isolated PPP topologies.
On the other hand, ref. [28] suggested solving the short-circuit problem by simply inverting the converter terminals. Figure 14c,d illustrate inverted boost converter connections. Such topologies are referred to in the literature as the Fractional Power Converter (FPC) [6]. It is to be noted that such configurations might lead to loss of partial power processing, as seen in [28]. The Active Voltage Balancer seen in Figure 14e is just one proposed arrangement, as seen in [44]; other arrangements might be proposed.
The voltage and current equations are stated again in Table 6 for easy reference; however, the same equations were already developed in Table 1 for IPOS (b) and Table 3 for ISOP (b). Equations (16), (20), and (25) remain valid to represent K_pr as a function of G_V for non-isolated IPOS-b, ISOP-b, and ISOS-a, respectively.
Fractional Topology-Based Converters
Due to the recent interest in the applications of non-isolated PPP, only a few examples could be found in the reviewed literature where an explicit non-isolated converter is used [4,29].Kim and Parkhideh [45] presented a comparison between non-isolated and isolated converters for PVs and battery applications, where they stated higher efficiency for isolated PPC.The work presented in [6], based on the proposed Modified Inductor Boost Converter in [46], states that a K pr of less than 25% is achieved for a power conversion system of 750 W. The proposed topology in this work follows case (c) in Figure 14.
Another example can be found in the analytical and simulation work of the Cuk-based PPP converter in [47] connected according to case (a) in Figure 14.The authors suggested several Cuk-based converter topologies.However, no values were given about operating K pr or G V .
In [48], the authors studied several scenarios of boost, buck, and buck-boost converters in the ISOS formation. A buck-boost prototype was implemented in the configuration of case (d) to develop a battery charger of 1.2 kW.
The arrangement presented by [43] and demonstrated in Figure 15c has several salient points to be commented on. Firstly, it treats the voltage ratio V_in/V_Load mentioned earlier in Section 3.5 by using an Active Voltage Balancer that transfers bipolar DC into unipolar [44], and it is used to stabilize the Load voltage. The selected topology for the Active Voltage Balancer is a Series Resonant Balancer Converter (SRBC). Secondly, due to the introduction of the Active Voltage Balancer in the design, a straightforward comparison with other topologies is no longer valid, as the extra losses and the component count of the balancer have to be accounted for.
Figure 15 (partial caption): Inductor FPC topology seen in [6,46]; (c) Boost FPC studied by [43], with the shaded area representing the Active Voltage Balancer.
Conclusions
This review focuses on partial power processing technology, covering three aspects: structural classification, theoretical operation limits, and prototype examples. It discusses the principles of partial power processing technology, summarizes and clarifies the classification and naming of partial power structures in existing research, revisits the component stress factors and the non-active power factor, and provides guidance for researching partial power DC converters.
Compared with traditional full-power solutions, partial power DC converters can achieve direct transmission of the main power, with only a small portion of the system's power being processed internally by a DC-DC converter, resulting in performance improvements in cost, volume, power density, efficiency, and thermal design. However, due to the specific nature of its circuit structure, there are certain limitations in its application scenarios, and the applicability of partial power solutions needs to be considered in combination with specific scenario characteristics.
Existing research has essentially validated the energy efficiency advantages of PPC compared to traditional full-power converters. This work intends to contribute at the entry and foundation levels to the field of partial power conversion and to act as a reference and base for further future development. In the future, further research can be conducted from the following two perspectives. In terms of research content, fault tolerance and fault detection techniques can be of interest, as can exploring configurations based on resonant converters. In terms of applications, research and optimization of partial power solutions can be performed in multi-domain environments, such as vehicle-to-grid applications, green hydrogen production, kinetic energy recovery and regeneration for electric mobility, power supply for new data centers, hybrid energy storage systems, energy routers, etc.
Figure 1. PPP family tree influenced by [4], with the scope of the current work highlighted.
Figure 2. Fryze power factor for different topologies of isolated PPC operating at different G_V.
Figure 3. PPC configurations: (a) Isolated converter's output connected in series with the dummy converter's output to produce IPOS topology. (b) Series connection between the input of the isolated converter and the input of the dummy converter to produce ISOP topology. (c) Series connections for both sides of the isolated and dummy converters giving rise to ISOS topology. (d) IPOP diagram showing source connected in parallel to the load, which will be analyzed further in Section 3.7.
Figure 4. IPOS configurations: (a-c) are three realizable variants of IPOS topology, and (d) is the fourth analytical case, not achievable in reality. The arrow inside the dummy converter indicates the direct power flow, while the arrow inside the isolated converter indicates the power flow within the converter itself. Note the sign change of the converters.
Figure 6. Examples of IPOS topologies in the literature: (a) IPOS-a topology implemented in [31]. (b) IPOS-b seen in the work of Liu et al. [15], with reversed V_Load. (c) IPOS-c topology from [33]; notice the reversed V_out arrow to indicate flipped output.
Figure 7. ISOP configurations: (a-c) are three realizable variants of ISOP topology, and (d) is the fourth analytical case, not achievable in reality. The arrow inside the dummy converter indicates the direct power flow, while the arrow inside the isolated converter indicates the power flow within the converter itself. Note the sign change of the converters.
Figure 8. ISOP K_pr(G_V): ISOP (a) topology trend from (19) with G_V > 1 shaded out. ISOP (b) trend from (20) with G_V < 1 shaded out. ISOP (c) trend from (21). ISOP (d) was plotted in this figure as another graphical proof that K_pr will be negative over the whole range of operation, hence rendering this mode invalid.
Figure 10. ISOS configurations: (a,d) are two realizable variants of ISOS topology, while (b,c) are analytical cases not achievable in reality. The arrow inside the dummy converter indicates the direct power flow, while the arrow inside the isolated converter indicates the power flow within the converter itself. Note the sign change of the converters.
Table 5. Viability and current relations of the ISOS connections in Figure 10 (recovered fragment):
(a) yes: I_Source = I_in = −I_out = −I_Load; I_Load = −I_in = I_out = −I_Source
(b) no: I_Source = I_in = I_out = I_Load; I_Load = I_in = I_out = I_Source
(c) no: I_Source = I_in = −I_out = −I_Load; I_Load = −I_in = I_out = −I_Source
(d) no
Figure 11. ISOS K_pr(G_V): Linear relation between K_pr and G_V at different V_in/V_Load ratios.
Table 6. Summary of non-isolated PPP equations.
Potassium Channels and Their Potential Roles in Substance Use Disorders
Substance use disorders (SUDs) are ubiquitous throughout the world. However, much remains to be done to develop pharmacotherapies that are very efficacious because the focus has been mostly on using dopaminergic agents or opioid agonists. Herein we discuss the potential of using potassium channel activators in SUD treatment because evidence has accumulated to support a role of these channels in the effects of rewarding drugs. Potassium channels regulate neuronal action potential via effects on threshold, burst firing, and firing frequency. They are located in brain regions identified as important for the behavioral responses to rewarding drugs. In addition, their expression profiles are influenced by administration of rewarding substances. Genetic studies have also implicated variants in genes that encode potassium channels. Importantly, administration of potassium channel agonists has been shown to reduce alcohol intake and to augment the behavioral effects of opioid drugs. Potassium channel expression is also increased in animals with reduced intake of methamphetamine. Together, these results support the idea of further investing in studies that focus on elucidating the role of potassium channels as targets for therapeutic interventions against SUDs.
Introduction
Substance use disorders (SUDs) are biopsychosocial disorders that include neuropsychiatric symptoms such as loss of control of drug taking, repeated relapses to drug taking after intervals of forced or voluntary abstinence, and continued drug use in the presence of adverse consequences, as described in the Diagnostic and Statistical Manual of the American Psychiatric Association [1]. Individuals who use licit and illicit drugs, including alcohol, cocaine, methamphetamine, or opioids, do not all develop SUDs because of genetic and environmental factors that enable them to be resilient [2]. On the other hand, several approaches have been taken to develop treatment approaches to alter the clinical course of SUDs in those patients who meet diagnostic criteria and subsequently seek medical and psychological treatment [3-5]. These approaches have met with variable degrees of success. However, there still remains a substantial need to develop pharmacotherapies that can provide important relief for individuals who have progressed from the use of these drugs for their reward properties to their compulsive abuse even in the presence of adverse medical and social consequences. Thus, the goal of the present review is to discuss the possibility of targeting potassium channels as potential pharmacological treatments for SUDs.
Classification of Potassium Channels
Ion channels are integral pore-forming transmembrane proteins that selectively control the influx and efflux of important physiological ions, including Na+, K+, Ca2+, and Cl−, into and from cells or intracellular organelles. They serve to control cytoplasmic and intraorganellar concentrations of these ions as well as regulate membrane potential and cellular excitability. The inwardly rectifying (K_IR) subgroup, named after its inward flow of K+ ions, was first cloned in 1993 [31]. These channels are arranged into homo- or hetero-tetramers that react to magnesium and polyamines [32]. This group contains 16 genes (K_IR1-K_IR7) that are divided into 4 families: K+ transport channels (KCNJ1, KCNJ10, KCNJ13, KCNJ15); classical K_IR channels (KCNJ2, KCNJ4, KCNJ12, KCNJ14, KCNJ16, KCNJ18); G-protein-gated K+ channels (KCNJ3, KCNJ5, KCNJ6, KCNJ9); and ATP-sensitive K+ channels (KCNJ8, KCNJ11).
K+ Channels and Neuronal Function
Potassium channels perform multiple functions in the neuron. These include maintaining the resting cell membrane potential, modulating neuronal excitability, regulating neurotransmitter release, and maintaining homeostasis [11,33,34]. The sub-cellular localization of K+ channels in axon terminals [35] provides a mechanism via which they can control cellular communication via dopamine (DA) neurotransmission in reward circuitries [36-38].
In fact, KCNQ2 and KCNQ3 channels that are expressed in the brain as heterotetramers (KCNQ2/3) [39] are localized on dopamine neurons in the ventral tegmental area (VTA) [37].
Activation of KCNQ2/3 has been shown to reduce midbrain dopamine neuronal excitability and also to attenuate psychostimulant-induced increases in extracellular dopamine in the nucleus accumbens [40-42]. Activation of KCNQ2/3 was shown to reduce the firing of midbrain dopaminergic neurons and to inhibit striatal dopamine synthesis [40]. Retigabine, another KCNQ activator, was also reported to prevent d-amphetamine-induced DA efflux in the nucleus accumbens and d-amphetamine-induced locomotor hyperactivity [42]. Similar results have been obtained using striatal slices, whereby retigabine was able to inhibit KCl-dependent release of DA [43]. Together, these results implicate potassium channels in the regulation of DA release and stimulant-induced behavioral activation.
The accumulated evidence suggests that K+ channels can regulate and/or facilitate the development of neuroadaptations that may be important in drug-induced progression of behaviors that are associated with the development of a SUD diagnosis [48]. Indeed, K+ channel signaling appears to be impacted by exposure to various rewarding drugs that include alcohol [49-51], cocaine [52], methamphetamine [53-55], and opioids [56,57], including morphine [58,59], oxycodone [60], and fentanyl [61]. Detailed effects of these drugs on potassium channels are described below.
Potassium Channels and Alcohol Use Disorder
Alcohol consumption is widespread in the general population of the world [62,63]. However, although relatively few individuals meet criteria for alcohol use disorder (AUD) [64], AUD is associated with significant medical, neurological, and psychiatric comorbidities with consequent increased morbidity and mortality [65]. Presently, there are few FDA-approved pharmacological treatments for AUD. These include disulfiram, acamprosate, and naltrexone, and these medications have limited success at reducing the high rates of relapse and are not necessarily efficacious in some populations [66,67]. Thus, there is a need to develop other pharmacological agents based on the elucidation of neurobiological mechanisms that might be the substrates of AUD.
Herein, we have reported both the causal and non-causal roles of K+ channels in AUD. Studies that show changes in gene expression should be considered descriptive and in need of further validation. However, data from pharmacological interventions and knockout animal models should be considered as having shown potential causal relationships. High-throughput microarray analyses using rodent models of AUD have found differential expression of genes that encode K+ channels in the brain of animals with different levels of alcohol intake [68,69]. Significantly, FDA-approved potassium channel regulators reduce alcohol intake in rats [70].
Alcohol Use Disorder and the K_V Family
Rinker et al. (2017) identified significant changes in the expression of several genes that belong to K_V families in the PFC and NAc of mice that showed different levels of alcohol intake [68]. In the PFC, they reported that voluntary ethanol consumption (VEC) results in changes in Kcna5, Kcnc3, Kcnd2, and Kcnq5 expression [68]. These observations are, in part, consistent with a previous report in which chronic ethanol exposure caused downregulation of Kcnq2 mRNA expression in the mouse amygdala [71]. Another K_V channel, K_V4.2/KCND2, has been reported to show alcohol exposure-induced decreases in protein levels with increasing concentrations of alcohol in both in-vitro and in-vivo models [72,73]. These results are consistent with the report that administration of retigabine, an activator of K_V7.2/KCNQ2 and K_V7.3/KCNQ3, led to a reduction of voluntary alcohol intake in rodents [74,75]. It is to be noted that retigabine has been withdrawn from the market by its manufacturer GlaxoSmithKline.
AUD and K_2P Families
Members of K_2P subfamilies are also differentially regulated by alcohol intake [50,80]. Specifically, K_2P13.1/Kcnk13/Thik1 mRNA expression in the mouse ventral tegmental area (VTA) was upregulated by acute ethanol exposure [50], whereas mRNA levels of K_2P1.1/Kcnk1/Twik1, a weak inward rectifying channel, were down-regulated in the mouse cerebellum by chronic ethanol exposure [80]. Those results are interesting because Twik1 functions to set the resting membrane potential in cerebellar cells [87]. Interestingly, chronic nicotine exposure also down-regulated Twik1 expression in the amygdala [88], suggesting the possibility that Twik1 might participate in the long-term effects of rewarding drugs on the brain. Genome-wide studies have suggested the possibility that the KCNK2 gene might be linked to AUD in Native American and Euro-American populations [89].
AUD and K_Ca Families
K_Ca (Ca2+-activated) channels are targets for both acute and chronic alcohol exposure [70,82-85]. Large (BK) and small conductance (SK) K_Ca channels influence cell repolarization and impact dendritic Ca2+ spikes [84]. Rinker et al. (2017), using their high-throughput assay following different levels of ethanol exposure, reported significant changes in the expression of the BK channel genes Kcnma1 and Kcnmb2 in both the PFC and NAc of mice [68]. In addition to changes in mRNA expression, acute ethanol administration induced significantly decreased KCNMA1 protein levels in primary hippocampal neuronal cultures [82]. Marrero et al. (2015) reported that moderate drinkers did not exhibit changes in KCNMA1 expression whereas heavy drinkers showed decreased KCNMA1 expression [86].
Unlike BK channels, whose levels are changed by acute ethanol exposure, the expression of SK (KCNN) channels is influenced by chronic ethanol administration, with reduced expression of K Ca 2.3/Kcnn3 channels and KCNN3 protein levels being observed in the rodent NAc [79]. Interestingly, protracted withdrawal from chronic ethanol reduced the function and trafficking of KCNN3 in the rodent NAc [78].
Potassium channels might also play a role in fetal alcohol syndrome. Specifically, Ramadoss et al. (2008) infused pregnant ewes with ethanol or saline using a "3 days/week binge" pattern during the third trimester [90]. They reported that chronic ethanol caused a 45% reduction in the total number of fetal cerebellar Purkinje cells. They also found that TASK1 channels were expressed in Purkinje cells and that the TASK3 isoform was expressed in granule cells of the ovine fetal cerebellum. Importantly, pharmacological blockade of both TASK1 and TASK3 channels administered simultaneously with ethanol prevented the reduction in fetal cerebellar Purkinje cell number [90].
The study by Herman et al. (2015) identified K IR 3.3/Kcnj9 as a critical gatekeeper of ethanol incentive salience and as a potential target for the treatment of excessive ethanol consumption [81]. Consistent with Herman et al. (2015), knock-out (KO) of the K IR 3.3/KCNJ9 protein enhanced ethanol conditioned place preference [94,95].
Rinker et al. (2017) also identified several K IR genes, in addition to K V , K Ca and K 2P genes, in all three paradigms of alcohol intake [68] (voluntary, CIE and heavy use; see Table 2). Interestingly, the major gene expression changes were identified in the NAc.
Potassium Channels and Cocaine Use Disorder
In recent years, overdose deaths related to cocaine use disorder (CUD) have been on the rise and have almost tripled in the US [96]. In the US, about 5.5 million people aged 12 or older (2.0% of that population) used cocaine, and about 2.2% of high school seniors reported cocaine use [97]. Cocaine exerts its actions by blocking the dopamine (DA) transporter, with secondary increases in DA in the synaptic cleft [98]. These facts have led to therapeutic approaches that include the development of anti-DAergic drugs in attempts to treat CUD. These approaches have not met with a high degree of success [99,100], suggesting that other targets might offer additional avenues to fight cocaine addiction.
Recent studies have indeed suggested a causal role for potassium channels in CUD. Specifically, Mooney & Rawls (2017) reported that flupirtine, an agonist of the voltage-gated KCNQ2/3 channel, can reduce the development of cocaine place preference and locomotor activation [101]. Studies using knock-out mouse models have reported that Kcnj6 and Kcnj9 knockout mice exhibited reduced cocaine seeking compared to WT mice [102]. However, only Kcnj6 knock-out mice displayed increased locomotion behavior [102]. In contrast, McCall et al. (2017) reported increased cocaine self-administration in mice with Kcnj6 deletion in VTA DA neurons [52]. The discrepancy observed between these two studies may be due to the way that the gene was deleted: the former used whole-animal KO whereas the latter used site-specific ablation. In addition, Kcnj6 is known to play a key role in the regulation of K IR 3/GIRK channel activity in midbrain DA neurons [103,104] (see Table 3).
Potassium Channels and Methamphetamine Use Disorder
METH use disorder (MUD) is highly prevalent throughout the world and is characterized by loss of control over drug use despite adverse consequences, as well as strong urges to seek and use the drug [1]. Cognitive and psychiatric deficits consequent to structural and functional pathologies have been well documented [105,106].
Researchers in the Cadet laboratory have recently published several papers in which they attempt to mimic human MUD conditions in rodent models [53,55,[107][108][109]. In those models, they have introduced an additional DSM5 criterion that is related to compulsive drug intake despite adverse consequences [107]. In that model, they used drug self-administration and then introduced footshock punishment to represent adverse consequences that humans might encounter during their frequent experiences with drugs. In the case of METH, shocks are introduced after the animals have escalated their intake and reached a plateau of consistent daily consumption of large amounts of METH. Our group has shown that the introduction of footshocks administered contingently with METH can help to dichotomize rats into punishment- or shock-resistant (compulsive, vulnerable to addiction, addicted) and shock-sensitive (not vulnerable, non-addicted) animals [107,[109][110][111]. In addition to the behavioral manifestations, the resistant and sensitive rats have also been shown to exhibit interesting transcriptional and biochemical changes in certain brain regions including the dorsal striatum and nucleus accumbens [107,110,112]. It is to be noted here that, unlike alcohol and cocaine, studies with METH have focused on non-causal associations with potassium channels, which require further elucidation.
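As an illustration of how such a behavioral dichotomization could be computed, the sketch below labels each rat by comparing its METH intake during punished sessions with its pre-punishment baseline. The variable names, example numbers, and the 20% suppression threshold are placeholders chosen for illustration, not values taken from the cited studies.

```python
import numpy as np

def classify_rats(baseline_intake, punished_intake, threshold=0.2):
    """Label rats as shock-resistant or shock-sensitive.

    baseline_intake: mean infusions/session before footshocks are introduced
    punished_intake: mean infusions/session during footshock sessions
    threshold: assumed fraction of baseline below which intake counts as suppressed
    """
    baseline_intake = np.asarray(baseline_intake, dtype=float)
    punished_intake = np.asarray(punished_intake, dtype=float)
    # Fraction of baseline intake that each rat maintains under punishment
    suppression_ratio = punished_intake / baseline_intake
    labels = np.where(suppression_ratio >= 1.0 - threshold,
                      "shock-resistant",   # keeps taking METH despite shocks
                      "shock-sensitive")   # suppresses intake when punished
    return suppression_ratio, labels

# Example with made-up numbers: four rats, infusions per session
ratio, labels = classify_rats([60, 55, 62, 58], [57, 12, 60, 20])
print(list(zip(ratio.round(2), labels)))
```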
MUD and K V Family
Recently, we used that model to investigate potential alterations in global DNA hydroxymethylation in the nucleus accumbens (NAc). We used the NAc because behavioral phenomena that occur during METH self-administration (METH SA) are thought to be regulated by interconnections between distinct brain regions that include that structure [113]. Neuroplastic changes in the NAc are thought to participate in the development and maintenance of drug-taking behaviors [114]. We found, for the first time, that rats that continued to take METH compulsively exhibited differential DNA hydroxymethylation in comparison with both control and non-addicted rats [55]. Rats that suppressed their intake of METH in the presence of footshocks also showed differences in DNA hydroxymethylation from control rats, suggesting that exposure to the drug can result in prolonged effects on this epigenetic marker. The changes in DNA hydroxymethylation in the NAc of non-addicted rats were observed mostly at intergenic sites located on long and short interspersed elements. Of significant relevance to the present review, we also observed differentially hydroxymethylated regions in genes that encode voltage-gated potassium channels (K V 1.1, K V 1.2, K Vb 1 and K V 2.2) [55]. In order to test whether these changes in DNA hydroxymethylation were accompanied by changes in transcription, we used quantitative PCR and measured their mRNA levels in the experimental groups of rats. We found, indeed, that the mRNA levels of these potassium channels were increased in the non-addicted rats. These results suggest that increased expression of these channels might participate in suppressing METH self-administration by rats.
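For readers unfamiliar with how such group comparisons of mRNA levels are typically quantified, the sketch below implements the standard 2^(-ΔΔCt) relative-expression calculation from qPCR cycle-threshold values. The gene names and Ct numbers are placeholders for illustration, not data from the studies discussed here.

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Standard 2^(-ΔΔCt) relative mRNA expression.

    ct_target / ct_reference: Ct values of the gene of interest and a housekeeping
    gene in the experimental group; *_ctrl are the same values for the control group.
    """
    delta_ct_exp = np.mean(ct_target) - np.mean(ct_reference)
    delta_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_reference_ctrl)
    delta_delta_ct = delta_ct_exp - delta_ct_ctrl
    return 2.0 ** (-delta_delta_ct)

# Placeholder Ct values, e.g. a Kv-family gene vs. a housekeeping gene
fold_change = relative_expression(
    ct_target=[24.1, 24.3, 24.0],        # experimental group (e.g. non-addicted rats)
    ct_reference=[18.2, 18.1, 18.3],
    ct_target_ctrl=[25.6, 25.4, 25.7],   # control group
    ct_reference_ctrl=[18.2, 18.0, 18.1],
)
print(f"Fold change vs. control: {fold_change:.2f}")  # >1 indicates increased expression
```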
The potential role of potassium channels in METH taking behaviors was also investigated in a subsequent paper. In that paper, we tested the possibility that a single pre-exposure to METH could potentiate its intake during self-administration experiments [53]. Animals were therefore pre-treated with saline or METH prior to METH SA. The experiment consisted of three groups: (1) a single saline injection followed by saline self-administration (SS); (2) a single saline injection followed by METH SA (SM); and (3) a single METH injection followed by METH SA (MM). METH-pretreated rats escalated METH SA earlier and took more METH than saline-pretreated animals. Because compulsive METH takers and METH-abstinent rats show differences in potassium (K + ) channel mRNA levels in their nucleus accumbens (NAc), we tested the possibility that the expression of K V potassium channels might also help to distinguish between rats that escalated METH intake earlier (MM group) and later (SM group). Increased levels of mRNA and protein expression of voltage-gated K + channels (K V 1: Kcna1, Kcna3, and Kcna6) were indeed found in the NAc of rats that escalated METH later and took less METH [53]. Rats with increased mRNA expression also showed decreased DNA methylation at the CpG-rich sites near the promoter regions of these genes.
It is to be noted that potassium channels may also be involved in the toxic effects [115] that have been reported after various doses of METH [116]. Specifically, Zhu et al. (2018) treated primary cultured hippocampal neurons with METH and reported that the drug caused time- and dose-dependent increases in K V 2.1 protein expression, which were accompanied by elevated cleaved caspase-3 levels and a decreased bcl-2/bax ratio, markers of neuronal apoptosis, as previously reported after METH injections in rats [117]. Blockade of K V 2.1 with the inhibitor GxTx-1E, or its knockdown, attenuated the toxic effects of the drug [116].
MUD and K Ca Family
The investigators from Cadet's laboratory also documented changes in DNA hydroxymethylation in genes that encode K Ca potassium channel genes in the NAc of animals exposed to METH and subsequently dichotomized by footshocks [55]. These genes included K Ca 2.1/KCNN1 and K Ca 2.2/KCNN2. Similar to the observations for K V channels, these changes were associated with increased expression of their mRNA levels. We also investigated the expression of calcium-activated K + channels in animals that had received saline or METH prior to being put through the METH SA experiments [53]. Interestingly, only Kcnn1 (K Ca 2.1) showed increased expression in the NAc of rats that received saline first (SM rats) in comparison to those that received an injection of METH first (MM rats). In contrast, Kcnn3 (KCa2.3/SKCa3) and Kcnma1 mRNA levels were increased in all rats that had self-administered METH [53], suggesting that METH SA is enough to alter the expression of these genes irrespective of the amount of METH taken. (See Table 4).
Potassium Channels and Opioid Use Disorders
Opioid use disorders (OUDs) have reached epidemic levels [117]. The number of opioid-related overdose deaths from both prescription and illicit opioids reached close to 450,000 between 1999 and 2018 [118]. In 2018 alone, 46,802 people died of opioid overdoses, representing 69.5% of all drug overdose deaths [119]. Some of these terrible consequences might be related to the limited availability of treatment. Presently, FDA-approved medications for OUD include the mu-opioid receptor full agonist methadone, the mu-opioid receptor partial agonist buprenorphine, and the mu-opioid receptor antagonist naltrexone [120]. There are therefore windows for the development of additional medications that may be derived from a basic understanding of the effects of opioid drugs.
OUDs and K V Family
There is causal evidence that potassium channels might be involved in the effects of opioid drugs, suggesting the possibility that activators of potassium channels might be useful in the treatment of pain and/or opioid addiction [121]. For example, a genome-wide association study using opioid-dependent humans has identified a risk variant in the voltage-gated K + channel gene K V 6.2/Kcng2 [57]. In addition, the selective K V 7/KCNQ2-3 K + channel activator flupirtine increased the analgesic effects of morphine in animal pain models [122,123]. Moreover, another KCNQ K + channel activator, retigabine/ezogabine, elevated pain threshold and prolonged withdrawal latency after thermal stimulation [124]. Interestingly, the synergistic analgesic effect observed may eventually translate to reduced opioid prescribing. The effects of retigabine could be blocked by the potassium channel antagonist, linopirdine [124]. In-vitro studies have shown effects of the opioid agonists fentanyl, heroin, and methadone on the voltage-dependent potassium channel K V 11.1/KCNH2/hERG [61,125,126]. It needs to be emphasized that pharmacological activation of cardiac hERG potassium channels has adverse effects and warrants caution in clinical research (see Table 5).
OUDs and K Ca Family
Knock-down of the BK channel Kcnb3 via small interfering RNA was reported to ameliorate the hyperalgesia and anti-nociceptive effects of chronic morphine [128]. In addition to the big conductance K + channel, the small conductance SK channel also plays a specific role in reward circuits following morphine exposure. Specifically, Fakira et al. (2014) treated mice with escalating doses of morphine over 4 days followed by a challenge dose of morphine a week later. This morphine dosing schedule resulted in increased locomotor activity and increased SK2 activation [127]. The animals that received repeated morphine followed by a morphine challenge exhibited increased expression of SK2 protein levels in whole hippocampal homogenates but decreased SK2 expression in the post-synaptic density [127]. A more recent study also provided evidence for the involvement of SK channels in the actions of opioid drugs [58]. The authors sought to determine if SK channels located in the nucleus accumbens played any role in morphine withdrawal. They found that firing of neurons in the shell of the nucleus accumbens was enhanced secondary to decreased expression of SK channels during morphine withdrawal, with SK2 and SK3 protein levels being significantly decreased after 3 weeks of withdrawal [58]. These observations support a role of potassium channels in the long-term effects of morphine in the brain.
OUDs and K IR Family
Opioids also exert significant effects on K IR family channels. These effects have been reported for the G-protein-gated class that includes GIRK1-4/KCNJ3, 5, 6 and 9. The analgesic effects of morphine were potentiated in knock-out mice with Kcnj6/GIRK2/K IR 3.2 deletion [129]. Similarly, the antinociceptive effects of oxycodone were attenuated by knock-down of Kcnj3/GIRK1/K IR 3.1 channels using short interfering RNA [60]. Studies by Kotecki et al. (2015) using Kcnj3 and Kcnj6 knock-out mouse models have shown enhancement of morphine-induced locomotion.
The sulfonylureas glibenclamide and tolbutamide, blockers of the ATP-sensitive class of the K IR subfamily, effectively reversed the peripheral antinociceptive effect of fentanyl [130]. Glibenclamide was also shown to play an active role in morphine reward [131] by facilitating morphine-induced conditioned place preference [132].
Conclusions
This review has provided descriptive and causal evidence for diverse roles of K + channels in the behavioral manifestations of substances of abuse, including alcohol, cocaine, methamphetamine, and opioids. These substances can alter the expression of potassium channels at both the mRNA and protein levels. These changes were found to occur in various brain regions, including the nucleus accumbens and hippocampus, that are involved in various aspects of addiction. In addition, we reviewed evidence that activators of K + channels can suppress behaviors induced by some of these rewarding agents. These behaviors include drug acquisition, maintenance, and withdrawal-associated phenomena. The evidence discussed herein supports the view that there is a need to invest in the development of pharmacotherapeutic agents that target K + channels in the brain. Such work will help to develop non-dopaminergic agents against SUDs, since DAergic drugs have not been shown to be very efficacious against these psychiatric disorders.
Conflicts of Interest:
The authors declare no conflict of interest.
First Results from NuSTAR Observations of Mkn 421
Mkn 421 is a nearby active galactic nucleus dominated at all wavelengths by a very broad non-thermal continuum thought to arise from a relativistic jet seen at a small angle to the line of sight. Its spectral energy distribution peaks in the X-ray and TeV gamma-ray bands, where the energy output is dominated by cooling of high-energy electrons in the jet. In order to study the electron distribution and its evolution, we carried out a dedicated multi-wavelength campaign, including extensive observations by the recently launched highly sensitive hard X-ray telescope NuSTAR, between December 2012 and May 2013. Here we present some initial results based on NuSTAR data from January through March 2013, as well as calibration observations conducted in 2012. Although the observations cover some of the faintest hard X-ray flux states ever observed for Mkn 421, the sensitivity is high enough to resolve intra-day spectral variability. We find that in this low state the dominant flux variations are smooth on timescales of hours, with typical intra-hour variations of less than 5%. We do not find evidence for either a cutoff in the hard X-ray spectrum, or a rise towards a high-energy component, but rather that at low flux the spectrum assumes a power law shape with a photon index of approximately 3. The spectrum is found to harden with increasing brightness.
Introduction
Mkn 421 is one of the nearest and best-studied blazars, i.e. active galactic nuclei (AGN) whose relativistic jets are oriented almost directly along our line of sight. Like other AGN of this type, Mkn 421 shows a flat radio spectrum, optical polarization, rapid and correlated variability, and other characteristics of relativistically beamed AGN. Its energy output shows the usual two peaks, located respectively in the X-ray and TeV γ-ray bands, which is typical for the high-peaked BL Lac (HBL) class [1]. Its proximity and brightness in many spectral bands make it an important object to study in the context of AGN jet physics.
The non-thermal and polarized continuum observed in HBL objects from the radio to the X-ray suggests that this part of the spectral energy distribution (SED) is due to electron synchrotron radiation. The γ-ray part of the SED is likely due to the inverse Compton scattering by the same electrons responsible for the synchrotron radiation, and the seed photons are most likely the synchrotron photons internal to the jet. This scenario is the basis of the so-called Synchrotron Self-Compton (SSC) model, which has been successfully invoked to explain the complete SED of the BL Lac class of objects; see e.g. [2,3]. In the context of the SSC model, the variability in X-rays and high-energy γ-rays is expected to be high and correlated since they are produced by the same high-energy electrons. The time scales for energy loss of those electrons are very short, in agreement with the variability amplitude observed in Mkn 421 (spanning approximately two orders of magnitude) and rapid intra-day variability observed during epochs of high activity; see e.g. [4,5].
In order to provide insight into the radiative processes, the distribution of radiating particles, constraints on the particle acceleration, and thus the structure of the relativistic jet, we conducted a multi-wavelength study of Mkn 421 focused on the X-ray and TeV γ-ray bands. Coordinated simultaneous observations were carried out from December 2012 to May 2013 with the MAGIC and VERITAS ground-based Cherenkov-telescope arrays, and the Swift and NuSTAR orbiting X-ray observatories. The campaign was supported by coordinated (but not necessarily simultaneous) observations by ground-based optical, infrared and radio observatories, and the orbiting Fermi γ-ray observatory. In this report we present a preliminary analysis of the NuSTAR data from January through March 2013.
NuSTAR (Nuclear Spectroscopic Telescope Array) is a hard X-ray (3-79 keV) observatory launched into low Earth orbit in June 2012 [6]. It features the first focusing X-ray telescope to extend high sensitivity beyond the ∼10 keV cutoff shared by all currently active focusing soft X-ray telescopes. The inherently low background associated with concentrating the X-ray light enables NuSTAR to achieve approximately a one-hundred-fold improvement in sensitivity over the collimated or coded-mask instruments that operate, or have operated, in the same spectral range. Part of the NuSTAR primary mission is aimed at advancing our understanding of astrophysical jets through observations of archetypal blazars, such as Mkn 421, with unprecedented spectral and temporal resolution in the underexplored hard X-ray band.
NuSTAR Observations
In order to maximize the strictly simultaneous overlap of observations by NuSTAR and ground-based TeV γ-ray observatories during the dedicated campaign, the observation times were arranged according to visibility of Mkn 421 at the MAGIC and VERITAS sites. Coordinated observations in 2013 were performed on January 10, 15 and 20, February 6, 12 and 17, and March 5, 12 and 17. A typical NuSTAR observation spanned 10 hours, resulting in 15-20 ks of source exposure after accounting for orbital modulation of visibility and intervals of high background radiation. In addition to these dates, NuSTAR observed Mkn 421 for pointing calibration on 2012 July 7 and 8 (70 ks in total) and on 2013 January 2 (10 ks). The data were reduced using the standard NuSTARDAS pipeline, version 1.2.0. Figure 1 shows the observations listed above together with publicly available light curves in the X-ray [7,8] and GeV γ-ray bands [9] in order to provide a broader context.
[Figure 1 caption: The Fermi-LAT, MAXI and Swift-XRT count rates were taken from publicly available data [7-9]. Binning is weekly for the Fermi-LAT and MAXI data, per-observation for the Swift-XRT data and per-orbit for the NuSTAR data. The uncertainties on the latter two are too small to be seen in the plot. The NuSTAR count rates are plotted separately for its two modules, FPMA and FPMB. The dotted vertical lines mark mid-points of the NuSTAR observations and the dashed horizontal lines denote long-term median count rates.]
Analysis of Flux Variability
The variations in count rate between the NuSTAR observations, and the range covered in each observation alone, are entirely dominated by the intrinsic variability of the target, i.e. they are in clear excess of the observational uncertainties. For example, the calibration observation taken in July 2012 shows flux variability of up to 30% within an hour and a distinct increase in which the rate doubled in a roughly linear fashion over a 12-hour period (see upper panel of Figure 2). We note that this increase coincided with highly elevated activity in the GeV γ-ray band observed by Fermi-LAT [10], but we lack sufficient X-ray coverage to physically relate these two events. The MAXI light curve shown in Figure 1 suggests that the peak of the X-ray emission occurred between mid-July and mid-August 2012, after the NuSTAR observation. The 2013 observations presented here sample comparatively low flux states, even though the lowest and the highest observed fluxes span approximately an order of magnitude. Modest count rates are apparent from both the MAXI and Swift-XRT data in comparison with long-baseline light curves and the intense flaring episodes covered in the literature, e.g. [11]. Very high fluxes in the X-ray and γ-ray bands were observed later in the campaign, peaking in mid-April 2013 [12][13][14]. The data from this flaring period will be presented in a dedicated publication. With the possible exception of the lowest-flux states, the observed count rates are not consistent with being constant during any of the observations, as demonstrated by the light curves in Figures 1 and 2. In order to quantify the variability on timescales shorter than the observations, we divide the data into individual NuSTAR orbits, as they represent the most natural, although somewhat arbitrary, way of partitioning the data. The orbits are approximately 90 minutes long and contain up to 50 minutes of source exposure. We treat each orbit independently and fit two simple light curve models to the observed count rate in each one: a constant rate during an orbit, R(t) = R_orb, and a linear trend in time, R(t) ∝ t. We examine the count rate in the 3-30 keV band, where the background contribution is negligible. The top panel of Figure 2 provides an example of the two models fitted to the July 2012 data binned to 10-minute time intervals. The lower panels of Figure 2 show the results of the fitting procedure performed on all orbits. We find that the majority of orbits are better described by a linear trend than a constant one. Linear trends account for most of the orbit-to-orbit variability and approximate smooth variations on super-orbital timescales of a few hours. On a 10-minute timescale, the variability amplitude typically does not exceed the observational count rate uncertainty of approximately 3%. Based on the mildly overpopulated tail of the χ2/d.o.f. distribution for the linear trend fits, we estimate that up to 20% of orbits show excess variance beyond the simple linear trend. Subtracting the trend and comparing the residual scatter to the median rate uncertainty within each orbit, (σ_R)_orb, gives a distribution slightly skewed towards values greater than unity (see lower right panel of Figure 2). This is consistent with intrinsic sub-orbital variability on a ∼10-minute timescale in 20% of orbits. We find no evidence for an increase in variability with flux.
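A minimal sketch of the per-orbit model comparison described above is given below, assuming the binned count rates and their uncertainties for one orbit are already available as arrays; the data loading, orbit partitioning and binning used for the actual analysis are not shown, and the synthetic numbers are placeholders.

```python
import numpy as np

def fit_orbit(t, rate, rate_err):
    """Fit a constant and a linear-trend model to one orbit's 3-30 keV light curve.

    Returns the reduced chi^2 of both models and the de-trended excess scatter
    relative to the median rate uncertainty, analogous to the quantities in the text.
    """
    # Constant model: weighted mean of the binned rates
    w = 1.0 / rate_err**2
    const = np.sum(w * rate) / np.sum(w)
    chi2_const = np.sum(((rate - const) / rate_err) ** 2) / (len(rate) - 1)

    # Linear trend: weighted least squares, R(t) = a + b*t
    coeffs, cov = np.polyfit(t, rate, 1, w=1.0 / rate_err, cov=True)
    model = np.polyval(coeffs, t)
    chi2_lin = np.sum(((rate - model) / rate_err) ** 2) / (len(rate) - 2)

    # Residual scatter after removing the trend, compared to the median uncertainty
    excess = np.std(rate - model, ddof=2) / np.median(rate_err)
    return chi2_const, chi2_lin, excess

# Example with synthetic 10-minute bins over one ~50-minute exposure
t = np.arange(5) * 600.0                                   # seconds
rate = 10.0 + 0.002 * t + np.random.normal(0, 0.3, t.size) # counts/s
err = np.full(t.size, 0.3)
print(fit_orbit(t, rate, err))
```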
Spectral Analysis
For spectral analysis we use spectra grouped to a minimum of 20 photons per bin and perform the modeling in Xspec [11,15].
[Figure caption, lower three panels: The green points with error bars (1σ uncertainties) are best-fit parameters of the broken power law model fitted to Mkn 421 spectra of individual NuSTAR orbits. For the orbits where the lower bound on E_b is below 4 keV (i.e. close to the lower end of the NuSTAR band) the break energy was fixed to 7 keV and the uncertainty is estimated to range between 4 and 12 keV. The grey data points are for the same model (also 1σ uncertainties) fitted to 2001 RXTE data, from [15].]
Fitting a single power law model with photon index Γ to the time-averaged spectrum of each observation, we find that the observations with higher mean flux systematically prefer values of Γ lower than the average photon index, that the observations with lower mean flux systematically prefer values of Γ higher than the average, and that the best-fit χ2 increases with flux, owing to curvature apparent in the higher-flux spectra. These trends are entirely consistent with the observed hardening as the source brightens, shown in Figure 3. The modeling implies that a Γ ≈ 3 power law fits the low-flux spectra well. However, the curvature and the consequent poorer fits of the power law model at high flux may either be intrinsic or simply an effect of superposition of spectra with a range of different photon indices. These two possibilities can be resolved with a more complicated spectral model and time-resolved analysis.
We therefore replace the power law with a purely phenomenological broken power law model: F(E) ∝ E^(-Γ1) up to the break energy E_b, and F(E) ∝ E^(-Γ2) at higher energies. By fitting the spectra of each orbit individually, we partially mitigate time-averaging, as each orbit covers a relatively narrow flux range. The broken power law model provides better fits to the spectra of high-flux orbits, indicating that the curvature is intrinsic. It is, however, degenerate for low-flux orbits, as both photon indices converge to Γ ≈ 3 for any value of the break energy in the 4-12 keV range. In those cases we fix E_b to 7 keV, which is the median best-fit value for the full-observation spectra.
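A sketch of the phenomenological broken power law used here, written as a simple Python function, is shown below; the normalization and parameter values are illustrative only, since in practice the fitting is done in Xspec as stated above.

```python
import numpy as np

def broken_power_law(E, norm, gamma1, gamma2, E_break):
    """Photon flux density F(E): E^-gamma1 below E_break, E^-gamma2 above,
    matched so that the function is continuous at the break energy (keV)."""
    E = np.asarray(E, dtype=float)
    low = norm * (E / E_break) ** (-gamma1)
    high = norm * (E / E_break) ** (-gamma2)
    return np.where(E < E_break, low, high)

# Illustrative parameters in the spirit of the values discussed in the text
E = np.logspace(np.log10(3.0), np.log10(79.0), 50)   # NuSTAR band in keV
flux = broken_power_law(E, norm=1e-2, gamma1=2.6, gamma2=3.0, E_break=7.0)
```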
We find evidence that the low-energy photon index (Γ1) varies strongly with flux, while the flux dependence of the high-energy index (Γ2) is weaker in the flux range of the data presented here. The break energy is difficult to constrain since for many orbits the uncertainties in the best-fit value reach below the low-energy end of the NuSTAR band at 3 keV, effectively causing Γ1 to become unconstrained by the data. In Figure 4 we show the best-fit parameters Γ1, E_b and Γ2 as functions of 2-10 keV flux for all NuSTAR orbits up to the end of March 2013. The equivalent results from [15] are plotted for comparison, demonstrating that the spectral trends found in the NuSTAR data smoothly continue into the previously accessible flaring regime, now covering nearly two orders of magnitude in flux.
Summary and Conclusion
The data and the analysis presented here are preliminary, but indicative of the new results uncovered by NuSTAR in the hard X-ray band. Its high sensitivity enabled probing flux states significantly fainter than previously possible with comparable spectral and temporal resolution. The observed variability ranges over an order of magnitude, including instances of flux-doubling over a half-day period and occasional variability on ∼10-minute timescales. In this low-flux regime we find that the hard X-ray spectrum does not cut off steeply, nor show any sign of an increase towards the inverse-Compton component of the SED, but rather saturates at Γ ≈ 3. The spectrum hardens with an increase in flux, which can be phenomenologically described with a broken power law model: the break energy E_b ≈ 7 keV separates the soft and the hard power law slopes, both of which change with flux. This can be understood in terms of acceleration and cooling processes of radiating particles in the Mkn 421 jet, and the resulting shape of the high-energy tail of the relativistic electron energy distribution. These results, together with a broader analysis of the multi-wavelength data and more physical modeling, will be presented in more detail in a paper currently in preparation.
Noncommutative Field Theory on Homogeneous Gravitational Waves
We describe an algebraic approach to the time-dependent noncommutative geometry of a six-dimensional Cahen-Wallach pp-wave string background supported by a constant Neveu-Schwarz flux, and develop a general formalism to construct and analyse quantum field theories defined thereon. Various star-products are derived in closed explicit form and the Hopf algebra of twisted isometries of the plane wave is constructed. Scalar field theories are defined using explicit forms of derivative operators, traces and noncommutative frame fields for the geometry, and various physical features are described. Noncommutative worldvolume field theories of D-branes in the pp-wave background are also constructed.
Introduction and Summary
The general construction and analysis of noncommutative gauge theories on curved spacetimes is one of the most important outstanding problems in the applications of noncommutative geometry to string theory. These non-local field theories arise naturally as certain decoupling limits of open string dynamics on D-branes in curved superstring backgrounds in the presence of a non-constant background Neveu-Schwarz B-field. On a generic Poisson manifold M, they are formulated using the Kontsevich star-product [47], which is linked to a topological string theory known as the Poisson sigma-model [19]. Under suitable conditions, the quantization of D-branes in the Poisson sigma-model which wrap coisotropic submanifolds of M, i.e. worldvolumes defined by first-class constraints, may be consistently carried out and related to the deformation quantization in the induced Poisson bracket [20]. Branes defined by second-class constraints may also be treated by quantizing Dirac brackets on the worldvolumes [18].
However, in other concrete string theory settings, most studies of noncommutative gauge theories on curved D-branes have been carried out only within the context of the AdS/CFT correspondence by constructing the branes as solutions in the dual supergravity description of the gauge theory (see for example [15,16,39,41,5]). It is important to understand how to describe the classical solutions and quantization of these models directly at the field theoretic level in order to better understand to what extent the noncommutative field theories capture the non-local aspects of string theory and quantum gravity, and also to be able to extend the descriptions to more general situations which are not covered by the AdS/CFT correspondence. In this paper we will investigate worldvolume deformations in the simple example of the Hpp-wave background NW 6 [50], the six-dimensional Cahen-Wallach lorentzian symmetric space CW 6 [14] supported by a constant null NS-NS background three-form flux. The spacetime NW 6 lifts to an exact background of ten-dimensional superstring theory by taking the product with an exact four-dimensional background, but we will not write this explicitly. By projecting the transverse space of NW 6 onto a plane one obtains the four-dimensional Nappi-Witten spacetime NW 4 [52], and occasionally our discussion will pertain to this latter exact string background. Our techniques are presented in a manner which is applicable to a wider class of homogeneous pp-waves supported by a constant Neveu-Schwarz flux.
Open string dynamics on this background is particularly interesting because it has the potential to display a time-dependent noncommutative geometry [32,39], and hence the noncommutative field theories built on NW 6 can serve as interesting toy models for string cosmology which can be treated for the most part as ordinary field theories. However, this point is rather subtle for the present geometry [32,40]. A particular gauge choice which leads to a time-dependent noncommutativity parameter breaks conformal invariance of the worldsheet sigma-model, i.e. it does not satisfy the Born-Infeld field equations, while a conformally invariant background yields a non-constant but time-independent noncommutativity. In this paper we will partially clarify this issue. The more complicated noncommutative geometry that we find contains both the transverse space dependent noncommutativity between transverse and light-cone position coordinates of the Hashimoto-Thomas model [40] and the asymptotic time-dependent noncommutativity between transverse space coordinates of the Dolan-Nappi model [32].
The background NW 6 arises as the Penrose-Güven limit [53,37] of an AdS 3 × S 3 background [11]. While this limit is a useful tool for understanding various aspects of string dynamics, it is not in general suitable for describing the quantum geometry of embedded D-submanifolds [38]. In the following we will resort to a more direct quantization of the spacetime NW 6 and its D-submanifolds. We tackle the problem in a purely algebraic way by developing the noncommutative geometry of the universal enveloping algebra of the twisted Heisenberg algebra, whose Lie group N coincides with the homogeneous spacetime CW 6 in question. While our algebraic approach has the advantage of yielding very explicit constructions of noncommutative field theories in these settings, it also has several limitations. It does not describe the full quantization of the curved spacetime NW 6 , but rather only the semi-classical limit of small NS-NS flux θ in which CW 6 approaches flat six-dimensional Minkowski space. This is equivalent to the limit of small light-cone time x + for the open string dynamics. In this limit we can apply the Kontsevich formula to quantize the pertinent Poisson geometry, and hence define noncommutative worldvolume field theories of D-branes. Attempting to quantize the full curved geometry (having θ ≫ 0) would bring us deep into the stringy regime [43] wherein a field theoretic analysis would not be possible. The worldvolume deformations in this case are described by nonassociative algebras and variants of quantum group algebras [26,4], and there is no natural notion of quantization for such geometries. We will nonetheless emphasize how the effects of curvature manifest themselves in this semi-classical limit.
The spacetime NW 6 is wrapped by non-symmetric D5-branes which can be obtained, as solutions of Type II supergravity, from the Penrose-Güven limit of spacetime-filling D5-branes in AdS 3 × S 3 [48]. This paper takes a very detailed look at the first steps towards the construction and analysis of noncommutative worldvolume field theories on these branes. While we deal explicitly only with the case of scalar field theory in detail, leaving the more subtle construction of noncommutative gauge theory for future work, our results provide all the necessary ingredients for analysing generic field theories in these settings. We will also examine the problem of quantizing regularly embedded D-submanifolds in NW 6 . The symmetric D-branes wrapping twisted conjugacy classes of the Lie group N were classified in [61]. Their quantization was analysed in [38] and it was found that, in the semi-classical regime, only the untwisted euclidean D3-branes support a noncommutative worldvolume geometry. We study these D3-branes as a special case of our more general constructions and find exact agreement with the predictions of the boundary conformal field theory analysis [28]. We also find that the present technique captures the noncommutative worldvolume geometry in a much more natural and tractable way than the foliation of the group N by quantized coadjoint orbits does [38]. Our analysis is not restricted to symmetric D-branes and can be applied to other D-submanifolds of the spacetime NW 6 as well.
The organisation of the remainder of this paper is as follows. In Section 2 we describe the twisted Heisenberg algebra, its geometry, and the manner in which it may be quantized in the semi-classical limit. In Section 3 we construct star-products which are equivalent to the Kontsevich product for the pertinent Poisson geometry. These products are much simpler and more tractable than the star-product on NW 6 which was constructed in [38] through the noncommutative foliation of NW 6 by D3-branes corresponding to quantized coadjoint orbits. Throughout this paper we will work with three natural star-products which we construct explicitly in closed form. Two of them are canonically related to coordinatizations of the classical pp-wave geometry, while the third one is more natural from the algebraic point of view. We will derive and compare our later results in all three of these star-product deformations.
In Section 4 we work out the corresponding generalized Weyl systems [1] for these starproducts, and use them in Section 5 to construct the Hopf algebras of twisted isometries [21,22,64] of the noncommutative plane wave geometry. In Section 6 we use the structure of this Hopf algebra to build derivative operators. In contrast to more conventional approaches [25], these operators are not derivations of the star-products but are defined so that they are consistent with the underlying noncommutative algebra of functions. This ensures that the quantum group isometries, which carry the nontrivial curvature content of the spacetime, act consistently on the noncommutative geometry. In Section 7 we define integration of fields through a relatively broad class of consistent traces on the noncommutative algebra of functions.
With these general constructions at hand, we proceed in Section 8 to analyse as a simple starting example the case of free scalar field theory on the noncommutative spacetime NW 6 . The analysis reveals the flat space limiting procedure in a fairly drastic way. To get around this, we introduce noncommutative frame fields which define derivations of the star-products [6,42]. Some potential physical applications in the context of string dynamics in NW 6 [28,27,8,24,45] are also briefly addressed. Finally, as another application we consider in Section 9 the construction of noncommutative worldvolume field theories of D-branes in NW 6 using our general formalism and compare with the quantization of symmetric D-branes which was carried out in [38].
Geometry of the Twisted Heisenberg Algebra
In this section we will recall the algebraic definition [61] of the six-dimensional gravitational wave NW 6 of Cahen-Wallach type and describe the manner in which its geometry will be quantized in the subsequent sections.
Definitions
The spacetime NW 6 is defined as the group manifold of the universal central extension of the subgroup S := SO(2) ⋉ ℝ^4 of the four-dimensional euclidean group ISO(4) = SO(4) ⋉ ℝ^4. The corresponding simply connected group N is homeomorphic to six-dimensional Minkowski space ℝ^{1,5}. Its non-semisimple Lie algebra n is generated by elements J, T and P_i^±, i = 1, 2, obeying the non-vanishing commutation relations (2.1), with u, v, θ ∈ ℝ and a = (a_1, a_2) ∈ ℂ^2. In these global coordinates, the metric (2.3) reads as in (2.5). The metric (2.5) assumes the standard form of the plane wave metric of a conformally flat, indecomposable Cahen-Wallach lorentzian symmetric spacetime CW 6 in six dimensions [14] upon introduction of Brinkman coordinates [13] (x^+, x^-, z) defined by rotating the transverse space at a Larmor frequency as u = x^+, v = x^- and a = e^{iθx^+/2} z. In these coordinates the metric assumes the stationary form (2.6), revealing the pp-wave nature of the geometry. Note that on the null hyperplanes of constant u = x^+, the geometry becomes that of flat four-dimensional euclidean space ℝ^4. This is the geometry appropriate to the Heisenberg subgroup of N, and is what is expected in the Moyal limit when the effects of the extra generator J are turned off.
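For orientation, and up to possible sign and normalization conventions, the stationary Brinkman-form metric (2.6) implied by the rotation a = e^{iθx^+/2} z should take the standard Cahen-Wallach pp-wave shape

ds^2 = 2 dx^+ dx^- - (θ^2/4) |z|^2 (dx^+)^2 + |dz|^2 ,

which indeed reduces to the flat transverse geometry on the null hyperplanes of constant x^+ noted above.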
The spacetime NW 6 is further supported by a Neveu-Schwarz two-form field B of constant field strength defined to be non-vanishing only on vectors tangent to the conjugacy class containing g ∈ N [3]. It is the presence of this B-field that induces time-dependent noncommutativity of the string background in the presence of D-branes. Because its flux is constant, the noncommutative dynamics in certain kinematical regimes on this space can still be formulated exactly, just like on other symmetric curved noncommutative spaces (See [57] for a review of these constructions in the case of compact group manifolds).
Quantization
We will now begin working our way towards describing how the worldvolumes of D-branes in the spacetime NW 6 are deformed by the non-trivial B-field background. The Seiberg-Witten bi-vector [58] induced by the Neveu-Schwarz background (2.7) and the pp-wave metric G given by (2.6) is Let us introduce the one-form on the null hypersurfaces of constant x − = x − 0 , and compute the corresponding two-form gauge transformation of the B-field in (2.7) to get The Seiberg-Witten bi-vector in this gauge is given by [38] where ∂ ± := ∂ ∂x ± and ∂ = (∂ 1 , ∂ 2 ) := ( ∂ ∂z 1 , ∂ ∂z 2 ). Since (2.11) is degenerate on the whole NW 6 spacetime, it does not define a symplectic structure. However, one easily checks that it does define a Poisson structure, i.e. Θ is a Poisson bi-vector [38]. In this gauge one can show that a consistent solution to the Born-Infeld equations of motion on a non-symmetric spacetime-filling D5-brane wrapping NW 6 has vanishing U(1) gauge field flux F = 0 [40].
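For reference, and assuming the standard conventions of [58] with 2πα' = 1 (an assumption made here for illustration), the open string metric and noncommutativity bi-vector determined by a closed string metric G and a Neveu-Schwarz field B take the form

G_open = G - B G^{-1} B ,    Θ = - (G + B)^{-1} B (G - B)^{-1} ,

the first of which is the expression for the open string metric quoted below.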
In particular, at the special value x − 0 = θ and with the rescaling z → 2/θ τ z, the corresponding open string metric [58] G open = G − B G −1 B becomes that of CW 6 in global coordinates (2.5) [38], while the non-vanishing Poisson brackets corresponding to (2.11) read for i, j = 1, 2. The Poisson algebra thereby coincides with the Lie algebra n in this case and the metric on the branes with the standard curved geometry of the pp-wave. In the semi-classical flat space limit θ → 0, the quantization of the brackets (2.12) thereby yields a noncommutative worldvolume geometry on D5-branes wrapping NW 6 which can be associated with a quantization of n (or more precisely of its dual n ∨ ). In this limit, the corresponding quantization of NW 6 is thus given by the associative Kontsevich star-product [47]. Henceforth, with a slight abuse of notation, we will denote the central coordinate τ as the plane wave time coordinate x + . Our semi-classical quantization will then be valid in the small time limit x + → 0.
Our starting point in describing the noncommutative geometry of NW 6 will therefore be at the algebraic level. We will consider the deformation quantization of the dual n ∨ to the Lie algebra n. Naively, one may think that the easiest way to carry this out is to compute star products on the pp-wave by taking the Penrose limits of the standard ones on S 3 and AdS 3 (or equivalently by contracting the standard quantizations of the Lie algebras su(2) and sl(2, Ê)).
However, some quick calculations show that the induced star-products obtained in this way are divergent in the infinite volume limit, and the reason why is simple. While the standard Inönü-Wigner contractions hold at the level of the Lie algebras [61], they need not necessarily map the corresponding universal enveloping algebras, on which the quantizations are performed. This is connected to the phenomenon that twisted conjugacy classes of branes are not necessarily related by the Penrose-Güven limit [38]. We must therefore resort to a more direct approach to quantizing the spacetime NW 6 .
For notational ease, we will write the algebra n in the generic form where (X a ) := θ (J, T, P i ± ) are the generators of n and the structure constants C c ab can be gleamed off from (2.1). The algebra (2.13) can be regarded as a formal deformation quantization of the Kirillov-Kostant Poisson bracket on n ∨ in the standard coadjoint orbit method. Let us identify n ∨ as the vector space Ê 6 with basis X ∨ a := X a , − : n → Ê dual to the X a . In the algebra of polynomial functions (n ∨ ) = (Ê 6 ), we may then identify the generators X a themselves with the coordinate functions for any x ∈ n ∨ with component x a in the X ∨ a direction. These functions generate the whole coordinate algebra and their Poisson bracket Θ is defined by Therefore, when viewed as functions on Ê 6 the Lie algebra generators have a Poisson bracket given by the Lie bracket, and their quantization is provided by (2.13) with deformation parameter θ. In the next section we will explore various aspects of this quantization and derive several (equivalent) star products on n ∨ .
Gutt Products
The formal completion of the space of polynomials (n ∨ ) is the algebra C ∞ (n ∨ ) of smooth functions on n ∨ . There is a natural way to construct a star-product on the cotangent bundle T * N ∼ = N × n ∨ , which naturally induces an associative product on C ∞ (n ∨ ). This induced product is called the Gutt product [36]. The Poisson bracket defined by (2.15) naturally extends to a Poisson structure Θ : where ∂ a := ∂ ∂xa . This coincides with the Seiberg-Witten bi-vector in the limits described in Section 2.2. The Gutt product constructs a quantization of this Poisson structure. It is equivalent to the Kontsevich star-product in this case [31], and by construction it keeps that part of the Kontsevich formula which is associative [60]. In general, within the present context, the Gutt and Kontsevich deformation quantizations are only identical for nilpotent Lie algebras [44].
The algebra (n ∨ ) of polynomial functions on the dual to the Lie algebra is naturally isomorphic to the symmetric tensor algebra S(n) of n. By the Poincaré-Birkhoff-Witt theorem, there is a natural isomorphism Ω : S(n) → U (n) with the universal enveloping algebra U (n) of n. Using the above identifications, this extends to a canonical isomorphism Ω : C ∞ Ê 6 −→ U (n) (3.2) defined by specifying an ordering for the elements of the basis of monomials for S(n), where U (n) denotes a formal completion of the complexified universal enveloping algebra U (n) := U (n)⊗ . Denoting this ordering by • • − • • , we may write this isomorphism symbolically as The original Gutt construction [36] takes the isomorphism Ω on S(n) to be symmetrization of monomials. In this case Ω(f ) is usually called the Weyl symbol of f ∈ C ∞ (Ê 6 ) and the symmetric ordering • • − • • of symbols Ω(f ) is called Weyl ordering. In the following we shall work with three natural orderings appropriate to the algebra n.
The isomorphism (3.2) can be used to transport the algebraic structure on the universal enveloping algebra U (n) of n to the algebra of smooth functions on n ∨ ∼ = Ê 6 and give the star-product defined by The product on the right-hand side of the formula (3.4) is taken in U (n), and it follows that ⋆ defines an associative, noncommutative product. Moreover, it represents a deformation quantization of the Kirillov-Kostant Poisson structure on n ∨ , in the sense that where (1) (n ∨ ) is the subspace of homogeneous polynomials of degree 1 on n ∨ . In particular, the Lie algebra relations (2.13) are reproduced by star-commutators of the coordinate functions as in accordance with the Poisson brackets (2.12) and the definition (2.15).
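In its standard form, which is a general fact about the Gutt construction rather than anything specific to this background, the star-product transported through the isomorphism Ω reads

f ⋆ g := Ω^{-1}( Ω(f) Ω(g) ) ,    f, g ∈ C^∞(n^∨) ,

with the product on the right-hand side taken in the completed universal enveloping algebra, as stated above.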
Let us now describe how to write the star-product (3.4) explicitly in terms of a bi-differential operatorD : [44]. Using the Kirillov-Kostant Poisson structure as before, we identify the generators of n as coordinates on n ∨ . This establishes, for small s ∈ Ê, a one-to-one correspondence between group elements e s X , X ∈ n and functions e s x on n ∨ . Pulling back the group multiplication of elements e s X ∈ N via this correspondence induces a bi-differential operatorD acting on the functions e s x . Since these functions separate the points on n ∨ , this extends to an operator on the whole of C ∞ (n ∨ ).
To apply this construction explicitly, we use the following trick [49,6] which will also prove useful for later considerations. By restricting to an appropriate Schwartz subspace of functions f ∈ C ∞ (Ê 6 ), we may use a Fourier representation (3.7) This establishes a correspondence between (Schwartz) functions on n ∨ and elements of the complexified group N := N ⊗ . The products of symbols Ω(f ) may be computed using (3.3), and the star-product (3.4) can be represented in terms of a product of group elements in N as Using the Baker-Campbell-Hausdorff formula, to be discussed below, we may write for some function D = (D a ) : Ê 6 × Ê 6 → Ê 6 . This enables us to rewrite the star-product (3.8) in terms of a bi-differential operator f ⋆ g :=D(f, g) given explicitly by with ∂ := (∂ a ). In particular, the star-products of the coordinate functions themselves may be computed from the formula Finally, let us describe how to explicitly compute the functions D a (k, q) in (3.9). For this, we consider the Dynkin form of the Baker-Campbell-Hausdorff formula which is given for X, Y ∈ n by e X e Y = e H(X:Y) , (3.12) where H(X : Y) = n≥1 H n (X : Y) ∈ n is generically an infinite series whose terms may be calculated through the recurrence relation (3.14) The first few terms of the formula (3.12) may be written explicitly as Terms in the series grow increasingly complicated due to the sum over partitions in (3.13), and in general there is no closed symbolic form, as in the case of the Moyal product based on the ordinary Heisenberg algebra, for the functions D a (k, q) appearing in (3.9). However, at least for certain ordering prescriptions, the solvability of the Lie algebra n enables one to find explicit expressions for the star-product (3.10) in this fashion. We will now proceed to construct three such products.
Time Ordering
The simplest Gutt product is obtained by choosing a "time ordering" prescription in (3.3) whereby all factors of the time translation generator J occur to the far right in any monomial in U (n). It coincides precisely with the global coordinatization (2.4) of the Cahen-Wallach spacetime, and written on elements of the complexified group N it is defined by where we have denoted k := (j, t, p ± ) with j, t ∈ Ê and p ± = p ∓ = (p ± 1 , p ± 2 ) ∈ 2 . To calculate the corresponding star-product * , we have to compute the group products * * * * e i k a Xa * * · * * e i k ′ a Xa * * * * = * * e i (p + The simplest way to compute these products is to realize the six-dimensional Lie algebra n as a central extension of the subalgebra s = so(2) ⋉ Ê 4 of the four-dimensional euclidean algebra iso(4) = so(4) ⋉ Ê 4 [61,35]. Regarding Ê 4 as 2 (with respect to a chosen complex structure), for generic θ = 0 the generators of n act on w ∈ 2 according to the affine transformations e i j J · w = e −θ j w and e i (p + where the formula (3.19) displays the semi-direct product nature of the euclidean group, while (3.20) displays the group cocycle of the projective representation of the subgroup S of ISO(4), arising from the central extension, which makes the translation algebra noncommutative and is computed from the Baker-Campbell-Hausdorff formula.
Using (3.18)-(3.20) we may now compute the products (3.17) and one finds * * * * e i k a Xa * * · * * e i k ′ a Xa * * (3.21) From (3.11) we may compute the star-products between the coordinate functions on n ∨ and easily verify the commutation relations of the algebra n, with a = 1, . . . , 6 and i = 1, 2. From (3.9,3.10) we find the star-product * of generic functions f, g ∈ C ∞ (n ∨ ) given by where µ(f ⊗ g) = f g is the pointwise product. To second order in the deformation parameter θ we obtain (3.24)
Symmetric Time Ordering
Our next Gutt product is obtained by taking a "symmetric time ordering" whereby any monomial in U (n) is the symmetric sum over the two time orderings obtained by placing J to the far right and to the far left. This ordering is induced by the group contraction of U(1) × SU(2) onto the Nappi-Witten group N 0 [27], and it thereby induces the coordinatization of NW 4 that is obtained from the Penrose-Güven limit of the spacetime S 1,0 × S 3 , i.e. it coincides with the Brinkman coordinatization of the Cahen-Wallach spacetime. On elements of N it is defined by we can again easily compute the required group products to get (3.26) With the same conventions as above, from (3.11) we may now compute the star-products • between the coordinate functions on n ∨ and again verify the commutation relations of the algebra n, From (3.9,3.10) we find for generic functions the formula To second order in θ we obtain
Weyl Ordering
The original Gutt product [36] is based on the "Weyl ordering" prescription whereby all monomials in U (n) are completely symmetrized over all elements of n. On N it is defined by While this ordering is usually thought of as the "canonical" ordering for the construction of starproducts, in our case it turns out to be drastically more complicated than the other orderings. Nevertheless, we shall present here its explicit construction for the sake of completeness and for later comparisons.
It is an extremely arduous task to compute products of the group elements (3.30) directly from the Baker-Campbell-Hausdorff formula (3.13). Instead, we shall construct an isomorphism G : U (n) → U (n) which sends the time-ordered product defined by (3.17) into the Weylordered product defined by (3.30), i.e.
Then by defining G Ω := Ω −1 * • G • Ω ⋆ , the star-product ⋆ associated with the Weyl ordering prescription (3.30) may be computed as represented as the invertible differential operator This relation just reflects the fact that the time-ordered and Weyl-ordered star-products, although not identical, simply represent different ordering prescriptions for the same algebra and are therefore (cohomologically) equivalent. We will elucidate this property more thoroughly in Section 4. Thus once the map (3.33) is known, the Weyl ordered star-product ⋆ can be computed in terms of the time-ordered star-product * of Section 3.1.
The functions G a (k) appearing in (3.33) are readily calculable through the Baker-Campbell-Hausdorff formula. It is clear from (3.17) that the coefficient of the time translation generator J ∈ n is simply Since B 0 = 1, B 1 = − 1 2 and B 2k+1 = 0 ∀k ≥ 1, from (3.14) we thereby find where we have introduced the function obeying the identities In a completely analogous way one finds the coefficient of the P i − term to be given by .
Finally, the non-vanishing contributions to the central element T ∈ n are given by By differentiating (3.36) and (3.38) with respect to s = −θ j we arrive finally at where we have introduced the function From (3.34) we may now write down the explicit form of the differential operator implementing the equivalence between the star-products * and ⋆ as From (3.21) and (3.33) we may readily compute the products of Weyl symbols with the result From (3.11) we may now compute the star-products ⋆ between the coordinate functions on n ∨ to be These products are identical to those of the symmetric time ordering prescription (3.27). After some computation, from (3.9,3.10) we find for generic functions f, g ∈ C ∞ (n ∨ ) the formula To second order in the deformation parameter θ we obtain Although extremely cumbersome in form, the Weyl-ordered product has several desirable features over the simpler time-ordered products. For instance, the Schwartz subspace of C ∞ (n ∨ ) is closed under the Weyl ordered product, whereas the other products are only formal in this regard and do not define strict deformation quantizations. It is also hermitean owing to the property (3.50) Moreover, while the n-covariance condition (3.5) holds for all of our star-products, the Weyl product is in fact n-invariant, because for any x ∈ (1) (n ∨ ) one has the stronger compatibility condition with the action of the Lie algebra n. In the next section we shall see that the Weyl-ordered starproduct is, in a certain sense, the generator of all other star-products making it the "universal" product for the quantization of the spacetime NW 6 .
Weyl Systems
In this section we will use the notion of a generalized Weyl system introduced in [1] to describe some more formal aspects of the star-products that we have constructed and to analyse the interplay between them. This generalizes the standard Weyl systems [62] which may be used to provide a purely operator theoretic characterization of the Moyal product, associated to the (untwisted) Heisenberg algebra. In that case, it can be regarded as a projective representation of the translation group in an even-dimensional real vector space. However, for the twisted Heisenberg algebra such a representation is not possible, since by definition the appropriate arena should be a central extension of the non-abelian subgroup S of the full euclidean group ISO(4). This requires a generalization of the standard notion which we will now describe and use to obtain a very useful characterization of the noncommutative geometry induced by the algebra n.
Let Î be a five-dimensional real vector space. In a suitable (canonical) basis, vectors k ∈ Î ∼ = Ê × 2 will be denoted (with respect to a chosen complex structure) as with j ∈ Ê and p ± = p ∓ ∈ 2 . As the notation suggests, we regard Î as the "momentum space" of the dual n ∨ . Note that we do not explicitly incorporate the component corresponding to the central element T, as it will instead appear through the appropriate projective representation that we will construct, similarly to the Moyal case. As an abelian group, Î ∼ = Ê 5 with the usual addition + and identity 0. Corresponding to a deformation parameter θ ∈ Ê, we deform this abelian Lie group structure to a generically non-abelian one. The deformed composition law is denoted ⊞. It is associative and in general will depend on θ. The identity element with respect to ⊞ is still defined to be 0, and the inverse of any element k ∈ Î is denoted k, so that Being a deformation of the underlying abelian group structure on Î means that the composition of any two vectors k, q ∈ Î has a formal small θ expansion of the form from which it follows that In other words, rather than introducing star-products that deform the pointwise multiplication of functions on n ∨ , we now deform the "momentum space" of n ∨ to a non-abelian Lie group.
We will see below that the five-dimensional group (Î, ⊞) is isomorphic to the original subgroup S ⊂ ISO(4), and that the two notions of quantization are in fact the same.
Given such a group, we now define a (generalized) Weyl system for the algebra n as a quadruple (Î, ⊞, W, ω), where the map is a projective representation of the group (Î, ⊞) with projective phase ω : Î × Î → . This means that for every pair of elements k, q ∈ Î one has the composition rule in the completed, complexified universal enveloping algebra of n. The associativity of ⊞ and the relation (4.6) imply that the subalgebra W(Î) ⊂ U (n) is associative if and only if for all vectors k, q, p ∈ Î. This condition means that ω defines a one-cocycle in the group cohomology of (Î, ⊞). It is automatically satisfied if ω is a bilinear form with respect to ⊞. We will in addition require that ω(k, q) = O(θ) ∀k, q ∈ Î for consistency with (4.3). The identity element of W(Î) is W(0) while the inverse of W(k) is given by The standard Weyl system on Ê 2n takes ⊞ to be ordinary addition and ω to be the Darboux symplectic two-form, so that W(Ê 2n ) is a projective representation of the translation group, as is appropriate to the Moyal product.
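For orientation (a standard reminder added here, with one conventional choice of signs), in the flat case the Weyl system is generated by exponentials of Heisenberg coordinate operators, and the Baker-Campbell-Hausdorff formula immediately produces the abelian composition law and symplectic cocycle just mentioned:

W(k) = e^{i k_a x̂^a} with [x̂^a, x̂^b] = i θ^{ab} ,  W(k) W(q) = e^{−(i/2) θ^{ab} k_a q_b} W(k + q) ,

so that k ⊞ q = k + q and ω(k, q) = −(1/2) θ^{ab} k_a q_b, which reproduces the Moyal product upon applying the construction described next.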
Given a Weyl system defined as above, we can now introduce another isomorphism defined by the symbol where as beforef denotes the Fourier transform of f ∈ C ∞ (Ê 5 ). This definition implies that and that we may introduce a * -involution † on both algebras C ∞ (Ê 5 ) and W(Î) by the formula (4.12) The compatibility condition with the product in U (n) imposes further constraints on the group composition law ⊞ and cocycle ω [1]. From (4.6) we may thereby define a †-hermitean star-product of f, g ∈ C ∞ (Ê 5 ) by the formula (4.14) and in this way we have constructed a quantization of the algebra n solely from the formal notion of a Weyl system. The associativity of ⋆ follows from associativity of ⊞. We may also rewrite the star-product (4.14) in terms of a bi-differential operator as This deformation is completely characterized in terms of the new algebraic structure and its projective representation provided by the Weyl system. It is straightforward to show that the Lie algebra of (Î, ⊞) coincides precisely with the original subalgebra s ⊂ iso(4), while the cocycle ω generates the central extension of s to n in the usual way. From (4.14) one may compute the star-products of coordinate functions on Ê 5 as (4.16) The corresponding star-commutator may thereby be written as where the relation gives the structure constants of the Lie algebra defined by the Lie group (Î, ⊞), while the cocycle term gives the usual form of a central extension of this Lie algebra. Demanding that this yield a deformation quantization of the Kirillov-Kostant Poisson structure on n ∨ requires that C c ab coincide with the structure constants of the subalgebra s ⊂ iso(4) of n, and also that ξ p − ,p + = −ξ p + ,p − = 2t be the only non-vanishing components of the central extension.
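Schematically (suppressing Fourier normalisation factors, which depend on conventions not reproduced in this excerpt), the requirement that the isomorphism intertwine the products, Π(f ⋆ g) = Π(f) · Π(g), together with the composition rule (4.6) gives the momentum-space form of the induced star-product,

(f ⋆ g)(x) ∝ ∫ d^5 k ∫ d^5 q  f̃(k) g̃(q) e^{i ω(k,q)} e^{i (k ⊞ q)_a x^a} ,

which makes explicit that the entire deformation is encoded in the pair (⊞, ω).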
It is thus possible to define a broad class of deformation quantizations of n ∨ solely in terms of an abstract Weyl system (Î, ⊞, W, ω), without explicit realization of the operators W(k). In the remainder of this section we will set Π = Ω above and describe the Weyl systems underpinning the various products that we constructed previously. This entails identifying the appropriate maps (4.5), which enables the calculation of the projective representations (4.6) and hence explicit realizations of the group composition laws ⊞ in the various instances. This unveils a purely algebraic description of the star-products which will be particularly useful for our later constructions, and enables one to make the equivalences between these products explicit.
Time Ordering
Setting t = t ′ = 0 in (3.21), we find the "time-ordered" non-abelian group composition law * for any two elements of the form (4.1) to be given by (4.20) From (4.20) it is straightforward to compute the inverse k of a group element (4.1), satisfying (4.2), to be (4.21) The group cocycle is given by and it defines the canonical symplectic structure on the j = constant subspaces 2 ⊂ Î. Note that in this representation, the central coordinate function x + is not written explicitly and is simply understood as the unit element of (Ê 5 ), as is conventional in the case of the Moyal product. For k ∈ Î and X a ∈ s the projective representation (4.6) is generated by the timeordered group elements W * (k) = * * e i k a Xa * *
Symmetric Time Ordering
In a completely analogous manner, inspection of (3.26) reveals the "symmetric time-ordered" non-abelian group composition law • defined by for which the inverse k of a group element (4.1) is simply given by The group cocycle is
Weyl Ordering
Finally, we construct the Weyl system (Î, ⋆ , W ⋆ , ω ⋆ ) associated with the Weyl-ordered starproduct of Section 3.3. Starting from (3.45) we introduce the non-abelian group composition law ⋆ by from which we may again straightforwardly compute the inverse k of a group element (4.1) simply as When combined with the definition (4.12), one has f † = f ∀f ∈ C ∞ (Ê 5 ) and this explains the hermitean property (3.50) of the Weyl-ordered star-product ⋆. This is also true of the product •, whereas * is only hermitean with respect to the modified involution † defined by (4.12) and (4.21). The group cocycle is given by In contrast to the other cocycles, this does not induce any symplectic structure, at least not in the manner described earlier. The corresponding projective representation (4.6) is generated by the completely symmetrized group elements with k ∈ Î and X a ∈ s.
The Weyl system (Î, ⋆ , W ⋆ , ω ⋆ ) can be used to generate the other Weyl systems that we have found [1]. From (3.33) and (3.45) one has the identity which implies that the time-ordered star-product * can be expressed by means of a choice of different Weyl system generating the product ⋆.
(4.33)
This explicit relationship between the Weyl systems for the star-products * and ⋆ is another formulation of the statement of their cohomological equivalence, as established by other means in Section 3.3. Similarly, the symmetric time-ordered star-product • can be expressed in terms of ⋆ through the identity which implies the relationship between the corresponding Weyl systems. This shows explicitly that the star-products • and ⋆ are also equivalent.
Twisted Isometries
We will now start working our way towards the explicit construction of the geometric quantities required to define field theories on the noncommutative plane wave NW 6 . We will begin with a systematic construction of derivative operators on the present noncommutative geometry, which will be used later on to write down kinetic terms for scalar field actions. In this section we will study some of the basic spacetime symmetries of the star-products that we constructed in Section 3, as they are directly related to the actions of derivations on the noncommutative algebras of functions.
Classically, the isometry group of the gravitational wave NW 6 is the group N L × N R induced by the left and right regular actions of the Lie group N on itself. The corresponding Killing vectors live in the 11-dimensional Lie algebra g := n L ⊕ n R (The left and right actions generated by the central element T coincide). This isometry group contains an SO(4) subgroup acting by rotations in the transverse space z ∈ 2 ∼ = Ê 4 , which is broken to U(2) by the Neveu-Schwarz background (2.7). This symmetry can be restored upon quantization by instead letting the generators of g act in a twisted fashion [21,22,64], as we now proceed to describe.
The action of an element ∇ ∈ U (g) as an algebra automorphism C ∞ (n ∨ ) → C ∞ (n ∨ ) will be denoted f → ∇ ⊲ f . The universal enveloping algebra U (g) is given the structure of a cocommutative bialgebra by introducing the "trivial" coproduct ∆ : U (g) → U (g)⊗U (g) defined by the homomorphism which generates the action of U (g) on the tensor product C ∞ (n ∨ ) ⊗ C ∞ (n ∨ ). Since ∇ is an automorphism of C ∞ (n ∨ ), the action of the coproduct is compatible with the pointwise (commutative) product of functions µ : For example, the standard action of spacetime translations is given by for which (5.2) becomes the classical symmetric Leibniz rule.
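For reference (an added reminder, presumably coinciding with the suppressed formulas (5.1)-(5.3) up to notation), on a generator X ∈ g the trivial coproduct and the resulting compatibility with the pointwise product read

Δ(X) = X ⊗ 1 + 1 ⊗ X ,  ∇ ⊲ (f g) = μ( Δ(∇) ⊲ (f ⊗ g) ) ,

so that for ∇ = ∂_a one recovers the ordinary derivation property ∂_a(f g) = (∂_a f) g + f (∂_a g).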
Let us now pass to a noncommutative deformation of the algebra of functions on NW 6 via a quantization map Ω : C ∞ (n ∨ ) → U (n) corresponding to a specific star-product ⋆ on C ∞ (n ∨ ) (or equivalently a specific operator ordering in U (n)). This isomorphism can be used to induce an action of U (g) on the algebra U (n) through which defines a set of quantized operators ∇ ⋆ = ∇ + O(θ) : C ∞ (n ∨ ) → C ∞ (n ∨ ). However, the bialgebra U (g) will no longer generate automorphisms with respect to the noncommutative starproduct on C ∞ (n ∨ ). It will only do so if its coproduct can be deformed to a non-cocommutative one ∆ ⋆ = ∆ + O(θ) such that the covariance condition is satisfied, where µ ⋆ (f ⊗g) := f ⋆g. This deformation is constructed by writing the star-product f ⋆ g =D(f, g) in terms of a bi-differential operator as in (3.10) or (4.15) to define an invertible abelian Drinfeld twist element [55]F ⋆ ∈ U (g) ⊗ U (g) through It obeys the cocycle condition and defines the twisted coproduct through This new coproduct obeys the requisite coassociativity The important property of the twist elementF ⋆ is that it modifies only the coproduct on the bialgebra U (g), while leaving the original product structure (inherited from the Lie algebra g = n L ⊕ n R ) unchanged.
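In the conventions most commonly used for abelian Drinfeld twists (and presumably coinciding with the suppressed formulas (5.7) and (5.8)), the cocycle condition and the twisted coproduct take the form

(F_⋆ ⊗ 1)(Δ ⊗ id)(F_⋆) = (1 ⊗ F_⋆)(id ⊗ Δ)(F_⋆) ,  Δ_⋆(∇) = F_⋆ Δ(∇) F_⋆^{−1} ,

the first condition guaranteeing the coassociativity of the second.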
As an example, let us illustrate how to compute the twisting of the quantized translation generators by the noncommutative geometry of NW 6 . For this, we introduce a Weyl system (Î, ⊞, W, ω) corresponding to the chosen star-product ⋆. With the same notations as in the previous section, for a = 1, . . . , 5 we may use (4.6), (4.12) with Π = Ω, and (5.4) with ∇ = ∂ a to compute where we have assumed that the group composition law of the Weyl system has an expansion of . From the covariance condition (5.5) it then follows that the twisted coproduct assumes a Sweedler form Analogously, if we assume that the group cocycle of the Weyl system admits an expansion of the form ω(k, k ′ ) := i w i (1) (k) w i (2) (k ′ ), then a similar calculation gives the twisted coproduct of the quantized plane wave time derivative as Note that now the corresponding Leibniz rules (5.5) are no longer the usual ones associated with the product ⋆ but are the deformed, generically non-symmetric ones given by arising from the twisting of the coproduct. Thus these derivatives do not define derivations of the noncommutative algebra of functions, but rather implement the twisting of isometries of flat space appropriate to the plane wave geometry [45,24,10,38].
In the language of quantum groups [23], the twisted isometry group of the spacetime NW 6 coincides with the quantum double of the cocommutative Hopf algebra U (n). The antipode S ⋆ : U (g) → U (g) of the given non-cocommutative Hopf algebra structure on the bialgebra U (g) gives the dual action of the isometries of the noncommutative plane wave and provides the analog of inversion of isometry group elements. This analogy is made precise by computing S ⋆ from the group inverses k of elements k ∈ Î of the corresponding Weyl system. Symbolically, one has S ⋆ (∂ ⋆ ) = ∂ ⋆ . In particular, if k = −k (as in the case of our symmetric star-products) then S ⋆ (∂ a ⋆ ) = −∂ a ⋆ and the action of the antipode is trivial. In all three instances the counit ε ⋆ : U (g) → describes the action on the trivial representation as ε ⋆ (∂ a ⋆ ) = 0, and it obeys the compatibility condition with the Drinfeld twist. In what follows we will only require the underlying bialgebra structure of U (g). The compatibility condition (5.5) means that the action of U (g) on C ∞ (n ∨ ) defines quantum isometries of the noncommutative pp-wave, in that the star-product is an intertwiner and the noncommutative algebra of functions is covariant with respect to the action of the quantum group.
The generic non-triviality of the twisted coproducts (5.10) and (5.11) is consistent with and extends the fact that generic translations are not classically isometries of the plane wave geometry, but rather only appropriate twisted versions are [45,24,10,38]. Similar computations can also be carried through for the remaining five isometry generators of g and correspond to the right-acting counterparts of the derivatives above, giving the full action of the noncommutative isometry group on NW 6 . We shall not display these formulas here. In the next section we will explicitly construct the quantized derivative operators ∂ a ⋆ and ∂ ⋆ + above. We now proceed to list the coproducts corresponding to our three star-products.
Time Ordering
The Drinfeld twistF * for the time-ordered star-product is the inverse of the exponential operator appearing in (3.23). Following the general prescription given above, from the group composition law (4.20) of the corresponding Weyl system we deduce the time-ordered coproducts while from the group cocycle (4.22) we obtain The corresponding Leibniz rules read
Symmetric Time Ordering
The Drinfeld twistF • associated to the symmetric time-ordered star-product is given by the inverse of the exponential operator in (3.28). From the group composition law (4.24) of the corresponding Weyl system we deduce the symmetric time-ordered coproducts while from the group cocycle (4.26) we find The corresponding Leibniz rules are given by
Weyl Ordering
Finally, for the Weyl-ordered star-product (3.47) we read off the twist elementF ⋆ in the standard way, and use the associated group composition law (4.28) to write down the coproducts . (5.20) The remaining coproduct may be determined from the cocycle (4.30) as In (5.20) and (5.21) the functionals of the derivative operator i ∂ ⋆ − ⊗1+1⊗ i ∂ ⋆ − are understood as usual in terms of the power series expansions given in Section 3.3. This leads to the corresponding Leibniz rules Note that a common feature to all three deformations is that the coproduct of the quantization of the light-cone position translation generator ∂ − coincides with the trivial one (5.1), and thereby yields the standard symmetric Leibniz rule with respect to the pertinent star-product. This owes to the fact that the action of ∂ − on the spacetime NW 6 corresponds to the commutative action of the central Lie algebra generator T, whose left and right actions coincide. In the next section we shall see that the action of the quantized translations in x − on C ∞ (n ∨ ) coincides with the standard commutative action (5.3). This is consistent with the fact that all frames of reference for the spacetime NW 6 possess an x − -translational symmetry, while translational symmetries in the other coordinates depend crucially on the frame and generally need to be twisted in order to generate an isometry of NW 6 . Notice also that ordinary time translation invariance is always broken by the time-dependent Neveu-Schwarz background (2.7).
Derivative Operators
In this section we will systematically construct a set of quantized derivative operators ∂ a ⋆ , a = 1, . . . , 6 satisfying the conditions of the previous section. In general, there is no unique way to build up such derivatives. To this end, we will impose some weak conditions, namely that the Substituting these into (6.2) using (6.4) then shows that the actions of the * -derivatives simply coincide with the canonical actions of the translation generators on C ∞ (n ∨ ), so that Thus the time-ordered noncommutative geometry of NW 6 is invariant under ordinary translations of the spacetime in all coordinate directions, with the generators obeying the twisted Leibniz rules (5.16).
From (6.2), (6.4) and the derivative rule 11) it then follows that the actions of the ⋆-derivatives on C ∞ (n ∨ ) are given by
Traces
The final ingredient required to construct noncommutative field theory action functionals is a definition of integration. At the algebraic level, we define an integral to be a trace on the algebra U (n) , i.e. a map − : U (n) → which is linear, for all f, g ∈ C ∞ (n ∨ ) and c 1 , c 2 ∈ , and which is cyclic, − Ω(f ) · Ω(g) = − Ω(g) · Ω(f ) .
(7.2)
We define the integral in the star-product formalism using the usual definitions for the integration of commuting Schwartz functions in C ∞ (Ê 6 ). Then the linearity property (7.1) is automatically satisfied. To satisfy the cyclicity requirement (7.2), we introduce [17,6,2,30,34] a measure κ on Ê 6 which deforms the flat space volume element dx and define − Ω(f ) := The measure κ is chosen in order to achieve the property (7.2), so that Such a measure always exists [17,30,34] and its inclusion in the present context is natural for the curved spacetime NW 6 which we are considering here. It is important to note that, for the star-products that we use, a measure which satisfies (7.4) gives the integral the additional property providing an explicit realization of the Connes-Flato-Sternheimer conjecture [34].
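The additional property alluded to here in connection with the Connes-Flato-Sternheimer conjecture is presumably the closure of the star-product with respect to the measure κ (the displayed formula (7.5) is not reproduced in this excerpt), i.e.

∫ dx κ(x) (f ⋆ g)(x) = ∫ dx κ(x) f(x) g(x) ,

so that quadratic terms in action functionals can be evaluated with the ordinary pointwise product.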
Since the coordinate functions x a generate the noncommutative algebra, the cyclicity constraint (7.4) is equivalent to the star-commutator condition which must hold for arbitrary functions f ∈ C ∞ (Ê 6 ) (for which the integral makes sense) and for all n ∈ AE, a = 1, . . . , 6. Expanding the star-commutator bracket using its derivation property brings (7.6) to the form We may thus insert the explicit form of [x a , f ] ⋆ for generic f and use the ordinary integration by parts property for Schwartz functions f, g, h ∈ C ∞ (Ê 6 ). This will lead to a number of constraints on the measure κ.
The trace (7.3) can also be used to define an inner product (−, −) : Note that this is different from the inner product introduced in Section 2.1. When we come to deal with the variational principle in the next section, we shall require that our star-derivative operators ∂ a ⋆ be anti-hermitean with respect to the inner product (7.9), i.e. (f, ∂ a ⋆ ⊲ g) = −(∂ a ⋆ ⊲ f, g), or equivalently This allows for a generalized integration by parts property [30] for our noncommutative integral. As always, we will now go through our list of star-products to explore the properties of the integral in each case. We will find that the measure κ is not uniquely determined by the above criteria and there is a large flexibility in the choices that can be made. We will also find that the derivatives of the previous section must be generically modified by a κ-dependent shift in order to satisfy (7.10).
Time Ordering
Using (6.5) along with the analogous * -products f * x a we arrive at the * -commutators When inserted into (7.7), after integration by parts and application of the derivative rule (6.8) these expressions imply constraints on the corresponding measure κ * given by It is straightforward to see that the equations (7.12) imply that the measure must be independent of both the light-cone position and transverse coordinates, so that However, the derivative ∂ * + in (6.6) does not satisfy the anti-hermiticity requirement (7.10). This can be remedied by translating it by a logarithmic derivative of the measure κ * and defining the modified * -derivative 14) The remaining * -derivatives in (6.6) are unaltered. While this redefinition has no adverse effects on the commutation relations (6.4), the action ∂ * + ⊲ f contains an additional linear term in f even if the function f is independent of the time coordinate x + .
Symmetric Time Ordering
Using (6.7) along with the corresponding •-products f • x a we arrive at the •-commutators Substituting these into (7.7) and integrating by parts, we arrive at constraints on the measure κ • given by which can be reduced to the conditions Now the derivative operators ∂ • + , ∂ i • and ∂ i • all violate the requirement (7.10). Introducing translates of ∂ i • and ∂ i • analogously to what we did in (7.14) is problematic. While such a shift does not alter the canonical commutation relations between the coordinates and derivatives, i.e. the algebraic properties of the differential operators, it does violate the •-commutator relationships (6.2) and (6.4) for generic functions f . Consistency between differential operator and function commutators would only be possible in this case by demanding that multiplication from the left follow a Leibniz-like rule for the translated part.
Thus in order to satisfy both sets of constraints, we are forced to further require that the measure κ • depend only on the plane wave time coordinate x + so that (7.17) truncates to The logarithmic translation of ∂ • + must still be applied in order to ensure that the time derivative is anti-hermitean with respect to the noncommutative inner product. This modifies its action to The actions of all other •-derivatives are as in (6.9). Again this shifting has no adverse effects on (6.4), but it carries the same warning as in the time ordered case regarding extra linear terms from the action ∂ • + ⊲ f .
Weyl Ordering
Finally, the Weyl ordered star-products (6.10) along with the corresponding f ⋆ x a products lead to the ⋆-commutators Substituting these commutation relations into (7.7), integrating by parts, and using the derivative rules (6.8) and (6.11) leads to the corresponding measure constraints Again these differential equations imply that the measure κ ⋆ depends only on the plane wave time coordinate x + so that Translating the derivative operator ∂ ⋆ + as before in order to satisfy (7.10) yields the modified derivative with the remaining ⋆-derivatives in (6.12) unchanged. Once again this produces no major alteration to (6.4) but does yield extra linear terms in the actions ∂ ⋆ + ⊲ f .
Field Theory on NW 6
We are now ready to apply the detailed constructions of the preceding sections to the analysis of noncommutative field theories on the plane wave NW 6 , regarded as the worldvolume of a non-symmetric D5-brane [48]. In this paper we will only study the simplest example of free scalar fields, leaving the detailed analysis of interacting field theories and higher spin (fermionic and gauge) fields for future work. The analysis of this section will set the stage for more detailed studies of noncommutative field theories in these settings, and will illustrate some of the generic features that one can expect.
Given a real scalar field ϕ ∈ C ∞ (n ∨ ) of mass m, we define an action functional using the integral (7.3) by where η ab is the invariant Minkowski metric tensor induced by the inner product (2.2) with the non-vanishing components η ± ∓ = 1 and η z i z j = 1 2 δ ij . The tildes on the derivatives in (8.1) indicate that the time component must be appropriately shifted as described in the previous section. Using the property (7.5) we may simplify the action to the form By using the integration by parts property (7.10) on Schwartz fields ϕ, we may easily compute the first order variation of the action (8.2) to be Applying the variational principle δS[ϕ] δϕ = 0 to (8.3) thereby leads to the noncommutative Klein-Gordon field equation and we have used ∂ − κ = 0. The second order ⋆-differential operator ⋆ should be regarded as a deformation of the covariant Laplace operator ⋆ 0 corresponding to the commutative plane wave geometry of NW 6 . This Laplacian coincides with the quadratic Casimir element of the universal enveloping algebra U (n), expressed in terms of left or right isometry generators for the action of the isometry group N L × N R on NW 6 [45,24,38].
However, in the manner which we have constructed things, this is not the case. Recall that the approximation in which our quantization of the geometry of NW 6 holds is the small time limit x + → 0 in which the plane wave approaches flat six-dimensional Minkowski space 1,5 . To incorporate the effects of the curved geometry of NW 6 into our formalism, we have to replace the derivative operators ∂ a ⋆ appearing in (8.1) with appropriate curved space analogs δ a ⋆ [6,42]. Recall that the derivative operators ∂ a ⋆ are not derivations of the star-product ⋆, but instead obey the deformed Leibniz rules (5.12). The deformation arose from twisting the co-action of the bialgebra U (g) so that it generated automorphisms of the noncommutative algebra of functions, i.e. isometries of the noncommutative plane wave. The basic idea is to now "absorb" these twistings into derivations δ a ⋆ obeying the usual Leibniz rule These derivations generically act on C ∞ (n ∨ ) as the noncommutative ⋆-polydifferential operators with ξ a a 1 ···an ∈ C ∞ (n ∨ ). Unlike the derivatives ∂ a ⋆ , these derivations will no longer star-commute among each other. There is a one-to-one correspondence [47] between such derivations δ ⋆ a and Poisson vector fields for all f, g ∈ C ∞ (n ∨ ). To leading order one has δ a ⋆ ⊲ f = E a b ⋆ (∂ b ⋆ ⊲ f ) + O(θ). By identifying the Lie algebra n with the tangent space to NW 6 , at this order the vector fields E a can be thought of as defining a natural local frame with flat metric η ab and a curved metric tensor on the noncommutative space NW 6 . However, for our star-products there are always higher order terms in (8.8) which spoil this interpretation. The noncommutative frame fields δ a ⋆ describe the quantum geometry of the plane wave NW 6 . In particular, the metric tensor G ⋆ will in general differ from the classical open string metric G open . While the operators δ a ⋆ always exist as a consequence of the Kontsevich formality map [47,6], computing them explicitly is a highly difficult problem. We will see some explicit examples below, as we now begin to tour through our three star-products. Throughout we shall take the natural choice of measure κ = | det G| = 1 2 , the constant Riemannian volume density of the NW 6 plane wave geometry.
Time Ordering
In the case of time ordering, we use (6.6) to compute * ⊲ ϕ = 2 ∂ + ∂ − + ∂ · ∂ ϕ (8.10) and thus the equation of motion coincides with that of a free scalar particle on flat Minkowski space 1,5 (Deviations from flat spacetime can only come about here by choosing a timedependent measure κ * ). This illustrates the point made above that the treatment of the present paper tackles only the semi-classical flat space limit of the spacetime NW 6 . The appropriate curved geometry for this ordering corresponds to the global coordinate system (2.5) in which the classical Laplace operator is given by * (8.11) so that the free wave equation ( * 0 − m 2 )ϕ = 0 is equivalent to the Schrödinger equation for a particle of charge p + (the momentum along the x − direction) in a constant magnetic field of strength θ. A global pseudo-orthonormal frame is provided by the commutative vector fields Determining the derivations δ a * corresponding to the commuting frame (8.12) on the quantum space is in general rather difficult. Evidently, from the coproduct structure (5.16) the action along the light-cone position is given by This is simply a consequence of the fact that translations along x − generate an automorphism of the noncommutative algebra of functions, i.e. an isometry of the noncommutative geometry. From the Hopf algebra coproduct (5.14) we have and consequently On the other hand, the remaining isometries involve intricate twistings between the light-cone and transverse space directions. For example, let us demonstrate how to unravel the coproduct rule for ∂ * + in (5.16) into the desired symmetric Leibniz rule (8.7) for δ * + . This can be achieved by exploiting the * -product identities along with the commutativity properties [∂ * − , z i ] * = [∂ * − , z i ] * = 0 for i = 1, 2 and for arbitrary functions f . Using in addition the modified Leibniz rules (5.16) along with the * -multiplication properties (6.5) we thereby find This action mimicks the form of the classical frame field E * + in (8.12).
Finally, for the transversal isometries, one can attempt to seek functions g i ∈ C ∞ (n ∨ ) such that g i * f = ( e − i θ ∂ − f ) * g i in order to absorb the light-cone translation in the Leibniz rule for ∂ i * in (5.16). This would mean that the x − translations are generated by inner automorphisms of the noncommutative algebra. If such functions exist, then the corresponding derivations are given by δ i * ⊲ f = g i * ∂ i * f (no sum over i) and similarly for δ i * . However, it is doubtful that such inner derivations exist and the transverse space frame fields are more likely to be given by higher-order * -polyvector fields. For example, using similar steps to those which led to (8.17), one can show that the actions define derivations of the * -product on NW 6 , and hence naturally determine elements of a noncommutative transverse frame.
The action of the corresponding noncommutative Laplacian η ab δ a * ⊲ (δ b * ⊲ ϕ) deforms the harmonic oscillator dynamics generated by (8.11) by non-local higher spatial derivative terms. These extra terms will have significant ramifications at large energies for motion in the transverse space. This could have profound physical effects in the interacting noncommutative quantum field theory. In particular, it may alter the UV/IR mixing story [51] in an interesting way. For time-dependent noncommutativity with standard tree-level propagators, UV/IR mixing becomes intertwined with violations of energy conservation in an intriguing way [7,56], and it would be interesting to see how our modified free field propagators affect this analysis. It would also be interesting to see if and how these modifications are related to the generic connection between wave propagation on homogeneous plane waves and the Lewis-Riesenfeld theory of time-dependent harmonic oscillators [10].
Symmetric Time Ordering
The analysis in the case of symmetric time ordering is very similar to that just performed, so we will be very brief and only highlight the essential changes. From (6.9) we find once again that the Laplacian (8.5) coincides with the flat space wave operator The relevant coordinate system in this case is given by the Brinkman metric (2.6) for which the classical Laplace operator reads A global pseudo-orthonormal frame in this case is provided by the vector fields The corresponding twisted derivations δ a • which symmetrize the Leibniz rules (5.19) can be constructed analogously to those of the time ordering case in Section 8.1 above.
Weyl Ordering
Finally, the case of Weyl ordering is particularly interesting because the effects of curvature are present even in the flat space limit. Using (6.12) we find the Laplacian which coincides with the flat space Laplacian only at θ = 0. To second order in the deformation parameter θ, the equation of motion (8.4) thereby yields a second order correction to the usual flat space Klein-Gordon equation given by Again we find that only the transverse space motion is altered by noncommutativity, but this time through a non-local dependence on the light-cone momentum p + yielding a drastic modification of the dispersion relation for free wave propagation in the noncommutative spacetime. This dependence is natural. The classical mass-shell condition for motion in the curved background is 2 p + p − + |4 θ p + λ| 2 = m 2 , where λ ∈ 2 represents the position and radius of the circular trajectories in the background magnetic field [24]. Thus the quantity 4 θ p + λ can be interpreted as the momentum for motion in the transverse space. The operator (8.22) incorporates the appropriate noncommutative deformation of this motion. It illustrates the point that the fundamental quanta governing the interactions in the present class of noncommutative quantum field theories are not likely to involve the particle-like dipoles of the flat space cases [59,9], but more likely string-like objects owing to the nonvanishing H-flux in (2.7). These open string quanta become polarized as dipoles experiencing a net force due to their couplings to the nonuniform B-field. It is tempting to speculate that, in contrast to the other orderings, the Weyl ordering naturally incorporates the new vacua corresponding to long string configurations which are due entirely to the time-dependent nature of the background Neveu-Schwarz field [8].
While the Weyl ordered star-product is natural from an algebraic point of view, it does not correspond to a natural coordinate system for the plane wave NW 6 due to the complicated form of the group product rule (3.45) in this case. In particular, the frame fields in this instance will be quite complicated. Computing the corresponding twisted derivations δ a ⋆ directly would again be extremely cumbersome, but luckily we can exploit the equivalence between the star-products ⋆ and * derived in Section 3.3. Given the derivations δ a * constructed in Section 8.1 above, we may use the differential operator (3.44) which implements the equivalence (3.32) to define These noncommutative frame fields will lead to the appropriate curved space extension of the Laplace operator in (8.22).
Worldvolume Field Theories
In this final section we will describe how to build noncommutative field theories on regularly embedded worldvolumes of D-branes in the spacetime NW 6 using the formalism described above.
We shall describe the general technique on a representative example by comparing the noncommutative field theory on NW 6 which we have constructed in this paper to that of the noncommutative D3-branes which was constructed in [38]. We shall do so in a general fashion which illustrates how the construction extends to generic D-branes. This will provide further perspective on the natures of the different quantizations we have used throughout, and also illustrate the overall consistency of our results. As we will now demonstrate, we can view the noncommutative geometry of NW 6 , in the manner constructed above, as a collection of all euclidean noncommutative D3-branes taken together. This is done by restricting the geometry to obtain the usual quantization of coadjoint orbits in n ∨ (as opposed to all of n ∨ as described above). This restriction defines an alternative and more geometrical approach to the quantization of these branes which does not rely upon working with representations of the Lie group N , and which is more adapted to the flat space limit θ → 0. This procedure can be thought of as somewhat opposite to the philosophy of [38], which quantized the geometry of a non-symmetric D5-brane wrapping NW 6 [48] by viewing it as a noncommutative foliation by these euclidean D3-branes. Here the quantization of the spacetime-filling brane in NW 6 has been carried out independently leading to a much simpler noncommutative geometry which correctly induces the anticipated worldvolume field theories on the 4 submanifolds of NW 6 .
The euclidean D3-branes of interest wrap the non-degenerate conjugacy classes of the group N and are coordinatized by the transverse space z ∈ 2 ∼ = 4 [61]. They are defined by the spacelike hyperplanes of constant time in NW 6 given by the transversal intersections of the null hypersurfaces independently of the chosen coordinate frame. This describes the brane worldvolume as a wavefront expanding in a sphere S 3 in the transverse space. In the semi-classical flat space limit θ → 0, the second constraint in (9.1) to leading order becomes The function C on n ∨ corresponds to the Casimir element (8.6) and the constraint (9.2) is analogous to the requirement that Casimir operators act as scalars in irreducible representations. Similarly, the constraint on the time coordinate x + in (9.1) is analogous to the requirement that the central element T act as a scalar operator in any irreducible representation of N .
Let π : NW 6 → 4 be the projection of the six-dimensional plane wave onto the worldvolume of the symmetric D3-branes. Let π ♯ : C ∞ ( 4 ) → C ∞ (NW 6 ) be the induced algebra morphism defined by pull-back π ♯ (f ) = f • π. To consistently reduce the noncommutative geometry from all of NW 6 to its conjugacy classes, we need to ensure that the candidate star-product on n ∨ respects the Casimir property of the functions x + and C, i.e. that x + and C star-commute with every function f ∈ C ∞ (n ∨ ). Only in that case can the star-product be consistently restricted from all of NW 6 to a star-product ⋆ x + on the conjugacy classes 4 defined by f ⋆ x + g := π ♯ (f ) ⋆ π ♯ (g) . Then one has the compatibility condition ι ♯ (f ⋆ g) = ι ♯ (f ) ⋆ x + ι ♯ (g) (9.4) where ι ♯ : C ∞ (NW 6 ) → C ∞ ( 4 ) is the pull-back induced by the inclusion map ι : 4 ֒→ NW 6 . In this case one has an isomorphism C ∞ ( 4 ) ∼ = C ∞ (NW 6 )/J of associative noncommutative algebras [12], where J is the two-sided ideal of C ∞ (NW 6 ) generated by the Casimir constraints (x + − constant) and (C − constant). This procedure is a noncommutative version of Poisson reduction, with the Poisson ideal J implementing the geometric requirement that the Seiberg-Witten bi-vector Θ be tangent to the conjugacy classes.
From the star-commutators (7.11), (7.15) and (7.20) we see that [x + , f ] ⋆ = 0 for all three of our star-products. However, the condition [C, f ] ⋆ = 0 is not satisfied. Although classically one has the Poisson commutation Θ(C, f ) = 0, one can only consistently restrict the star-products by first defining an appropriate projection of the algebra of functions on n ∨ onto the star-subalgebra C of functions which star-commute with the Casimir function C. One easily computes that C naturally consists of functions f which are independent of the light-cone position, i.e. ∂ − f = 0. Then the projection ι ♯ above may be applied to the subalgebra C on which it obeys the requisite compatibility condition (9.4). The general conditions for reduction of Kontsevich star-products to D-submanifolds of Poisson manifolds are described in [20,18].
With these projections implicitly understood, one straightforwardly finds that all three starproducts (3.23), (3.28) and (3.47) restrict to for functions f, g ∈ C ∞ ( 4 ). This is just the Moyal product, with noncommutativity parameter θ x + , on the noncommutative euclidean D3-branes. It is cohomologically equivalent to the Voros product which arises from quantizing the conjugacy classes through endomorphism algebras of irreducible representations of the twisted Heisenberg algebra n, with a normal or Wick ordering prescription for the generators P i ± [38]. In this case, the noncommutative euclidean space arises from a projection of U (n) in the discrete representation V p + ,p − whose second Casimir invariant (8.6) is given in terms of light-cone momenta as C = −2 p + (p − + θ) and with T = θ p + . In this approach the noncommutativity parameter is naturally the inverse of the effective magnetic field p + θ. On the other hand, the present analysis is a more geometrical approach to the quantization of symmetric D3-branes in NW 6 which deforms the euclidean worldvolume geometry by the time parameter θ x + without resorting to endomorphism algebras. The relationship between the two sets of parameters is given by x + = p + τ , where τ is the proper time coordinate for geodesic motion in the pp-wave geometry of NW 6 .
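Since the restricted product (9.5) is just the Moyal product with time-dependent parameter θ x⁺, the following small symbolic check (added for illustration; the variable names and the truncation order are our own choices, with Theta standing in for θ x⁺ on a fixed conjugacy class) verifies the coordinate commutator and the associativity of the truncated expansion:

```python
# A small symbolic check (not from the paper): the Moyal product truncated at
# second order in the noncommutativity parameter Theta reproduces
# [x1, x2]_* = i*Theta and is associative to that order.
import sympy as sp

x1, x2, Theta = sp.symbols('x1 x2 Theta')

def moyal(f, g, order=2):
    """Moyal product f*g on R^2 with [x1, x2] = i*Theta, truncated at 'order'."""
    result = 0
    for n in range(order + 1):
        term = 0
        for k in range(n + 1):
            term += (sp.binomial(n, k) * (-1)**k
                     * sp.diff(f, x1, n - k, x2, k)
                     * sp.diff(g, x2, n - k, x1, k))
        result += (sp.I * Theta / 2)**n / sp.factorial(n) * term
    return sp.expand(result)

# Coordinate commutator: should give I*Theta.
print(sp.simplify(moyal(x1, x2) - moyal(x2, x1)))

# Associativity check on polynomials, up to and including O(Theta^2): should give 0.
f, g, h = x1**2, x1*x2, x2**2
lhs = moyal(moyal(f, g), h)
rhs = moyal(f, moyal(g, h))
print(sp.simplify(sp.series(lhs - rhs, Theta, 0, 3).removeO()))
```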
In contrast to the coadjoint orbit quantization [38], the noncommutativity found here matches exactly that predicted from string theory in the semi-classical limit [28], which asserts that the Seiberg-Witten bi-vector on the D3-branes is given by Θ x + = i 2 sin(θ x + ) ∂ ⊤ ∧ ∂. Note that the present analysis also covers as a special case the degenerate cylindrical null branes located at time x + = 0 [61], for which (9.5) becomes the ordinary pointwise product f ⋆ 0 g = f g of worldvolume fields and as expected these branes support a commutative worldvolume geometry. In contrast, the commutative null branes correspond to the class of continuous representations of the twisted Heisenberg algebra having quantum number p + = 0 which must be dealt with separately [38].
It is elementary to check that the rest of the geometrical constructs of this paper reduce to the standard ones appropriate for a Moyal space. By defining ∂ a ⋆ x + ⊲ f := ι ♯ • ∂ a ⋆ ⊲ π ♯ (f ) , (9.6) one finds that the actions of the derivatives constructed in Section 6 all reduce to the standard ones of flat noncommutative euclidean space, i.e. ∂ i ⋆ x + ⊲ f = ∂ i f , ∂ i ⋆ x + ⊲ f = ∂ i f for f ∈ C ∞ ( 4 ). From Section 5 one recovers the standard Hopf algebra of these derivatives with trivial coproducts ∆ ⋆ x + defined by and hence the symmetric Leibniz rules appropriate to the translational symmetry of field theory on Moyal space. Consistent with the restriction to the conjugacy classes, one also has ∂ ⋆ x + ± ⊲ f = 0.
However, from (5.15), (5.18) and (5.21) one finds a non-vanishing co-action of time translations given by This formula is very natural. The isometries of NW 6 in g = n L ⊕ n R corresponding to the number operator J of the twisted Heisenberg algebra are generated by the vector fields [38] J L = θ −1 ∂ + and J R = −θ −1 ∂ + − i (z · ∂ − z · ∂ ) = θ −1 E * + (in Brinkman coordinates). The vector field J L + J R generates rigid rotations in the transverse space. Restricted to the D3brane worldvolume, the time translation isometries thus truncate to rotations of 4 in so(4). The coproduct (9.8) gives the standard twisted co-action of rotations for the Moyal algebra which define quantum rotational symmetries of noncommutative euclidean space [21,22,64]. This discussion also drives home the point made earlier that our derivative operators ∂ a ⋆ indeed do generate, through their twisted co-actions (Leibniz rules), quantum isometries of the full noncommutative plane wave.
Finally, a trace on C ∞ ( 4 ) is induced from (7.3) by restricting the integral to the submanifold ι : 4 ֒→ NW 6 and using the induced measure ι ♯ (κ). For the measures constructed in Section 7, ι ♯ (κ) is always a constant function on 4 and hence the integration measures all restrict to the constant volume form of 4 . Thus noncommutative field theories on the spacetime NW 6 consistently truncate to the anticipated worldvolume field theories on noncommutative euclidean D3-branes in NW 6 , together with the correct twisted implementation for the action of classical worldvolume symmetries. The advantage of the present point of view is that many of the novel features of these canonical Moyal space field theories naturally originate from the pp-wave noncommutative geometry when the Moyal space is regarded as a regularly embedded coadjoint orbit in n ∨ , as described above. Furthermore, the method detailed in this paper allows a more systematic construction of the deformed worldvolume field theories of generic D-branes in NW 6 in the semi-classical regime, and not just the symmetric branes analysed here. For instance, the analysis can in principle be applied to describe the dynamics of symmetry-breaking D-branes which localize along products of twisted conjugacy classes in the Lie group N [54]. However, these branes have yet to be classified in the case of the gravitational wave NW 6 .

Acknowledgments

… supported in part by an EPSRC Postgraduate Studentship. The work of R.J.S. was supported in part by PPARC Grant PPA/G/S/2002/00478.
Finite Element Approximation of Steady Flows of Colloidal Solutions
We consider the mathematical analysis and numerical approximation of a system of nonlinear partial differential equations that arises in models that have relevance to steady isochoric flows of colloidal suspensions. The symmetric velocity gradient is assumed to be a monotone nonlinear function of the deviatoric part of the Cauchy stress tensor. We prove the existence of a unique weak solution to the problem, and under the additional assumption that the nonlinearity involved in the constitutive relation is Lipschitz continuous we also prove uniqueness of the weak solution. We then construct mixed finite element approximations of the system using both conforming and nonconforming finite element spaces. For both of these we prove the convergence of the method to the unique weak solution of the problem, and in the case of the conforming method we provide a bound on the error between the analytical solution and its finite element approximation in terms of the best approximation error from the finite element spaces. We propose first a Lions-Mercier type iterative method and next a classical fixed-point algorithm to solve the finite-dimensional problems resulting from the finite element discretisation of the system of nonlinear partial differential equations under consideration and present numerical experiments that illustrate the practical performance of the proposed numerical method.
Introduction
The classical incompressible Navier-Stokes constitutive equation and its usual generalisations, the constitutive relations for the incompressible Stokesian fluid, are explicit expressions for the Cauchy stress in terms of the symmetric part of the velocity gradient. The Stokesian fluid is defined by the constitutive expression Navier-Stokes fluid is a special sub-class of (1.1) that is linear in the symmetric part of the velocity gradient and is defined through: T = −pI + 2µD, (1.2) where µ is the viscosity of the fluid. Power-law fluids are another popular sub-class of (1.1), the power-law fluid being defined through the constitutive equation where µ 0 and α are positive constants and m is a constant; if m is zero we recover the Navier-Stokes fluid model, if it is negative we have a shear-thinning fluid model and if it is positive we have a shear-thickening fluid model. There are however many fluids that cannot be described by constitutive equations of the form (1.1) but require "relations", in the true mathematical sense of the term, between the Cauchy stress and the symmetric part of the velocity gradient. Implicit constitutive relations that involve higher time derivatives of the stress and the symmetric part of the velocity gradient have been proposed to describe the response of non-Newtonian fluids that exhibit viscoelastic response 1 (see Burgers (1939) [13], Oldroyd (1950) [30]); that is fluids that exhibit phenomena like stress relaxation. However, purely implicit algebraic relationship between the stress and the symmetric part of the velocity gradient were not considered to describe non-Newtonian fluids until recently. Such models are critical if one is interested in describing the response of fluids which do not exhibit viscoelasticity but whose material properties depend on the mean value of the stress and the shear rate, a characteristic exhibited by many fluids and colloids, as borne out by numerous experiments. Consider for example an incompressible fluid whose viscosity depends on the mechanical pressure 2 (mean value of the stress) and is shear-thinning, whose constitutive relation takes the form T = −pI + 2µ p, tr(D 2 ) D. which is an implicit relationship between the stress and the symmetric part of the velocity gradient. Rajagopal (2003) [34], (2006) [35] introduced the implicit relationship of the above form (and also the much more general implicit relationship between the history of the stress and the history of the deformation gradient) to describe materials whose properties depend upon the pressure and the shear rate. In fact, the properties of all fluids depend upon the pressure: it is just a matter of how large the variation of the pressure is in order for one to take the variation of the properties into account. The book by Bridgman (1931) [9] entitled "Physics of High Pressures" provides copious references to the experimental literature before 1931 on the variation of the viscosity of fluids with pressure, and one can find recent references to the experimental literature on the dependence of viscosity on pressure in Málek and Rajagopal (2006) [27]. Stokes (1845) [40] recognised that the viscosity of fluids varies with pressure, but in the case of sufficiently slow flows in channels and pipes he assumed that the viscosity could be considered a constant. Suffice to say, constitutive relations of the class (1.7) are necessary to describe the response of fluids whose viscosity depends on the pressure. 
Also as mentioned earlier, the implicit constitutive relation (1.7) is useful to describe the behaviour of colloids.
Recently, Perlácová and Prǔša (2015) [32] (see also LeRoux and Rajagopal (2013) [39]) used an implicit model belonging to a sub-class of (1.7) to describe the response of colloidal solutions as presented in the experimental work of Boltenhagen et al. (1997) [4], Hu et al. (1998) [21], Lopez-Diaz et al. (2010) [26] among others. Notice that while one always expresses the incompressible Navier-Stokes fluid by the representation (1.2), it is perfectly reasonable to describe it as (1. 8) In fact, it is the representation (1.8) that is in keeping with causality as the stress is the cause and the velocity and hence its gradient is the effect, and this fact cannot be overemphasised. Such a representation would imply that the Stokes assumption that is often appealed to is incorrect (see Rajagopal (2013) [36] for a detailed discussion of the same). Málek et al. (2010) [33] generalised (1.8) to stress power-law fluids, namely constitutive relations of the form: where T d is the deviatoric part of the Cauchy stress, γ and β are positive constants, and n is a constant that can be positive, negative or zero. The constitutive relation (1.9) is capable of describing phenomena that the classical power-law models are incapable of describing. For instance, the constitutive models (1.9) can describe limiting strain rate as well as fluids which allow the possibility of the strain rate initially increasing with stress and later decreasing with stress; both such responses cannot be described by the classical powerlaw fluid model (1.3) (see the discussion in Málek et al. (2010) [33] with regard to the difference in the response characteristics of the stress power-law fluid and the classical power-law fluid). We are interested in a further generalization of the constitutive relation of the form (1.9) that is appropriate for describing the response of colloidal solutions. This constitutive relation takes the form: where α, β, and γ are positive constants, n is a real number, and T d is the deviatoric part of the Cauchy stress. The shear stress in a fluid undergoing simple shear flow, that is described by the constitutive relation given above, increases from zero to a maximum, then decreases to a local minimum, and then increases monotonically as the shear stress increases from zero. As discussed by Le Roux and Rajagopal [39], and Perlácová and Prǔša [32], many colloids exhibit such behavior. The constitutive relation that we introduce first in (2.10) and next in (3.1) includes (1.10) as a special sub-class. It can be posed within a Hilbert space setting owing to the presence of the coefficient α in (1.10), but nevertheless, it is a challenging problem as it involves two nonlinearities: the monotone part in the constitutive relation and the inertial (convective) term. The problem without the inertial term, see Subsection 2.2 below, has already been analysed in [5], while the analysis of the steady-state incompressible Navier-Stokes equations is well-established, see for instance [41,20]. With both nonlinearities present in the model, proving the existence of a weak solution, for instance, to the best of our knowledge cannot be done by simply coupling the techniques used for these two problems, namely the Browder-Minty theorem and the Galerkin method combined with Brouwer's fixed point theorem and a weak compactness argument. More refined arguments are needed; they are crucial to the proofs of Lemmas 4 and 5 below. This work is organised as follows. 
The notation and the functional-analytic setting are recalled in the next subsection. In Section 2, both linear and fully nonlinear versions of the formulation are briefly analysed for the Stokes system, i.e., without the inertial (convective) term. The theoretical analysis of the complete nonlinear system is carried out in Section 3. The main results of this section are Theorem 1 for the existence of a solution and Proposition 2 for the uniqueness of a solution under additional assumptions on the input data. In Section 4, conforming finite element approximations of these models are proposed and error estimates are derived. The cases of both simplicial and hexahedral elements are discussed. The analysis of the latter is less satisfactory as it requires subdivisions consisting of parallelepipeds and suffers from a higher computational cost. This motivates the introduction of nonconforming approximations in Section 5. In Section 6, two decoupling algorithms are presented and compared: a Lions-Mercier algorithm adapted to a system with a monotone part and an elliptic part, and a classical fixed-point algorithm alternating between the approximation of a Navier-Stokes system and the nonlinear constitutive relation for the stress. Numerical experiments are performed with conforming finite elements on a square mesh in two dimensions. The theoretically established convergence of the scheme is confirmed and convergence of both decoupled algorithms is observed.
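Complementing the description of the colloidal response given for (1.9)-(1.10) in the introduction above, the short script below plots a representative non-monotone simple-shear curve; the specific formula and all parameter values are assumptions made purely for illustration and are not the precise relation (1.10).

```python
# Illustrative only: a representative stress power-law response in simple shear,
#     shear_rate(tau) = (alpha + gamma_c*(1 + beta*tau**2)**(-n)) * tau,
# with parameters chosen so that the stress/shear-rate curve folds back on itself
# (the "S-shaped" colloidal response described in Section 1).  Both the formula
# and the parameter values are assumptions made for this plot only.
import numpy as np
import matplotlib.pyplot as plt

alpha, beta, gamma_c, n = 0.01, 1.0, 1.0, 2.0

tau = np.linspace(0.0, 20.0, 2000)                              # shear stress
rate = (alpha + gamma_c * (1.0 + beta * tau**2)**(-n)) * tau    # shear rate

plt.plot(rate, tau)
plt.xlabel("shear rate")
plt.ylabel("shear stress")
plt.title("Representative non-monotone simple-shear response (illustrative)")
plt.show()
```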
Notation and preliminaries
Let Ω ⊂ R d , d ∈ {2, 3}, be a bounded, open, simply connected Lipschitz domain. We consider the function spaces Q := L 2 0 (Ω), V := H 1 0 (Ω) d and M := {S ∈ L 2 (Ω) d×d sym : tr(S) = 0}, (1.11) for the pressure, the velocity, and the deviatoric stress tensor, respectively. As usual, the zero mean value constraint is introduced to fix the undetermined additive constant in the mechanical pressure. Here the subscript sym indicates that the d × d tensors under consideration are assumed to be symmetric. Henceforth, the symmetric gradient of the velocity field v (or, briefly, symmetric velocity gradient) will be denoted by and the deviatoric part of a d × d tensor S is defined by with I the d × d identity tensor; thus the trace of S d is zero. We denote by V the subspace of V consisting of all divergence-free functions contained in V ; that is, (1.14) For vector-valued functions v : with | · | signifying the Euclidean norm on R d , while for tensor-valued functions S : Ω → R d×d , we define is the Frobenius norm of S. Clearly, M is a Hilbert space with this norm. We recall the Poincaré and Korn inequalities, which are, respectively, the following: there exist positive constants C P and C K such that We endow V (and V) with the norm · V := D(·) L 2 (Ω) . (1.17) Both V and V are Hilbert spaces with this norm, because · V is equivalent to both the H 1 (Ω) d×d norm and the H 1 (Ω) d×d semi-norm, thanks to (1.15), (1.16) and the trivial relation D(v) L 2 (Ω) ≤ ∇v L 2 (Ω) .
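For completeness (the displayed inequalities are not reproduced in this excerpt), the Poincaré and Korn inequalities referred to in (1.15) and (1.16) are presumably of the standard form

‖v‖_{L²(Ω)} ≤ C_P ‖∇v‖_{L²(Ω)}  and  ‖∇v‖_{L²(Ω)} ≤ C_K ‖D(v)‖_{L²(Ω)}  for all v ∈ H¹₀(Ω)^d ,

which together imply the equivalence of the norm ‖·‖_V with the full H¹ norm on V.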
Stokes system with linear and nonlinear constitutive relations
In this section we study two preliminary model problems without the inertial term; the first one simply reduces to the Stokes system, while the second model problem involves a monotone nonlinearity treated by the Browder-Minty approach.
The Stokes system
Let us consider the problem
\[ -\operatorname{div}(T) = f \ \text{ in } \Omega, \qquad D(u) = \alpha\, T^d \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \qquad (2.1) \]
where f : Ω → R^d is a prescribed external force, D(u) is defined by (1.12), the unknown tensor T is symmetric, and α is a given positive constant, the reciprocal of the viscosity coefficient. Here, we assume that f ∈ L²(Ω)^d for simplicity, but a similar analysis holds for the general case f ∈ V′ = H^{−1}(Ω)^d; see for instance Remark 1 in Section 3. By decomposing the Cauchy stress T as T = T^d + (1/d) tr(T) I and inserting this in the first equation of (2.1) we arrive at the following equivalent problem:
\[ -\operatorname{div}(T^d) + \nabla p = f \ \text{ in } \Omega, \qquad D(u) = \alpha\, T^d \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \qquad (2.2) \]
which we recognise to be the Stokes system, where the mechanical pressure (mean normal stress) is p := −(1/d) tr(T). Recalling the spaces M, V, Q defined in (1.11) and using the relation S : ∇v = S : D(v), which holds for any symmetric tensor S (indeed, for any R, S ∈ R^{d×d} with S symmetric, S : R = (S+S^t)/2 : R = (1/2) S : R + (1/2) S^t : R = (1/2) S : R + (1/2) S : R^t = S : (R+R^t)/2), the weak formulation of problem (2.2) can be written in the form (2.3), in terms of bilinear forms defined for S ∈ M, v ∈ V, and q ∈ Q. As is usual for the Stokes problem, the unknown pressure can be eliminated from (2.3) by restricting the test functions v to 𝒱. In addition, the variable u can also be eliminated by treating the first line of (2.3) as a constraint, thus leading to an equivalent (reduced) problem for which the two variables p and u are eliminated. The equivalence is based on the following (inf-sup) conditions:
\[ \forall\, v \in \mathcal{V}: \quad \sup_{S \in M,\, S \neq 0} \frac{(S, D(v))_\Omega}{\| S \|_{L^2(\Omega)}} \ \ge\ \| D(v) \|_{L^2(\Omega)} \qquad (2.4) \]
and
\[ \exists\, \beta > 0: \quad \inf_{q \in Q,\, q \neq 0}\ \sup_{v \in V,\, v \neq 0} \frac{\int_\Omega q\, \operatorname{div}(v)\, \mathrm{d}x}{\| q \|_{L^2(\Omega)}\, \| v \|_V} \ \ge\ \beta, \qquad (2.5) \]
where we have used that ‖D(v)‖_{L²(Ω)} ≤ ‖∇v‖_{L²(Ω)}. It is well-known that the spaces V and Q defined in (1.11) satisfy the inf-sup condition (2.5), see for instance [20], while the relation (2.4) can be easily shown by observing that, for a given v ∈ 𝒱, the tensor D(v) belongs to M, since tr(D(v)) = div(v) = 0. We can then eliminate the incompressibility constraint by seeking u ∈ 𝒱, yielding the (partially reduced) problem (2.6). Clearly, each solution of (2.3) satisfies (2.6). Conversely, it follows from the inf-sup condition (2.5) that for any solution (T^d, u) of (2.6) there exists a unique p ∈ Q such that (T^d, u, p) is the solution of (2.3); see [20].
Hence these two problems are equivalent. Furthermore, we can eliminate the unknown u by proceeding as follows; see [5]. First, we introduce the decomposition M = M̄ ⊕ M^⊥, with C_P and C_K the constants in Poincaré's and Korn's inequalities (1.15) and (1.16), respectively. We finally get the (fully reduced) problem: find T^d_0 ∈ M̄ such that (2.9) holds. The well-posedness of problem (2.9) follows from the Lax-Milgram lemma, while its equivalence to the original problem (2.3) is guaranteed by (2.4) and (2.5).
Of course, in this simple model with a linear constitutive relation, T^d_0 = 0, since the right-hand side of (2.9) vanishes and α(·, ·)_Ω is an inner product on M. However, the framework developed here will be used in the sequel in a more general setting.
Stokes model with a nonlinear constitutive relation
Next, we consider the following Stokes-like system with a nonlinear relation between the stress and the symmetric velocity gradient:
\[ -\operatorname{div}(T) = f \ \text{ in } \Omega, \qquad D(u) = \alpha\, T^d + \gamma\, \mu(|T^d|)\, T^d \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \qquad (2.10) \]
with γ a given positive constant, and where µ ∈ C¹((0, +∞)) ∩ C⁰([0, +∞)) is a given function satisfying (2.11) and
\[ \mu(a) > 0 \quad\text{and}\quad \mu(a)\, a \le C_1 \qquad \forall\, a \in \mathbb{R}_{\ge 0}, \qquad (2.12) \]
for some positive constant C₁. Since µ is continuous on any subinterval of R_{≥0}, the second part of (2.12) implies that µ is bounded above, and we denote its maximum by
\[ \mu_{\max} := \max_{a \in \mathbb{R}_{\ge 0}} \mu(a). \qquad (2.13) \]
Moreover, proceeding as in the proof of [12, Lemma 4.1], we deduce from (2.11) and (2.12) that for any R, S ∈ R^{d×d} the following monotonicity property holds:
\[ \big( \mu(|R|)\, R - \mu(|S|)\, S \big) : \big( R - S \big) \ \ge\ 0, \qquad (2.14) \]
with equality if and only if R = S. Introducing again p := −(1/d) tr(T), the weak formulation of problem (2.10) reads as follows: find a triple (T^d, u, p) ∈ M × V × Q satisfying (2.15). Proceeding exactly as in Section 2.1, we first eliminate the pressure, and we thus deduce that problem (2.15) is equivalent to problem (2.16), which is further equivalent to the following problem: find T^d_0 satisfying (2.17), with T^d_f ∈ M^⊥ the solution of (2.8). The Browder-Minty theorem, see for instance [29], guarantees the existence of a solution to problem (2.17). Indeed, let A denote the operator from M to M′ defined by the left-hand side of (2.17) (see (2.18)), where ⟨·, ·⟩_M denotes the duality pairing between M and its dual space, M′. It then easily follows that the mapping A is bounded, monotone, coercive and hemi-continuous. By the Browder-Minty theorem these properties imply surjectivity of A and thereby existence of a solution, while its uniqueness follows from the strict monotonicity of A.
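A concrete admissible choice, the one used in the numerical experiments of Section 6, is µ(s) = 1/√(1 + s²), corresponding to (1.10) with β = 1 and n = −1/2. It is straightforward to check that it fits the assumptions recorded above:
\[ \mu \in C^1([0,\infty)), \qquad \mu(a) = (1+a^2)^{-1/2} > 0, \qquad \mu(a)\, a = \frac{a}{\sqrt{1+a^2}} \le 1 \quad \forall\, a \ge 0, \]
so (2.12) holds with C₁ = 1, and µ_max = µ(0) = 1. Moreover, the map a ↦ µ(a)a is strictly increasing, since its derivative equals (1 + a²)^{−3/2} > 0, which is the kind of structural property that underlies the monotonicity relation (2.14).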
Navier-Stokes with nonlinear constitutive relation
Now, we focus on our problem of interest, where a convective term is added to the first equation of (2.10); i.e., we consider the problem
\[ \operatorname{div}(u \otimes u) - \operatorname{div}(T) = f \ \text{ in } \Omega, \qquad D(u) = \alpha\, T^d + \gamma\, \mu(|T^d|)\, T^d \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \qquad (3.1) \]
We prove a priori estimates, construct a solution, and give sufficient conditions for global uniqueness.
Introducing, as in Section 2, the mechanical pressure p := −(1/d) tr(T), problem (3.1) is equivalently rewritten as
\[ \operatorname{div}(u \otimes u) - \operatorname{div}(T^d) + \nabla p = f, \qquad D(u) = \alpha\, T^d + \gamma\, \mu(|T^d|)\, T^d, \qquad \operatorname{div}(u) = 0 \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \qquad (3.2) \]
In order to bring forth an elliptic term on the left-hand side of the first equation of (3.2), we rewrite the second equation in (3.2) as and thus by substituting this relation into the first equation of (3.2) we get in Ω, div(u) = 0 in Ω, u = 0 on ∂Ω. (3.4) The weak formulation of (3.4) reads: As previously, we eliminate the pressure by restricting the test functions to V, and we thus obtain the following equivalent reduced problem: for all (S, v) ∈ M × V. Interestingly, (3.6), (3.7) can be further reduced by observing that, given u, (3.7) uniquely determines T d thanks to the Browder-Minty theorem; see the end of Section 2.2. Thus, we define the mapping G : V → M by u → T d with T d ∈ M being the unique solution of where we recall that A is defined in (2.18). With this mapping, (3.6), (3.7) is equivalent to the following problem: find u ∈ V such that Before embarking on the proof of existence of a solution to problem (3.6), (3.7) we establish a series of a priori estimates under the assumption that a solution exists.
A priori estimates
Assuming that problem (3.6), (3.7) has a solution, the following a priori estimates hold for any solution with C P and C K signifying the constants in Poincaré's and Korn's inequality, respectively, and C 1 the constant in (2.12). Proof.
Using then the positivity of µ, see (2.12), we get To obtain a bound for u, we recall the well-known relation which is easily obtained by integration by parts, as follows: Therefore, taking v = u in (3.6) and using (2.12) we obtain from which we directly deduce (3.10); (3.11) follows by applying (3.10) to (3.12).
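For the reader's convenience, the well-known relation invoked above as (3.13), whose display is not reproduced here, is, in its usual form, the antisymmetry of the convective term:
\[ \int_\Omega (u \cdot \nabla) v \cdot w \, \mathrm{d}x \;=\; -\int_\Omega (u \cdot \nabla) w \cdot v \, \mathrm{d}x \qquad \text{for divergence-free } u \in H^1_0(\Omega)^d \text{ and } v, w \in H^1_0(\Omega)^d, \]
and in particular ∫_Ω (u·∇)v · v dx = 0. It follows by integration by parts, using div(u) = 0 in Ω and u = 0 on ∂Ω, and it is the property that makes the convective term drop out when (3.6) is tested with v = u.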
Proof. The ingredients of the proof are similar to those used in the proof of Lemma 1 and only the derivation of the bound for D(u) is different. First notice that combining (3.6) and (3.7) we have (3.16) Taking v = u in (3.16) we then find that Notice that T d : D(u) ≥ 0 a.e. in Ω. Indeed, from (3.7) we have that in Ω.
Therefore, taking S = D(u) in (3.7) and using the upper bound µ max for µ and the bound (3.17) we have which yields (3.14). Finally, the bound (3.15) for T d is obtained by substituting (3.14) in (3.12).
Remark 1. Similar a priori estimates can be derived in the case when f ∈ V′ (with V′ = H^{−1}(Ω)^d). More precisely, all occurrences of C_P C_K ‖f‖_{L²(Ω)} can be replaced by ‖f‖_{V′}, where
\[ \| f \|_{V'} := \sup_{v \in V,\, v \neq 0} \frac{\langle f, v \rangle_V}{\| v \|_V}, \qquad (3.19) \]
and ⟨·, ·⟩_V denotes the duality pairing between V′ and V. The same observation holds for all that follows.
Remark 2. By a direct argument we can also prove that This leads to the same a priori bound (3.15) for T d , Indeed, the choice S = D(u) in (3.7) gives directly (without invoking (3.18)) Then (3.20) follows by substituting the bound into the preceding inequality. This also yields (3.15).
Construction of a solution
In this subsection we prove the existence of a solution in a bounded Lipschitz domain without any restrictions on the data, other than those stated at the beginning of Section 2.2. The first part of the construction is fairly standard: a suitable sequence of (finite-dimensional) Galerkin approximations to the infinite-dimensional problem is constructed, followed by Brouwer's fixed point theorem to prove that each finite-dimensional problem in the sequence has a solution; uniform a priori estimates, similar to those derived in Lemma 1, are established for the Galerkin solutions, which are then used for passing to the (weak) limit, via a weak compactness argument. However, because of the combined effect of the nonlinearities, identifying the limit as a solution to the infinite-dimensional problem requires a more refined argument. For the sake of clarity, the argument is split into several steps.
Step 1 (Finite-dimensional approximation). Formulation (3.9) lends itself readily to a Galerkin discretisation. Since the only unknown is u in V, a separable Hilbert space, we introduce a countably infinite basis {w 1 , w 2 , . . .} of orthonormal functions of V with respect to the inner product whose span is dense in V. Next, we truncate this basis, i.e., for each m ≥ 1 we define and for u m ∈ V m we denote byû m ∈ R m its representation with respect to this basis. Finally, we fix m ≥ 1 and consider the following finite-dimensional problem: find u m ∈ V m such that, for all 1 ≤ j ≤ m, Problem (3.22), which can be seen as the projection of (3.9) onto V m , is equivalent to the following: find u m ∈ R m such that F(û m ) = 0, where F = (F 1 , . . . , F m ) t : R m → R m is the continuous function defined, for j = 1, . . . , m, by Step 2 (Existence of a discrete solution). Problem (3.22) is a system of m nonlinear equations in m unknowns. The existence of a solution to this problem can be established by the following variant of Brouwer's fixed point theorem (see e.g. [17,20]). (3.24) Moreover, T d m = G(u m ) satisfies the uniform bound Proof. We infer from Lemma 3 that F has a zero in the ball B m (0, r) with Indeed, using the antisymmetry property (3.13), which holds because u m ∈ V m ⊂ V, we get where we have used Poincaré's and Korn's inequalities (1.15) and (1.16), respectively, to bound the second term and the relation (2.12) for the third one. As D(u m ) L 2 (Ω) = |û m |, we deduce from the last inequality that if |û m | = r with r as defined above, then Thanks to Lemma 3, there exists a pointû m ∈ B m (0, r) such that F(û m ) = 0, i.e., problem (3.22) has a solution u m ∈ V m that satisfies the uniform bound (3.24). Finally, it is easily shown that T d m := G(u m ) satisfies the bound (3.25).
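The variant of Brouwer's fixed point theorem referred to above as Lemma 3 is not reproduced here; in its standard form (see, e.g., [17, 20], where such a corollary is classical, though the exact wording in the original may differ) it is the statement actually used in the argument above:
\[ F \in C^0(\mathbb{R}^m; \mathbb{R}^m), \quad \exists\, r > 0: \ F(\xi)\cdot \xi \ge 0 \ \ \forall\, \xi \text{ with } |\xi| = r \quad \Longrightarrow \quad \exists\, \xi^* \text{ with } |\xi^*| \le r \text{ and } F(\xi^*) = 0. \]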
Step 3 (Passage to the limit m → ∞ and identification of the limit). We consider the sequences (u m ) m≥1 Thanks to the uniform estimates (3.24) and (3.25) there exist two subsequences (not relabelled) such that Our objective is to show that the pair (T d ,ū) ∈ M × V is a solution to the problem under consideration by passing to the limit in (3.22), (3.23).
Passing to the limit in (3.22), (3.23) is however not straightforward because of the lack of strong convergence of T d m in M . Identifying the pair (T d ,ū) ∈ M × V as a solution will be achieved by means of the following two lemmas, the first of which (Lemma 4) relies on the equations and the strong convergence of the sequence (u m ) m≥1 in L q (Ω) d shown above, and the second lemma (Lemma 5) follows from the monotonicity property (2.14). The proof, included below, that the pair (T d ,ū) satisfies (3.7) is inspired by the arguments in [11], where a more general constitutive relation than ( Proof. By testing equation (3.23) with S = D(w j ) and substituting into (3.22) we deduce that Multiplying (3.27) by (û m ) j , summing over j, and applying (3.13), we derive Thus we obtain on the one hand On the other hand, letting m tend to infinity in (3.27) for fixed j and considering the strong convergence of u m , we infer that In view of (3.13), the choice v =ū in (3.30) yields and (3.26) then follows from (3.29) and (3.31).
for all S ∈ M . Taking then S = T d m −T d and using the monotonicity property (2.14) we get Finally, we take the limit m → ∞ of both sides and apply (3.26) to obtain Proof. It follows from Lemma 5 that on the one hand (T d ,ū) solves (3.7) and on the other hand, Indeed, passing to the limit in (3.23) gives, for any S ∈ M , Therefore, taking the limit as m → ∞ in (3.22) we get for each j = 1, 2, . . ., and thus the density of m≥1 V m in V implies that which is precisely (3.6).
Global conditional uniqueness
We now prove global uniqueness of the solution under additional assumptions on the function µ and the input data. The notion of uniqueness we establish is global and conditional in the sense that it holds under suitable restrictions on the data, but it is also global because no other solution exists. Let R d×d sym,0 denote the space of symmetric d × d matrices with vanishing trace and let C S be the smallest positive constant in the following Sobolev embedding: (3.33) If the input data satisfy then the solution of problem (3.6), (3.7) is unique.
Proof. We use a variational argument. Suppose that (T d 1 , u 1 ), (T d 2 , u 2 ) ∈ M × V are solutions of (3.6), (3.7). Let us write δT d := T d 1 − T d 2 and δu := u 1 − u 2 . Subtracting the equations solved by (T d 2 , u 2 ) from those solved by (T d 1 , u 1 ) we get for all (S, v) ∈ M × V the following pair of equalities: The choice S = δT d in (3.37), thanks to the monotonicity property (2.14), leads to Then, by noting that by testing (3.36) with v = δu, and recalling (3.13) we obtain The assumption (3.35) on the data guarantees that the factor on the right-hand side of the last inequality is strictly smaller than 1 α , thus implying that D(δu) L 2 (Ω) = 0, i.e., u 1 = u 2 . Finally, applying this result to (3.38) Remark 3. The strategy used in deriving the second a priori estimate stated in Lemma 2 leads to uniqueness when (3.35) is replaced by In fact both strategies lead to the same condition (3.39); namely, we also get (3.39) by using (3.14) instead of (3.10) to bound D(u 1 ) L 2 (Ω) in the proof of Proposition 2.
Note that both (3.35) and (3.39) hold when γ and f are sufficiently small.
Remark 4.
Under the Lipschitz condition (3.34), the proof of (3.20) and (3.15) is valid with µ max replaced by Λ, and the L 2 (Ω) d norm of f (multiplied by C P C K ) replaced by its norm in V ′ , see Remark 1. More precisely,
Comparison of the a priori estimates
At this stage, it is useful to compare the a priori estimates derived in the previous sections. We have where Λ is replaced by µ max if we do not make the Lipschitz assumption (3.34). For p we have where V ⊥ denotes the orthogonal complement of V in V with respect to the inner product (3.21).
Remark 5.
We can replace C 2 S by the product C p C r of the smallest constants C p and C r from the Sobolev embedding of H 1 (Ω) d into L p (Ω) d and L r (Ω) d , respectively, with p = 6 and r = 3. We could also use the best constantĈ such that In the former case,Ĉ ≤ C p C r while in the latter case,Ĉ ≤ C 3 K C p C r .
Conforming finite element approximation
In this section, we study conforming finite element approximations of problem (3.2), where conformity refers to the discrete velocity space. To facilitate the implementation, it is useful to relax the zero trace restriction on the discrete tensor space, but this is not quite a nonconformity because the theoretical analysis of the preceding sections holds without this condition. In particular, the inf-sup condition (2.4) is still valid (supremum over a larger space). We start with the numerical analysis of general conforming approximations, including existence of discrete solutions, convergence, and error estimates, and give specific examples further on.
General conforming approximation
As stated above, here M = L 2 (Ω) d×d sym . Up to this modification, we propose to discretise the formulation derived from (3.2): (4.1) Note that, since div(u) = 0, by taking S = I the second line of (4.1) implies that the solution T d of (4.1) satisfies tr(T d ) = 0 a.e. in Ω, even though this condition was not explicitly imposed on elements of M . Let h > 0 be a discretisation parameter that will tend to zero and, for each h, let V h ⊂ V , Q h ⊂ Q and M h ⊂ M be three finite-dimensional spaces satisfying the following basic approximation properties, for all S ∈ M , v ∈ V and q ∈ Q: We assume on the one hand that the pair for some constant β * > 0, independent of h, and on the other hand that M h and V h,0 are compatible in the sense that Note that the latter assumption may be prohibitive when considering conforming finite elements on quadrilateral (d = 2) or hexahedral (d = 3) meshes, see Subsection 4.2; this motivates the study of non-conforming finite elements considered in Section 5. The inf-sup condition (4.3) guarantees that which can be shown using a standard argument; see for instance [20]. Here, c b denotes the continuity constant of b 2 (·, ·) on V × Q.
As the divergence of functions of V h,0 is not necessarily zero, the antisymmetry property (3.13) does not hold in the discrete spaces. Since this property is a crucial ingredient in the analysis of our problem, it is standard (see for instance [41,20]) to introduce the trilinear form d : The trilinear form d is obviously antisymmetric and it is consistent thanks to the fact that Moreover, a standard computation shows that there exists a constantD ≤ min(C 2 We then consider the following approximation of problem (4.1): (4.9)
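The display defining the trilinear form d is not reproduced above; for orientation, the classical skew-symmetrised choice from [41, 20], which is consistent with the stated properties (antisymmetry, and consistency for divergence-free fields vanishing on the boundary), is
\[ d(w; u, v) := \tfrac{1}{2} \int_\Omega \big( (w \cdot \nabla) u \cdot v - (w \cdot \nabla) v \cdot u \big) \, \mathrm{d}x, \]
which satisfies d(w; v, v) = 0 for all w and v, and coincides with ∫_Ω (w·∇)u · v dx whenever div(w) = 0 in Ω and w = 0 on ∂Ω. The precise definition used in the paper may differ in presentation but serves the same purpose.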
Existence of a discrete solution
Existence of a solution to problem (4.9) without restrictions on the data is established by Brouwer's fixed point theorem, as in Section 3.3. To begin with, for any function v ∈ V , we define the discrete analogue of the mapping G, see This finite-dimensional square system has one and only one solution G h (v) thanks to the properties of the left-hand side: the first term is elliptic and the second term is monotone. As in Section 3.3, in view of the inf-sup condition (4.3), problem (4.9) is equivalent to finding where T h := G h (u h ). By proceeding as in Proposition 1, it is easy to prove that problem (4.11) has at least one solution u h ∈ V h,0 , and by the above equivalence, each solution u h determines a pair ( . Moreover, each solution of problem (4.9) satisfies the same estimates as in (3.10) and (3.11). For the sake of simplicity, since the approximation is conforming, we state them in terms of the norm of f in H −1 (Ω) d , and Regarding the other a priori bounds, (3.20) and (3.15) are satisfied by u h and T h and, if (3.34) holds, so are (3.41) and (3.40), all up to the above norm for f . In contrast, however, we do not have enough information to claim that (3.14) is valid because it relies on the nonnegativity of T h : D(u h ) almost everywhere in Ω; the integral average is positive but this does not always guarantee pointwise nonnegativity. Thus we replace the constant C u of (3.42) by the constant C u in the following inequality: 14) where the last term is included when (3.34) holds. Because C u ≤ C u , we shall use C u to bound both u and u h in order to simplify the constants in the computations that will now follow.
Finally, let us establish the convergence of the sequence of discrete solutions in the limit of h → 0. The above uniform a priori estimates imply that, up to a subsequence of the discretisation parameter h, Clearly, the symmetry of T h implies that ofT and div(ū) = 0 follows from the fact that u h belongs to V h,0 . Then the approximation properties of the discrete spaces and (4.5) permit to replicate the steps of the proof of Lemma 4 and yield (4.15) To fully identify the limit, in addition toT d := G(ū), which has trace zero since div(ū) = 0, we introduce the auxiliary tensorT h := G h (ū). On the one hand Since bothT h andT d are bounded in M uniformly with respect to h, and again a uniform bound, then the approximation properties of M h and the monotonicity property (2.14) imply that lim On the other hand, the auxiliary tensorT h permits us to argue as in the proof of Lemma 5. Indeed, the monotonicity property (2.14) yields From (4.15) and (4.16), we easily derive that the above right-hand side tends to zero. Hence and then combining this with (4.16) we infer that Hence uniqueness of the limit implies thatT =T d = G(ū). This, and (4.3), permit to identify the limit as in Lemma 5 and Theorem 1, and proves convergence to a weak solution without restrictions on the data. Thus we have proved the following result.
Theorem 2. (Convergence for all data) Under the above approximation properties and compatibility of the discrete spaces, up to a subsequence, where (T d , u, p) is a solution of (3.6), (3.7).
Error estimate
We now prove an a priori error estimate between (T d , u, p) and (T h , u h , p h ), under the assumption (3.34) that has not been used so far, and the small data condition (4.18) below. Note that this small data condition is in fact the same as the uniqueness condition (3.35), upon replacing C u by C u . To simplify the notation and compress some of the long displayed lines of mathematics, we shall write · V , · M and · Q instead of D(·) L 2 (Ω) (as a norm on V ), · L 2 (Ω) (as a norm on M ) and · L 2 (Ω) (as a norm on Q), respectively.
Theorem 3. In addition to (3.34), let the input data satisfy where 0 < θ < 1 andD is the constant from (4.8). Then, there exists a constant C > 0 independent of h such that the difference between the solution (T h , u h , p h ) of (4.9) and (T d , u, p) of (4.1) satisfies Proof. Since we are using conforming finite element spaces, taking (S, v, q) = (S h , v h , q h ) in (4.1) and subtracting the equations of (4.9) we easily get The rest of the proof is divided into three steps.
Step 1 (Error bound for the pressure). By the triangle inequality we have, for any q h ∈ Q h , and it therefore suffices to derive a bound on q h − p h Q . From the (discrete) inf-sup condition we have Again, using the first equation of (4.20) we have where we can take c b = C K using the relation div(v) 2 L 2 (Ω) + rot(v) 2 L 2 (Ω) = ∇v 2 L 2 (Ω) that holds because we have homogeneous Dirichlet boundary conditions (otherwise take c b = √ dC K ). Thus, we obtain for any q h ∈ Q h .
Step 2 (Error bound for the stress tensor). Again, we start with the triangle inequality; for any S h ∈ M h we have that and we then bound T h − S h M . Thanks to the monotonicity property (2.14) and the second equation of (4.20), we have and thus Step 3 (Error bound for the velocity). Recalling the definition of V h,0 in (4.2), let v h,0 ∈ V h,0 and let v h := v h,0 − u h ∈ V h,0 . We will first show the relation (4.19) by taking the infimum over V h,0 instead of V h . As before, we use the triangle inequality to get Thanks to the assumption (4.4), we can take S h = D(v h ) in the second equation of (4.20) yielding Using the first equation of (4.20), we can easily derive the equality thanks to the fact that b 2 (v h , q h − p h ) = 0. To bound the convective term, we use Therefore, using the assumption (4.18) on the input data, we obtain for any v h,0 ∈ V h,0 . Finally, combining (4.21), (4.22) and (4.23) we obtain We can then conclude the proof using (4.6).
Examples of conforming approximation
From now on, we assume that the boundary of the Lipschitz domain Ω ⊂ R d is a polygonal line (when d = 2) or a polyhedral surface (when d = 3), so that it can be exactly meshed. For each h, let T h be a conforming mesh on Ω consisting of elements E, triangles or quadrilaterals in two dimensions, tetrahedra or hexahedra (all planar-faced) in three dimensions, conforming in the sense that the mesh has no hanging nodes. As usual, the diameter of E is denoted by h E , and ̺ E is the diameter of the largest ball inscribed in E.
The simplicial case
In the case of simplices, the family of meshes T_h is assumed to be regular in the sense of Ciarlet [14]: i.e., it is assumed that there exists a constant σ > 0, independent of h, such that
\[ \frac{h_E}{\varrho_E} \ \le\ \sigma \qquad \forall\, E \in T_h, \ \forall\, h. \qquad (4.24) \]
This condition guarantees that there is an invertible affine mapping F_E that maps the unit reference simplex onto E. For any integer k ≥ 0, let P_k denote the space of polynomials in d variables of degree at most k. In each element E, the functions will be approximated in the spaces P_k. The specific choice of finite element spaces is dictated by two considerations. First, conditions (4.3) and (4.4) must be satisfied. Next, since the number of unknowns in (4.9) is large, the degree k of the finite element functions should be small. It is well known that the lowest degree of conforming approximation of (u, p) satisfying (4.3), without modification of the bilinear forms, is the Taylor-Hood P₂^d-P₁ element, see [20,3], provided each element has at least one interior vertex.
In view of (4.4), this implies that T^d is approximated by piecewise polynomials in P₁^{d×d}. Thus the corresponding finite element spaces are
\[ M_h := \{ S \in L^2(\Omega)^{d \times d}_{\mathrm{sym}} : S|_E \in P_1^{d \times d} \ \forall\, E \in T_h \}, \quad V_h := \{ v \in V \cap C^0(\overline{\Omega})^d : v|_E \in P_2^d \ \forall\, E \in T_h \}, \quad Q_h := \{ q \in Q \cap C^0(\overline{\Omega}) : q|_E \in P_1 \ \forall\, E \in T_h \}. \]
It is easy to check that with these spaces on a simplicial mesh, under condition (4.24), problem (4.9) has at least one solution. Furthermore, if the data satisfy (4.18), then Theorem 3 yields
\[ \| T^d - T_h \|_{L^2(\Omega)} + \| D(u - u_h) \|_{L^2(\Omega)} + \| p - p_h \|_{L^2(\Omega)} \ \le\ C\, h^2, \qquad (4.25) \]
provided that the solution is sufficiently smooth, namely u ∈ H³(Ω)^d ∩ H¹₀(Ω)^d, T^d ∈ H²(Ω)^{d×d}, and p ∈ H²(Ω) ∩ L²₀(Ω). Therefore the scheme has order two for an optimal number of degrees of freedom, i.e., this order of convergence cannot be achieved with fewer degrees of freedom.
The quadrilateral/hexahedral case
The notion of regularity is more complex for quadrilateral and much more complex for hexahedral elements. In the case of quadrilaterals [20], the family of meshes is regular if the elements are convex and, moreover, the subtriangles associated to each vertex (there is one per vertex) all satisfy (4.24). In the case of hexahedra with plane faces, convexity and the validity of (4.24) for the subtetrahedra associated to each vertex are necessary but not sufficient. This difficulty has been investigated by many authors, see for instance [42,23]; the most relevant publication concerning hexahedra with plane faces is however [22], where the minimum of the Jacobian in the reference cubeÊ is bounded below by the minimum of the coefficients of its Bézier expansion and this minimum is determined by an efficient algorithm. The details of this are beyond the scope of this work, and we shall simply assume here that the minimum of these Bézier coefficients is strictly positive and that furthermore, denoting by J E the Jacobian determinant of F E , with a constantĉ independent of E and h. If these conditions hold, there is an invertible bi-affine mapping F E in two dimensions or tri-affine in three dimensions that maps the unit reference square or cube onto E. We let Q k be the space of polynomials in d variables of degree at most k in each variable. In contrast to the case of simplicial meshes, the space Q k is not invariant under the composition with F E , which makes the compatibility condition (4.4) between D(V h ) and M h problematic. To circumvent this issue, we restrict ourselves to affine maps F E , thereby allowing subdivisions consisting of parallelograms/parallelepipeds. In addition, the situation is less satisfactory when a quadrilateral or hexahedral mesh is used, because although the Taylor-Hood Q d 2 -Q 1 element satisfies (4.3), the second condition (4.4) does not hold if T d is approximated by Q d×d 1 since the components of the gradient of Q 2 functions belong to a space, intermediate between Q 2 and Q 1 , that is strictly larger than both Q 1 and P 2 . Therefore, in order to satisfy (4.4), the simplest option is to discretise each component of T d by Q 2 . The corresponding finite element spaces are With these spaces and under the above regularity conditions, problem (4.9) has at least one solution and the error estimate (4.25) holds if the data satisfy (4.18). However, this triple of spaces is no longer optimal, because the degree two approximation of T d now requires far too many degrees of freedom with no gain in accuracy. For instance, when d = 3, its approximation by (Q 2 ) 3×3 sym requires 27 × 6 = 162 unknowns inside each element instead of 8 × 6 = 48 unknowns for (Q 1 ) 3×3 sym . The nonconforming finite element approximations discussed in Section 5 do not require an affine mapping F E and, by considering P-type approximations on the physical element E, do not suffer from the computational cost overhead mentioned above.
Nonconforming finite element approximation
The nonconforming approximations developed here will not only allow the use of elements of degree one for u, but will also lead to locally mass-conserving schemes. Because of the discontinuity of the finite element functions, the proofs are in some cases more complex; this is true in particular for the proof of the inf-sup condition for the discrete divergence.
The quadrilateral/planar-faced hexahedral case
Here we consider quadrilateral/hexahedral grids T h with planar faces, satisfying the regularity assumptions stated in Section 4.2. There is a wide choice of possible approximations with nonconforming finite elements. Here we propose globally discontinuous velocities in P d k , k ≥ 1, in each cell associated with globally discontinuous pressures and stresses both of degree at most k − 1. Thus we consider As usual, the full nonconformity of V h is compensated by adding to the forms consistent jumps and averages on edges when d = 2 or faces when d = 3; see for instance [38]. Let Γ h = Γ i h ∪ Γ b h denote the set of all edges when d = 2 or all faces when d = 3 with Γ i h and Γ b h signifying the set of all interior and the set of all boundary edges (d = 2) or faces (d = 3), respectively. A unit normal vector n e is attributed to each e ∈ Γ h ; its direction can be freely chosen. Here, the following rule is applied: if e ∈ Γ b h , then n e = n Ω , the exterior unit normal to Ω; if e ∈ Γ i h , then n e points from E i to E j , where E i and E j are the two elements of T h adjacent to e and the number i of E i is smaller than that of E j . The jumps and averages of any function f on e (smooth enough to have a trace) are defined by When e ∈ Γ b h , the jump and average are defined to coincide with the trace on e. The terms involving jumps and averages that are added to each form are not unique; here we make the following fairly standard choice: The trilinear form d is approximated by a centred discretisation, as follows: Clearly, the jump terms in (5.4) and (5.6) vanish when v h belongs to H 1 0 (Ω) d . Likewise, the jump and divergence terms in (5.5) vanish when u h and v h belong to H 1 0 (Ω) d and div(u h ) = 0. Moreover, (5.5) is constructed so that d h is antisymmetric, Finally, the following positive definite form acts as a penalty to compensate the nonconformity of u h :
\[ J_h(u_h, v_h) := \sum_{e \in \Gamma_h} \frac{\sigma_e}{h_e} \int_e [u_h] \cdot [v_h] \, \mathrm{d}s, \qquad (5.8) \]
where h e is the average of the diameter of the two elements adjacent to e, if e ∈ Γ i h , or the diameter of the element adjacent to e otherwise. The parameters σ e > 0 will be chosen below to guarantee stability of the scheme, see (5.28) and (5.24). This form is also used to define the norm on denotes the associated semi-norm. Also, in view of (5.6), we define the space of discretely divergence-free functions, The discrete scheme reads: As expected, b 2h (v h , 1) = 0, and therefore the system (5.12) is unchanged when the zero mean value constraint is lifted from the functions of Q h .
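The displays defining the jumps and averages are not reproduced above; with the orientation convention for n_e stated earlier (pointing from E_i to E_j with i < j), the usual definitions, recorded here for convenience, are
\[ [f]_e := f|_{E_i} - f|_{E_j}, \qquad \{ f \}_e := \tfrac{1}{2}\big( f|_{E_i} + f|_{E_j} \big) \qquad \text{for } e \in \Gamma_h^i, \]
while for a boundary face e ∈ Γ_h^b both the jump and the average coincide with the trace of f on e, exactly as stated above.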
Properties of the norm and forms
All constants below depend on the regularity of the mesh but are independent of h. In particular, we shall use C to denote such generic constant independent of h. In addition, we shall use the following "edge to interior" inequality. There exists a constantĈ, depending only on the dimension d and the degree of the polynomials, such that for all v h ∈ V h , all e ∈ Γ h and any element E, adjacent to e, v h L 2 (e) ≤Ĉ |e| |E| It is easy to check that (5.9) defines a norm on V h . Next, the results in [6,7] yield the following consequences of a discrete Korn inequality: and where ∇ h v h is the broken gradient (i.e., the local gradient in each element). Moreover, by following the work in [19,24,10], this can be generalised for all finite p ≥ 1 when d = 2 and all p ∈ [1, 6] when d = 3, to Finally, the inequality below is used in choosing σ e . Its proof is fairly straightforward, but it is included here for the reader's convenience.
Similarly, for e ∈ Γ b h , which is the face of an element E adjacent to ∂Ω, we have
By using the last two inequalities in
with the notational convention that when summing over e ∈ Γ b h the element E under the summation sign is the element adjacent to ∂Ω with face e, and when summing over e ∈ Γ i h the elements E 1 and E 2 under the summation sign are the ones that share the face e. Hence, .
The asserted result (5.23) follows from the last inequality by noting that, for each E ∈ T h , the factor D(u h ) 2 L 2 (E) appears at most 2d times.
Concerning the expression appearing in (5.24) we note that, thanks to the regularity assumption on the family of meshes, we have that h e |e| |E| ≤ C and so (5.25)
First a priori estimates
By testing the first equation of (5.12) with v h = u h , applying the third equation and the antisymmmetry (5.7) of d h , we obtain Next, by testing the second equation of (5.12) with S h = T h and substituting the above equality, we deduce that Thus, in view of (5.14), we have our first bound: A further bound is arrived at by testing the second equation of (5.12) with S h = D(u h ); hence, Then Proposition 3 gives, for any δ > 0, We choose δ = 1 and, upon recalling (5.25), assume that σ e is chosen so that Next, by adding J h (u h , u h ) to both sides of (5.27), applying (5.26) to bound this term, and using the norm of V h , we infer that To close the estimates, we return to (5.26) and get for any δ 2 > 0. Thus Thus we have shown the following uniform and unconditional bounds: An a priori estimate for the pressure requires an inf-sup condition. This is the subject of the next subsection.
An inf-sup condition
In the nonconforming case considered here, the analogue of (4.3) reads with a constant β * > 0 independent of h. To check this condition, recall Fortin's lemma; see for instance [20].
Lemma 6 (Fortin). Suppose that there exists an operator Π_h : V → V_h such that, for all v ∈ V,
\[ b_{2h}(\Pi_h v, q_h) = b_{2h}(v, q_h) \qquad \forall\, q_h \in Q_h, \qquad (5.32) \]
and
\[ \| \Pi_h v \|_{V_h} \le C\, \| v \|_{H^1(\Omega)}, \qquad (5.33) \]
with a constant C independent of h. Then the inf-sup condition (5.31) holds.
Originally, Fortin's lemma was stated for discrete functions in subspaces of H 1 0 (Ω) d , but the extension to spaces of discontinuous functions is straightforward, as long as the form b 2h (·, ·) is consistent with the divergence, which is the case here.
As the proof of (5.32), (5.33) is fairly technical, we restrict the discussion to the first order case, i.e., k = 1, in hexahedra. The quadrilateral case is much simpler.
The inf-sup condition in planar-faced hexahedra for k = 1
The construction of a suitable operator Π h is usually done by correcting a good approximation operator R h . For instance, we can use the L 2 projection onto the space of polynomials of degree one defined locally in each element, so that R h (v) belongs to V h and satisfies optimal approximation properties; see for instance [8].
By expanding b 2h and denoting by q E the value of q h in E, (5.34) reads Green's formula in each element yields 35) with n E the unit exterior normal to E. Consider now an interior face e shared by E 1 and E 2 , so that n e is interior to E 2 ; the contribution of e to the left-hand side of (5.35) is with a similar contribution to the right-hand side. Notice also that the contribution of a boundary face e ∈ Γ b h is equal to zero on both sides of (5.35). Therefore a sufficient condition for (5.35) is that We will thus construct c h ∈ V h by imposing (5.36) for each element E ∈ T h and each face e ∈ ∂E. To simplify the notation, we will write from now on c h and ( Let E be an arbitrary hexahedral element of T h with faces e i , centre of face b i , and exterior unit normal n i , 1 ≤ i ≤ 6. To be specific, let a i , i = 1, 2, 3, 4, be the vertices of e 1 , a i , i = 1, 3, 5, 6, the vertices of e 2 , a i , i = 1, 2, 5, 7, the vertices of e 3 , a i , i = 5, 6, 7, 8, the vertices of e 4 , a i , i = 2, 4, 7, 8, the vertices of e 5 , and a i , i = 3, 4, 6, 8, the vertices of e 6 . The ordering of the nodes is illustrated in Figure 1. Note that for i = 1, 2, 3, e i+3 is the face opposite to e i , opposite in the sense that its intersection with e i is empty. Without loss of generality, we assume that the vertex a 1 is located at the origin and that the face e 1 lies on the x 3 = 0 plane. Indeed, this situation can be obtained via a rigid motion (translation plus rotation), which preserves all normal vectors. Therefore, the normal to the face e 1 is parallel to the x 3 axis. Now, the idea is to transform E onto a "reference" elementÊ by an affine mapping F E so that the subtetrahedron S 1 of E based on e 1 and containing the origin a 1 is mapped onto the unit tetrahedronŜ 1 . More precisely, as e 2 and e 3 are both adjacent to e 1 , S 1 is the subtetrahedron with vertices a 1 , a 2 , a 3 , and a 5 , andŜ 1 has verticesâ 1 = (0, 0, 0),â 2 = (0, 1, 0),â 3 = (1, 0, 0),â 5 = (0, 0, 1), see Figure 1 for an illustration and some notation. This transformation and notation will be used till the end of this subsection. It stems from the regularity of the family of triangulations that there exists a constant M , independent of E and h, such that diameter(Ê) ≤ M. (5.37) The affine mapping F E has the expression where the constant term is zero since a 1 is the origin, and the matrix B is nonsingular; its columns are respectively a 3 = (a 1 3 , a 2 3 , 0) t , a 2 = (a 1 2 , a 2 2 , 0) t and a 5 = (a 1 5 , a 2 5 , a As F E is an affine transformation, it transforms faces onto faces, edges onto edges, and vertices onto vertices. Thus, since a 4 is in the plane x 3 = 0, thenâ 4 is in the planex 3 = 0. Likewise,â 6 is in the planex 2 = 0,â 7 in the planex 1 = 0, andâ 8 in the plane determined byâ 4 ,â 2 ,â 7 , as well as the plane determined byâ 7 ,â 5 ,â 6 , and the plane determined byâ 6 ,â 3 ,â 4 , hence in the intersection of these three planes. ThereforeÊ is located in the first octant of R 3 . Letn i denote the unit exterior normal vector toê i . It is related to n i by the general formulâ The advantage of having e 1 on the plane x 3 = 0 is thatn 1 = n 1 = (0, 0, −1) t . We also haven 2 = (0, −1, 0) t , andn 3 = (−1, 0, 0) t . Thus |n 3 1 | = |n 2 2 | = |n 1 3 | = 1, (5.39) and the regularity of the family T h implies that there exists a constant ν 0 , independent of h and E, such that |n 3 4 |, |n 2 5 |, |n 1 6 | ≥ ν 0 . 
(5.40) With this transformation, and after cancelling |detB| on both sides, (5.36) reads locally where the hat denotes composition with F E . Thus, by performing the change of variablê This is a linear system of six equations in twelve unknowns, the coefficients ofd h . Therefore, we can freely choose six coefficients and we have the following existence lemma. Collecting these results and the two extra assumptions mê 1 (d 1 ) = mê 1 (d 2 ) = 0 in (5.42), we find that The three facesê 1 ,ê 5 ,ê 6 share the vertexâ 4 , and the regularity of the hexahedron implies that the three vectors along the segments [â 4 ,â 3 ], [â 4 ,â 8 ], and [â 4 ,â 2 ] is a set of three linearly independent vectors of R 3 . Then the regularity of the hexahedron implies that a polynomial of degree one is uniquely determined by its moments on the four facesê 1 ,ê 5 ,ê 6 ,ê i for any i in the set {2, 3, 4}. Hence, asd 1 (respectively,d 2 ) is a polynomial of degree one, the first set (respectively, second set) of equalities and the regularity of the hexahedron imply thatd 1 = 0, respectively,d 2 = 0. When i = 4, this leads to mê 4 (d 3 ) = 0. Consequently, Let MÊ be the 6 × 6 matrix of the system (5.41) under the restriction (5.42). It stems from Lemma 7 that MÊ is nonsingular. Furthermore, the regularity of the hexahedron implies that MÊ is a continuous function ofÊ, thus continuous in a compact set of R 3 . Hence the norm of its inverse is bounded by a constant C, independent ofÊ, The stability of the correction follows now easily.
Lemma 8.
There exists a constantĈ, independent of h and E, such that for all E in T h and all e in Γ h , 47) where E 1 and E 2 are the two elements sharing e, when e is an interior face, and the sum is reduced to one term, namely the element E adjacent to e, when e is a boundary face.
Proof. The notationĈ below refers to different constants that are all independent of h and E. Recalling (5.41), (5.37) and the transformation fromŜ 1 onto S 1 , we observe that, for any i, By a trace inequality inÊ and the approximation property of R h inÊ, we have Then, by reverting to E, In view of (5.46) and the regularity of the family T h , the above relations lead to the following bound ond h : Since h S1 < h E , we immediately deduce from (5.48) the first two inequalities in (5.47). Finally, the third inequality follows from (5.48) and That completes the proof of the lemma.
As a consequence of Lemma 8 we have the following bounds: Finally, since the construction of Lemma 7 yields a unique correction, it is easy to check that the mapping v → c h defines a linear operator from V h into itself, i.e., c h = c h (v).
On the other hand, we infer from standard approximation properties of R h and the regularity of the mesh, that
A bound on the pressure
As usual, the inf-sup condition (5.31) yields a bound on the pressure. Indeed, it follows from the first equation of (5.12) together with (5.22), (5.20), (5.15) and (5.16) that Then (5.30) implies, with a constant C independent of h (but depending on α), that With the inf-sup condition (5.31), this implies that for another constant C independent of h.
Existence and convergence
The proof of existence of a solution of (5.12) is the same as in the conforming case. Recall that the case of interest is k = 1, which is assumed for the remainder of this subsection, but all of what follows can be straightforwardly extended to a general polynomial degree k ≥ 1 as long as the inf-sup condition (5.31) holds. First, the problem is reduced to one equation by testing the first equation of (5.12) with v h ∈ V h,0 and by observing that the second equation determines for each u h ∈ V h a unique T h in M h . This is expressed by writing T h = G h,DG (u h ). Then, (5.12) is equivalent to finding a u h ∈ V h,0 such that By means of the a priori estimates (5.30), existence of a solution is deduced by Brouwer's fixed point theorem.
However, in order to pass to the limit in the equations of the scheme, following [16], we need to introduce discrete differential operators related to distributional differential operators. These are G sym The polynomial degree one in this space is convenient for proving the convergence of the nonlinear term; see (5.62). The straightforward scaling argument used in proving Proposition 3 shows that and and thus by (5.15) with different constants C independent of h. At the same time, this gives existence of these two operators. The next proposition relates G sym h (u h ) and D(ū). The proof is an easy extension of that written in [16], but we include it below for the reader's convenience. Proof. On the one hand, the bounds (5.56) and (5.30) imply that there exists a functionw ∈ L 2 (Ω) d×d sym such that, up to a subsequence, lim On the other hand, take any tensor F in H 1 (Ω) d×d sym and let P 0 h (F ) be its orthogonal L 2 (Ω) d×d projection on constants in each E. We have that tends to zero with h. Therefore, the definition (5.54) of G sym and a straightforward argument yields that the first term tends to zero. Hence
Now, an application of Green's formula in each
A comparison with (5.59) and uniqueness of the limit yield D(ū) =w, thus proving (5.58).
Remark 6. The fact thatū belongs to H 1 0 (Ω) d is an easy consequence of the above proof.
A similar argument to the one in Proposition 4 gives that Hence, by passing to the limit in the last equation of (5.12), we immediately deduce that div(ū) = 0; thus u belongs to V and satisfies the third equation of (4.1).
In the next theorem, these results are used to show that the limit satisfies the remaining equations of (4.1).
Theorem 5. Let the family of hexahedra T h be regular in the sense defined above. Then the triple (T ,ū,p) solves (4.1).
Proof. The proof proceeds in two steps.
Step 1. Let us start with the first equation of (5.12). Take a function v ∈ D(Ω) d and let v h ∈ V h be the L 2 (Ω) d orthogonal projection of v on P d 1 in each element. It is easy to check that Therefore the weak convergence of T h and the definition of G sym Similarly, As the right-hand side tends to Thanks to the antisymmetry of d h , we have For the first term, the strong convergence of u h in L 4 (Ω) d and the strong convergence of the broken gradient sinceū ∈ V. For the second term, take any piecewise constant approximationv h of v. Then The boundedness of u h in V h and the convergence to zero of v h −v h in L ∞ (Ω) d imply that the first term tends to zero. For the second term, we deduce from the definition of G div As div(ū) = 0, G div h (u h ) tends to zero weakly in L 2 (Ω). Then the strong convergence of u h in L 2 (Ω) d and that ofv h in L ∞ (Ω) d show that this second term tends to zero. It remains to examine the last term of (5.61). Here we use the fact that, for any This, with the boundedness of u h in V h , gives that this last term tends to zero. Thus, we conclude that The conclusion of these limits and a density argument is that the triple (T ,ū,p) satisfies the first equation of (4.1) Step 2. The argument for recovering the constitutive relationT = G(ū) is close to that for the conforming case, up to some changes. On the one hand, we observe that and, since J h (u h , u h ) is positive and bounded, this implies that On the other hand, we infer from (5.63) that and defineT h = G h,DG (ū), i.e., where the second equality holds thanks to the fact thatū belongs to H 1 0 (Ω) d . The fact that div(ū) = 0 implies that the trace ofT d is zero and justifies the above superscript. Therefore and, as in the conforming case, we conclude that Finally, the difference between the equations satisfied by T h andT h yields By testing this equation with S h = T h −T h and using the monotonicity property (2.14), we deduce that However, by (5.54), and it follows from Proposition 4 and (5.65) that Then, by passing to the limit in (5.66), we obtain in view of (5.64) the inequality α lim This proves that (T ,ū) satisfies the second equation of (4.1).
The tetrahedral case
Here we study briefly two examples of finite element discretisations on tetrahedral meshes, the triangular case being simpler. Many of the details are skipped because they follow closely those in the previous subsection. The family of meshes T h is assumed to be regular as in (4.24). Let us start with the same spaces V h , Q h , and M h defined on T h by (5.1), (5.2), and (5.3), respectively, and the same bilinear (5.5), and (5.6), respectively. Then the scheme is again given by (5.12) and, under assumption (4.24), all proofs from the previous subsections are valid in this case, except possibly the proof of the inf-sup condition. In fact, Theorem 4 holds with a much simpler proof. Indeed, take any tetrahedron E. Recalling that the case of interest is k = 1, a polynomial of P 1 is uniquely determined in E by its values at the centre points b e of its four faces e. Then, instead of (5.36), we can use the sufficient condition and this defines uniquely the correction c h . Furthermore, thanks to (4.24), the stability of this correction follows from the fact that E is the image of the unit tetrahedronÊ by an invertible affine mapping whose matrix satisfies the same properties as the matrix B used above. Thus the conclusion of Theorem 4 is valid in this case.
As a second example, it would be tempting to use the Crouzeix-Raviart element of degree one on tetrahedra; see [15]. This would be possible if the analysis did not invoke Korn's inequality (with respect to the broken symmetric gradient), because it is not satisfied by the Crouzeix-Raviart element; cf [18]. Thus, the simplest way to bypass this difficulty is to introduce the jump penalty term J h (u h , v h ) defined in (5.8). Let us describe this discretisation. Again, we suppose that (4.24) holds. The discrete spaces Q h and M h are the same, with k = 1, as in (5.2) and (5.3), respectively. However, instead of V h , we now use the space V CR h whose elements are also piecewise polynomials of degree one in each element, but in contrast with (5.1), they are continuous at the centre points of all interior faces e ∈ Γ i h , and are set to zero at the centre points of all boundary faces e ∈ Γ b h . Thanks to this pointwise continuity and boundary condition, the scheme now involves the following bilinear/trilinear forms, compare with (5.4), (5.5), (5.6): With these new forms, analogously to (5.12), the finite element approximation of the problem reads as follows: find a triple ( is obviously antisymmetric and is simpler. Although the norm of the broken gradient is a norm on V CR h , the mapping v h → D(v h ) h is not a norm on V CR h . According to [6,7], we have instead (5.14) and (5.15). That is why we use again the norm v h V h defined in (5.9) and keep the term J h (u h , v h ) in the first line of (5.71). Note however that the parameters σ e need not be tuned by Proposition 3 since there are no surface terms in b CR 1h (T h , v h ); thus it suffices for instance to take σ e = 1 for each face e. Moreover, the analysis used for the general discontinuous elements substantially simplifies here. First, as there are no surface terms in the bilinear forms, the bounds are simpler. Next, the operator Π h satisfying the statement of Lemma 6 is constructed directly by setting, for v in H 1 0 (Ω) d , see [15]. Clearly, as v ∈ H 1 0 (Ω) d , (5.72) defines a piecewise polynomial function of degree one in V CR h . Finally, convergence of the scheme is derived without the discrete differential operators G sym h and G div h . Indeed, property (5.17) can be extended as is asserted in the proposition.
with a constant C independent of h, then there exists a functionv ∈ H 1 0 (Ω) d satisfying (5.17) and lim where D h stands for the broken symmetric gradient.
The proof, contained in [15], relies on the fact that the integral average of the jump [v h ] e vanishes on any face e and hence, for any tensor Thus, there is no need for G sym h ; the same is true for G div h . This permits to pass directly to the limit in (5.71).
Numerical illustrations
We introduce two decoupled iterative algorithms. The first one is based on a Lions-Mercier decoupling strategy, while the second one is a fixed-point algorithm. Both algorithms are implemented using the deal.II library [1]. For simplicity, we focus on conforming finite element approximations, for which an a priori error estimate has been derived in Subsection 4.1.2. Performing numerical experiments in the case of the nonconforming approximation scheme will be the subject of future work.
The general setup is the following:
• Dirichlet boundary conditions are imposed on the entire domain boundary (not necessarily homogeneous);
• a sequence of uniformly refined meshes with square elements of diameter h = √2/2^n, n = 2, . . . , 6 (level of refinement), is considered for the mesh refinement analysis;
• the finite element spaces M_h, V_h, and Q_h consist, respectively, of discontinuous piecewise polynomials of degree 2, continuous piecewise polynomials of degree 2, and continuous piecewise polynomials of degree 1 (see Subsection 4.2.2).
Following [5], we perturb the constitutive relation by an additional right-hand side g in order to design an exact solution. Then, given T^d, u and p, we compute the corresponding right-hand sides g and f (forcing term). Finally, we choose µ(s) = 1/√(1+s²), which corresponds to (1.10) with β = 1 and n = −1/2.
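The manufactured-solution procedure just described can be sketched symbolically. The script below assumes the system has the form div(u ⊗ u) − div(T^d) + ∇p = f together with a perturbed constitutive relation αT^d + γµ(|T^d|)T^d − D(u) = g (the form reconstructed earlier in this document), and the fields chosen are purely illustrative; they are not the exact solution used in the paper's experiments.

```python
# Sketch of the manufactured-solution computation (illustrative fields, not the
# ones from the paper): given u, p and a symmetric trace-free T, compute the
# right-hand sides f (momentum) and g (perturbed constitutive relation).
import sympy as sp

x, y = sp.symbols("x y", real=True)
alpha, gamma = 1, 1
mu = lambda s: 1 / sp.sqrt(1 + s**2)        # mu(s) = 1/sqrt(1+s^2)  (beta=1, n=-1/2)

psi = sp.sin(sp.pi * x)**2 * sp.sin(sp.pi * y)**2       # stream function -> div u = 0
u = sp.Matrix([sp.diff(psi, y), -sp.diff(psi, x)])
p = sp.cos(sp.pi * x) * sp.cos(sp.pi * y)
T = sp.Matrix([[sp.sin(x) * sp.cos(y),  sp.cos(x) * sp.sin(y)],
               [sp.cos(x) * sp.sin(y), -sp.sin(x) * sp.cos(y)]])   # symmetric, trace-free

X = sp.Matrix([x, y])
J = u.jacobian(X)
D = (J + J.T) / 2                                        # symmetric velocity gradient
frob = sp.sqrt(sum(T[i, j]**2 for i in range(2) for j in range(2)))   # |T| (Frobenius)
div_T = sp.Matrix([sum(sp.diff(T[i, j], X[j]) for j in range(2)) for i in range(2)])
conv = J * u                                             # (u . grad) u, since div u = 0
grad_p = sp.Matrix([sp.diff(p, x), sp.diff(p, y)])

f = conv - div_T + grad_p                                # momentum forcing
g = alpha * T + gamma * mu(frob) * T - D                 # constitutive right-hand side

pt = {x: sp.Rational(1, 3), y: sp.Rational(1, 4)}
print("f(1/3, 1/4) =", f.subs(pt).evalf().T)
print("g(1/3, 1/4) =", g.subs(pt).evalf())
```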
Lions-Mercier decoupled iterative algorithm
We present here an iterative algorithm for computing an approximation of the solution to problem (3.5), based on the formulation (3.4); its finite element discretisation is denoted by (6.1). Note that problem (6.1) is equivalent to problem (4.9) analysed in Section 4.
To compute the solution to problem (6.1), we propose a decoupled algorithm based on a Lions-Mercier splitting algorithm [25] (alternating-direction method of the Peaceman-Rachford type [31]) applied to the unknown T h . Following the discussion in [5,Section 7], the algorithm reads, for a pseudo-time step τ > 0: Then, for k = 0, 1, . . . , perform the following two steps: Step 1: Find T Step 2: The solution to (6.2) is obtained by first determining u . A standard argument shows that the above algorithm generates uniformly bounded sequences. Thus they converge up to subsequences. However, the identification of a unique limit for the entire sequence is currently unclear.
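The displays for the two steps are not reproduced above, so the following zero-dimensional caricature may help fix ideas: it applies a Peaceman-Rachford-type alternating-resolvent iteration to a scalar equation A(t) + B(t) = b, where the monotone nonlinearity A plays the role of the constitutive part and the linear term B stands in (very crudely) for the Navier-Stokes part. It only illustrates the splitting structure and the role of the pseudo-time step τ, not the actual finite element algorithm.

```python
# Peaceman-Rachford / Lions-Mercier splitting for the scalar model problem
#   A(t) + B(t) = b,  with A(t) = gamma*t/sqrt(1+t^2) (monotone "constitutive" part)
#   and B(t) = c*t (linear part).  Each half step inverts (I + tau*A) or (I + tau*B).
import numpy as np
from scipy.optimize import brentq

gamma, c, b, tau = 2.0, 1.0, 1.5, 0.5
A = lambda t: gamma * t / np.sqrt(1.0 + t * t)
B = lambda t: c * t

def resolvent_A(rhs):
    # solve t + tau*A(t) = rhs; the left-hand side is strictly increasing in t
    return brentq(lambda t: t + tau * A(t) - rhs, -abs(rhs) - 1.0, abs(rhs) + 1.0)

t = 0.0
for k in range(200):
    t_half = resolvent_A(t - tau * B(t) + tau * b)              # (I+tau A) t_half = (I-tau B) t + tau b
    t = (t_half - tau * A(t_half) + tau * b) / (1.0 + tau * c)  # (I+tau B) t_new  = (I-tau A) t_half + tau b
    if abs(A(t) + B(t) - b) < 1e-10:
        break
print(f"converged after {k + 1} iterations: t = {t:.6f}, residual = {A(t) + B(t) - b:.2e}")
```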
Regarding the implementation, we make the following comments:
• Stopping criterion: for the main loop (Lions-Mercier algorithm), the stopping criterion (6.4) uses the tolerance 10^{-5};
• Initialisation: we solve the Navier-Stokes system associated to problem (6.2) using Newton's method (the iterates are indexed by m) until the corresponding stopping criterion is met. As an initial guess, we take the solution of the associated Stokes system without the convective term.
The solution to each saddle-point system of the form
\[ \begin{pmatrix} A & B^{t} \\ B & 0 \end{pmatrix} \begin{pmatrix} U \\ P \end{pmatrix} = \begin{pmatrix} F \\ G \end{pmatrix} \]
is obtained using a Schur complement formulation,
\[ B A^{-1} B^{t}\, P = B A^{-1} F - G, \qquad A\, U = F - B^{t} P. \]
To solve for P, we use the conjugate gradient algorithm in the case of the Stokes problem and GMRES for the (linearised) Navier-Stokes problems. In both cases, the pressure mass matrix is used as preconditioner and the tolerance for the iterative algorithm is set to 10^{-6} ‖BA^{-1}F − G‖_{ℓ²}. A direct method is advocated for every occurrence of A^{-1} and also to obtain T^{(0)}. Recall that discontinuous piecewise polynomial approximations are used for the stress, and so T_h is determined locally on each element E ∈ T_h as the solution to a small nonlinear system. We again employ Newton's method, starting with T^{(0)}, so that the global residual is less than 10^{-6}. Note that in this case it might happen that no iteration is needed (e.g. when γ = 0), in which case T_h coincides with the initial guess.
• Step 2: this step is similar to the initialisation step, except that we take the current velocity iterate u_h as our initial guess for Newton's method for solving the finite element approximation of the Navier-Stokes system.
The exact deviatoric stress of the manufactured solution is symmetric and, in particular, has vanishing trace. We observe that u is divergence-free. Moreover, the pressure satisfies p = −(1/2) tr(T) and has zero mean. We report in Table 1 the error for each component of the solution for the case α = 1 and γ = 0, while Table 2 contains the results for α = γ = 1. Note that we use the H¹ semi-norm for the velocity and not the (equivalent) L²(Ω)^{2×2} norm of the symmetric gradient. (The columns of the tables list n, h, ‖T^d − T_h‖_{L²(Ω)}, ‖∇(u − u_h)‖_{L²(Ω)}, ‖p − p_h‖_{L²(Ω)} and the iteration count.)
Table 2: Case 1, α = γ = 1, δ = 10^{-5}, τ = 0.01.
We observe that all three errors are O(h²). The deterioration of the convergence rate we observe for T^d and p in Table 2 is due to the stopping criterion. Indeed, if we use 10^{-6} instead of 10^{-5} in the stopping criterion (6.4) for the main loop, then for h = 0.044 (n = 5) we need 250 iterations and we get ‖T^d − T_h‖_{L²(Ω)} = 4.66898 × 10^{-4}, ‖u − u_h‖_{L²(Ω)} = 1.13118 × 10^{-3} and ‖p − p_h‖_{L²(Ω)} = 3.62581 × 10^{-4}; compare with the fourth row of Table 2. We give in Tables 3, 4 and 5 the results obtained when a larger pseudo-time step is used.
Table 3: Case 1, α = γ = 1, δ = 10^{-5}, τ = 0.05.
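Returning to the Schur-complement solve described at the start of this subsection: in generic linear-algebra terms the strategy looks as follows. The matrices below are random stand-ins rather than an actual Stokes or Navier-Stokes discretisation, and the pressure-mass-matrix preconditioner is omitted for brevity.

```python
# Schur-complement solution of a saddle-point system
#   [A  B^T][U]   [F]
#   [B   0 ][P] = [G]:   (B A^{-1} B^T) P = B A^{-1} F - G,   A U = F - B^T P.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, m = 40, 10
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD stand-in for the velocity block
B = rng.standard_normal((m, n))                                # stand-in for the divergence matrix
F, G = rng.standard_normal(n), rng.standard_normal(m)

Ainv = np.linalg.inv(A)          # "a direct method for every occurrence of A^{-1}"
S = LinearOperator((m, m), matvec=lambda p: B @ (Ainv @ (B.T @ p)))   # Schur complement, matrix-free
rhs = B @ (Ainv @ F) - G

P, info = cg(S, rhs)             # conjugate gradients on the (SPD) Schur complement
U = Ainv @ (F - B.T @ P)

res = np.linalg.norm(np.concatenate([A @ U + B.T @ P - F, B @ U - G]))
print("cg exit flag:", info, "  saddle-point residual:", res)
```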
A fixed-point algorithm
Instead of the Lions-Mercier type algorithm introduced in Subsection 6.1, we explore the following fixed-point strategy.
Step 1: for all (u h , q h ) ∈ V h × Q h .
Step 2: Find T It is easy to show that this algorithm produces uniformly bounded sequences. The solvers used for these two steps are similar to those described in Subsection 6.1. In particular, we take (u (k) h , p (k) h ) as initial guess for Newton's method for the finite element approximation of the Navier-Stokes system, except when k = 0, in which case we use the solution of the associated Stokes problem.
Concerning the computational cost when similar results are obtained, i.e., when τ = 0.5 for the Lions-Mercier type algorithm, we note that the latter requires the solution of one more equation per iteration, namely the linear equation for T (compare Table 5 with the results obtained with the algorithm of Section 6.2).
Analysis of the suitability of using the selected wind turbine blades for wind power appliances based on numerical analyses
The aim of this study was a comparative analysis of the suitability of selected airfoils for wind power applications. Based on numerical analyses and analytical methods, information on the power coefficient was obtained. The analyses were carried out for the wind turbine blades and rotors of a vertical axis wind turbine. The tests were performed for the newly constructed profile and compared with the DU 06-W-200 profile used in the construction of wind turbine rotors. A vertical axis wind turbine model equipped with the designed blade profiles was prepared. The main intended purpose of the device is to supply electricity to a household. The blade profile models were prepared and a numerical analysis was then performed using a CFD application. The results obtained for the given wind speeds and types of profiles were compared with each other. The conducted research made it possible to assess whether applying the new, untested profile is justified, based on the determined value of the wind turbine power coefficient. The studies showed that careful preparation of an optimal rotor blade with respect to the flow of the air stream strongly influences the characteristics of the wind turbine.
Introduction
One of the most important issues in the power energy sector is the development and examination of modern technologies enabling the maximization of energy obtained from renewable sources. Pollution resulting from burning fossil fuels in conventional power plants has become a serious ecological and economic problem all over the world [1-3]. In this context, the development of appropriate technology to intensify the use of renewable energy sources is very important [4].
Wind power plants are most often associated with tall masts and turbines with blades of an enormous span. This is not the only possible design for wind turbines; an alternative is the construction and continuous development of vertical axis wind turbines. The technology of wind turbines with a vertical axis of rotation is again trying to convince investors, and its technical parameters are promising. A turbine of this type can work at very low wind speeds. The design allows the investor to achieve sufficiently high efficiency without the need to build tall masts and allows the facility to be assembled from functional segments [5-7]. An additional advantage may be the long working life of the device. A vertical axis wind turbine (VAWT) works well enough in urban areas, whereas classic horizontal axis wind turbines (HAWT) are sensitive to air turbulence and can be a source of noise. The VAWT can be used as an energy source for powering single-family houses, campsites and summer houses, yachts, measuring and signalling devices, advertising banners and street lamps. VAWTs can be used wherever access to a standard power grid is difficult or impossible. The main element of the wind power plant is the rotor, which converts wind energy into mechanical energy. The rotor is mounted on a low-speed shaft and rotates mostly at a speed of 15 to 180 rpm. This speed is multiplied by the transmission and transferred to the generator using a high-speed shaft. The generator produces electricity from the mechanical energy [8,9].
Design requirements for modern turbines are high. New, advanced technologies, light and durable materials and the ability to perform complex aerodynamic simulations give designers great opportunities to create modern airfoils and efficient turbines. The operation of the wind turbine is based on the use of lift and drag forces. Wind power generators designed to start up at low wind speeds and high functionality are just a few of the advantages of the Darrieus type wind turbine. Despite its relatively uncomplicated construction, such a power plant can react to wind from any direction. Despite the disadvantage of a very small starting torque, a power plant of this type with a low rotor rotational speed is characterized by lower noise propagation during operation and the possibility of being mounted closer to population centres. Wind turbines constructed today are based on the achievements of aviation technology (Fig. 1a), and the production of wind turbine components has a lot in common with the aviation industry. These similarities mainly concern operating conditions, which in both cases involve variable wind speeds and directions [10,12]. Both in aviation and in wind power engineering, the geometry of the blade has a decisive impact on the characteristics of the wind turbine or the aircraft performance. Changes in surface smoothness and in the shape of the airfoil are of great importance and can cause significant differences in the performance of power plants with the same blade dimensions. Even slight differences in shape can significantly change the characteristics. Therefore, choosing the right rotor blade is very important, and properly profiled blades guarantee a higher efficiency of the conversion of the kinetic energy of wind into the mechanical energy of the rotor [13-15]. The airfoils are characterized by efficient work even at very large Reynolds numbers of 150,000 and 700,000. An increased blade width allows power plants to maintain the assumed capacity over a greater range of wind speeds. Due to the thickening of the profiles, a higher strength of the structure can be obtained.
A new wind turbine blade has been modelled (Fig. 1b). Assuming that each cross-section of the blade behaves like a profile in a two-dimensional flow field, the rotor has been analysed in a simplified manner (Tab. 1). In practice, it turns out that only part of the wind power is actually used to set the turbine in motion; the mechanical power of a wind turbine is obtained by reducing the wind power by the power coefficient Cp, and the energy on the rotor shaft is calculated accordingly. The profile DU 06-W-200, based on the data in [16], and the newly designed airfoil were modelled; the results obtained during the numerical analysis were referenced to these profiles. The modelled blade made it possible to obtain the final version of the turbine, which was used during the numerical analysis (Fig. 2a, 2b).
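The shaft-power formula referred to above did not survive text extraction; in its standard form (a reconstruction, with ρ denoting the air density and A the rotor swept area, neither of which is defined in this excerpt), the power available on the rotor shaft is

\[
P_{\text{shaft}} = C_p \, P_{\text{wind}} = C_p \cdot \tfrac{1}{2}\,\rho\, A\, v_1^{3},
\]

where v1 is the free-stream wind speed.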
Power coefficient calculations
The value of the power coefficient Cp depends on the speed of the air before and after the rotor. The velocity of the air stream entering the rotor and the air velocity behind the rotor are denoted v1 and v2, respectively. Three analytical methods of calculating the power coefficient Cp are presented below.
The Betz limit is a hypothetical upper bound on the total amount of power that can be extracted by a wind turbine from the moving air [17-19]. In the first method, the power coefficient Cp based on the Betz analysis is calculated from the ratio of v1, the speed of the air stream entering the rotor (m/s), and v2, the air speed behind the rotor (m/s). Based on this formula, a plot of the Cp coefficient depending on the air speed ratio was prepared (Fig. 3a). The calculated power coefficient values are in reality lower due to the occurrence of various types of aerodynamic losses. The second method of determining the power coefficient Cp is based on the tip speed ratio (TSR) λ, which relates the rotational angular velocity to the wind speed [15,21,22], where Ω is the angular velocity of the rotor wheel (rad/s), v1 the wind speed (m/s), and Rz the external radius of the impeller (m). The rotational speed of the rotor can be expressed as Ω = (π n)/30, where n is the rotor speed in rpm.
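The two formulas referred to above were likewise lost in extraction; their standard textbook forms, consistent with the variables defined here (this is a reconstruction rather than a verbatim reproduction of the paper's equations), are

\[
C_p = \frac{1}{2}\left(1 + \frac{v_2}{v_1}\right)\left(1 - \frac{v_2^{2}}{v_1^{2}}\right),
\qquad
\lambda = \frac{\Omega\, R_z}{v_1},
\]

where the Betz expression attains its maximum value of 16/27 ≈ 0.593 at v2/v1 = 1/3, and λ is the tip speed ratio.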
It is possible to find a relationship between Cp and the TSR; based on this relationship, a plot of Cp depending on the TSR was prepared (Fig. 3b). The third method of determining the power coefficient is based on specific aerodynamic coefficients of the wind turbine rotor blades. This method is especially useful for comparing the performance of turbines with different numbers of blades [15,23]. It involves N, the number of wind turbine blades; λ, the TSR; the coefficients C1 and C2, which in the literature appear in the simplified form C1 = 0.416 and C2 = 0.35; Cd, the drag coefficient; and Cl, the lift coefficient. Based on the above analyses, the power coefficient Cp can be used as a measure of the wind turbine rotor efficiency at a given wind speed, and it provides an approximation of the actual power produced by the rotor of the wind turbine.
Experimental procedure
The aim of the research is to determine the impact of the orientation of the rotor axis on the power coefficient. Moreover, the role of the optimal shape of the blades of the wind power plant was specified. Based on the computational algorithm, and following similar solutions found in practice, the rotational speeds of the wind turbine rotor were determined [15,21,23]. The analysis was carried out at three rotational speeds of the rotor corresponding to three wind speeds: the minimum, rated and maximum wind speed. The minimum speed of the turbine is the rotational speed of the rotor at which the turbine can start automatically. The rated rotational speed is the rotor speed at which the wind turbine should achieve its nominal power.
Maximum speed describes the rotor speed at which the turbine can operate safely, without exposing the rotor structure to the destructive effect of centrifugal forces, vibrations and other factors that can damage the turbine rotor.
Considering that research conducted on similar types of wind turbines showed that constant-geometry Darrieus turbines can start up during wind gusts and work at wind speeds of even 2-5 m/s, the minimum wind velocity was assumed as vmin = 5 m/s.
Considering similar technical applications presented on the market by turbine manufacturers, and taking into account the strength aspects and the correct and safe operation of the turbine structure, the maximum wind speed was assumed at the level of vmax = 25 m/s, and the rated wind speed at the level of vznam = 12 m/s.
Analyses were prepared for the chosen rotor blades (Fig. 4). The speed of the moving air is shown using graphical vectors illustrating the distribution of airflow velocity, measured at four characteristic measuring points. The air speed values read from the measurement points at given air speeds are listed in Tables 2 and 3. In the same way as in the case of the tested blade profiles, a volume was generated, materials were assigned to the generated volume and turbine models, and the boundary conditions of the analyses were determined. As for the blade profiles, air velocities of 5 m/s, 12 m/s and 25 m/s were assumed. The rotor was given a motion with a constant angular velocity. The appropriate angular velocities, determined on the basis of the air flow value, were imposed on the specified axis and the centre of rotation (the turbine hub).
The following angular velocities were adopted: 63 1/s, 145 1/s and 310 1/s. The visible color difference between the upper and lower side of the profile (Fig. 5) results from the difference in speeds and pressures. The lift force acting on the rotor blades and setting them in motion is formed as a result of these differences. Analyses were conducted for the DU 06-W-200 and the newly designed blade profile.
The speed of the moving air is shown using graphical vectors illustrating the distribution of airflow velocity and measured at four characteristic measuring points.The air speed values read from the measurement points at given air speeds are listed in the table (Tab.4,5).
Determination of values of power coefficient by analytical methods
For the averaged measurements of air flow velocities collected at the measuring points before and after the turbine, the value of the power coefficient Cp was determined. The value of Cp was calculated taking into account the Betz limit, in succession for the air speeds of 5, 12 and 25 m/s (Tab. 6). On the basis of the obtained range of the power coefficient it can be concluded that plausible results were obtained.
When determining the value of the Cp coefficient with the use of the tip speed ratio, for the tested type of turbines the maximum value of the power coefficient Cp,max = 0.35 was assumed, together with the maximum TSR λmax = 6.4 and TSR λ = 3.15. In this method, identical results were obtained for all types of analysed turbines, since the TSR depends on the rotational speed, which was adopted as the same for all types of turbines. In addition, the same rotor radius was assumed for all types of rotors, and the analysis was carried out under identical air flow conditions.
The value of the power coefficient calculated for this type of wind turbine, taking into account the adopted dimensions and parameters, was equal to 0.171.
When determining the value of the Cp coefficient with the use of the aerodynamic coefficients for the tested type of turbines, the values of the lift force were determined by evaluating the lift and drag force coefficients. Based on the assumptions of N = 3 wind turbine blades, TSR λ = 3.15 and coefficients C1 = 0.416 and C2 = 0.35, the values of the power coefficient for the turbine with the DU 06-W-200 profiles were calculated. The Cp coefficients for the analysed profiles, determined by the three methods, are compared in Tab. 7. According to the first calculation method, the Cp coefficient for the turbine with the newly designed blade profile is slightly worse than that of the comparative rotor.
The second method gave the same result for all types of analysed rotors. This method depends on the type of turbine and its dimensions (the diameter of the rotor wheel and the number of blades in the rotor), and does not depend on the parameters of the rotor's motion or the type of profiles used.
In the third method, better results were obtained for the turbine with the newly designed blade profile, and the arithmetic average of all methods, calculated for each type of profile, also indicates the newly designed profile as the better solution.
Conclusion
A wind power plant with a vertical axis of rotation is a simple construction that gives the possibility of obtaining electricity in places where access to it is limited or impossible. A model of a wind turbine with a vertical axis of rotation was developed with the use of new profiles. In order to interpret the obtained results and relate them to reality, the comparative profile DU 06-W-200 and a turbine built with it were used for the comparative analysis.
Based on the numerical analysis, it can be concluded that the new profile achieves comparable or even better results than the comparative profile with respect to the aerodynamics itself and the value of wind power used by the turbine.
The CFD analysis of the flow around the profiles showed a greater difference between the air speeds at the bottom and at the back of the blade for the designed profile, which suggests the creation of more lift force.
Also, based on the power coefficient Cp determined by the three methods, it is possible to suggest using the new profile in this technology. The arithmetic average value of the Cp coefficient calculated on the basis of the above methods gives a better result for the designed blade profile than for the currently used one.
However, in addition to a properly engineered power plant blade, there are still many elements affecting the efficiency of the turbine: the structural parameters of the wind power plant, the drives and control systems, the materials from which the power plant is built, and elements of the climate [24-27]. For the investment to bring profit to investors, each of these factors must have a positive impact on the venture.
Fig. 1. Airfoils; (a) the DU 06-W-200 rotor blade profile used in the construction of wind turbines; (b) the designed rotor blade profile.
Fig. 3. Power coefficient Cp of the wind turbine based on analytical analyses; (a) as a function of the wind speed ratio V2/V1 (Betz limit); (b) as a function of TSR λ.
Fig. 4. The distribution of air speed for the DU 06-W-200 and designed blade profiles at different speeds of the airflow: (a) DU 06-W-200 at 5 m/s; (b) DU 06-W-200 at 12 m/s; (c) DU 06-W-200 at 25 m/s; (d) designed blade at 5 m/s; (e) designed blade at 12 m/s; (f) designed blade at 25 m/s.
Fig. 5. The distribution of air speed for the DU 06-W-200 and designed blade profiles at different speeds of the airflow: (a) DU 06-W-200 at 5 m/s; (b) DU 06-W-200 at 12 m/s; (c) DU 06-W-200 at 25 m/s; (d) designed blade at 5 m/s; (e) designed blade at 12 m/s; (f) designed blade at 25 m/s.
Table 1. Assumptions for calculations.
Table 2. Airflow velocities for the DU 06-W-200 profile at four characteristic points.
Table 3. Airflow velocities for the designed blade profile at four characteristic points.
Table 4. Airflow velocities for the rotor made of the DU 06-W-200 profile at six characteristic points.
Table 5. Airflow velocities for the rotor made of the designed blade profile at six characteristic points.
Table 6. Values of the Cp coefficients at three different flow rates.
Prognostic factors of synchronous endometrial and ovarian endometrioid carcinoma
Objective: Gynecologists occasionally encounter synchronous endometrial and ovarian endometrioid carcinoma (SEO-EC) patients who show a more favorable prognosis than patients with locally advanced or metastatic disease. This study aimed to elucidate prognostic factors of SEO-EC and identify patients who have a sufficiently low risk of recurrence without receiving adjuvant chemotherapy. Methods: We retrospectively reviewed 46 patients with pathologically confirmed SEO-EC who underwent surgery at the National Cancer Center Hospital between 1997 and 2016. Immunohistochemical evaluation of DNA mismatch repair (MMR) protein expression was performed for both endometrial and ovarian tumors. Patient outcomes were analyzed according to clinicopathologic factors. Results: From the multivariate analysis, cervical stromal invasion indicated a worse prognosis for progression-free survival (hazard ratio [HR]=6.85; 95% confidence interval [CI]=1.50–31.1) and overall survival (HR=6.95; 95% CI=1.15–41.8). Lymph node metastasis and peritoneal dissemination did not significantly affect survival. MMR deficiency was observed in 13 patients (28.3%), with both endometrial and ovarian tumors showing the same MMR expression status. MMR deficiency was not significantly associated with survival. Of 23 patients with lesions confined to only the uterine body and adnexa, only 2 had recurrence in the group receiving adjuvant therapy, while none of the 10 patients who did not receive adjuvant therapy had recurrence. Conclusion: SEO-EC patients with tumors localized to the uterine body and adnexa had a low risk of recurrence and may not require adjuvant therapy. SEO-EC may have prognostic factors different from those of endometrial and ovarian cancer.
INTRODUCTION
Synchronous endometrial and ovarian endometrioid cancer (SEO-EC) is an intractable clinical concern for gynecologists. SEO-EC occurs in 3.1%-10.0% of patients with endometrial cancer and 10% of those with ovarian cancer [1-3]. The possible origins of SEO-EC include 3 different scenarios: dual primary cancer, endometrial cancer with ovarian metastasis, and ovarian cancer with endometrial metastasis. Distinguishing dual primary cancer from metastatic disease is essential for choosing the optimal adjuvant treatment and predicting patient prognosis. Previously reported clinicopathological features of dual primary cancer include younger age, earlier stage, histologically lower grade, and a more favorable prognosis than metastatic disease [2,4-8]. Therefore, some gynecologists believe that patients with dual primary cancer may not always require adjuvant chemotherapy. The traditional approach for distinguishing dual primary cancer from metastatic disease, and thereby selecting treatment, is based on histopathological features such as histological similarity, tumor volume, depth of myometrial invasion, presence of vascular space invasion, ovarian tumor laterality, and presence of atypical endometrial hyperplasia and/or endometriosis, as reported by Herrington [9]. However, clinicians occasionally encounter an SEO-EC patient in whom distinguishing between dual primary and metastatic disease using the Scully criteria is difficult (the full description of the criteria is provided in Supplementary Table 1).
Recently, the clonality of SEO-EC has been discerned from molecular analyses using next-generation sequencing (NGS) technology [10][11][12]. These studies revealed the clonal relationship between endometrial cancer and ovarian cancer in SEO-EC patients with massively parallel sequencing. Moreover, these studies consistently reported that approximately 95% of the SEO-ECs clinically diagnosed as dual primary cancers were clonal tumors; that is, most SEO-ECs are metastatic disease from either endometrial or ovarian primary tumors.
The clinical and biological behavior of SEO-EC remains to be clarified. Considering recent reports on the clonality of SEO-EC, most of these malignancies are either endometrial cancer with ovarian metastases (International Federation of Gynecology and Obstetrics [FIGO] stage III) or ovarian cancer with uterine metastases (FIGO stage II). However, there have been numerous reports of SEO-EC patients with a favorable prognosis despite having metastatic disease [2,4-8,13,14]. The European Society for Medical Oncology (ESMO) and National Comprehensive Cancer Network (NCCN) guidelines recommend adjuvant therapy for patients who have endometrial cancer with ovarian metastasis or ovarian cancer with uterine metastasis [15-18]. However, adjuvant therapy might not provide a survival benefit for SEO-EC patients with good prognoses. Therefore, there is concern that unnecessary adjuvant therapy will be routinely performed in the near future for SEO-EC patients with metastatic disease as defined by NGS technology.
This study aimed to determine prognostic factors of SEO-EC, including those of patients diagnosed with primary uterine cancer, primary ovarian cancer, and dual primary cancer, and identify SEO-EC patients who have a sufficiently low risk of recurrence without adjuvant chemotherapy.
MATERIALS AND METHODS
This retrospective study was approved by the Institutional Review Board of the National Cancer Center Hospital (approval No. 2016-260). The analyzed cases included patients that were pathologically diagnosed with endometrioid cancer of both the endometrium and ovary and who underwent surgery at the National Cancer Center Hospital between January 1997 and June 2016. First, we identified a total of 1,183 patients with endometrial and/or ovarian cancer who underwent surgery at our hospital during the study period. Only 89 patients with both endometrial and ovarian lesions were included. Of these patients, 22 with histological subtypes other than endometrioid carcinoma were excluded. Therefore, 67 SEO-EC cases were identified. Finally, after exclusion of 19 cases without lymph node dissection and 2 cases with subsequent salvage operation, 46 SEO-EC patients were included in the analysis (Supplementary Fig. 1). Data of clinicopathological characteristics and prognoses were collected from medical records.
Immunohistochemical evaluation of DNA mismatch repair (MMR) protein expression was performed using formalin-fixed, paraffin-embedded blocks of both endometrial and ovarian specimens. Representative whole 4-μm-thick sections were analyzed by immunohistochemistry (IHC). The expression of the 4 MMR proteins was evaluated using antibodies against MLH1, MSH2, MSH6 and PMS2. All IHC tests were performed using a Dako autostainer (Dako), according to the manufacturer's instructions. After deparaffinization, tissue sections were stained using antibodies against MLH1, MSH2, MSH6, and PMS2. Slides were counterstained with hematoxylin. MMR-deficient status was defined as the complete loss of nuclear staining for 1 or more MMR proteins. Adjacent normal mucosa, stromal cells, and inflammatory cells with intact nuclear staining served as internal positive controls. When PMS2 or MSH6 loss was observed, and to discriminate between MLH1 or MSH2 aberrations (concurrent MLH1 and PMS2, or MSH2 and MSH6 loss) and PMS2 or MSH6 aberrations (only PMS2 or MSH6 loss), an additional IHC test using either an anti-MLH1 or anti-MSH2 antibody was performed.
For the survival analysis, progression-free survival (PFS) was defined as the time from the date of surgery to the date of first recurrence or any cause of death. Overall survival (OS) was defined as the time from the date of surgery to the date of any cause of death. Survival curves were constructed using the Kaplan-Meier method, and a univariate log-rank test was used to assess statistical significance. Multivariate analyses for PFS and OS were performed using Cox proportional hazard modeling. For the analyses, the level of statistical significance was set at p<0.05. All statistical analyses were performed using SPSS version 19 for Mac (IBM Corp., Armonk, NY, USA).
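The survival analysis described above was carried out in SPSS; purely as an illustration of the same workflow, the following sketch reproduces the Kaplan-Meier estimate, log-rank test and Cox proportional hazards model in Python using the lifelines library. The column names (pfs_months, event, cervical_stromal_invasion, and so on) and the file name are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: follow-up time in months, event indicator
# (1 = recurrence/death, 0 = censored) and candidate prognostic factors.
df = pd.read_csv("seo_ec_cohort.csv")  # placeholder file name

# Kaplan-Meier estimate of progression-free survival for the whole cohort.
kmf = KaplanMeierFitter()
kmf.fit(df["pfs_months"], event_observed=df["event"], label="All patients")
print(kmf.median_survival_time_)

# Univariate comparison (log-rank test) by cervical stromal invasion.
invaded = df["cervical_stromal_invasion"] == 1
result = logrank_test(
    df.loc[invaded, "pfs_months"], df.loc[~invaded, "pfs_months"],
    event_observed_A=df.loc[invaded, "event"],
    event_observed_B=df.loc[~invaded, "event"],
)
print(result.p_value)

# Multivariate Cox proportional hazards model for PFS.
cph = CoxPHFitter()
cph.fit(
    df[["pfs_months", "event", "cervical_stromal_invasion",
        "myometrial_invasion_ge_half", "lvsi"]],
    duration_col="pfs_months", event_col="event",
)
cph.print_summary()  # hazard ratios with 95% confidence intervals
```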
Clinicopathological characteristics of SEO-EC patients
A total of 67 patients with SEO-EC were identified during the study period. Of the 67 patients, 46 met the inclusion criteria and 21 were excluded because they underwent salvage surgery or had insufficient staging procedure (Supplementary Fig. 1). The median follow-up period was 62 months (range, 7-223 months). The patients' characteristics are summarized in Table 1. The median age was 51 years (range, 30-77 years), the median body mass index was 22.0 (range, 16.4-30.7), and 27 patients (58.7%) were nulliparous. All patients achieved a no residual disease status following surgery, and semi-radical hysterectomies were performed for all 4 patients with cervical stromal invasion. Preoperative chemotherapy was not administered in any case. Thirty-two patients (69.5%) received adjuvant therapy after surgery, 29 (63.0%) were treated with chemotherapy, and 3 (6.5%) received radiation therapy. Of them, adriamycin and cisplatin (AP), paclitaxel and carboplatin (TC), cyclophosphamide, doxorubicin and cisplatin (CAP), dose-dense paclitaxel and carboplatin (ddTC), docetaxel and carboplatin (DC) were administered in 11, 8, 5, 4, and 1 patients, respectively. All patients who underwent adjuvant chemotherapy completed 6 cycles of chemotherapy without dose reduction. Most of the patients had pathological grade 1 or 2 endometrial and ovarian cancer and had a unilateral ovarian lesion. Lesions confined to the uterine body and ovary were observed in 23 of the patients (50%), while the other cases showed fallopian tubal involvement (30.4%), cervical stromal invasion (8.7%), peritoneal dissemination (30.4%), lymph node metastasis (32.6%), and positive peritoneal cytology (41.3%). During the follow-up period, 10 patients (21.7%) had recurrence and 9 patients (19.6%) died. Kaplan-Meier estimates for PFS and OS of all the patients are presented in Supplementary Fig. 2. The 5-year PFS of all patients was 80.3% (95% confidence interval [CI]=67.9%-92.6%) and the 5-year OS of all patients was 85.7% (95% CI=75.1%-96.4%). On the immunohistochemical evaluation of MMR protein status, 33 patients (71.7%) were classified as MMR intact, while 13 patients (28.3%) showed MMR deficiency. Of these, loss of MLH1 and PMS2, MSH2 and MSH6, and MSH6-only loss was observed in 7 (15.2%), 3 (6.5%), and 3 (6.5%) patients, respectively (Fig. 1). No tumors were classified as PMS2-only loss. All tumors showed the same MMR protein expression status between endometrial and ovarian tumors.
Prognostic factors for PFS and OS
The univariate analysis of clinicopathological factors and survival outcomes is shown in Table 2. Patients with cervical stromal invasion had significantly lower PFS and OS than patients without cervical stromal invasion (p<0.01 and p=0.03, respectively). Patients with lymph node metastases had significantly lower PFS than patients without lymph node metastases (p=0.04). Depth of myometrial invasion and lymphovascular space invasion (LVSI) of the endometrial lesion were not significantly associated with OS, but patients with myometrial invasion of ≥1/2 and LVSI tended to have worse PFS. MMR protein status was not significantly associated with survival according to the univariate analysis. Multivariate analysis including depth of myometrial invasion, LVSI of the endometrial lesion and cervical stromal invasion identified cervical stromal invasion as an independent factor for worse PFS and OS. Although previous studies consistently reported that the true primary site of SEO-EC cannot be strictly determined by clinicopathological and/or genetic features [10-12], for reference, the results of survival analyses based on the same FIGO stage are provided in Supplementary Tables 2 and 3. These survival analyses are based on the hypothesis that the SEO-EC cases were all endometrial primary (FIGO stage III or IV) or all ovarian primary (FIGO stage II or III).
Favorable prognosis group among SEO-EC patients
The SEO-EC patients with confined lesions could be considered to have achieved complete resection of the tumor by surgery alone; therefore, these patients might not require adjuvant chemotherapy. Considering the results of the multivariate analysis described above, we analyzed the prognosis of the patients who had lesions confined to only the uterine body and adnexa (ovary and fallopian tube); the prognosis of these patients is shown in Tables 2 and 4, as well as Fig. 2. There were no significant differences in the clinicopathological features between the 2 groups. Two patients had recurrence in the group that received adjuvant therapy, while no recurrences were observed in the group that was not administered adjuvant therapy.
DISCUSSION
This study elucidated the baseline recurrence risk and prognostic factors of SEO-EC, which has been reported to consist of biologically clonal lesions. In addition, we identified patients who obtained a benefit from adjuvant therapy. We found that cervical stromal invasion was an independent factor for an unfavorable prognosis and described the clinicopathological features of SEO-EC patients who showed a favorable prognosis despite not receiving adjuvant therapy. Furthermore, we clarified the frequency of MMR protein deficiencies in SEO-EC and showed that MMR protein status was the same in both endometrial and ovarian tumors. Recent studies revealed the clonal relationships between endometrial and ovarian tumors in SEO-EC, and all of these results consistently indicated that most SEO-ECs were clonal, metastatic disease [10-12]. However, gynecologists occasionally encounter SEO-EC patients who show a more favorable prognosis than endometrial or ovarian cancers diagnosed as locally advanced or metastatic disease. Therefore, this finding raised the clinical question of whether all SEO-ECs should be treated as metastatic disease with a high risk of recurrence. Although whether SEO-EC is a uterine or ovarian primary tumor has not been revealed by NGS analysis, we considered that the clinical course and behavior of SEO-EC are quite different from those of metastatic disease, especially lymphovascular metastasis of endometrial and ovarian cancer.
Univariate and multivariate analyses revealed that cervical stromal invasion had a significant effect on PFS and OS. Notably, lymph node metastasis and peritoneal dissemination did not have a significant impact on survival. If SEO-EC is of endometrial origin, it is classified as FIGO stage III endometrial cancer. However, cervical stromal invasion, which is a factor of FIGO stage II, was a significant prognostic factor, while peritoneal dissemination, which is a factor of FIGO stage IV, was not prognostic in the present study. Conversely, if SEO-EC is of ovarian origin, SEO-EC with cervical stromal invasion is classified as FIGO stage II ovarian cancer. However, lymph node metastasis and peritoneal dissemination, which are factors of FIGO stage III, were not significant prognostic factors in this study. To our knowledge, there have been no previous reports on prognostic factors in SEO-EC patients, including cases that were previously excluded by the classical clinicopathological criteria. Although the limited power to detect differences due to the small sample size should be considered, our results indicate that the prognostic factors of SEO-EC may differ from the established prognostic factors of endometrial and ovarian cancer. Larger cohort studies should be performed to validate our findings for identifying SEO-EC patients with a high risk of recurrence.
The prognosis of SEO-EC patients with lesions confined to only the uterine body and adnexa was analyzed to explore the baseline recurrence risk of these patients. SEO-EC patients with confined lesions could be considered for complete tumor resection by surgery alone; therefore, these patients might not require adjuvant chemotherapy. Considering the significant impact of cervical stromal invasion on a poor prognosis, we assessed patients who had lesions confined to only the uterine body and adnexa. Although there were only 10 patients who did not receive adjuvant therapy, no recurrence was observed in this cohort. Current guidelines recommend adjuvant therapy in this setting [15-18]; however, adjuvant therapy may result in unnecessary treatment for SEO-EC patients whose disease is localized to the uterine body and adnexa. This hypothesis has been supported by some previous studies. Before the NGS era, some SEO-EC patients were clinically diagnosed as dual primary cancer but showed a prognosis comparable to stage I endometrial cancer and stage I ovarian cancer [8,13]. A previous study also examined MMR protein expression in this setting [27]; however, its IHC results contained curious combinations of MMR protein loss, such as loss of MLH1 and MSH6: 8 of 32 endometrial cancers and 1 of 32 ovarian cancers showed loss of both MLH1 and MSH6. These results implied technical issues in the quality control and assessment of the IHC analysis of MMR proteins. Furthermore, the inclusion criteria of the present study were different from those of previous studies, which included only clinicopathologically diagnosed synchronous primary endometrial and ovarian cancers, using the Scully criteria [9]. The differences in inclusion criteria might also explain the discordant results between our study and previous studies.
Previous studies reported that MMR protein expression status was significantly associated with low histological grade, disease stage, and LVSI in endometrioid-type endometrial cancer [19,28] and that MMR deficiency might be correlated to an optimal prognosis [29,30]. However, MMR protein status was not significantly associated with the prognosis of SEO-EC patients in this study. Although the reason for these discordant results remains unclear, this is the first report on the prognostic value of MMR deficiency in SEO-EC patients.
The present study has several limitations. First, we did not examine the clonal relationship between endometrial and ovarian tumors using NGS technology. We assumed that all 46 SEO-EC patients had metastatic disease based on consistent research results from different institutions [10-12]; therefore, a very small number of true dual primary SEO-ECs might be included in this study. However, considering the fact that approximately 95% of SEO-EC patients who were clinically diagnosed as dual primary had metastatic disease, we can expect that few true dual primary SEO-ECs were included, and this should not affect the main results of this study. Second, the number of patients included in this study was small due to the rarity of SEO-EC. Larger cohort studies should be performed in the future to validate the findings of our study. Nevertheless, the finding that cervical stromal invasion was a significant factor for a poor prognosis in SEO-EC patients was not consistent with conventional FIGO staging for either endometrial or ovarian cancer. Despite the small number of cases, we could point out the possibility that SEO-EC has prognostic factors different from those of both endometrial and ovarian cancer.
In conclusion, SEO-EC patients with tumors localized to the uterine body and adnexa showed a very low risk of recurrence. Therefore, adjuvant therapy for these patients might not provide a therapeutic benefit. Cervical stromal invasion was a significant factor for a poor prognosis, while MMR protein status was not associated with prognosis in SEO-EC. Recently, most SEO-ECs have been regarded as clonal, metastatic disease on the basis of NGS technology; however, the prognostic factors of SEO-EC may be different from those of metastatic endometrial cancer and ovarian cancer. Further large-scale cohort studies are necessary to validate the findings of the present study for identifying SEO-EC patients who may actually obtain a benefit from adjuvant therapy.
EFFECT OF REPETITIVE WELDING USING ORBITAL GMAW ON TENSILE PROPERTIES OF AISI 304 AUSTENITIC STAINLESS STEEL PIPES
Repair welding is often carried out on steel structural components. The main intention of repair welding is to provide a cure for welding defects existing from the initial stages or for weld deterioration during service, which can increase the service life or performance of components. Repair welding is a better choice than replacing the parts because it is a more economical, faster and more reliable way to return a part to service when failure of the part is identified. In this study, the effect of welding parameters on the tensile properties of AISI 304 austenitic stainless steel pipes was investigated. The best set of parameters was suggested using the Taguchi method as the design of experiment and analysis. The results showed that the optimum set of parameters for achieving the highest ultimate tensile strength (UTS) values was an arc current, arc voltage and welding speed of 160 A, 22 V and 50 mm/min, respectively. On the other hand, the optimum set of parameters for achieving the highest elongation percentage was an arc current, arc voltage and welding speed of 160 A, 22 V and 40 mm/min. Based on the optimum set of parameters, repetitive welding to replicate a repairing process was performed. It was evident that the UTS showed an increasing trend up to the second repair (RW2) before decreasing after the third repair (RW3). The highest UTS value, obtained at the second repair (RW2), was 525.51 MPa. The repair welding caused dynamic changes in the microstructure of the weldment. Therefore, the number of optimum repetitions for the AISI 304 austenitic stainless steel pipes was proposed as up to the second repair.
Introduction
Gas Metal Arc Welding (GMAW) is a welding process in which an electric arc forms between a consumable wire electrode and the workpiece metal, heating the workpiece metal and causing the parts to melt and join together. Both the arc and the weld pool are protected from atmospheric contamination by an inert shielding gas, which is delivered through a nozzle concentric with the welding wire guide tube. GMAW has been commercially available for around 60 years and is commonly used as an industrial welding process. The main reasons for this are its versatility, speed and the ease of adapting the process even to robotic automation.
Orbital welding is defined as the circular movement of a welding tool or welding torch around the workpiece to be welded. The orbital welding process is mainly used in industries such as pharmaceutical, aircraft, food and beverage, chemical, and fossil and nuclear power plants. It is often used to join tubes or pipes in preference to other types of joining methods. Whenever a high-quality weld is required, orbital welding is the most commonly chosen process for joining tubes, not only because it provides the best weld quality, but also because it can be performed easily and smoothly in a cramped working environment [1].
The demand for stainless steel in industry has increased drastically as a result of rapid growth, combined with limitations in production routes and the dynamic raw material prices of major alloying additions such as nickel, molybdenum and chromium [2]. This type of stainless steel is suitable for service in either corrosive or high-temperature environments. In addition, it offers good mechanical properties, especially toughness and ductility, showing exceptional elongation during tensile testing.
Repairs are generally carried out by removing a non-acceptable welding defect, rewelding and then reinstating the original geometry of the component. Repair welding is often carried out on steel structural components. The main intention of repair welding is to provide a cure for welding defects existing from the initial stages or for weld deterioration during service, which can increase the service life or performance of components. Repair welding is a better choice than replacing the parts because it is a more economical, faster and more reliable way to return a part to service when failure of the part is identified [3]. Wrong processes and poor workmanship, such as excessive or incomplete weld penetration at the fabrication stage, can cause failure of the weld. Besides that, inappropriate selection of the filler metal used in the welding operation can also cause failure of the weld. Another cause that leads to weld failure is deterioration during service, where the working environment is corrosive or aggravated by stress corrosion [4]. The Welding Institute (TWI) conducted an industry survey on repair rates in 2011 and reported the average repair rates for different welded products/parts based on typically used material grades [5], as shown in Table 1 and Figure 1.
Repair welding is also often required in industry to extend the service life or to increase the performance of parts or components by providing a remedy for welding damage present from the primary stage or for weld deterioration during service [6]. Besides, performing repair welding is cheaper than purchasing new parts or components, which would increase the cost.
Furthermore, repair welding is comparatively cost-effective compared with replacing the parts, because the delay while waiting for replacement parts might bring irretrievable losses to a company [7]. In conclusion, repair welding is significant for reducing cost, minimizing breakdown time and extending the service life of a part or component.
Figure 1. Average repair rates for different types of products/parts [5].
According to Agha Ali et al. [8], the metallurgical properties of the steel undergo changes with the number of weld repairs. When the number of weld repairs increased, the ferrite precipitates gradually became finer and shorter, together with some carbide precipitation. Lin et al. [9] stated that the primary phases of AISI 304L also comprised an austenite matrix and lathy ferrite precipitates. The lathy ferrite also became shorter and thinner with the increasing number of weld repairs. For every repair welding pass, the material is subjected to additional heat input, and the heat input accumulates in the weldment. On the other hand, Kumar and Shahi [10] clearly stated that heat input governs both dendrite size and interdendritic spacing. A slower cooling rate due to high heat input facilitates the formation of coarser dendrites, which are separated by wider distances compared to low heat input.
Agha Ali et al. [8] also mentioned that the grain size number in the HAZ increased corresponding to the number of weld repairs performed at the same location. Therefore, it was evident that the ultimate tensile strength (UTS) tended to increase only up to the first repair and began to decrease after that. The elongation of the specimen was also associated with its tensile properties, since it is a measure of the change in the specimen length relative to the original length under tensile testing. On the contrary, Vega et al. [11] reported that the tensile strength of API X-52 microalloyed steel pipe can be increased up to the second weld repair, where a maximum value is reached. However, both groups of researchers concluded that the changes in tensile strength are due to grain refinement occurring in the materials.
In this study, the effect of welding parameters on tensile properties of AISI 304 austenitic stainless steel pipes will be investigated. The best set of parameters will be suggested by using Taguchi method as the design of experiment and analysis. At the end of this study, the number of optimum repetitions for the AISI 304 austenitic stainless steel pipes will be proposed.
Experimental Methods
Stainless steel pipes of AISI 304 type with a thickness of 4 mm were used in this study. The outer diameter of the pipe was 60.5 mm and it was cut to 60 mm long. The nominal compositions of the AISI 304 pipe material and the AWS E308 filler wire are given in Table 2. Wire of type 308L (including ER308LSi) is predominantly used on austenitic stainless steels such as types 301, 302, 304, 305 and cast alloys. Therefore, AWS E308L wire with a diameter of 1.2 mm was used as the consumable wire electrode. The ultimate tensile strength (UTS) and elongation of the E308 wire were 379 MPa and 40%, respectively. The Taguchi method is a systematic tool for designing high-quality manufacturing systems, developed by Dr. Genichi Taguchi, a Japanese quality management consultant. It is based on orthogonal array experiments, which reduce the variance of experiments with an optimal setting of the process control parameters. The design is then integrated with parametric optimization to obtain the required outcome. In this experiment, an L9 orthogonal array was used with 3 parameters and 3 levels for each parameter. Table 3 shows the planning matrix for this experiment, generated for the L9 orthogonal array using Minitab software. A TransSynergic 4000 welding machine equipped with a jig, as shown in Figure 2, was used in this study. The function of the jig was to hold and rotate the pipe specimen and also to guide the movement of the welding nozzle during the welding process. The welding speed in this experiment was taken from the speed of the rotating jig. Before the welding processes according to the parameter settings were performed, the pipes were tack welded in order to restrict their movement. Tack welds were made at four locations along the pipe perimeter, as shown in Figure 3. Tensile specimens were cut from the welded pipes using wire electrical discharge machining (WEDM), as shown in Figure 4. They were cut into a dog-bone shape with dimensions in accordance with ASTM E8M-04. Filing was performed to flatten the gripping sections of the tensile specimens. Tensile testing was performed using a Shimadzu AG1 universal testing machine (UTM) with a load capacity of 100 kN and a crosshead speed of 5 mm/min. The ultimate tensile strength (UTS) was recorded. The percentage elongation was calculated as the difference between the gauge length before and after tensile testing divided by the initial gauge length, multiplied by 100.
The average value of each tensile property was measured from two samples.
After the optimum set of parameters was obtained for the first objective, repair welding was performed on the welded pipes. Table 4 shows a schematic diagram of how the repair welding was performed on the pipes. The pipes started in the as-welded condition, designated RW0. The weld bead height was measured and a value of 1.5 mm was recorded. The weld bead was then machined down with a lathe to replicate the preparation before repair welding. Then, welding using the optimum set of parameters was performed on top of the as-welded weldment; this replicated the first repair welding process, designated RW1. The procedure was repeated for the second repair (RW2) and the third repair (RW3). Tensile testing was conducted on the RW0, RW1, RW2 and RW3 samples, and the change in trend of the tensile properties was observed. At the end of this study, the number of optimum repetitions for the AISI 304 austenitic stainless steel pipes was proposed. Figure 4. Cut-off tensile specimens from the welded AISI 304 pipes by WEDM.
Effect of Tensile Properties
The results of tensile testing on the AISI 304 stainless steel pipes are shown in Table 5. The results show that the failure location mostly occurred at the weldment (WM). Only for sample 1 and sample 3 did the failure occur at the heat affected zone (HAZ).
To analyze the Taguchi design, the signal-to-noise (S/N) ratio was calculated using the Minitab software. The "larger is better" option was selected, because the highest UTS is required for the samples. Tables 6 and 7 show the response tables for the S/N ratios of the UTS and elongation values, respectively.
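As an illustration only, the "larger is better" S/N ratio used in a Taguchi analysis of this kind can be computed as sketched below; the UTS values in the example array are placeholders, not the measured data from Table 5.

```python
import numpy as np

def sn_larger_is_better(values):
    """Taguchi 'larger is better' signal-to-noise ratio (dB):
    S/N = -10 * log10( mean( 1 / y_i^2 ) )."""
    y = np.asarray(values, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical UTS replicates (MPa) for one run of the L9 array.
uts_replicates = [512.0, 498.0]
print(f"S/N = {sn_larger_is_better(uts_replicates):.2f} dB")

# A response table is built by averaging the S/N ratios of all runs
# sharing the same level of a given factor (e.g. arc current = 160 A);
# the level with the highest mean S/N is selected as optimal.
```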
From the main effects plot for means in Figure 5, the optimum parameters to achieve the highest UTS were an arc current of 160 A, a voltage of 22 V and a welding speed of 50 mm/min. From the main effects plot for means in Figure 6, the optimum parameters to achieve the highest elongation were an arc current of 160 A, a voltage of 22 V and a welding speed of 40 mm/min. Table 8 shows the UTS results obtained for the as-welded (RW0) sample and the samples repaired to the first (RW1), second (RW2) and third (RW3) number of repairs. Based on the table and the bar chart in Figure 7, it was evident that the UTS showed an increasing trend up to the second repair (RW2). A similar trend was observed by Hussein et al. [12]. After the third repair (RW3), the UTS value decreased. The highest UTS value, obtained at the second repair (RW2), was 525.51 MPa. The repair welding caused dynamic changes in the microstructure of the weldment, and the grain growth substantially affected the tensile strength of the weld metal [2,13].
Conclusion
In this study, the effect of welding parameters on the tensile properties of AISI 304 austenitic stainless steel pipes was investigated. The best set of parameters was suggested using the Taguchi method as the design of experiment and analysis. The results showed that the optimum set of parameters for achieving the highest ultimate tensile strength (UTS) values was an arc current, arc voltage and welding speed of 160 A, 22 V and 50 mm/min, respectively. On the other hand, the optimum set of parameters for achieving the highest elongation percentage was an arc current, arc voltage and welding speed of 160 A, 22 V and 40 mm/min. Based on the optimum set of parameters, repetitive welding to replicate a repairing process was performed. It was evident that the UTS showed an increasing trend up to the second repair (RW2) before decreasing after the third repair (RW3). Therefore, the number of optimum repetitions for the AISI 304 austenitic stainless steel pipes was proposed as up to the second repair.
Face mask detection using deep learning: An approach to reduce risk of Coronavirus spread
Graphical abstract
Introduction
The 209th report of the World Health Organization (WHO), published on 16th August 2020, reported that the coronavirus disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) had globally infected more than 6 million people and caused over 379,941 deaths worldwide [1]. According to Carissa F. Etienne, Director, Pan American Health Organization (PAHO), the key to controlling the COVID-19 pandemic is to maintain social distancing, improve surveillance and strengthen health systems [2]. Recently, a study on measures to tackle the COVID-19 pandemic carried out by researchers at the University of Edinburgh revealed that wearing a face mask or other covering over the nose and mouth cuts the risk of Coronavirus spread by reducing the forward distance travelled by a person's exhaled breath by more than 90% [3]. Steffen et al. also carried out an exhaustive study to compute the community-wide impact of mask use by the general public, a portion of which may be asymptomatically infectious, in New York and Washington. The findings reveal that near-universal adoption (80%) of even weak masks (20% effective) could prevent 17-45% of projected deaths over two months in New York and reduce the peak daily death rate by 34-58% [4,5]. Their results strongly recommend the use of face masks by the general public to curtail the spread of Coronavirus. Further, with countries reopening from COVID-19 lockdowns, government and public health agencies are recommending face masks as an essential measure to keep us safe when venturing into public. To mandate the use of face masks, it becomes essential to devise some technique that enforces individuals to apply a mask before exposure to public places.
Face mask detection refers to detecting whether a person is wearing a mask or not. In fact, the problem is reverse engineering of face detection, where the face is detected using different machine learning algorithms for the purposes of security, authentication and surveillance. Face detection is a key area in the field of computer vision and pattern recognition. A significant body of research has contributed sophisticated algorithms for face detection in the past. The primary research on face detection was done in 2001 using the design of handcrafted features and the application of traditional machine learning algorithms to train effective classifiers for detection and recognition [6,7]. The problems encountered with this approach include high complexity in feature design and low detection accuracy. In recent years, face detection methods based on deep convolutional neural networks (CNN) have been widely developed [8-11] to improve detection performance.
Although numerous researchers have committed efforts to designing efficient algorithms for face detection and recognition, there exists an essential difference between 'detection of the face under mask' and 'detection of mask over face'. As per the available literature, very little research has attempted to detect a mask over the face. Thus, our work aims to develop a technique that can accurately detect a mask over the face in public areas (such as airports, railway stations, crowded markets, bus stops, etc.) to curtail the spread of Coronavirus and thereby contribute to public healthcare. Further, it is not easy to detect faces with/without a mask in public, as the dataset available for detecting masks on human faces is relatively small, making the model hard to train. So, the concept of transfer learning is used here to transfer the learned kernels from networks trained for a similar face detection task on an extensive dataset; a minimal sketch of this idea is given after the list of contributions below. The dataset covers various face images, including faces with masks, faces without masks, faces with and without masks in one image, and confusing images without masks. With an extensive dataset containing 45,000 images, our technique achieves an outstanding accuracy of 98.2%. The major contributions of the proposed work are given below:
1. Develop a novel object detection method that combines one-stage and two-stage detectors for accurately detecting objects in real time from video streams, with transfer learning at the back end.
2. An improved affine transformation is developed to crop the facial areas from uncontrolled real-time images having differences in face size, orientation and background. This step helps in better localizing the person who is violating the facemask norms in public areas/offices.
3. Creation of an unbiased facemask dataset with an imbalance ratio equal to nearly one.
4. The proposed model requires less memory, making it easily deployable for embedded devices used for surveillance purposes.
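The following sketch illustrates the transfer-learning idea referred to above. It is not the authors' actual model: the MobileNetV2 backbone, the input size, the layer sizes and the directory names are assumptions chosen purely for illustration.

```python
import tensorflow as tf

# Pre-trained backbone (ImageNet weights); its convolutional kernels are reused
# ("transferred") instead of being learned from the small facemask dataset.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze transferred kernels during initial training

# Small classification head trained on the facemask data (mask / no mask).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory of cropped face images, one sub-folder per class.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "faces/train", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)
```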
The rest of this paper is organized in sections as follows. Section 2 covers prevalent literature in the field of object recognition. The proposed methodology is presented in Section 3. Section 4 evaluates the performance of the proposed technique with various pre-trained models over different parameters of speed and accuracy. Finally, Section 5 concludes the work with possible future directions.
Related work
Pattern learning and object recognition are the inherent tasks that a computer vision (CV) technique must deal with. Object recognition encompasses both image classification and object detection [12]. The task of recognizing a mask over the face in a public area can be achieved by deploying an efficient object recognition algorithm through surveillance devices. The object recognition pipeline consists of generating region proposals followed by classification of each proposal into its related class [13]. We review recent developments in region proposal techniques using single-stage and two-stage detectors, general techniques for improving the detection of region proposals, and pre-trained models based on these techniques.
Single-stage detectors
The single-stage detectors treat the detection of region proposals as a simple regression problem by taking the input image and learning the class probabilities and bounding box coordinates. OverFeat [8] and DeepMultiBox [9] were early examples. YOLO (You Only Look Once) popularized the single-stage approach by demonstrating real-time predictions and achieving remarkable detection speed, but suffered from low localization accuracy compared with two-stage detectors, especially when small objects are taken into consideration [10]. Basically, the YOLO network divides an image into a grid of size G×G, and each grid cell generates N predictions for bounding boxes. Each bounding box is limited to having only one class during the prediction, which restricts the network from finding smaller objects. Further, the YOLO network was improved to YOLOv2, which included batch normalization, a high-resolution classifier and anchor boxes. Furthermore, YOLOv3 builds upon YOLOv2 with the addition of an improved backbone classifier, multi-scale prediction and a new network for feature extraction. Although YOLOv3 executes faster than the Single-Shot Detector (SSD), it does not perform as well in terms of classification accuracy [14,15]. Moreover, YOLOv3 requires a large amount of computational power for inference, making it unsuitable for embedded or mobile devices. Next, SSD networks have superior performance to YOLO due to small convolutional filters, multiple feature maps and prediction at multiple scales. The key difference between the two architectures is that YOLO utilizes two fully connected layers, whereas the SSD network uses convolutional layers of varying sizes. Besides, RetinaNet [11], proposed by Lin, is also a single-stage object detector that uses a featured image pyramid and focal loss to detect dense objects in the image across multiple layers, achieving remarkable accuracy as well as speed comparable to two-stage detectors.
Two-stage detectors
In contrast to single-stage detectors, two-stage detectors follow a long line of reasoning in computer vision for the prediction and classification of region proposals. They first predict proposals in an image and then apply a classifier to these regions to classify potential detections. Various two-stage region proposal models have been proposed in the past by researchers. The region-based convolutional neural network, abbreviated as R-CNN [16] and described in 2014 by Ross Girshick et al., may have been one of the first large-scale applications of CNNs to the problem of object localization and recognition. The model was successfully demonstrated on benchmark datasets such as VOC-2012 and ILSVRC-2013 and produced state-of-the-art results. Basically, R-CNN applies a selective search algorithm to extract a set of object proposals at an initial stage and applies an SVM (Support Vector Machine) classifier for predicting objects and related classes at a later stage. Spatial pyramid pooling SPPNet [17] (which modifies R-CNN with an SPP layer) collects features from various region proposals and feeds them into a fully connected layer for classification. The capability of SPPNet to compute feature maps of the entire image in a single shot resulted in a significant improvement in object detection speed, by a magnitude of nearly 20 times over R-CNN. Next, Fast R-CNN is an extension of R-CNN and SPPNet [18,12]. It introduces a new layer named the Region of Interest (RoI) pooling layer between the shared convolutional layers to fine-tune the model. Moreover, it allows a detector and regressor to be trained simultaneously without altering the network configuration. Although Fast R-CNN effectively integrates the benefits of R-CNN and SPPNet, it still lags behind single-stage detectors in detection speed [19].
Further, Faster R-CNN is an amalgam of Fast R-CNN and a Region Proposal Network (RPN). It enables nearly cost-free region proposals by gradually integrating the individual blocks of the object detection system (e.g. proposal detection, feature extraction and bounding box regression) into a single step [20,21]. Although this integration leads to a breakthrough for the speed bottleneck of Fast R-CNN, there still exists computational redundancy at the subsequent detection stage. The Region-based Fully Convolutional Network (R-FCN) is the only model that allows complete backpropagation for training and inference [22,23]. Feature Pyramid Networks (FPN) can detect non-uniform objects but are least used by researchers due to high computation cost and greater memory usage [24]. Furthermore, Mask R-CNN strengthens Faster R-CNN by including the prediction of segmented masks on each RoI [25]. Although two-stage detectors yield high object detection accuracy, they are limited by low inference speed for real-time video surveillance [14].
Techniques for improving detectors
Several techniques for improving the performance of single-stage and two-stage detectors have been proposed in the past [26]. The easiest among them is cleaning the training data, which gives faster convergence and moderate accuracy. The hard negative sampling technique is often used to provide negative samples for achieving high final accuracy [27]. Modifying context information is another approach used to improve detection accuracy or speed. MS-CNN [20], DSSD [21] and TDN [22] strengthen the feature representation by enriching the context of coarser features, adding an additional layer in a top-down manner for better object detection. BlitzNet improved SSD by adding a semantic segmentation layer to achieve high detection accuracy [27]. The object detection architectures discussed so far have several open-source models which are pre-trained on large datasets like ImageNet [28], COCO [29] and ILSVRC [30]. These open-source models have largely benefited the area of computer vision and can be adopted with minor extensions to solve specific object recognition problems, thereby avoiding building everything from scratch. Fig. 1 summarizes various pre-trained models based on CNN architectures introduced from 2012 to 2018. These models vary in terms of baseline architecture, number of layers, inference speed, memory consumption and detection accuracy. The achievement of each model is mentioned in Fig. 1.
To enforce the wearing of masks over faces in public areas and curtail community spread of Coronavirus, a machine learning approach based on an available pre-trained model is highly recommended for the welfare of society. These pre-trained models need to be finely tuned on benchmark datasets. A number of datasets with diverse features pertaining to human faces with and without masks are given in Table 1.
An extensive study conducted on the available face-related datasets reveals that there exist principally two kinds of datasets: i) masked face and ii) face mask datasets. The masked face datasets concentrate on face images with varying degrees of facial expression and landmarks, whereas face-mask-centric datasets include face images that are mainly characterized by occlusions and their positional coordinates near the nose and mouth area. Table 1 summarizes these two kinds of prevalent datasets. The following shortcomings are identified after critically observing the available literature:
1. Although there exist several open-source models that are pre-trained on benchmark datasets, few models are currently capable of handling COVID-related face mask datasets.
2. The available face mask datasets are scarce and need to be strengthened with varying degrees of occlusion and semantics around different kinds of masks.
3. There exist two major types of state-of-the-art object detectors, single-stage and two-stage detectors, but neither truly meets the requirements of real-time video surveillance devices. These devices are limited by low computational power and memory [37], so they require optimized object detection models that can perform surveillance in real time with less memory consumption and without a notable reduction in accuracy. Single-stage detectors are good for real-time surveillance but limited by low accuracy, whereas two-stage detectors can easily produce accurate results for complex inputs but at the cost of computational time. All these factors necessitate the development of an integrated model for surveillance devices that produces benefits in terms of both computational time and accuracy.
To solve these problems, a deep-learning model based on transfer learning which is trained on a highly tuned customized face mask dataset and compatible with video surveillance is being proposed and discussed in detail in the next section.
Proposed architecture
The proposed model is based on the object recognition benchmark given in [38]. According to this benchmark, all the tasks related to an object recognition problem can be grouped under three main components: Backbone, Neck and Head, as depicted in Fig. 2. Here, the backbone corresponds to a baseline convolutional neural network capable of extracting information from images and converting it to a feature map. In the proposed architecture, the concept of transfer learning is applied to the backbone to utilize the already learned attributes of a powerful pre-trained convolutional neural network in extracting new features for the model.
An exhaustive backbone-building study with three popular pre-trained models, namely ResNet50, MobileNet and AlexNet, is conducted to obtain the best results for facemask detection. ResNet50 is found to be the optimal choice for building the backbone of the proposed model (refer to Section 4.2). The novelty of our work lies in the Neck component. The intermediate component, the Neck, contains all the pre-processing tasks that are needed before the actual classification of images. To make our model compatible with surveillance devices, the Neck applies different pipelines for the training and deployment phases. The training pipeline comprises the creation of an unbiased customized dataset and the fine-tuning of ResNet50. The deployment pipeline consists of real-time frame extraction from video followed by face detection and extraction. In order to achieve a trade-off between face detection accuracy and computational time, we propose an image complexity predictor (refer to Section 3.3). The last component, the Head, stands for the identity detector or predictor that achieves the desired objective of the deep-learning neural network. In the proposed architecture, the trained facemask classifier obtained after transfer learning is applied to detect mask and no-mask faces. The ultimate objective of enforcing the wearing of face masks in public areas will only be achieved after retrieving the personal identification of faces violating the mask norms; action can then be taken as per government/office policy. Since there may exist differences in face size and orientation in the cropped ROI, an affine transformation is applied to align the faces using OpenFace 0.20 [38,39]. A detailed description of each task in the proposed architecture is given in the following subsections.
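To make the deployment flow concrete, the sketch below strings the stages together in Python. It is an illustration only: the Haar-cascade face detector is a stand-in (the actual pipeline selects MobileNet-SSD or Faster R-CNN per frame via the image complexity predictor of Section 3.3), and classify_mask and identify_person are hypothetical placeholders for the components of Sections 3.2 and 3.5.

```python
import cv2

# Stand-in face detector for this sketch; the proposed pipeline instead selects
# MobileNet-SSD or Faster R-CNN per frame via the image complexity predictor.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_mask(face_img):
    """Placeholder for the fine-tuned ResNet50 facemask classifier (Section 3.2)."""
    return "mask"

def identify_person(face_img):
    """Placeholder for the identity predictor applied to no-mask faces (Section 3.5)."""
    return "unknown"

def process_stream(video_source=0):
    capture = cv2.VideoCapture(video_source)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face = frame[y:y + h, x:x + w]
            if classify_mask(face) == "no_mask":
                print("Facemask violation by:", identify_person(face))
    capture.release()
```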
Creation of unbiased facemask dataset
A facemask-centric dataset, MAFA [35], with a total of 25,876 images categorized into two classes, namely masked and non-masked, was initially considered. The number of masked images in MAFA is 23,858, whereas there are only 2,018 non-masked images. It is observed that MAFA suffers from an extrinsic class imbalance problem that may introduce a bias towards the majority class. So, an ablation study is conducted to analyze the performance of the image classifier once with the original MAFA set (biased) and then with the proposed dataset (unbiased).
Supervised pre-training
We discriminatively pre-trained the CNN on the original biased MAFA dataset. The pre-training was performed using the open-source Caffe Python library [7]. In short, our CNN model nearly matches the performance of Madhura et al. [11], obtaining a top-1 error rate 1.8% higher on the MAFA validation set. This discrepancy may be due to the simplified training approach.
Supervised pre-training with domain-specific fine-tuning
The other approach is to first remove the inherent bias present in the available dataset and then perform supervised learning over a domain-specific balanced dataset. The bias is alleviated by applying random over-sampling (ROS) with data augmentation. The technique reduces the imbalance ratio from ρ = 11.82 (original) to ρ = 1.07. The imbalance ratio is computed by equation (1):

ρ = count(majority(D)) / count(minority(D))    (1)
Here, D refers to the image dataset; majority(D) and minority(D) return the majority and minority classes of D, and count(X) returns the number of images in an arbitrary class X. After data balancing, stochastic gradient descent (SGD) training of the CNN parameters with a learning rate of 0.003 is performed over warped region proposals. The low learning rate allows fine-tuning of the model without clobbering the initialization. We added 2025 negative windows with 50 background windows to increase the non-mask set to approximately 22 K images. The balancing leads to a reduction in the top-1 error rate of 3.7%.
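As a rough illustration, the snippet below computes the imbalance ratio of equation (1) and performs plain random over-sampling; the duplication-based balancing is a simplification, since the actual procedure augments the duplicated images (flips, rotations, brightness changes) rather than copying them verbatim.

```python
import random
from collections import Counter

def imbalance_ratio(labels):
    """Equation (1): ratio of majority-class count to minority-class count."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def random_oversample(samples, labels):
    """Duplicate minority-class samples at random until the classes are balanced."""
    counts = Counter(labels)
    majority = max(counts, key=counts.get)
    minority = min(counts, key=counts.get)
    deficit = counts[majority] - counts[minority]
    minority_samples = [s for s, l in zip(samples, labels) if l == minority]
    extra = [random.choice(minority_samples) for _ in range(deficit)]
    return samples + extra, labels + [minority] * deficit

# Example with MAFA-like class counts.
labels = ["mask"] * 23858 + ["no_mask"] * 2018
samples = list(range(len(labels)))                    # stand-in image handles
print("before:", round(imbalance_ratio(labels), 2))   # ~11.82
samples, labels = random_oversample(samples, labels)
print("after:", round(imbalance_ratio(labels), 2))    # 1.0 for this simplified duplication
```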
Fine-tuning of pre-trained model
In the proposed work, facemask detection is achieved through deep neural networks because of their better performance compared with other classification algorithms. However, training a deep neural network is expensive: it is time-consuming and requires high computational power. To train the network faster and more cost-effectively, deep-learning-based transfer learning is applied here. Transfer learning allows the trained knowledge of a neural network, in terms of parametric weights, to be transferred to a new model. It boosts the performance of the new model even when it is trained on a small dataset. There are several pre-trained models, like AlexNet, MobileNet and ResNet50, that have been trained with 14 million images from the ImageNet dataset [40]. In the proposed model, ResNet50 is chosen as the pre-trained model for facemask classification. The last layer of ResNet50 is fine-tuned by adding five new layers. The newly added layers include an average pooling layer with a pool size of 5 × 5, a flattening layer, a dense ReLU layer of 128 neurons, a dropout of 0.5 and a decision layer with a softmax activation function for binary classification, as shown in Fig. 3.
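A minimal PyTorch/torchvision sketch of this fine-tuning step is shown below. It assumes 224 × 224 inputs (so the final ResNet50 feature map is 7 × 7 × 2048) and freezes the pre-trained layers; it approximates, rather than reproduces, the authors' Caffe-based setup.

```python
import torch.nn as nn
from torchvision import models

def build_facemask_classifier(num_classes=2):
    model = models.resnet50(pretrained=True)
    # Freeze the pre-trained convolutional layers; only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False
    # 5x5 average pooling over the final 7x7 feature map (torchvision flattens
    # the pooled output before passing it to `fc`).
    model.avgpool = nn.AvgPool2d(kernel_size=5)
    # New head: dense ReLU layer of 128 neurons, dropout 0.5, softmax decision layer.
    model.fc = nn.Sequential(
        nn.Linear(2048, 128),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(128, num_classes),
        nn.Softmax(dim=1),
    )
    return model
```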
Image complexity predictor for face detection
To address problem 3 identified in Section 2, various face images are analyzed in terms of processing complexity. It is observed that the dataset we consider primarily contains two major classes, that is, a mask and a non-mask class, but the mask class further contains an inherent variety of occlusions other than surgical/cloth facemasks, for example, occlusion of the ROI by other objects like a person, hand, hair or some food item, as shown in Fig. 4. These occlusions are found to impact the performance of face and mask detection. Thus, obtaining an optimal trade-off between accuracy and computational time for face detection is not a trivial task, so an image complexity predictor is proposed here. Its purpose is to split the data into soft versus hard images at the initial level, followed by mask and non-mask classification at a later level through the facemask classifier. The important question that we need to answer is how to determine whether an image is soft or hard. The answer to this question is given by the semi-supervised object classification strategy proposed by Ionescu et al. [41]. This strategy is suitable for our task because it predicts objects without localizing them. For implementing this strategy, we took three sets of image samples: the first set (L) contains labelled (hard/soft) training images, the second set (U) contains unlabelled training images and the third set (T) contains unlabelled test images. We further applied the curriculum learning approach suggested in [26], which operates iteratively by training the hard/soft predictor at each iteration on an enlarged training set L. The training set L is enlarged by randomly moving k samples from U to L. We stopped the learning process when L grew to three times its original size. Initially, 500 labelled samples are populated in L. The initial labelling of samples in L is done using the three image properties most correlated with image complexity. These properties are object density (including full, truncated and occluded faces), mean area covered by objects normalized by image size, and image resolution. The object density is evaluated by human annotators. We took 50 trusted annotators, and each annotator was shown 10 images.
We asked two questions of each annotator: "Is there a human being in the image?" and "How many human faces, including full, truncated and occluded faces, are present in the given image?". We ensured the annotation task was not trivial by presenting images in a random order, such that if the answer to one image was positive then for another image it might be negative. We recorded the response time of each annotator for answering the questions. We removed all response times longer than 30 s to avoid bias. Further, each annotator's response time was normalized with respect to the mean time and standard deviation. We computed the geometric mean of all response times per image and saved the values as the object density. We further observed that image complexity is positively related to object density and negatively related to object size and image resolution. Based on these image properties, a ground truth visual difficulty score is assigned to each image.
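The scoring procedure can be approximated with a few lines of NumPy, as below; the exact ordering of the filtering, normalization and geometric-mean steps is an assumption on our part.

```python
import numpy as np

def object_density_scores(response_times):
    """Minimal sketch: per-image object density from annotator response times.

    response_times : (n_annotators, n_images) array of seconds per answer.
    Responses longer than 30 s are discarded, the geometric mean over
    annotators is taken per image, and the scores are then standardized.
    """
    rt = np.asarray(response_times, dtype=float)
    rt = np.where(rt > 30.0, np.nan, rt)            # drop overly long responses
    geo = np.exp(np.nanmean(np.log(rt), axis=0))    # geometric mean per image
    return (geo - geo.mean()) / geo.std()           # standardize across images
```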
To automatically predict the hardness of images in T, we further use VGG-f with ν-support vector regression, as discussed in [26]. The last layer of VGG-f is replaced by a fully connected layer. Each test image is divided into three bins of size 1 × 1, 2 × 2 and 3 × 3 to get a pyramid representation of the image for better performance. The image is also flipped horizontally and the same pyramid is applied to it. The 4096 features extracted from each bin are combined to obtain a single feature vector, followed by normalization using the L2-norm. The obtained normalized feature vector is then used to regress the image complexity score. Thus, the model automatically predicts the image complexity for each image in T. Having identified the hardness of the test images using the image complexity predictor, a soft image is processed through a fast single-stage detector, while a hard image is processed more accurately by a two-stage detector. We employ the MobileNet-SSD model for predicting the class of soft images and a Faster R-CNN based on ResNet50 for predicting hard images. In outline, the image complexity predictor takes an image as input, regresses its complexity score, and dispatches soft images to the single-stage detector and hard images to the two-stage detector. For evaluation, the test data is partitioned using either a random split or the soft-versus-hard split given by the image complexity predictor; to reduce bias, the average mAP over 5 runs is recorded for the random split, and the elapsed time is measured on an Intel i7, 2.5 GHz CPU with 16 GB RAM.
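The soft-versus-hard dispatch described in this section can be sketched as a simple routing function; the threshold value and the callable arguments are placeholders rather than the authors' implementation.

```python
def detect_faces(image, complexity_score, soft_detector, hard_detector, threshold=0.5):
    """Route an image to the fast or the accurate detector based on its predicted complexity.

    complexity_score : callable returning the regressed hardness of an image
    soft_detector    : fast single-stage detector (e.g. MobileNet-SSD)
    hard_detector    : accurate two-stage detector (e.g. Faster R-CNN, ResNet50 backbone)
    threshold        : hardness value separating soft from hard images (placeholder)
    """
    if complexity_score(image) < threshold:
        return soft_detector(image)   # soft image: favour speed
    return hard_detector(image)       # hard image: favour accuracy
```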
Identity prediction
After detecting faces with and without masks in the search proposals, the non-mask faces are passed separately into a neural network to determine the identity of the person violating the facemask norm. This step requires a fixed-size input. One possible way of getting a fixed-size input is to reshape the face in the bounding box to 96 × 96 pixels. The potential issue with this solution is that the face could be looking in a different direction. An affine transformation can handle this issue very easily [42]. The technique is similar to the deformable part models described in [43]. The use of the affine transformation is depicted in Fig. 5.
After applying the transformation, bounding box regression is applied to map the region proposal (R) to the ground truth bounding box (G). The working of the bounding box regression is discussed in detail here. Each region proposal (face) is represented by a pair (R, G), where R = (R_x, R_y, R_w, R_h) gives the pixel coordinates of the centre of the proposal along with its width and height. Each ground truth bounding box is represented in the same way, i.e. G = (G_x, G_y, G_w, G_h). The goal is to learn a transformation that maps the region proposal (R) to the ground truth bounding box (G) without loss of information. We propose to apply a scale-invariant transformation to the pixel coordinates of R and a log-space transformation to the width and height of R. The corresponding four transformations are represented as T_x(R), T_y(R), T_w(R) and T_h(R), and the coordinates of the ground truth box are obtained by equations (2)-(5):

G_x = R_w T_x(R) + R_x    (2)
G_y = R_h T_y(R) + R_y    (3)
G_w = R_w exp(T_w(R))     (4)
G_h = R_h exp(T_h(R))     (5)

Here, each T_i (i denotes one of x, y, w, h) is modelled as a linear function of the pool-6 feature of R, denoted by f_6(R); the dependence of f_6(R) on R is implicitly assumed. Thus, T_i(R) is obtained by equation (6):

T_i(R) = W_i^T f_6(R)    (6)

where W_i denotes the weight vector learned by optimizing the regularized least-squares objective of ridge regression, computed by equation (7):

W_i = argmin_W Σ_j ( t_i^j − W^T f_6(R^j) )^2 + λ ||W||^2    (7)

where t_i^j is the regression target for the j-th training pair and λ is the regularization constant.
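A compact NumPy sketch of equations (2)-(7) is given below, assuming the pool-6 features and regression targets have already been extracted; the regularization constant is an illustrative placeholder.

```python
import numpy as np

def fit_bbox_regressor(features, targets, lam=1000.0):
    """Ridge-regression estimate of one weight vector W_i (equation (7)).

    features : (n, d) array of pool-6 features f6(R) for n region proposals
    targets  : (n,) array of regression targets t_i for one of x, y, w, h
    """
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + lam * np.eye(d),
                           features.T @ targets)

def apply_bbox_transform(R, T):
    """Map a proposal R = (x, y, w, h) to the predicted box via equations (2)-(5)."""
    Rx, Ry, Rw, Rh = R
    Tx, Ty, Tw, Th = T
    return (Rw * Tx + Rx, Rh * Ty + Ry, Rw * np.exp(Tw), Rh * np.exp(Th))
```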
Loss function and optimization
Defining the loss function for the classification problem is among the most important parts of convolutional neural network design. In classification theory, a loss function or objective function measures how far the estimated distribution is from the true distribution, and an optimization algorithm should minimize the output of this function. The stochastic gradient descent optimization algorithm is applied to update the model parameters with a learning rate of 0.03. Further, there exist numerous loss functions in PyTorch, but the one most suitable for balanced data is the cross-entropy loss. Furthermore, an activation function is required at the output layer to transform the output in such a way that the loss is easier to interpret.
The cross-entropy loss given in equation (12) takes two distributions, t(x), the true distribution, and e(x), the estimated distribution, defined over a discrete variable x [44]:

H(t, e) = − Σ_x t(x) log e(x)    (12)

Consequently, activation functions whose outputs are not interpretable as probabilities (i.e. negative, greater than 1, or not summing to 1) should not be selected. Since softmax is guaranteed to generate a well-behaved probability distribution over the categorical variable, it is chosen in our proposed model.

Further, the loss function over N images (also known as the cost function of the complete system) for binary classification can be formulated as in equation (13):

L = − (1/N) Σ_{i=1}^{N} [ y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i) ]    (13)

where y_i is the true label of the i-th image and ŷ_i is the predicted probability of the positive class.
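In PyTorch terms, the training configuration described above might look like the following sketch. Note that nn.CrossEntropyLoss applies log-softmax internally, so the head below outputs raw logits during training; the learning rate, momentum and two-class head simply echo values reported in the text.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # logits for mask / no-mask

criterion = nn.CrossEntropyLoss()               # cross-entropy over the two classes
optimizer = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.9)

def training_step(images, labels):
    """One SGD update; returns the batch-averaged cross-entropy loss (equation (13))."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```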
Performance evaluation
To evaluate the performance of the proposed model, an experiment is conducted to answer the following research questions: RQ1: Which model is the best fit as a backbone for detecting mask/non-mask faces using transfer learning?
RQ2: How do we evaluate the performance of image complexity predictor?
RQ3: How to check the utility of identity prediction in the proposed model?
RQ4: How does our model perform compared to the existing face mask detection model in terms of accuracy and computational speed?
RQ5: What measures should be considered to avoid overfitting?
Experimental setup
The experiment is set up by loading different pre-trained models using the TorchVision package (https://github.com/pytorch/vision). These models are fine-tuned on our dataset using the open-source Caffe Python library. We use our customized unbiased dataset with 45,000 images, available online at https://www.kaggle.com/mrviswamitrakaushik/facedatahybrid. The Int-Scenario training strategy is adopted as employed in [8]. The dataset is split into training, testing and validation sets in the ratio 64:20:16. The algorithms are implemented using Python 3.7; face detection is achieved through MobileNet-SSD/ResNet, and dlib is used for detecting masks, with a learning rate of 0.003, momentum of 0.9 and batch size of 64.
Model comparison
As discussed in Section 3.2, we can apply transfer learning on pre-trained models for image classification, but one question yet to be answered is how to decide which model is most effective for our task. In this section, we compare three efficient models, viz. ResNet50, AlexNet and MobileNet, based on the following criteria [45]: Top-1 error, inference time on CPU, and number of learnable parameters.
A model with minimum Top-1 error, low inference time on CPU and an optimal number of parameters is considered a good model for our work.
The confusion matrices for the different models during testing are given in Fig. 6. The accuracy comparison of the models based on Top-1 error is presented graphically in Fig. 7(a). It may be noted from the graph that the error rate is highest for AlexNet and lowest for ResNet50. Next, we compared the models based on inference time. Test images are supplied to each model and the inference times over all iterations are averaged. It may be observed from Fig. 7(b) that MobileNet takes more time to infer images, whereas ResNet and AlexNet take almost equal inference time. Further, the memory usage comparison among the underlying models is done by finding the number of learnable parameters. These parameters can be obtained by generating a model summary in Google Colab for each model. It may be noted in Fig. 7(c) that the number of parameters present in AlexNet is around 28 million for our customised dataset. Furthermore, the numbers of parameters present in MobileNet and ResNet50 are around 3.5 million and 25 million respectively.
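The learnable-parameter counts can be reproduced with a few lines of PyTorch, as sketched below; which MobileNet variant the paper uses is not stated, so mobilenet_v2 is an assumption.

```python
import torch
from torchvision import models

def count_learnable_params(model: torch.nn.Module) -> int:
    # Total number of elements across all trainable tensors of the model.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

for name, ctor in {"AlexNet": models.alexnet,
                   "MobileNet": models.mobilenet_v2,   # version assumed; not stated in the text
                   "ResNet50": models.resnet50}.items():
    print(f"{name}: {count_learnable_params(ctor(pretrained=False)):,} parameters")
```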
After analyzing the performance of each model on the various criteria, we then squeezed all these details into a single bubble chart by taking the number of parameters as the X-coordinate and the inference time as the Y-coordinate. The bubble size represents the Top-1 error (a smaller bubble is better). The overall comparison of all models is represented by the bubble graph in Fig. 7(d).
It may be observed from Fig. 7 that smaller bubbles are better in terms of accuracy and bubbles near the origin are better in terms of memory usage and inference speed. Now, the answer to RQ1 can be given as follows:
• AlexNet has a high error rate.
• MobileNet is slow in inferring results.
• ResNet50 is an optimized choice in terms of accuracy, speed and memory usage for detecting face mask using transfer learning.
Performance analysis of image complexity predictor
For the performance evaluation of the image complexity predictor, we use Kendall's coefficient τ (tau). We compute Kendall's rank correlation coefficient τ between the predicted image complexity score and the ground truth visual difficulty score. Kendall's rank correlation coefficient is a suitable measure for our analysis because it is invariant to the different ranges of the scoring methods. Based on image properties, each human annotator assigns a visual difficulty score to an image from a range that differs from the range over which the predicted image complexity score is assigned. Kendall's rank correlation coefficient is computed in Python using the kendalltau() SciPy function. The function takes the two scores as arguments and returns the correlation coefficient. Our predictor attains a Kendall's rank correlation coefficient τ of 0.741, implying the remarkable performance of the image complexity predictor. It may be observed from Fig. 8 that a very strong correlation exists between the ground truth and predicted complexity scores. It may further be noted from Fig. 8 that the cloud of points forms a slanted Gaussian with the principal component aligned towards the diagonal, verifying a strong correlation between the two scores.
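The SciPy call mentioned above is a one-liner; the score arrays below are placeholder values for illustration.

```python
from scipy.stats import kendalltau

# ground_truth: visual difficulty scores from annotators (placeholder values)
# predicted:    complexity scores regressed by the VGG-f predictor
ground_truth = [2.1, 0.4, 1.3, 3.0, 0.9]
predicted    = [1.8, 0.5, 1.1, 2.7, 1.0]

tau, p_value = kendalltau(ground_truth, predicted)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3g})")
```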
Performance analysis of identity predictor
In order to impose the wearing of face masks in public areas such as schools, airports, markets, etc., it becomes essential to find out the identity of those faces which are violating the rules, meaning either not wearing or not correctly wearing a face mask. Typically, these identities can be found by training our model with persons' faces. For this purpose, the photographs of 2160 students were collected and populated in our customized dataset, which is available online at https://www.kaggle.com/mrviswamitrakaushik/facedatahybrid. In order to train our system well, we took five photographs of each student, ensuring the face was looking in different directions with different backgrounds. To proceed further with the experiment, the video streams from four CCTV cameras located at different locations in the Department of Computer Applications, J. C. Bose University of Science and Technology, Faridabad, India were analysed. We captured images from the real-time video. Fig. 9 shows samples of images captured at different locations: lecture room LT01, 2nd floor corridor and 2nd floor staircase.
Precision and Recall are taken as evaluation metrics for identity prediction. The Precision and Recall for identity predictor are 98.86% and 98.22% respectively.
Comparison of proposed model with existing models
In this section, we aim to compare the performance of the proposed model with the public baseline results published for RetinaFaceMask [11], which answers RQ4. Since RetinaFaceMask is trained on the MAFA dataset and its performance is evaluated using precision and recall for face and mask detection, for comparison purposes the performance of the proposed technique is also evaluated in the same environment. We employed two standard metrics, namely Precision and Recall, for comparing the performance of the two systems. The experimental results are reported in Table 3. It may be noted from Table 3 that the proposed model with ResNet50 as the backbone achieves higher accuracy compared with RetinaFaceMask.
In particular, the proposed model achieves 11.75% and 11.07% higher precision in face and mask detection respectively when compared with RetinaFaceMask. The recall is improved by 3.05% and 6.44% in face and mask detection respectively. We observed that the improved results are possible due to the optimized face detector, discussed in Section 3.3, for dealing with complex images.
Controlling overfitting
To address RQ5 and avoid the problem of overfitting, two major steps are taken. First, we performed data augmentation as discussed in Section 3.1.2. Second, the model accuracy is critically observed over 60 epochs both for the training and testing phase. The observations are reported in Fig. 10.
It is further observed that the model accuracy keeps increasing over the epochs and becomes stable after epoch 3, as depicted graphically in Fig. 10. To summarize the experimental results, we can say that the proposed model achieves high accuracy in face and mask detection with less inference time and less memory consumption compared with recent techniques. Significant effort was put into resolving the data imbalance problem in the existing MAFA dataset, resulting in a new unbiased dataset that is highly suitable for COVID-related mask detection tasks. The newly created dataset, the optimal face detection approach, the localization of the person's identity and the avoidance of overfitting result in an overall system that can easily be installed in an embedded device at public places to curtail the spread of Coronavirus.
Conclusion and future scope
In this work, a deep-learning-based approach for detecting masks over faces in public places to curtail the community spread of Coronavirus is presented. The proposed technique efficiently handles occlusions in dense situations by making use of an ensemble of single- and two-stage detectors at the pre-processing level. The ensemble approach not only helps in achieving high accuracy but also improves detection speed considerably. Furthermore, the application of transfer learning on pre-trained models with extensive experimentation over an unbiased dataset results in a highly robust and low-cost system. The identity detection of faces violating the mask norms further increases the utility of the system for public benefit.
Finally, this work opens interesting future directions for researchers. Firstly, the proposed technique can be integrated into any high-resolution video surveillance device and is not limited to mask detection. Secondly, the model can be extended to detect facial landmarks with a facemask for biometric purposes.
Improving Perioperative Communication During the COVID‐19 Pandemic
ABSTRACT Perioperative communication can be ineffective and result in delays or adverse events. Coronavirus disease 2019 (COVID‐19) has placed demands on health care leaders and personnel to integrate information quickly and accurately. When caring for patients diagnosed with COVID‐19 or whose infection status was unknown, perioperative personnel at one facility discovered communication gaps associated with the environmental cleaning process and hand‐over reports. A project team comprising perioperative nurses created five tools to provide critical information to help diverse team members share the same mental model. The project team created one tool in English and Spanish to meet the needs of environmental services personnel whose primary language was Spanish. The team created another tool to support communication with central processing department personnel and facilitate prioritization of case cart cleaning when needed. The development and implementation of the communication tools helped to provide a safe working environment during the COVID‐19 pandemic.
Approximately 30% of perioperative communication is ineffective and can result in delays, workarounds, wasted resources, patient inconvenience, and tension among team members. 1 Distractions, language barriers, or education deficits may affect communication, which is a contributing factor for development of adverse events (eg, retained surgical items 2 ).
During the coronavirus disease 2019 (COVID-19) pandemic, maintaining safety of patients and personnel necessitated improving the effectiveness of communication. The frequent reports of the disease transmission levels and associated mortality rates caused anxiety for health care workers and leaders, and some information was contradictory and inconsistent. 3 Leaders received updated information daily, and it was difficult to distinguish scientific data from media speculation.
Regardless of the subject, effective communication involves routinely sharing accurate information with personnel in a structured and consistent manner. 3 This article describes perioperative considerations related to the COVID-19 pandemic, associated communication concerns, and the strategies that leaders at one organization used to address the concerns.
BACKGROUND
Coronaviruses are enveloped RNA viruses that usually lead to upper respiratory tract infections, such as the common cold. 4 Transmission of COVID-19 can occur when infected individuals (who may be asymptomatic) release the virus into the air during exhalation, such as when talking, breathing, or sneezing. 5 Virus particles may be coated with mucus or saliva, which may affect particle size and subsequent transmission. 6 The World Health Organization and the US Centers for Disease Control and Prevention (CDC) identify particles greater than 5 µm in diameter as droplets and particles less than 5 µm in diameter as aerosols. 7,8 The size of the SARS-CoV-2 particles that cause the COVID-19 infection varies; some particles containing the virus are fine (ie, less than 2.5 µm in diameter) and some are large (ie, greater than 10 µm in diameter). 9 The World Health Organization continues to indicate that transmission of the COVID-19 virus primarily occurs through droplet and contact routes, but acknowledges that airborne transmission may be possible. 10 However, the CDC recognizes three principal routes of transmission: inhalation of fine droplets and aerosols, deposition of virus droplets and particles on mucus membranes, and contamination of mucus membranes by virus-laden hands. 11 Therefore, based on the concept of aerosolized transmission, the CDC provides detailed guidance on the donning and doffing of required personal protective equipment (PPE) that personnel wear when caring for a patient diagnosed with COVID-19. 12
PATIENT CARE DURING THE PANDEMIC
The COVID-19 pandemic has placed unprecedented demands on health care organization leaders and personnel, 13 notably related to the use of PPE and environmental cleaning. Recommendations to facilitate patient safety when caring for the anticipated influx of infected patients included protecting and supporting frontline caregivers, limiting the number of personnel providing direct patient care, and communicating openly with personnel. 14
Recommended guidance for donning and doffing PPE correctly to prevent COVID-19 transmission is different from the traditional process that personnel had been using in perioperative environments. 12 Therefore, leaders and educators needed to clearly communicate requirements for donning and doffing PPE safely, and personnel needed to participate in education activities and practice the skills to prevent inadvertent contamination when caring for patients.
Personnel also needed to learn different environmental cleaning processes to prevent the spread of COVID-19 to patients and personnel. Many perioperative departments may not have the capability to convert ORs from positive pressure (ie, pushing air out) to negative pressure (ie, pulling air in). Therefore, procedures for postoperative cleaning in a positive-pressure environment after possible aerosolized contamination in the OR became critically important.
COMMUNICATION CHALLENGES
Failures can occur when the communicator provides the message on the wrong occasion, includes insufficient or inaccurate content, addresses the incorrect audience, or has an unclear purpose. 1 In some facilities, perioperative personnel may speak different languages and the styles of communication may vary depending on the staff member's role. Transcribing verbal information into a written format can lead to medical errors resulting from different dialects and pronunciation, background noises, and distractions. 15 Each team member may have a unique perspective or mental model of a situation and its related tasks; lack of a shared understanding can contribute to avoidable adverse perioperative events, 16 ineffective communication, 16,17 and decreased teamwork. 16 In addition, perioperative miscommunication can result in teamwork failures and also lead to adverse events for patients. 1,2 Team members who are not familiar with each other also may contribute to ineffective communication. Results from an observational study involving 10 open procedures showed that team members did not compensate for unfamiliarity when communicating; however, familiarity among team members also did not prevent ineffective communication. 18
STRATEGIES TO IMPROVE COMMUNICATION
Understanding diverse educational backgrounds, recognizing primary languages, and acknowledging cultural sensitivity may be useful when organization leaders provide education activities for a diverse workforce. The US Department of Health and Human Services recommends that health care organization leaders recruit, develop, educate, and retain a diversified workforce to support the population that the organization serves. 19 The National Center for Cultural Competence recognizes that cultural awareness is an integral component of cultural competence for health planning and policy. 20 Diversity of the health care workforce has increased since 1987. When clinicians acknowledge cultural diversity, team interactions may be more successful than when they do not. 20
Shared Mental Model
Perioperative environments are complex and require interdisciplinary teamwork. An organized method to share knowledge that permits the team members to describe, explain, and predict events and interact with their environment is a shared mental model. 21 Team members can manage difficult and changing situations and decide which action to take when they use multiple types of mental models, including
• the knowledge a team member requires to operate the technology or equipment to complete tasks;
• the shared understanding of how to perform the procedure using the technology or equipment;
• an understanding of the shared ideas of the roles and responsibilities of the team members; and
• the idea that each person is part of a team, with the requirements to meet the expectations of team membership. 22
The shared mental model incorporates both the knowledge required for performance and the relationships among team members to organize their work. Therefore, creating a shared mental model that allows all team members to understand and adapt to unfamiliar situations may help improve communication.
PANDEMIC EFFECTS ON PERIOPERATIVE COMMUNICATION
In March 2020, during the early stages of the COVID-19 pandemic, perioperative personnel at Hackensack University Medical Center in New Jersey were required to learn a large amount of information quickly. The perioperative team for the 24 ORs in the department comprises surgical technologists, environmental service aides (ESAs), equipment technicians, and patient transporters. Perioperative personnel expressed difficulty staying informed on pandemic information and the frequently revised policies and protocols. In addition, the perioperative team members identified a lack of communication among personnel assigned to different shifts regarding the status of the OR, supplies, equipment, and patient care related to COVID-19.
Personnel frequently used small note papers and surgical gown tags to share information with each other.
We (an extant project team of perioperative nurses) noticed the communication challenges related to the frequently changing information that affected perioperative processes and decided to prepare formal signage to promote a shared mental model for optimal communication. Although facility leaders expect all personnel to be able to communicate in English, the primary language of many ESAs is Spanish, and we identified a need to provide information on COVID-19 processes in both English and Spanish. To optimize efficiency, we also explored the importance of clearly defining the staff members' roles and responsibilities. We recognized that communication may be challenging when it does not include an individual's preferred language and that he or she may be excluded from the shared mental model. Based on this information, we opted to create an English and Spanish communication tool to provide information on the changing processes. We decided that it might be helpful to provide educational tools in languages other than Spanish (eg, Ukrainian, Albanian), but time pressures to deliver the information as quickly as possible and reach as many personnel as possible prevented inclusion of additional languages. We therefore undertook the creation of bilingual communication tools for perioperative personnel to ensure that they would be able to practice safely and provide a consistent approach to intraoperative activities during the pandemic.
Opportunities to Improve Perioperative Communication
We identified three areas requiring improvement in communication clarity. The first area involved enhanced team communication for cleaning the OR, equipment, and supplies after perioperative personnel transported the patient to the postoperative area. It was critical that all perioperative personnel possess the same information regarding the process for environmental cleaning after the completion of a surgical procedure for a patient diagnosed with COVID-19 or whose infection status was unknown.
Before the pandemic, ESAs wore standard surgical attire with surgical masks, eye protection, and unsterile gloves to perform perioperative environmental cleaning immediately after the RN circulator and anesthesia professional transferred the patient to the postanesthesia care unit. The ambiguous and ever-changing information on disease transmission necessitated a change in the protocol for surgical attire during room cleaning and additional education and simulation to practice new skills. After leaders revised the surgical attire protocol, the perioperative education specialist conducted education sessions for groups of two staff members in a small conference room located near the OR. The private learning environment provided an opportunity for questions and hands-on practice and allowed the personnel to adhere to the social distancing requirements.
In accordance with design recommendations for surgical suites, most of the ORs at our facility are configured as positive-pressure environments with at least 15 air exchanges per hour, three of which include outdoor air. 23 In accordance with AORN recommendations related to airborne disease transmission, 24 we placed one high-efficiency particulate air (HEPA) filter next to the anesthesia professional for intubation and extubation and a second HEPA filter near the door immediately inside the OR. After the pandemic declaration, perioperative leaders initially recommended that ORs remain vacant for two hours after personnel transferred the patient to the nursing unit. However, after reviewing information from the CDC and consulting with facilities personnel, leaders realized that the positive-pressure ORs needed to remain vacant for only 28 minutes to remove 99.9% of airborne contaminants with 15 air exchanges per hour. 23 Therefore, the leaders decreased the vacant room requirement to 30 minutes.
After the pandemic declaration, postanesthesia care unit (PACU) nurses entered the OR after procedure completion to provide the immediate postoperative nursing care. When the patient met the PACU discharge criteria, the nurse transferred the patient to an inpatient nursing unit and timing for air contaminant removal commenced.
COMMUNICATION TOOL DEVELOPMENT
Effective teamwork with comprehensive communication strategies was key to providing care during the pandemic 14 and addressing the staff members' concerns. We realized that the OR cleaning information needed to be complete and easy to understand because often there was only a small window of opportunity to complete the cleaning tasks before preparing for another procedure or communicating with personnel who were providing relief at the beginning of the next shift.
We developed a detailed room cleaning tool (ie, a sign) in both English and Spanish to facilitate accurate communication and meet the needs of the perioperative and environmental services personnel (Supplementary Figure 1).
To convey the necessary information to individuals entering the room, personnel placed the sign on the OR door at the end of a procedure involving a patient diagnosed with COVID-19 or whose infection status was unknown. The bilingual document listed the required steps for room cleaning and provided a space next to each step for the time of completion and the initials of the staff member completing the cleaning. A second room cleaning sign included a red stop sign to indicate that the cleaning process had not yet been finished. At the completion of the process, the ESA turned the sign over so an image of a green "go" light informed personnel that it was safe to enter the room (Figure 1).
Case Cart Processing Tools
Next, we focused on shared communication when transferring the surgical case cart from the OR to the central processing department (CPD). Before the development of appropriate signage, CPD personnel placed used case carts in a line for processing without prioritization related to possible infections. Although the scrub person applied an instrument precleaning solution to instruments, some instruments became dry and debris became aerosolized, which was a concern during the pandemic.
We created a sign for the scrub person to place on the top of the case cart to identify that the cart contained items associated with a patient who tested positive for COVID-19 or whose infection status was unknown ( Figure 2). Because this sign contained minimal information, we only provided it in English. The scrub person was responsible for phoning a CPD staff member to provide information on the impending arrival of the case cart and completing the sign with the date, time, signature, and name of the CPD contact person.
Adding the case cart sign closed a gap in the cart transfer process and clearly informed the CPD personnel to prioritize case carts when necessary. Carts associated with patients who tested negative for COVID-19 did not have this sign.
Hand-Over Communication
The final area that we identified for improvement addressed the sharing of detailed information during hand-over communication when caring for a patient who tested positive for COVID-19 or whose infection status was unknown. We created two documentation tools for this purpose.
The room tracker tool lists each step in the perioperative care of the patient, from OR preparation through procedure completion and patient transfer from the OR, and includes the status of HEPA filters (Supplementary Figure 2). To complete the tool, personnel fill in the date, the time, and their initials for each listed activity. There also is a section at the bottom of the tool for an additional description of events, responsible personnel, and follow-up activities.
Perioperative leaders assigned two RN circulators to procedures involving patients who tested positive for COVID-19 or whose infection status was unknown. One RN circulator remained outside the OR to retrieve and deliver any supplies or items required intraoperatively and also was responsible for completing the documentation on the room tracker tool.
The shift summary report tool provides personnel a place to describe all activities related to COVID-19 that occurred on the unit during the shift (Supplementary Figure 3). Because the number of patients undergoing surgery or other invasive procedures who had been diagnosed with COVID-19 or whose infection status was unknown was increasing, it was important to track the status of each OR. The shift summary tool provided a concise overview of the ORs that ESAs had cleaned and those in which the cleaning process was incomplete.
After completing the room tracker and shift summary report tools, nurses placed the documents in a notebook at the OR desk for leader (eg, charge nurse, environmental services supervisor, nurse manager) and staff member access. Personnel reviewed the completed tools during the hand-over report at the end of scheduled shifts, which improved the shared mental model among all team members.
PERSONNEL ENGAGEMENT
Personnel assigned to all shifts recognized that the development of the communication tools was important, so there was wide acceptance as the project advanced. They were eager to replace the note papers and gown tags and provided positive feedback on this structured process of communication. In addition, personnel incorporated the communication tools into standardized OR workflow processes soon after we created them. Because of the immediate need for clarity and consistency, a trial was not indicated; however, we recognized that the tools would likely need to be improved and modified in the future in accordance with additional information on COVID-19 and personnel feedback. After developing the tools, we educated the remaining personnel on tool use during the designated weekly staff meetings, perioperative leaders reinforced the education at change-of-shift huddles, and personnel had the opportunity to ask questions in small groups.
Barriers to effective perioperative communication include primary language and style differences, diverse educational backgrounds, difficulty converting verbal information to a written format, and lack of a shared mental model. A perioperative project team developed five easy-to-use communication tools to improve communication during the short period of time available for environmental cleaning and completion of hand-over reports after care of a patient diagnosed with COVID-19 or whose infection status was unknown. The team developed one tool in English and Spanish to meet the diverse needs of personnel, and created additional tools (ie, case cart sign, hand-over report) that personnel could easily complete to provide even more information. Perioperative personnel were engaged during tool implementation and continue to use the tools to maintain a safe environment during the ongoing COVID-19 pandemic.
To foster engagement after the initial development of these tools, we designated a mobile cart to store supplies and frequently needed items outside the individual ORs during procedures for patients who have been diagnosed with COVID-19 or whose infection status is unknown. We placed multiple copies of these tools in a drawer of this cart for easy access, and leaders consistently reinforced the value of continuing to use these tools as needed. By the end of 2021, these tools and the mobile COVID-19 cart had become integrated as routine components of OR patient care.
CONCLUSION
The potential for transmission of COVID-19 via aerosolized particles necessitated practice changes in perioperative environments. Rapidly changing information related to PPE use and environmental cleaning was confusing to personnel at our facility and there was a need to communicate accurately and efficiently. Perioperative and environmental services personnel expressed concern regarding the need for consistent information on environmental cleaning and disposition of supplies, instruments, and equipment, especially when transferring care during a hand-over report. We developed communication tools based on shared mental models and cultural awareness to facilitate education on the rapidly changing processes. Personnel (eg, RNs, surgical technologists, ESAs, CPD technicians) appreciated the new tools and used them during procedures involving patients diagnosed with COVID-19 and whose infection status was unknown. The tools are stored on a mobile cart and leaders continue to support tool use to facilitate communication and maintain a safe work environment.
SUPPORTING INFORMATION
Additional information may be found online in the supporting information tab for this article.
A new model for QPOs in accreting black holes: application to the microquasar GRS 1915+105
(abridged) In this paper we extend the idea suggested previously by Petri (2005a,b) that the high frequency quasi-periodic oscillations observed in low-mass X-ray binaries may be explained as a resonant oscillation of the accretion disk with a rotating asymmetric background (gravitational or magnetic) field imposed by the compact object. Here, we apply this general idea to black hole binaries. It is assumed that a test particle experiences a similar parametric resonance mechanism such as the one described in paper I and II but now the resonance is induced by the interaction between a spiral density wave in the accretion disk, excited close to the innermost stable circular orbit, and vertical epicyclic oscillations. We use the Kerr spacetime geometry to deduce the characteristic frequencies of this test particle. The response of the test particle is maximal when the frequency ratio of the two strongest resonances is equal to 3:2 as observed in black hole candidates. Finally, applying our model to the microquasar GRS 1915+105, we reproduce the correct value of several HF-QPOs. Indeed the presence of the 168/113/56/42/28 Hz features in the power spectrum time analysis is predicted. Moreover, based only on the two HF-QPO frequencies, our model is able to constrain the mass $M_{\rm BH}$ and angular momentum $a_{\rm BH}$ of the accreting black hole.
Introduction
High frequency quasi-periodic oscillations (HF-QPOs) are common features of all accreting compact objects, be they neutron stars, white dwarfs or black holes. A number of recent observations have revealed the existence of these HF-QPOs in several black hole binaries (Strohmayer 2001a; McClintock & Remillard 2003; Remillard et al. 2006).
Usually, a pair of HF-QPOs appears in a 3:2 ratio. If these oscillations are connected to the orbital motion of the accretion disk at its inner edge as predicted by several models, these QPOs become a useful test of gravity in the strong field regime.
The 3:2 ratio was first noticed by Abramowicz & Kluźniak (2001). In order to explain this ratio, they introduced a resonance mechanism between orbital and epicyclic motion around Kerr black holes, thereby leading to an estimate of their mass and spin. Kluzniak et al. (2004a) showed that the twin kHz-QPOs are explained by a nonlinear resonance in the epicyclic motion of the accretion disk. Rebusco (2004) developed the analytical treatment of these oscillations. This parametric epicyclic resonance model of hydrodynamical modes in the accretion disk was applied by Kluzniak & Abramowicz (2002) to some microquasars. They pointed out the 3:2 ratio in the HF-QPOs observed in GRO J1655-40, XTE J1550-564 and GRS1915+105. Moreover, in this latter black hole binary, Strohmayer (2001b) reported another pair of frequencies, namely 69.2 Hz and 41.5 Hz, which are in a 5:3 ratio as noticed by Kluzniak & Abramowicz (2002) and support the parametric resonance model. Furthermore, Rezzolla et al. (2003) suggested that the HF-QPOs in black hole binaries are related to p-mode oscillations in a non-Keplerian torus.
Nevertheless, the propagation of the emitted photons in curved spacetime can also produce some intrinsic peaks in the Fourier spectrum of the light curves (Schnittman & Bertschinger 2004). In combination with a vertically oscillating torus, the gravitational lensing effect can also reproduce the 3:2 ratio (Bursa et al. 2004;Schnittman & Rezzolla 2005).
Resonances in the geodesic motion of a single particle have been investigated by Abramowicz et al. (2003). The specific coupling force between radial and vertical oscillation was left unspecified. Their results for non-linear resonance were applied to accreting neutron stars.
In this paper, we describe a coupling between spiral density waves in the accretion disk and epicyclic motions of test particles. It is divided into two parts. In Sec. 2 we specify the perturbation pattern used in our model. Then the equations of motion due to the perturbation are derived, leading to some resonance conditions. In Sec. 3, the results are applied to GRS 1915+105, for which several components in the Fourier time analysis are predicted in agreement with the HF-QPOs detected for this object. We also put some constraints on the mass-spin relation for several BHCs.
THE MODEL
In this section, we describe the main features of the model, starting with a simple treatment of the accretion disk, assumed to be made of non-interacting single particles orbiting in the equatorial plane of the black hole as was already done in the previous studies (see paper I and II). We again neglect the hydrodynamical aspects of the disk such as pressure. However, a detailed (magneto-)hydrodynamical treatment of the response of the disk to gravitational or magnetic perturbations has been given in Pétri (2005c, 2006). For a first approach to the problem, we neglect this refinement. Particles evolve in the unperturbed stationary background gravitational field of the black hole. In this simplistic approach, we want to point out the effect of a spiral density wave propagating radially outwards in the accretion disk. In order to account for general-relativistic effects, we use the Kerr metric to perform our calculations. We will show that a local outgoing one-armed m-mode will induce a beat with the test particles and excite vertical oscillations parametrically. Indeed, Mao et al. (2008) showed that oscillatory modes are excited close to the innermost stable circular orbit (ISCO) and propagate outwards to several tens of Schwarzschild radii with constant frequency (nearly equal to the maximum radial epicyclic frequency) and without attenuation. They also found that these oscillations are locally super-Keplerian. Nevertheless, they studied only axisymmetric modes in a pseudo-Newtonian gravitational field. A more detailed study including full general relativity and asymmetric mode propagation would help to give precise quantitative results. In this paper, we will assume that such asymmetric modes exist and exhibit the same properties as the modes found in Mao et al. (2008).
Equation of motion for a test particle
We use the same procedure as the one described in paper I. However, in order to take into account the spiral structure of the density wave of azimuthal mode number m and frequency Ω w , we have to change the equation of motion Eq. (7) in paper I by modifying the terms "cos(m (Ω − Ω w ) t)" in such a way that the spatio-temporal dependency of the argument of "cos" implies a propagation of a pattern of azimuthal mode m at the sound speed c s . We remember that Ω is the local orbital frequency in the accretion disk. Therefore, in cylindrical coordinates (r, ϕ, z), we choose a phase dependency whose argument involves the retarded time (t − r/c s ), where t is the time. For a particle in circular orbit at a frequency Ω, this means that, viewed from the orbiting particle's frame of reference, the perturbation rotates at the speed Ω p = |m Ω − Ω w |. We find again a harmonic oscillator with a periodic variation in the eigenfrequency of the system, driven by an external periodic force generated by the perturbation. The modulation being sinusoidal, the Hill equation specializes to the Mathieu equation, a well known ordinary differential equation extensively studied in mathematical physics (Morse & Feshbach 1953).
Resonance conditions
The rotation of the asymmetric pattern induces a sinusoidal variation of the vertical epicyclic frequency κ z , leading to the well known Mathieu equation for a given azimuthal mode m. To take into account general-relativistic effects, we use the characteristic orbital and epicyclic frequencies for the Kerr spacetime geometry, namely
• the orbital frequency Ω = (c³ / G M BH ) · 1 / (r̃^{3/2} + ã)
• the vertical epicyclic frequency κ z = Ω √(1 − 4ã/r̃^{3/2} + 3ã²/r̃²)
We introduced the mass of the black hole M BH , its gravitational radius R g = G M BH /c 2 , and the adimensionalized radius and angular momentum, respectively r̃ = r/R g and ã = a BH /R g .
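The short sketch below evaluates these two characteristic frequencies numerically, assuming the standard Kerr circular-orbit expressions quoted above; the constants and function names are illustrative and not taken from the paper.

```python
# Minimal sketch of the Kerr characteristic frequencies used in this section.
# Units: r_tilde = r/R_g, a_tilde = a_BH/R_g, output frequencies in Hz.
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def nu_orbital(r_tilde, a_tilde, mass_solar):
    """Orbital frequency Omega/2pi of a prograde circular equatorial orbit."""
    omega = C**3 / (G * mass_solar * M_SUN) / (r_tilde**1.5 + a_tilde)
    return omega / (2.0 * np.pi)

def nu_vertical(r_tilde, a_tilde, mass_solar):
    """Vertical epicyclic frequency kappa_z/2pi."""
    factor = 1.0 - 4.0 * a_tilde / r_tilde**1.5 + 3.0 * a_tilde**2 / r_tilde**2
    return nu_orbital(r_tilde, a_tilde, mass_solar) * np.sqrt(factor)

if __name__ == "__main__":
    # Example: a 10 solar-mass black hole, non-rotating and rapidly rotating
    for a in (0.0, 0.9):
        for r in (6.0, 10.0, 30.0):
            print(f"a~={a:3.1f} r~={r:4.1f}  nu_orb={nu_orbital(r, a, 10):7.1f} Hz"
                  f"  nu_z={nu_vertical(r, a, 10):7.1f} Hz")
```

For ã = 0 the vertical epicyclic frequency reduces to the orbital frequency, which is the limit used in the analytical discussion that follows.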
We expect a parametric resonance related to this time-varying vertical epicyclic frequency.
The resonance condition derived from the vertical equation of motion is parametrized by a natural integer n ≥ 1. Note that the speed of propagation c s in the term "(t − r/c s )" drops out from this resonance condition because it is a local description at fixed radius r.
In the non-rotating black hole case, a = 0, we have κ z = Ω. Therefore the parametric resonance condition Eq. (2) splits into two cases, depending on the sign taken in the absolute value. The resulting condition differs by a factor 1/m from the resonance condition derived in paper I and II. This discrepancy comes from the choice of the phase dependency here, ψ, describing a radially outward propagating pattern. There are some other resonance mechanisms happening in the accretion disk which are not of interest in the present study; nevertheless we recall them for completeness. First, forced oscillations occur whenever the driving frequency is equal to the free oscillation frequency; they are a special case of the above-mentioned parametric resonance, corresponding to n = 2. Second, the corotation resonance happens where the particle corotates with the perturbation pattern. As a consequence, we find again the same resonance mechanisms as in paper I and II.
Eq. (3) is also a good approximation for slowly rotating black holes (ã ≪ 1) because in this case κ z ≈ Ω.
We now discuss in detail the case ã = 0 because it is analytically tractable and gives good insight into the resonance frequencies even in the fast rotating case. Moreover, we checked that in the case of a maximally rotating Kerr black hole, ã = 1, the frequencies remain nearly the same as in the case ã = 0 because the resonances occur in regions far away from the ISCO, such that in these regions of the accretion disk κ z ≈ Ω whatever ã.
The predicted QPO frequency ratio Ω/Ω w is shown in Table 1 for the first three azimuthal numbers m = 1, 2, 3 and the first four integers n. The highest orbital frequencies where resonance occurs correspond to m = 1 and for the minus sign in Table 1. They are equal to 3 Ω w and 2 Ω w . Therefore the ratio of the strongest oscillations are in the ratio 3:2 as observed in several black hole binaries. This particular ratio is not a direct consequence of the motion in general relativity but rather an intrinsic property of the parametric resonance. General relativity is only needed in order to describe this resonance mechanism correctly in the strong gravity regime. The third strongest resonance occurs when Ω = Ω w overlapping the corotation resonance. However, if the low azimuthal numbers possess the strongest amplitude in the perturbation power spectrum, the m = 2 and m = 3 resonances are expected to be weaker. The most significant QPO frequencies are therefore ordered in the series 3:2:1, the "1" only appearing as a weak feature compared to the 3:2 HF-QPOs pair. A quantitative application of the model follows in the next section.
Application to GRS 1915+105
We test our model with the well-studied black hole GRS 1915+105, for which numerous observations of QPO features are available. Several types of QPOs can be identified in this binary system: low frequency QPOs (LF-QPOs) ranging roughly from 1 to 10 Hz, HF-QPOs for frequencies larger than roughly 70 Hz, and very low frequency QPOs with frequencies less than 1 Hz (Fender & Belloni 2004). HF-QPO frequencies are detected at ν 1 = 113 Hz and ν 2 = 165 Hz (Remillard et al. 2006), while LF-QPOs have been reported by Markwardt et al. (1999). Their properties seem to be different from the HF-QPOs, probably implying a different physical mechanism at work. We emphasize that it is not within the scope of this paper to explain all the observed QPOs but just to explain or predict the HF-QPOs. LF-QPOs should be related to Lense-Thirring precession or to some other (magneto-)hydrodynamical modes in the accretion disk.
Nevertheless, from the frequencies of the twin HF-QPOs, the constant angular pattern speed of the density wave is derived from our model by ν w = Ω w /2π = ν 1 /2 = ν 2 /3 ≈ 56 Hz.
Putting this value of the gravitational field pattern speed into Table 1, we get the results shown in Table 2. There is a problem in selecting the relevant and meaningful frequencies in this table. Indeed, the spiral density wave travels outwards to several tens of Schwarzschild radii. According to Mao et al. (2008), the precise extent of the propagation of the wave with constant frequency and without attenuation depends on the accretion rate and on the viscosity (at least for their axisymmetric hydrodynamical modes). Assuming that the wave can travel only up to a radius r out , for a black hole of mass M BH and angular momentum a BH , the lowest excited frequency is the orbital frequency at r out , ν low = Ω(r out , a BH )/2π. To give some orders of magnitude, using M BH = 10 M ⊙ , a BH = 0 and r out = 30 R g , we find ν low = 19.7 Hz. Which frequencies to select depends strongly on the spiral wave properties: how far the waves can propagate, which azimuthal numbers m are excited, at which amplitude, and so on. The detailed investigation is left for future work. We can only claim that the observed QPOs are retrieved by our model; how to select them remains quantitatively unclear.
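The orders of magnitude quoted here can be checked with a few lines of arithmetic; the snippet below assumes, as the 19.7 Hz value suggests, that the lowest excited frequency is simply the orbital frequency at r out.

```python
# Quick numerical check of the quantities quoted in this section.
import numpy as np

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def nu_orbital(r_tilde, a_tilde, mass_solar):
    """Kerr orbital frequency Omega/2pi in Hz (prograde equatorial orbit)."""
    return C**3 / (G * mass_solar * M_SUN) / (r_tilde**1.5 + a_tilde) / (2 * np.pi)

# Pattern frequency from the twin HF-QPOs: nu_w = nu1/2 = nu2/3 ~ 56 Hz
nu1, nu2 = 113.0, 165.0
print(f"nu1/2 = {nu1/2:.1f} Hz, nu2/3 = {nu2/3:.1f} Hz  ->  nu_w ~ 56 Hz")

# Lowest excited frequency for M_BH = 10 M_sun, a_BH = 0, r_out = 30 R_g
print(f"nu_low = {nu_orbital(30.0, 0.0, 10.0):.1f} Hz")   # ~19.7 Hz
```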
Let us now discuss the predicted QPO frequencies. Further resonances also appear, but due to their higher order (higher integer n) and location farther away from the inner edge of the disk (thus an amplitude of the excitation decreasing with radius), they do not possess a significant growth rate relevant for the study presented here. Moreover, X-ray emission from the outer part of the disk is fainter, thus more difficult to detect. As a consequence, the parametric resonance induced by a spiral density wave passing through an accretion disk predicts the HF-QPOs as well as some LF-QPOs (as a byproduct) in a single unified picture. However, there is no way to predict the angular momentum of the black hole without some knowledge of its mass. We can derive a mass-spin relation to constrain the BH parameters, see the next section. Nevertheless, it is worthwhile noting that, by fitting the HF-QPOs and several LF-QPOs with a resonant interaction between nonlinear oscillations and warp modes in the accretion disk, Kato (2004) was led to the conclusion that this microquasar is well described by a non-rotating geometry, indicating that the black hole angular momentum is weak, a BH ≪ R g ; he found a BH = 0 − 0.15 R g .
Note also that the LF-QPOs (27-41-56 Hz) are more difficult to detect than the HF-QPOs (113-165 Hz) because they are located in regions of the accretion disk where emission is fainter. They correspond also to higher azimuthal modes (m = 2, 3 compared to m = 1). Assuming that the main wave possesses an m = 1 structure, the amplitude of the LF-QPOs is expected to be small compared to those of the HF-QPOs. Furthermore, it seems that the presence of these features depends on the emission state of the black hole. We would then expect the accretion rate to have an influence on the efficiency of the parametric resonance occurring in the system. However, the constancy of the QPO frequencies strongly supports the fact that they are closely related to the black hole properties and independent of the flow in the surrounding accretion disk. A hydrodynamical general relativistic description is required to verify this assessment and to check the influence of the accretion rate on the presence of certain QPOs. This would be the continuation of the work begun by Mao et al. (2008).
Mass-spin relation for BHCs
Strictly speaking, at this stage, our model cannot estimate independently the angular momentum and the mass of the black hole. Nevertheless, applying it to some BHCs, we are able to give some constraints on their mass, assuming a non-rotating or a maximally rotating black hole.
Following the work done by Mao et al. (2008), we assume that the spiral density wave is launched close to the ISCO, at the location where the radial epicyclic frequency is maximal; we denote this radius r max . The frequency of the spiral density wave is equal to the local orbital frequency at r max , thus the speed of the density perturbation pattern is Ω(r max , a BH ). Knowing the fundamental frequency Ω w in the accretion disk for a given black hole, the resonance condition Eq. (2) puts constraints on the relation between mass and angular momentum by imposing Ω(r max , a BH ) = Ω w .
This mass-angular momentum relation is shown in Fig. 1.
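The procedure just described can be traced numerically. The sketch below locates r max by assuming the standard Kerr expression for the radial epicyclic frequency (which is not quoted in this paper) and then imposes Ω(r max , a BH ) = Ω w with ν w = 56 Hz; the masses it prints are only illustrative of the procedure, not values read off Fig. 1.

```python
# Sketch of the mass-spin constraint: locate r_max where the radial epicyclic
# frequency peaks, then solve Omega(r_max, a) = Omega_w for the mass.
# The radial epicyclic frequency used here is the standard Kerr expression.
import numpy as np

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def nu_orbital(r_t, a_t, m_sun):
    return C**3 / (G * m_sun * M_SUN) / (r_t**1.5 + a_t) / (2 * np.pi)

def nu_radial(r_t, a_t, m_sun):
    factor = 1.0 - 6.0 / r_t + 8.0 * a_t / r_t**1.5 - 3.0 * a_t**2 / r_t**2
    return nu_orbital(r_t, a_t, m_sun) * np.sqrt(np.clip(factor, 0.0, None))

nu_w = 56.0   # pattern frequency inferred from the twin HF-QPOs (Hz)
r_grid = np.linspace(1.1, 30.0, 20000)
for a_t in (0.0, 0.5, 0.998):
    r_max = r_grid[np.argmax(nu_radial(r_grid, a_t, 1.0))]   # shape is mass-independent
    mass = nu_orbital(r_max, a_t, 1.0) / nu_w                 # Omega scales as 1/M
    print(f"a~ = {a_t:5.3f}:  r_max = {r_max:5.2f} R_g,  M_BH ~ {mass:5.1f} M_sun")
```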
CONCLUSION
In this paper, the consequences of an outgoing spiral density wave on the evolution of a test particle orbiting around a Kerr black hole have been explored. The twin peaks ratio around 3:2 for the kHz-QPOs is naturally explained by the parametric resonance model. The connection to lower frequency QPOs is also clearly demonstrated. It is an extension to accreting black holes of the model already suggested for neutron star and white dwarf binaries.
From the analysis of HF-QPOs, we are also able to constrain the mass and the spin of the black hole, as already suggested by Torok et al. (2005). In the case of GRS 1915+105, it appears that the 56 Hz feature in the Fourier time analysis is a fundamental frequency of the black hole from which the other QPOs can be derived. This feature should therefore be related to the intrinsic properties of the black hole, namely, its mass and its angular momentum.
The way to associate Ω w with a BH and M BH remains unclear, but spiral density wave excitation close to the ISCO, at nearly the maximum of the radial epicyclic frequency as suggested by Mao et al. (2008), is an interesting idea that deserves further investigation.
This work was partly supported by a grant from the French ANR MAGNET.
Strong and Weak Phases from Time-Dependent Measurements of $B \to \pi \pi$
Time-dependence in $B^0(t) \to \pi^+ \pi^-$ and $\bar{B}^0(t) \to \pi^+ \pi^-$ is utilized to obtain a maximal set of information on strong and weak phases. One can thereby check theoretical predictions of a small strong phase $\delta$ between penguin and tree amplitudes. A discrete ambiguity between $\delta \simeq 0$ and $\delta \simeq \pi$ may be resolved by comparing the observed charge-averaged branching ratio with that predicted for the tree amplitude alone, using measurements of $B \to \pi l \nu$ and factorization, or by direct comparison of parameters of the Cabibbo-Kobayashi-Maskawa (CKM) matrix with those determined by other means. It is found that with 150 fb$^{-1}$ from BaBar and Belle, this ambiguity will be resolvable if no direct CP violation is found. In the presence of direct CP violation, the discrete ambiguity between $\delta$ and $\pi - \delta$ becomes less important, vanishing altogether as $|\delta| \to \pi/2$. The role of measurements involving the lifetime difference between neutral $B$ eigenstates is mentioned briefly.
I Introduction
The observation of CP violation in decays of B mesons to J/ψ and neutral kaons [1,2] has inaugurated a new era in the study of matter-antimatter asymmetries. Previously, such asymmetries had been manifested only in the decays of neutral kaons and in the baryon asymmetry of the Universe. CP violation in B and neutral kaon decays is described satisfactorily in terms of phases in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, but the baryon asymmetry of the Universe apparently requires sources of CP violation beyond the CKM phases. There is thus great interest in testing the selfconsistency of the CKM description through a variety of processes.
Both model-independent considerations [5,6] and explicit calculations in QCD-improved factorization [7] indicate that a crude measurement of S ππ around zero implies a significant restriction on CKM parameters if the strong phase difference δ between two amplitudes contributing to B 0 → π + π − is small (δ ≃ 10 • in [7]; see, however, [8].) The quantity C ππ provides information on δ if the phase and C ππ are both near zero, but a discrete ambiguity allows the phase to be near π instead.
In the present paper we re-examine the decays B 0 → π + π − to extract the maximum amount of information directly from data rather than relying on theoretical calculations of strong phases. We find that if sin δ is small one can resolve a discrete ambiguity between δ ≃ 0 and δ ≃ π by comparing the measured branching ratio of B 0 → π + π − (averaged over B 0 and B 0 ) with that predicted in the absence of the penguin amplitude. The latter can be obtained using information on the semileptonic process B → πlν assuming factorization for color-favored processes, which appears to hold well under general circumstances [9]. We find that with data foreseen within the next two years it should be possible to reduce theoretical and experimental errors to the level that a clear-cut choice can be made between the theoretically-favored prediction of small δ and the possibility of δ ≃ π, assuming that the parameter C ππ describing direct CP violation in B 0 → π + π − remains consistent with zero. If C ππ ∼ sin δ is found to be non-zero, direct CP violation will have been demonstrated in B decays, a significant achievement in itself. The sign of C ππ will then determine the sign of δ. While the discrete ambiguity between δ and π − δ then becomes harder to resolve, its effect on CKM parameters becomes less important.
We recall notation for B 0 → π + π − decays in Sec. II. The dependence of S ππ and C ππ on weak and strong phases is exhibited in Sec. III. It is seen that when |C ππ | is maximal, there is little effect of any discrete ambiguity, since the strong phase δ is close to ±π/2, while when C ππ ≃ 0 the discrete ambiguity between δ ≃ 0 and δ ≃ π results in very different inferred weak phases. The use of the flavor-averaged B 0 → π + π − branching ratio to resolve this ambiguity is discussed in Sec. IV, while the CKM parameter restrictions implied by the observed S ππ range are compared in Sec. V for δ = 0 and δ = π.
One more observable, which we call D ππ , obeys S 2 ππ + C 2 ππ + D 2 ππ = 1, so its magnitude is fixed by S ππ and C ππ , but its sign provides new information. In principle, it is measurable in the presence of a detectable width difference between neutral B meson mass eigenstates, as is shown in Sec. VI. However, we find that the sign of D ππ is always negative for the allowed range of CKM parameters, and does not help to resolve the discrete ambiguity. A positive value of D ππ would signify new physics. We conclude in Sec. VII.
II Notation
We use the same notation as in Ref. [5], to which the reader is referred for details. We define T to be a color-favored tree amplitude in B 0 → π + π − and P to be a penguin amplitude [10]. Using standard definitions of weak phases (see, e.g., [11]) α = φ 2 , β = φ 1 , and γ = φ 3 , the decay amplitudes to π + π − for B 0 and B 0 are written in terms of T and P and the weak phase γ, where δ T and δ P are strong phases of the tree and penguin amplitudes, and δ ≡ δ P − δ T . Our convention will be to take −π ≤ δ ≤ π.
The coefficients of sin ∆m d t and cos ∆m d t measured in time-dependent CP asymmetries of π + π − states produced in asymmetric e + e − collisions at the Υ(4S) are S ππ and C ππ [12], both functions of the quantity λ ππ built from the mixing phase and the ratio of decay amplitudes. In addition we may define the quantity D ππ , for which it is easily seen that S 2 ππ + C 2 ππ + D 2 ππ = 1. The significance of D ππ will be discussed in Sec. VI. When δ = 0 or π the quantity λ ππ becomes a pure phase; in such cases S ππ = sin(2α eff ), D ππ = cos(2α eff ). The expressions (1) employ the phase convention in which top quarks are integrated out in the short-distance effective Hamiltonian and the unitarity relation is used to include a piece of the penguin operator in the tree amplitude [5]. Using these expressions and substituting α = π − β − γ, we may then write S ππ , C ππ , and D ππ in terms of β, γ, |P/T |, and δ. The consequences of assuming δ small, as predicted in Ref. [7], were explored in Refs. [5,6]. In the former, it was shown that even an earlier crude measurement [3] of S ππ , taken at 1σ, drastically reduced the allowed CKM parameter space. In the latter, where a slightly different convention for penguin amplitudes was used, it was shown how to use S ππ and C ππ to determine weak and strong phases.
One needs a value of |P/T | to apply these expressions to data. In Refs. [5] and [6] |P | was estimated using experimental data on B + → K 0 π + (a process dominated by the penguin amplitude aside from small annihilation contributions) and flavor SU (3) including SU(3) symmetry breaking, while |T | was estimated using factorization and data on B → πlν. We shall use the result of Ref. [5], |P/T | = 0.276 ± 0.064. Ref. [7] found 0.285 ± 0.076, which included an estimate of annihilation, and Ref. [6] obtained 0.26±0.08, based on a different phase convention for the penguin amplitude, without including SU(3) breaking effects. The individual amplitudes of Ref. [5], in a convention in which their square gives a B 0 branching ratio in units of 10 −6 , are |T | = 2.7 ± 0.6 and |P | = 0.74 ± 0.05. We shall make use of them in Sec. IV.
It is most convenient to express S ππ , C ππ , and D ππ in terms of α, β, and δ, using γ = π − α − β, since when P = 0 one has S ππ = sin 2α and D ππ = cos 2α. The value of β is fairly well known as a result of the recent measurements by BaBar [1] and Belle [2]: sin 2β = 0.78 ± 0.08, β = (26 ± 4) • . Defining R ππ as the ratio of the flavor-averaged B 0 → π + π − branching ratio to the branching ratio that would be obtained from the tree amplitude alone, explicit expressions for S ππ , C ππ , and D ππ then follow in terms of α, β, δ, and |P/T |. The quantity R ππ itself will be used in Sec. IV to resolve a discrete ambiguity, while the usefulness of the sign of D ππ will be described in Sec. VI. Note that C ππ is odd in δ while S ππ and D ππ are even in δ. Within the present CKM framework one has 0 < α + β < π, implying sin(α + β) > 0, so that a measurement of non-zero C ππ will specify the sign of δ (predicted in some theoretical schemes [7]).
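Since the explicit expressions are not reproduced here, the following sketch computes the four observables from an assumed amplitude parameterization consistent with the conventions just described (tree amplitude carrying the weak phase γ, penguin carrying only a strong phase, and λ ππ including the mixing phase e^{−2iβ}); the overall signs of C ππ and D ππ depend on convention, and all names are illustrative.

```python
# Sketch: S, C, D and R for B0 -> pi+ pi- as functions of (alpha, beta, delta, |P/T|),
# under an assumed (standard) amplitude parameterization; signs are convention-dependent.
import numpy as np

def observables(alpha, beta, delta, r):
    """alpha, beta, delta in radians; r = |P/T|. Returns (S, C, D, R)."""
    gamma = np.pi - alpha - beta
    # amplitudes in units of |T|; overall phases and signs cancel in lambda
    A    = np.exp(1j * gamma) + r * np.exp(1j * delta)    # B0    -> pi+ pi-
    Abar = np.exp(-1j * gamma) + r * np.exp(1j * delta)   # B0bar -> pi+ pi-
    lam = np.exp(-2j * beta) * Abar / A
    denom = 1.0 + abs(lam) ** 2
    S = 2.0 * lam.imag / denom
    C = (1.0 - abs(lam) ** 2) / denom
    D = 2.0 * lam.real / denom
    R = (abs(A) ** 2 + abs(Abar) ** 2) / 2.0   # flavor-averaged rate over tree-only rate
    return S, C, D, R

if __name__ == "__main__":
    deg = np.pi / 180.0
    S, C, D, R = observables(97 * deg, 26 * deg, -30 * deg, 0.276)
    print(f"S={S:+.3f} C={C:+.3f} D={D:+.3f} R={R:.3f}, S^2+C^2+D^2={S*S+C*C+D*D:.6f}")
    # P -> 0 limit recovers S = sin(2 alpha), D = cos(2 alpha)
    S0, _, D0, _ = observables(97 * deg, 26 * deg, 0.0, 0.0)
    print(f"P=0: S={S0:+.3f} (sin 2a={np.sin(2*97*deg):+.3f}), "
          f"D={D0:+.3f} (cos 2a={np.cos(2*97*deg):+.3f})")
```

The constraint S ππ ² + C ππ ² + D ππ ² = 1 is satisfied identically by construction, which the first printed line makes explicit.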
We shall concentrate for the most part on a range of CKM parameters allowed by fits to weak decays, disregarding the possibility of new physics effects. Aside from the constraints associated with S ππ , it was found in Ref. [13] (quoting [14] and [15]; see also [5]) that sin 2α = −0.24 ± 0.72, implying α = (97 +30 −21 ) • , which we shall take as the "standard-model" range.
One could regard the three equations for R ππ , S ππ , and C ππ as specifying the three unknowns |P/T |, δ, and α (given the rather good information on β). In what follows we shall, rather, use the present constraints on |P/T | mentioned above, first concentrating on what can be learned from S ππ and C ππ alone and then using the information on R ππ both as a consistency check and to resolve discrete ambiguities. The information provided by the sign of D ππ will be treated separately.
III Dependence of S ππ and C ππ on α and δ
We display in Fig. 1 the values of S ππ and C ππ for α roughly in the physical region, with −π ≤ δ ≤ π. For any fixed α, the locus of such points is a closed curve with the points δ = 0 and δ = ±π corresponding to C ππ = 0 and with C ππ (−δ) = −C ππ (δ). A large negative value of S ππ , as seems to be indicated by the Belle measurement [2], favors large values of α. Negative values of C ππ imply a negative δ. The sum of squares of S ππ and C ππ is always bounded by 1, and one can show that for any value of δ and α + β one has |C ππ | ≤ 2|P/T |/(1 + |P/T | 2 ). For a given value of α + β the bound is stronger. The corresponding plot for (mostly) unphysical values of α is shown in Fig. 2. If desired, one may map negative values of α into the interval [0, π] by the replacement α → α + π, δ → δ ± π, which leaves all expressions invariant. The conventional physical region is bounded by 0 ≤ α ≤ π − β.
Let us imagine a measurement of S ππ and C ππ which reduces present errors by a factor of √ 3. Given that the present measurements are based on around a total of 100 fb −1 , one could envision such an improvement when both BaBar and Belle report values based on 150 fb −1 . Then the size of the error ellipse associated with S ππ and C ππ will be small in comparison with that of the closed curves for α in the vicinity of 90 • , and measurement of these quantities could provide useful information were it not for the fact that every point in the S ππ , C ππ plane corresponds to several pairs α, δ. The most important of these pairs occurs when both values of α are in the physical region but one corresponds to a certain value of δ and the other (roughly) to π − δ. This discrete ambiguity is most severe (corresponding to the most widely separated values of α) when C ππ = 0, corresponding to δ = 0 or π. For example, in Fig. 1, S ππ = C ππ = 0 corresponds to both α ≃ 76 • (when δ = 0) and to α ≃ 105 • (when δ = π). These values of α are separated by nearly 30 • . We shall see in the next section how a measurement of the branching ratio B(B 0 → π + π − ) can help resolve this ambiguity.
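To illustrate the ambiguity just described, the short scan below (using the same assumed amplitude convention as the previous snippet, with β = 26° and |P/T| = 0.276) locates the value of α at which S ππ vanishes for δ = 0 and for δ = π, where C ππ = 0 automatically.

```python
# Scan alpha for S_pipi = 0 at delta = 0 and delta = pi (where C_pipi vanishes),
# illustrating the two widely separated solutions discussed in the text.
import numpy as np

def S_pipi(alpha, beta, delta, r):
    gamma = np.pi - alpha - beta
    A    = np.exp(1j * gamma) + r * np.exp(1j * delta)
    Abar = np.exp(-1j * gamma) + r * np.exp(1j * delta)
    lam = np.exp(-2j * beta) * Abar / A
    return 2.0 * lam.imag / (1.0 + np.abs(lam) ** 2)

beta, r = np.radians(26.0), 0.276
alpha = np.radians(np.linspace(40.0, 140.0, 100001))
for delta in (0.0, np.pi):
    s = S_pipi(alpha, beta, delta, r)
    a0 = alpha[np.argmin(np.abs(s))]
    print(f"delta = {np.degrees(delta):5.1f} deg:  S_pipi = 0 at alpha ~ {np.degrees(a0):.1f} deg")
```

Under these assumptions the two zeros fall near 76 degrees and 105 degrees, in line with the values quoted above.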
IV Information from decay rate
The quantity R ππ , defined in Eq. (10), can help resolve the discrete ambiguity between δ = 0 and δ = π in the case C ππ = 0, where such an ambiguity is most serious. It has been frequently noted [16] that the central value of this quantity is less than 1, suggesting the possibility of destructive interference between tree and penguin amplitudes. With the estimate |T | = 2.7 ± 0.6 mentioned above, and with the experimental average [17] of CLEO, Belle, and BaBar branching ratios equal to B(B 0 → π + π − ) = (4.6 ± 0.8) × 10 −6 , we have R ππ = 0.63 ± 0.30, which lies suggestively but not conclusively below 1. A value of R ππ < 1 would imply cos δ < 0 within the CKM framework, since all currently allowed values of γ correspond to cos γ > 0. Furthermore, a value of R ππ below 1 permits one to set a bound on α + β or on γ, which is independent of δ, R ππ = 1 + (|P/T| + cos δ cos γ)² − cos²δ cos²γ ≥ sin²γ , (15) similar to the Fleischer-Mannel bound in B → Kπ [18]. At the 1σ level, this already implies γ ≤ 71 • in the CKM framework. In a more general framework, γ ≥ 109 • is also allowed. We show in Fig. 4 the dependence of R ππ and α on S ππ for the extreme cases δ = 0 and δ = π. For reference we also exhibit the curves for |δ| = π/2. As mentioned, only C ππ depends on the sign of δ. Also shown are experimental points corresponding to present ranges of R ππ , α, and S ππ . If errors on S ππ and C ππ are reduced by about a factor of √3, and on R ππ by a factor of about three, as would be possible with a sample of 150 fb −1 for each experiment, one can see a constraint emerging which would favor one or the other choice for δ. We discussed reduction of errors on S ππ and C ππ already. The corresponding reduction for R ππ requires reduction of errors on |T | 2 and B(B 0 → π + π − ) from their present values of 44% and 17%, respectively, each to about 10%, which was shown in Ref. [6] to be possible with 100 fb −1 . Figure 4: Values of (a) R ππ and (b) α as functions of S ππ for the cases δ = 0 and δ = π leading to C ππ = 0 (solid lines), and for |δ| = π/2 (dashed lines). The plotted points correspond to experimental values of S ππ and (a) R ππ or (b) α. Other parameters as in Fig. 1. For these sets of parameters D ππ < 0; when C ππ = 0 one has D ππ = −(1 − S ππ ²)^{1/2}.
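The central value and uncertainty of R ππ quoted above follow from a simple error propagation, combining the uncertainties on the branching ratio and on |T|² in quadrature and ignoring correlations; a minimal check:

```python
# R_pipi = B(B0 -> pi+ pi-) / |T|^2 with naive error propagation.
import numpy as np

B, dB = 4.6, 0.8   # averaged branching ratio, in units of 1e-6
T, dT = 2.7, 0.6   # |T| in the convention where |T|^2 is a branching ratio in 1e-6

R = B / T**2
dR = R * np.sqrt((dB / B)**2 + (2 * dT / T)**2)
print(f"R_pipi = {R:.2f} +/- {dR:.2f}")   # ~0.63 +/- 0.30
```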
V Comparison with CKM parameters determined by other means
In Ref. [5] we compared the region of CKM parameters allowed by data on various weak transitions with that implied by the first observed range of S ππ [3] and |P/T | for the case δ = 0. In Fig. 5 we reproduce that plot, corresponding to present 1σ limits on S ππ and |P/T | values in the range 0.21 ≤ |P/T | ≤ 0.34, along with the case δ = π. The case δ = π is seen to exclude a large region of the otherwise-allowed parameter space, while δ = 0 is compatible with nearly the whole otherwise-allowed range. Of course this does not permit a distinction at present between the two solutions, but it illustrates the potential of improved data. Turning things around, the examples in Fig. 5 corresponding to δ = 0 and δ = π illustrate the importance of excluding one of these two values by means of the ratio R ππ . Values 0 < |δ| < π with C ππ = 0 correspond to constraints intermediate between those for δ = 0 and δ = π.
The present (ρ, η) constraints differ from those in Refs. [5,6] based on the earlier BaBar data [3], which were consistent (as are the present BaBar data [4]) with vanishing S ππ and C ππ . In that case δ ≃ 0 led to a significant restriction in the (ρ, η) plane, permitting only low values of ρ, while δ ≃ π would have been consistent with nearly the whole allowed (ρ, η) region (as well as with the present data on R ππ ).
VI Information from width difference
The quantity D ππ appears with equal contributions in the time-dependent decay rates of B 0 or B 0 to a CP-eigenstate, when the width difference ∆Γ d ≡ Γ L − Γ H between neutral B mass eigenstates is non-zero [19]. Width difference effects in the B s -B s system were investigated some time ago in time-dependent B s decays [19,20]. The feasibility of measuring corresponding ∆Γ d effects in B 0 decays, expected to be much smaller but having a well-defined sign (∆Γ d > 0) in the CKM framework, was studied very recently [21]. While a measurement of D ππ in B → π + π − is unfeasible in near-future experiments because of the very small value of ∆Γ d (∆Γ d /Γ d < 1%), we will discuss the theoretical consequence of such a measurement. This brief study and its conclusion seem to be generic to a broad class of processes, including the U-spin related decay B s (t) → K + K − [22], in which width difference effects are much larger [23].
In the absence of P , one just has S ππ = sin(2α), D ππ = cos(2α), so the two quantities are out of phase with respect to one another by π/4 in α. This reduces part of the ambiguity in determining α from the mixing-induced asymmetry. The same is true when δ = 0 or π, since then α is replaced by α eff as noted in the previous section.
The dependences of S ππ and D ππ on δ for fixed α also are out of phase with respect to one another, in the following sense. When S ππ is most sensitive to δ, D ππ is least sensitive, and vice versa. One can show, for example, that D ππ is completely independent of δ when sin 2α = −|P/T | 2 sin 2β , which corresponds, since |P/T | 2 is small, to values of α near 0, π/2, and π. Recall that the corresponding values for S ππ were near π/4 and 3π/4. Conversely, whereas S ππ is maximally sensitive to δ near α = π/2, D ππ is maximally sensitive to δ near α = π/4 and 3π/4. In the absence of the penguin amplitude D ππ would just be cos 2α. Since α is not too far from π/2 in its currently allowed range, D ππ remains negative in this entire range also in the presence of the penguin amplitude. Positive values of D ππ are obtained for values of α which are excluded in the CKM framework. For the values δ = 0 and δ = π, when C ππ = 0, one has D ππ = −(1 − S 2 ππ ) 1/2 . For these values of δ, S ππ is seen in Fig. 1 to lie in the range −1.0 < S ππ ≤ 1.0, implying −1.0 ≤ D ππ ≤ 0. Since in the standard model one expects D ππ to be negative, positive D ππ , obtained for unphysical values of α, would signify new physics.
VII Conclusions
We have investigated the information about the weak phase α and the strong phase δ between penguin (P ) and tree (T ) amplitudes which can be obtained from the quantities S ππ and C ππ measured in the time-dependent decays B 0 → π + π − and B 0 → π + π − . One has a number of discrete ambiguities associated with the mapping (S ππ , C ππ ) → (α, δ). These appear to be most severe when C ππ ≃ 0, since very different values of α can be associated with δ ≃ 0 and δ ≃ π. We have shown that under such circumstances these ambiguities are resolved by sufficiently accurate measurements of the ratio R ππ of the flavor-averaged B 0 → π + π − branching ratio to its predicted value due to the tree amplitude alone. At present this ratio appears to be less than 1, but with large errors. Reduction of present errors on S ππ and C ππ by a factor of √ 3 and on R ππ by a factor of three will have significant impact on these phase determinations. If a non-zero value of C ππ is found, the discrete ambiguity becomes less important, vanishing altogether when |δ| = π/2.
A small value of R ππ , around its present central value, would favor δ = π over δ = 0, as shown in Fig. 4(a). A large negative value of S ππ , as indicated by the Belle measurement [2], favors large values of α, in particular if δ ≃ π. This is demonstrated in Fig. 3 and Fig. 4(b). Correspondingly, Fig. 5 shows that low values of ρ are excluded in the latter case. This figure, drawn also for the case δ = 0, illustrates the important role of the measurement of S ππ and the knowledge of δ in determining the CKM parameters ρ and η.
Another parameter, called D ππ here, equal to ±(1 − S 2 ππ − C 2 ππ ) 1/2 , is measurable in principle in time-dependent B 0 → π + π − decays if effects of the difference between widths of mass eigenstates can be discerned. The sign of D ππ is enough to resolve a discrete ambiguity between values of α expected in the standard model (corresponding to D ππ negative) and unphysical α (corresponding to D ππ positive).
As has been noted previously [16], there are hints of destructive tree-penguin interference in B 0 → π + π − , which may be difficult to reconcile with the favored range of CKM parameters without invoking large values of δ. If this interesting situation persists, one may for the first time encounter an inconsistency in the CKM description of CP violation, which often assumes small strong phases. Improved time-dependent measurements of B 0 → π + π − will be of great help in resolving this question. Given that Standard Model fits [13,14,15] prefer cos γ > 0, a value of R ππ significantly less than 1 in the absence of any other evidence for large δ also could call into question the applicability of factorization to B 0 → π + π − [5]. More accurate measurements of the spectrum in B → πlν [6] and more accurate tests of factorization in other color-favored processes [9] will help to check this possibility.
MAIT cells: new guardians of the liver
The liver is an important immunological organ that remains sterile and tolerogenic in homeostasis, despite continual exposure to non-self food and microbial-derived products from the gut. However, where intestinal mucosal defenses are breached or in the presence of a systemic infection, the liver acts as a second 'firewall', because of its enrichment with innate effector cells able to rapidly respond to infections or tissue dysregulation. One of the largest populations of T cells within the human liver are mucosal-associated invariant T (MAIT) cells, a novel innate-like T-cell population that can recognize a highly conserved antigen derived from the microbial riboflavin synthesis pathway. MAIT cells are emerging as significant players in the human immune system, associated with an increasing number of clinical diseases of bacterial, viral, autoimmune and cancerous origin. As reviewed here, we are only beginning to investigate the potential role of this dominant T-cell subset in the liver, but the reactivity of MAIT cells to both inflammatory cytokines and riboflavin derivatives suggests that MAIT cells may have an important role in first line of defense as part of the liver firewall. As such, MAIT cells are promising targets for modulating the host defense and inflammation in both acute and chronic liver diseases.
INTRODUCTION
Enteric commensals and pathogens are usually confined to the gut by the intestinal epithelium and mesenteric lymph nodes, but in the presence of intestinal inflammation and increased permeability, the liver is the first organ to receive gut-derived bacteria and their products. Thus, the liver functions as a second 'firewall', clearing commensals from the portal circulation where intestinal defenses are overwhelmed, 1 and is enriched with a number of innate immune cells, including Kupffer cells (liver-resident macrophages), natural killer (NK) cells and innate-like T cells. In the human liver, mucosal-associated invariant T (MAIT) cells are the most dominant population of innate-like T cells, comprising up to 50% of all T cells in the liver, 2 which is in contrast to invariant NKT cells (iNKT; ~1%) and γδ T cells (~15%). 3,4 The invariant T-cell receptor (TCR) rearrangement of MAIT cells, Vα7.2-Jα33, was first identified during an extensive analysis of the TCR repertoire of human CD4 − CD8 − (double-negative; DN) T cells by Porcelli et al., 5 and subsequently shown to be characteristic of a novel population restricted by the non-polymorphic and highly evolutionarily conserved major histocompatibility complex (MHC) class Ib molecule, MHC class I-related protein 1 (MR1). 6,7 The relative abundance of Vα7.2-Jα33 transcripts in human gut biopsies, as well as the enrichment of homologous Vα19-Jα33 transcripts among murine lamina propria lymphocytes compared with intraepithelial lymphocytes or mesenteric lymph nodes, 7 led to this population being called MAIT cells. Importantly, these cells were found to be broadly reactive to bacterial and yeast species, 8,9 because of their ability to recognize metabolic intermediates of the microbial riboflavin synthesis pathway. 10 Despite their name, in humans MAIT cells are most enriched in the liver, constituting 20-50% of intrahepatic T cells. 2,11 The dominance of MAIT cells in the liver suggests that they have a major defensive role in maintaining the liver firewall and in driving liver inflammation in disease. In this article, we will discuss what is currently known about MAIT cell biology, and explore what role MAIT cells may have in maintaining liver homeostasis and in liver disease.
MAIT CELL BIOLOGY
MR1 and its ligand
MR1 is an antigen-presenting molecule first sequenced in 1995, 12 and in contrast to the highly polymorphic MHC class I molecules, it is highly conserved among mammals, 13,14 with the α1-α2 domains of human and mouse MR1 being 89-90% identical. 13,15 MR1 expression is essential for the development of MAIT cells, which are absent in MR1 − / − mice. 7 Two seminal papers in 2010 showed that MR1-restricted MAIT cells could be activated by various species of bacteria and yeast, and were critical for early protection against bacterial infections. 8,9 These reports, together with the observation that MAIT cells are absent in germ-free mice, 7,8 suggested that MR1 presents a microbial ligand. The nature of the MR1 ligand was subsequently discovered by Kjer-Nielsen et al. 10 who showed that MR1 can present derivatives of the highly conserved riboflavin and folic acid synthesis pathways. MAIT cells are, therefore, activated by organisms possessing the riboflavin synthesis pathway, including Mycobacteria, Enterobacter, Pseudomonas, Salmonella and Candida species, but not those lacking it (e.g. Streptococcus pyogenes and Enterococcus faecalis). 8,9 The most potent MAIT cell activatory ligand found to date is 5-OP-RU (5-(2-oxopropylideneamino)-6-D-ribitylaminouracil), generated from the non-enzymic condensation of an early intermediate of the riboflavin synthesis pathway with glyoxal or methylglyoxal byproducts. 16 Folate-based ligands such as 6-formylpterin and its synthetic analog, acetyl-6-formylpterin, have also been shown to inhibit conventional MAIT cell activity, 17,18 but can activate non-conventional, folate-reactive MAIT cells. 19 MR1-tetramers loaded with riboflavin and folate intermediates have subsequently allowed the specific detection and characterization of human and murine MAIT cells. 16,[19][20][21] The MR1 transcript is ubiquitously expressed, 12,13 but endogenous surface expression of MR1 has been difficult to detect. 22,23 Recently, however, it was demonstrated that MR1 accumulates in the endoplasmic reticulum in an incompletely folded form, and in the absence of bound ligands only a few MR1 molecules traffic to the surface. 24 Increased ligand availability leads to the association of MR1 with β2-microglobulin and egress of the MR1-β2-microglobulin-ligand complex, 24 inducing rapid MR1 surface expression. 17,24,25 MR1 surface expression also increases with the nuclear factor-κB-dependent activation of antigen-presenting cells. 26 This contrasts with MHC class II and CD1, which capture their exogenous ligands in endosomal compartments and are highly expressed even in the absence of infection. 24
MAIT cell TCR
The MAIT cell TCR is semi-invariant and relatively evolutionarily conserved within mammals. 6,27 The majority of human MAIT cells express the canonical TCRα chain, Vα7.2-Jα33, although Vα7.2-Jα12 or Vα7.2-Jα20 are also used by a minority of MAIT cells. 20,28 These TCRα chains are preferentially paired with Vβ2 or Vβ13.2 in humans. 20,28 In mice, MAIT cells express Vα19-Jα33 that is paired with Vβ6 or Vβ8, 6,20,28 although Vβ usage can be variable. 6,20,29 Recently, human nonclassical MR1-restricted T cells that have a diverse TCR repertoire and do not express the Vα7.2 TCR chain have also been identified, which are preferentially activated by folate-based ligands, 19 analogous to type II NKT cells.
MAIT cell tissue distribution
MAIT cells are rare in lymphoid tissues, 2,30 because of their lack of CCR7 and CD62L expression, required for lymph node homing. 2,31 Instead, MAIT cells preferentially home to peripheral tissues, mediated by expression of chemokine receptors CCR6 and CXCR6, gut-homing integrin-α4β7 and low levels of CCR9. 2,7 Indeed, they are called 'mucosal-associated' because the Vα7.2-Jα33 transcript was enriched in the human gut compared with skin tissues when they were initially characterized. 7 MR1-tetramer studies have since confirmed their enrichment in the gut with different MAIT cell frequencies reported at different anatomical locations within the gastrointestinal tract. A higher frequency of MAIT cells is present in the jejunum (~60% of CD4 − T cells) 20 compared with reported frequencies in healthy ileum (1.5% of T cells), 32 colon (10% of T cells) 33,34 and rectum (2% of T cells) 35 (Figure 1). Importantly, MAIT cells are further enriched within the liver (20-50% of T cells), as will be discussed later. 2,36,37 MAIT cells are also abundant in human peripheral blood (1-10% of T cells) 2 and the lungs (2-4% of T cells). 38 A lower frequency of MAIT cells is also present in the endometrium and the cervix, 39 and MAIT cell TCR transcripts have been reported in tissues such as kidneys, prostate and ovaries. 28 In contrast to humans, MAIT cells are rare in commonly used laboratory strains of mice, 6,40 with the exception of the CAST/EiJ strain, 41 and, therefore, the majority of murine studies have used invariant Vα19-Jα33 TCR transgenic (Vα19i-transgenic) mice. 40 Interestingly, the frequency of MAIT cells in the tissues of wild-type mice has been shown to markedly increase upon infection with Francisella tularensis live-vaccine strain, 42 Salmonella Typhimurium or intranasal administration of 5-OP-RU in the presence of a toll-like receptor (TLR) agonist. 43
MAIT cell phenotype and effector functions
In addition to their distinct chemokine receptor profile, human MAIT cells have a characteristic phenotype that has been described in detail (Figure 2). In adults, MAIT cells express a uniform effector memory phenotype. 2,31 Although cord blood MAIT cells are naïve, they share a preprogrammed transcriptional signature with adult MAIT cells, 44 in line with the acquisition of their innate reactivity and activated phenotype during development. 30 In humans the majority of MAIT cells are CD8 + , with a small fraction of DN cells, as well as a very minor population that express the CD4 coreceptor. 20 Interestingly, more than half of CD8 + MAIT cells express the homodimer CD8αα, with a smaller frequency of cells expressing the CD8αβ heterodimer. This is unique to MAIT cells, as conventional CD8 + T cells express the CD8αβ coreceptor, 20,44 and is acquired early in development. 30 Another key feature of human MAIT cells is the high expression of the C-type lectin-like receptor, CD161, and in the steady state, CD161 ++ Vα7.2 + T cells have been shown to overlap with the cells stained by the MR1 tetramer. 20,45 Furthermore, CD161 is one of the earliest markers to be expressed on MAIT cells, already high in the thymus and fetal organs, 30 as well as in the cord blood. 2,44,46 MAIT cells also express high levels of interleukin-18R (IL-18R), enabling them to rapidly release interferon-γ (IFNγ) 11,47 and tumor necrosis factor-α (TNFα) (unpublished observations) in response to innate cytokines such as IL-12 and IL-18. This is further confirmed by the activation of MAIT cells by E.
faecalis, which lacks the riboflavin synthesis pathway, and TLR agonists, in an IL-12-and IL-18-dependent manner. 11,47 In line with this, control of intracellular M. bovis bacillus Calmette-Guérin (BCG) growth in vitro by murine MAIT cells required IL-12, but was independent of MR1 signaling. 48 This ability to be activated by cytokines alone is shared with other innate T cells, 49 as conventional T cells require TCR signaling before the expression of cytokine receptors such as IL-18R, 50 and is attributable to the expression of promyelocytic leukemia zinc-finger by these cells. 21,51 In addition to IFNγ and TNFα, which can be induced both in a TCR-dependent and -independent manner, 2,9,47 MAIT cells have a constitutively high expression of RAR-related orphan receptor γt and the associated ability to express IL-17A, 2,31,44 and constitute the main IL-17-producing T-cell population within the human liver. 36 Although rapid IL-4, IL-5 and IL-10 expression has been described in MAIT cells from Vα19i-transgenic mice, 40 the expression of these cytokines from human MAIT cells ranges from none 2,36,52 to low. 28,53 Interestingly, however, IL-10 expression from MAIT cells is particularly high in adipose tissue, 54 suggesting an immunosuppressive function for MAIT cells in certain tissues.
The effector functions of MAIT cells also includes their ability to degranulate and kill bacterially infected or sensitized cells, lysing cells infected with BCG 28 and Shigella. 55 Ex vivo resting MAIT cells are not efficient killers because of their lack of granzyme B (GrB) and low levels of perforin expression compared with conventional CD8 + T cells. 56 Upon activation, either in an MR1-dependent manner or longer cultures with inflammatory cytokines, however, they upregulate GrB and perforin, greatly enhancing killing of target cells. 2,56-58 GrB may therefore be a useful activation marker of MAIT cells.
Finally, despite expansion of MAIT cells after birth, 2,46 adult MAIT cells lack expression of Ki67 in the periphery 2 and were initially thought to be poorly proliferative. 2,8,40 However, recent studies have confirmed the ability of murine and human MAIT cells to proliferate in both an MR1-dependent manner and in response to cytokines in vitro 21,56,59 and in vivo. 42 As MAIT cells are highly sensitive to activation-induced cell death, 60 one possible explanation for the discrepancy between studies may be that overstimulation of MAIT cells in some studies led to the loss of MAIT cells before they were able to proliferate.
MAIT CELLS AND DISEASE
MAIT cells in bacterial infections
High evolutionary conservation of MR1 and its recognition of intermediates of the riboflavin pathway, conserved in various species of bacteria and yeast, suggests that MAIT cells have a critical and nonredundant role in microbial protection. Indeed, a number of papers have suggested that MAIT cells have a protective role in bacterial infections. MR1 − / − mice lacking MAIT cells had a higher bacterial burden in the first few days following intraperitoneal injection of Escherichia coli or intravenous injection of Mycobacterium abscessus, 8 and were overwhelmed by a fatal burden of intraperitoneally injected Klebsiella pneumoniae. 61 Aerosol infection models have demonstrated MAIT cells to be essential for early control of bacterial burden in the lung. 42,48 Interestingly, mice were protected from F. tularensis live-vaccine strain even in the absence of conventional αβ T cells, whereas MR1 − / − mice were overcome by the infection, 42 suggesting that MAIT cells may be important for microbial control in immunocompromised patients.
Various studies of MAIT cell frequencies in patients indicate involvement in bacterial infections. For example, there is a higher frequency of MAIT cells in the lung of patients with Mycobacterium tuberculosis infection, with lower frequencies of MAIT cells in the blood. 8,9 Reduced MAIT cell frequencies are, however, only observed in patients with active M. tuberculosis infection, but not latent infection, 9 suggesting MAIT cells are recruited to the lung in active disease. Peripheral MAIT cells in these patients also have increased expression of the exhaustion marker, programmed cell death protein 1, 62,63 and their responsiveness to M. bovis BCG is increased upon programmed cell death protein 1 blockade 62 (Table 1).
In addition to pulmonary infections, there is evidence of MAIT cell involvement in enteric infections, as MAIT cells are reduced early in the blood of patients that received an attenuated strain of Shigella dysenteriae 1, 55 as well as in Vibrio cholerae O1-infected children. 64 Interestingly, the presence of activated MAIT cells in the periphery was specific to vaccine responders that developed an SD1-lipopolysaccharide-specific immunoglobulin (Ig) A response, 55 and correlated with protective Vibrio cholerae O1-lipopolysaccharide-specific immunoglobulin A and immunoglobulin G antibody responses. 64 These two studies suggest that MAIT cells have a role in the development of protective antibody responses against polysaccharide antigens.
Last, reduced MAIT cell frequencies have been associated with increased severity of cystic fibrosis, and the reduction is particularly pronounced in patients with chronic Pseudomonas aeruginosa infections; 65 low MAIT cell frequencies are also a risk factor for subsequent nosocomial infections in critically ill patients with sepsis. 66
MAIT cells in viral infections
Although early studies showed MR1 − / − mice were not susceptible to influenza compared with control mice, 8 MAIT cell frequencies are markedly affected during human viral infections. For example, MAIT cells have been consistently and repeatedly reported to be severely depleted from the periphery of patients infected with human immunodeficiency virus (HIV), 33,35,63,[67][68][69] as well as HIV/M. tuberculosis-co-infected patients. 70 MAIT cell loss occurs as early as 2-3 weeks after HIV infection, 70 and does not recover with successful antiretroviral therapy. 33,35 CD8 + MAIT cells in the rectal mucosa 35 and colon 33 were better preserved, although CD4 + MAIT cells were specifically lost from the rectal mucosa, in line with the significant loss of all CD4 + T cells in the rectal mucosa in HIV-infected patients. 35 As peripheral CD8 + MAIT cells were not specifically infected by HIV, 33 depletion of MAIT cells from the blood of HIV-infected patients is suggested to be due to either the activation-induced cell death of MAIT cells from HIV-induced microbial translocation into the periphery 33 or exhaustion and downregulation of the MAIT cell marker CD161. 35 Although a recent tetramer study confirmed the loss of MAIT cells from the peripheral blood of HIV-infected patients, with no detectable loss of CD161 expression on MAIT cells, 67 the mechanism behind the severe depletion of MAIT cells in these patients remains to be explained. Nevertheless, the loss of MAIT cells may potentially have a profound effect on microbial protection in HIV-infected patients, as exemplified by the loss of mucosal T-helper type 17 cells in simian immunodeficiency virus-infected rhesus macaques. 71 In addition to HIV, a loss of MAIT cells from the periphery has also been observed in patients with dengue and severe influenza infection, 72 as well as in chronic hepatitis C virus (HCV) patients, 72,73 as discussed in detail later. In vivo MAIT cell activation was demonstrated in these patients, and correlated with disease severity in acute dengue. Activation of MAIT cells by the different viruses was dependent on IL-18 in synergy with IL-12, IL-15 and/or IFN-α/β, in line with previous studies showing that IL-12 in combination with IL-18, secreted by monocytes through TLR stimulation, can activate MAIT cells in vitro. 11,47 These studies suggest that MAIT cells may have a larger role in the immune system that is not limited to protection against bacterial infection.
MAIT cells and autoimmunity
There is increasing interest in the involvement of MAIT cells in autoimmune conditions, which has been reviewed in detail elsewhere. 74 Of particular relevance to the liver, however, a number of studies have linked MAIT cells with intestinal inflammation. For instance, reduced frequencies of MAIT cells have been consistently demonstrated in the blood of patients with inflammatory bowel disease (both Crohn's disease and ulcerative colitis), 32,75-77 although there are conflicting findings between studies in the frequency of MAIT cells within the inflamed tissues. 32,75,76 MAIT cell frequencies are also altered in coeliac disease, where MAIT cells are depleted from the blood and gut. 78 Although the fate of MAIT cells in the tissue during intestinal inflammation requires further investigation, it is possible that chronic inflammation of the gut leads to both the recruitment and accumulation of MAIT cells, followed by their activation-induced cell death because of bacterial translocation and dysbiosis. In addition to intestinal inflammatory conditions, MAIT cells may also have a role in arthritic diseases, as they have been shown to exacerbate collagen-induced arthritis, 79 whereas depletion of blood MAIT cells has also been reported in patients with systemic lupus erythematosus and rheumatoid arthritis. 31,53 Various reports have also associated MAIT cells with multiple sclerosis, with the presence of MAIT cells in multiple sclerosis lesions confirmed by immunohistochemistry, 77,80,81 although both regulatory and pathogenic roles have been implicated. 80,82
MAIT cells and cancer
Little is known about the role of MAIT cells in cancer. An early study reported the increased detection of Vα7.2-Jα33 transcripts in kidney cancer tissue and brain tumors compared with control tissue, with MAIT clonotypes more dominant in tumor tissue than in peripheral blood samples. 83 Recent studies in colonic adenocarcinoma 84 and colorectal cancer patients 85 have supported these findings, demonstrating highly activated MAIT cell accumulation in tumor tissue, with the degree of MAIT cell infiltration into colorectal tumors negatively correlating with life expectancy. 86 As these tumor-infiltrating populations include both IFNγ- and IL-17-secreting cells, further studies into the role of MAIT cells in cancer and their immunomodulatory potential, as has been demonstrated with iNKT cells, 90 could have important implications for cancer immunotherapy.
Last, it is important to note that the precise physiological role of MAIT cells in most of these conditions (bacterial, viral, autoimmune or cancerous) remains to be defined; as the majority of studies have focused on their frequency within the peripheral circulation and tissues, whether they contribute to disease or have a protective role in humans remains unclear.
Role of MAIT cells in the liver
Mouse models have demonstrated that in the presence of healthy intestinal mucosa, the liver remains a sterile organ, 1 with the mesenteric lymphoid system containing the immune response to commensal gut organisms. 91 Yet, the liver provides an important second 'firewall' when intestinal mucosal defenses are breached or in the presence of systemic infection. 1 The liver hosts not only the large phagocytic Kupffer cell population, dendritic cells, liver sinusoidal endothelial cells and hepatic stellate cells but also rapidly activated innate cells such as NK, iNKT and MAIT populations. 92 The innate reactivity of MAIT cells to both MR1-presented bacterially derived metabolites of riboflavin and proinflammatory cytokines including IL-12, IL-18 and type I IFN indicates that MAIT cells are well placed to have an important role in the first line of defense as part of the liver firewall. With little data published to date, we are only just beginning to understand their significance within this complex immunological organ.
MAIT cells are highly enriched in the human liver, representing 20-50% of intrahepatic T cells, compared with the gut. 2,36 This is in keeping with their homing receptor expression profile: although MAIT cells express the gut-homing integrin-α4β7, 7 they express lower levels of the gut-homing chemokine receptor CCR9 compared with other T cells. 2,37 Instead, MAIT cells express receptors that allow them to home to the liver, such as CXCR6 and CCR6, which bind CXCL16 and CCL20, respectively; these chemokines are constitutively expressed in the liver. 93,94 This liver-homing phenotype is conserved in mice, as murine MAIT cells also express CXCR6, 21 similar to the murine hepatic iNKT cell population. 95 CCL20 is also upregulated in the inflamed liver and drives CCR6 + T-cell localization to the biliary epithelium. 96 Indeed, recent work by Jeffery et al. 37 described Vα7.2 + and Vα7.2 + CD161 + cells as residing predominantly around bile ducts within the portal tracts in both healthy and diseased livers. Importantly, bacterially loaded biliary epithelial cells (BECs) were able to activate MAIT cells in an MR1-dependent manner, suggesting a mechanism by which MAIT cells may defend the biliary mucosa against ascending infection from the gut. In the inflamed liver, MAIT cells may be further recruited to the sinusoids through their expression of CXCR3, LFA-1 and VLA-4, 37 as IFNγ-inducible CXCR3 ligands, intercellular adhesion molecule-1 and vascular cell adhesion molecule-1, have all been shown to mediate recruitment of lymphocytes during inflammation. 97 The distribution of intrahepatic MAIT cells is likely critical to understanding their function within the liver.
Three features of MAIT cells in the context of liver immunosurveillance are important to note. First, although intrahepatic MAIT cells are very similar to their blood counterparts at the transcriptional level, liver MAIT cells are in a highly activated state, with almost all expressing the activation marker CD69, as well as HLA-DR and CD38, 36,37 poised to respond to incoming antigen from the gut. Second, intrahepatic MAIT cells, along with CD56 bright NK cells, are the main source of IFNγ after TLR8 stimulation of liver-derived mononuclear cells, mediated by their ability to respond to IL-12 and IL-18 from monocytes. 11 The striking sensitivity of intrasinusoidal cells in this study to the TLR8 agonist ssRNA40, compared with other TLR agonists, suggests that intrahepatic cells are highly reactive to viral and phagocytosed bacterial RNA, 98 and that MAIT cells are an important effector population in the liver. Third, MAIT cells are the predominant IL-17 producers among intrahepatic T cells (~65% of IL-17 + T cells) following phorbol 12-myristate 13-acetate/ionomycin stimulation. 36 As IL-17 targets multiple cell types in the liver, including Kupffer cells and BECs, to produce proinflammatory cytokines and chemokines, 99 MAIT cells may be important regulators of hepatic inflammation and fibrosis.
Interestingly, however, liver MAIT cells in the steady state are unable to produce IL-17 upon TCR stimulation. 36 Indeed, MAIT cells from the liver appear less skewed towards type 17 functions (IL-17 and IL-22 production) compared with those of the mucosa. For example, MAIT cells of the fetal liver failed to produce IL-22 on MR1-dependent stimulation, in contrast to those of the small intestine. 30 Similarly, mucosal MAIT cells derived from the female genital tract were able to produce IL-17 and IL-22 on bacterial stimulation. 39 The presence of commensal bacteria at mucosal surfaces may drive type 17 skewing of mucosal MAIT cells through IL-1β production by resident macrophages. 100 In the steady state, MAIT cells seem to require IL-7 licensing before acquiring the ability to secrete IL-17 in response to TCR stimulation, potentially by increasing their sensitivity to TCR-mediated signals, 36,57 but liver MAIT cells may become skewed similarly to mucosal MAIT cells during episodes of infection.
Taken together, these studies suggest that hepatic MAIT cells are highly activated within the liver and likely have a defensive role against a range of extra- and intracellular bacteria, fungi and viruses through their abundant and rapid production of IFNγ and IL-17 (Figure 3).
MAIT CELLS AND LIVER DISEASE
Inflammatory liver diseases
In the most comprehensive study to date of MAIT cells in liver disease, Jeffery et al. 37 described the distribution, function and phenotype of hepatic MAIT cells from healthy controls and explants from patients with acute non-A, non-B hepatitis, as well as end-stage chronic liver diseases including autoimmune hepatitis, primary biliary cholangitis, primary sclerosing cholangitis, alcoholic liver disease and non-alcoholic steatohepatitis (NASH). 37 Overall, a reduction in MAIT cells was seen in patient blood and livers compared with controls, in agreement with another study of end-stage liver disease. 11 The similar distribution of Vα7.2 + cells and CD161 + Vα7.2 + cells around the bile ducts in both controls and the chronic diseases studied suggests that the presence of these cells in this location is more likely physiological than pathological (or related to end-stage liver disease). Although the specificity of these cells needs to be confirmed, peribiliary MAIT cells may provide defense against ascending infection via the biliary system, as discussed above. However, as ligation of CD40 on BECs leads to their Fas-dependent apoptosis, 101 upregulation of CD40L on MAIT cells in response to bacteria-derived MR1 ligands presented by BECs suggests that MAIT cells could potentially drive bile duct damage, contributing to pathogenesis in biliary disease.
Hepatitis C infection
Two recent papers have clearly demonstrated that circulating MAIT cells in chronic HCV patients are reduced in frequency. 72,73 This supports previous observations that blood CD161 ++ CD8 T cells, the majority of which are MAIT cells in adults, 2 are significantly reduced in patients with chronic HCV. 44 Whether this represents blood-to-tissue translocation or activation-induced cell death, as has been suggested in HIV infection, 33 has not been addressed. Ex vivo analysis of MAIT cells from patients with chronic HCV showed that activation markers such as GrB, CD38 and CD69 were upregulated, 72,73 and MAIT cells could be activated upon in vitro coculture with HCV-exposed antigen-presenting cells. 72 Additionally, type I IFNs, known to have an important role in viral control, were shown to induce MAIT cell production of IFNγ in combination with IL-12 or IL-18, and when antigen-presenting cells infected with HCV were cocultured with a vaccinia virus-derived soluble type I IFN receptor (B18R), MAIT cell activation was inhibited. Interestingly, in an HCV sofosbuvir treatment trial, patients in the arm receiving pegylated IFN, in addition to sofosbuvir and ribavirin, had a higher sustained virologic response rate, as well as activated circulating MAIT cells, compared with the other treatment arms. Whether this indicates a direct role for MAIT cells in HCV control and clearance or whether they are simple bystanders requires further investigation.
Non-alcoholic fatty liver disease
Non-alcoholic fatty liver disease, affecting up to 40% of western adult populations, 102,103 encompasses a spectrum of disorders ranging from isolated, benign hepatic steatosis to progressive NASH. Closely linked to the metabolic syndrome, non-alcoholic fatty liver disease has a prevalence of 70% among type 2 diabetic patients and 90% among the morbidly obese. 104 The pathogenesis of NASH is much debated, 104 and is likely the result of a number of complex factors including insulin resistance and disrupted lipid metabolism; lipotoxicity and hepatocyte death; altered intestinal barrier function, gut dysbiosis and bacterial overgrowth; altered systemic adipokines/cytokines; and host genetics. A direct role of MAIT cells in NASH has not been studied, but the altered gut barrier function, dysbiosis of the microbiome and bacterial overgrowth described as part of the metabolic syndrome will be an important area to explore. However, a recent study of MAIT cells in obese and type 2 diabetic patients demonstrated a marked reduction in circulating MAIT cells in both patient groups compared with controls. 45 In this study, the residual circulating population expressed the activation markers CD69 and CD25, with enhanced functionality and proinflammatory cytokine production upon mitogen stimulation. Furthermore, MAIT cells were enriched in the adipose tissues compared with the peripheral blood of obese patients, where they were potent and predominant producers of IL-17, with up to 90% of adipose tissue MAIT cells capable of secreting IL-17. Circulating MAIT cell numbers were restored at 3 months following bariatric surgery, and normal function returned 6 months after surgery. Increased IL-17 from peripheral blood MAIT cells has also been observed in obese children. 54 Changes to the gut microbiome and intestinal permeability that occur in obesity and type 2 diabetes may lead to the accumulation of MAIT-activating microbial products within adipose tissue and account for the activation state and enhanced function of MAIT cells observed. Indeed, IL-7, known to license MAIT cell function, 36,57 is oversecreted by stromal vascular cells from the omental adipose tissue of obese patients. 105 This may explain the enhanced functionality of MAIT cells in this setting, where they are likely important drivers of inflammation and thus of the pathogenic chain of insulin resistance, lipolysis, high circulating free fatty acids and hepatic fat accumulation in the development of non-alcoholic fatty liver disease/NASH. However, the onset of diabetes was delayed in non-obese diabetic Vα19i transgenic mice compared with non-transgenic mice, suggesting that MAIT cells may also have a suppressive role that is altered in disease. 106

CONCLUSION
MAIT cells are emerging as significant players in the human immune system where they represent a major lymphocyte population; this is most obvious in the liver, where their dominance, indeed even their presence, has only become evident in the past few years. Our understanding of the distinct biology of MAIT cells is rapidly increasing as the field widens and appears to include not only responsiveness to bacteria but also to inflammatory and viral signals. Their enrichment within the liver is striking and there is much work to be carried out in understanding the physiologic role of MAIT cells, as guardians of the biliary mucosa, as monitors of sinusoidal hygiene and as part of the liver's defensive firewall.
Loss of MAIT cells in fibrotic liver disease, as has been indicated, may lead to weakening of this firewall and increased susceptibility to systemic infections-it may be possible to address this mechanistically in emerging mouse models. Their role in the pathogenesis of liver disease also needs to be defined-clearly they are present at the site of inflammation, but to what extent they are protagonists, protectors or just innocent bystanders again may only be solved by analysis using appropriate mechanistic in vivo models. Even with these, given the specific and chronic nature of some of the infectious/inflammatory processes, further clinical correlative and mechanistic studies are needed.
Overall, given their distinctive functions and surface phenotype, MAIT cells represent attractive therapeutic targets to modulate host defense and inflammation in the liver and other organs, and are certainly excellent markers of responses to biologic therapies. Both avenues, therapeutic and diagnostic, are ripe for exploitation in the future.
|
2018-04-03T01:12:39.957Z
|
2016-08-01T00:00:00.000
|
{
"year": 2016,
"sha1": "9a9090cbb8628336df92bc185f018d6e6c22935d",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1038/cti.2016.51",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "71e8439fd8d3b9d86f217ad81fd2a4ea50d8084d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
14757596
|
pes2o/s2orc
|
v3-fos-license
|
A light stop and its consequences at the Tevatron and LEPII
An interesting prediction of a string-inspired {\em one-parameter} $SU(5)\times U(1)$ supergravity model, is the fact that the lightest member ($\tilde t_1$) of the top-squark doublet $(\tilde t_1,\tilde t_2)$, may be substantially lighter than the top quark. This sparticle ($\tilde t_1$) may be readily pair-produced at the Tevatron and, if $m_{\tilde t_1}\lsim130\GeV$, even be observed at the end of Run IB. Top-squark production may also be an important source of sought-for top-quark signatures in the dilepton and $\ell$+jets channels. Therefore, a re-analysis of the top data sample in the presence of a possibly light top-squark appears necessary before definitive statements concerning the discovery of the top quark can be made. Such a light top-squark is linked with a light supersymmetric spectrum, which can certainly be searched for at the Tevatron through trilepton and squark-gluino searches, and at LEPII through direct $\tilde t_1$ pair-production (for $m_{\tilde t_1}\lsim100\GeV$) and via chargino and Higgs-boson searches.
The CDF collaboration has recently announced "evidence" for the existence of the top quark with mass $m_t = 174 \pm 17$ GeV [1]. There exists also plenty of indirect evidence for the top quark from precise electroweak measurements at LEP [2], when contrasted with the corresponding theoretical calculations [3]. In the analysis leading to the possible discovery of the top quark, the Monte Carlo simulations that are compared with the data assume the validity of the Standard Model, with no other processes beyond it contributing to the sought-for signal. In this note we would like to point out that, in the context of supersymmetric models, the pair-production of the lightest top-squark ("stop") may lead to experimental signatures very similar to those of the pair-production of top quarks. This fact by itself is not new, since it is well known that one can always arbitrarily adjust the parameters of the Minimal Supersymmetric Standard Model (MSSM) to have a light top-squark [4,5,6,7]. However, in the context of the minimal SU(5) supergravity model [8], i.e., the simplest model underlying the MSSM, the constraints from the proton lifetime [9] force all the squarks to be heavier than the top quark. On the other hand, a light top-squark may be the natural consequence of a one-parameter string-inspired SU(5) × U(1) supergravity model [10], with the dilaton field being the dominant source of supersymmetry breaking [11], and the electroweak-size Higgs mixing parameter µ obtained naturally from supergravity-induced contributions [11,12,10].
Our model [10] is a special case of a generic supergravity model with universal soft supersymmetry breaking, which is described in terms of four parameters: $m_{1/2}, m_0, A, \tan\beta$. In the "special dilaton" scenario one has [11]
$$m_0 = \tfrac{1}{\sqrt{3}}\, m_{1/2}, \qquad A = -m_{1/2}, \qquad B = 2m_0,$$
where $B$ is the soft-supersymmetry-breaking parameter (at the unification scale) associated with $\mu$. These conditions determine all but one parameter, taken here to be $m_{1/2} \propto m_{\chi^\pm_1} \propto m_{\tilde g}$. The requirement of radiative electroweak symmetry breaking, 1 which determines $\mu$ up to a sign, can only be satisfied here for $\mu < 0$, in light of the last condition $B = 2m_0$. Moreover, this condition determines $\tan\beta$ as a function of $m_{1/2}$; one finds that $\tan\beta$ must be small: $\tan\beta \approx 1.4$, with little dependence on $m_{1/2}$ [10]. In what follows we take $m_t = 162$ GeV, i.e., the central value of the world-average fit to $m_t$ ($m_t = 162 \pm 9$ GeV [14]). (Details of the following analysis will appear elsewhere [15].) For our present purposes, the main result, i.e., a light top-squark, is a consequence of the small value of $\tan\beta$. Indeed, the lightest top-squark mass is given by
$$m^2_{\tilde t_1} = \tfrac{1}{2}\left(m^2_{\tilde t_L} + m^2_{\tilde t_R}\right) - \sqrt{\tfrac{1}{4}\left(m^2_{\tilde t_L} - m^2_{\tilde t_R}\right)^2 + m_t^2\left(A_t - \mu\cot\beta\right)^2},$$
where $m^2_{\tilde t_{L,R}}$ are the running top-squark masses. In the present case there is a large cancellation between the first term $\tfrac{1}{2}(m^2_{\tilde t_L} + m^2_{\tilde t_R})$ and the last term in the square root, which leads to light top-squark masses. We find $m_{\tilde t_1} \gtrsim 67$ GeV (cf. the LEP limit $m_{\tilde t_1} > 45$ GeV [16]). (This result has a strong $\tan\beta$ dependence, e.g., $m_{\tilde t_1} \gtrsim 90\,(120)$ GeV for $\tan\beta \approx 1.5\,(2.0)$, but here $\tan\beta$ is fixed and cannot be varied at will.) We also find $m_{\tilde q} \approx m_{\tilde g} \gtrsim 260$ GeV, where $m_{\tilde q}$ is the average first- or second-generation squark mass. In Fig. 1 we present a collection of spectra plots versus the lightest chargino mass ($m_{\chi^\pm_1}$) for the lighter supersymmetric particles. We note in passing that this model's prediction is in very good agreement with the present experimental results [17]. Also, the relic density of the lightest neutralino satisfies $\Omega_\chi h_0^2 \lesssim 0.85$, which is in natural agreement with cosmological observations and includes the possibility of a Universe with a cosmological constant [18].

Table 1: Cross sections at the Tevatron (in pb) for $p\bar p \to \tilde t_1\bar{\tilde t}_1 X$ [5] and $p\bar p \to t\bar t X$ [19]. All masses in GeV.
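To make the size of this cancellation concrete, the following Python sketch diagonalizes a generic 2×2 stop mass-squared matrix. It is only an illustration: the numerical inputs are assumed example values chosen to show the effect, not parameters of this model, and the off-diagonal entry is written as $m_t(A_t - \mu\cot\beta)$, which grows at small $\tan\beta$.

# Illustrative sketch (not the paper's spectrum code): eigenvalues of a
# generic 2x2 stop mass-squared matrix. All numerical inputs are assumed
# example values, not predictions of the model discussed in the text.
import numpy as np

def stop_masses(m_tL, m_tR, m_t, A_t, mu, tan_beta):
    """Return the two stop masses (GeV) from the 2x2 mass-squared matrix."""
    off_diag = m_t * (A_t - mu / tan_beta)   # m_t (A_t - mu cot beta)
    M2 = np.array([[m_tL**2, off_diag],
                   [off_diag, m_tR**2]])
    eigenvalues = np.linalg.eigvalsh(M2)     # ascending order
    return np.sqrt(np.abs(eigenvalues))

# Small tan(beta) makes cot(beta) large, so the off-diagonal term is sizable
# and the lighter eigenvalue is pushed well below the diagonal entries.
m1, m2 = stop_masses(m_tL=200.0, m_tR=190.0, m_t=162.0,
                     A_t=-200.0, mu=-75.0, tan_beta=1.4)
print(f"m_stop1 ~ {m1:.0f} GeV, m_stop2 ~ {m2:.0f} GeV")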
The cross section for pair-production of the lightest top-squarks, $\sigma(\tilde t_1\bar{\tilde t}_1)$, depends solely on $m_{\tilde t_1}$ [5] and is given for a sampling of values in Table 1. Since in this model $m_{\tilde t_1} > m_{\chi^\pm_1} + m_b$ (see Fig. 1), one gets $B(\tilde t_1 \to b\chi^\pm_1) = 1$. The charginos then decay leptonically or hadronically with branching fractions shown in Fig. 2, i.e., $B(\chi^\pm_1 \to \ell\nu_\ell\chi^0_1) \approx 0.4$ ($\ell = e + \mu$) for $m_{\chi^\pm_1} \lesssim 65$ GeV $\leftrightarrow m_{\tilde t_1} \lesssim 100$ GeV. The most promising signature for light top-squark detection is through the dilepton mode [7]. The number of stop-dileptons follows from $\sigma(\tilde t_1\bar{\tilde t}_1)$ and these branching fractions (Eq. 4). The dilepton mode is also paramount in top-quark searches (Eq. 5); here we have taken $B(t \to bW) = 1$, although one should account for the $t \to \tilde t_1\chi^0_1$ mode, which is open also for light top-squarks. Moreover, $p\bar p \to t\bar t X \to \tilde t_1\bar{\tilde t}_1\chi^0_1\chi^0_1 X$ is another source of top-squarks, although much suppressed because of the small branching fraction: we find $B(t \to \tilde t_1\chi^0_1) \lesssim 10\%$. Combining Eqs. (4) and (5) gives the ratio of stop-dilepton to top-dilepton events (Eq. 6). This ratio should open the eyes of experimenters, because the number of observed dilepton events depends strongly on the experimental biases. This ratio (6) indicates that for sufficiently light top-squarks there may be a significant number of dilepton events of non-top-quark origin, if the experimental acceptances are tuned accordingly.
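A back-of-the-envelope version of this counting argument is sketched below in Python. Only the ~0.4 chargino leptonic branching fraction is taken from the text; the cross sections and the W leptonic branching fraction B(W → ℓν) ≈ 2/9 (ℓ = e, µ) are placeholder assumptions rather than Table 1 entries, and acceptance differences between the softer stop-dileptons and the top-dileptons are ignored.

# Rough sketch of the stop-to-top dilepton counting argument. The chargino
# leptonic branching fraction (~0.4) comes from the text; cross sections and
# B(W -> l nu) ~ 2/9 are illustrative assumptions. Acceptances are ignored.

def dilepton_ratio(sigma_stop_pb, sigma_top_pb,
                   br_chargino_lep=0.4, br_w_lep=2.0 / 9.0):
    """Ratio of stop-pair to top-pair dilepton event counts.

    Each stop decays to b + chargino with branching fraction 1 in this model,
    so a stop-dilepton event needs both charginos to decay leptonically;
    likewise both W bosons must decay leptonically for a top-dilepton event.
    """
    n_stop = sigma_stop_pb * br_chargino_lep ** 2
    n_top = sigma_top_pb * br_w_lep ** 2
    return n_stop / n_top

# Placeholder cross sections (pb), chosen only to illustrate the scaling:
print(round(dilepton_ratio(sigma_stop_pb=30.0, sigma_top_pb=5.0), 1))  # ~19.4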
Perhaps the most important distinction between top-dileptons and stop-dileptons is their $p_T$ distribution: the (harder) top-dileptons come from the two-body decay of the W boson, whereas the (softer) stop-dileptons come from the (usually) three-body decay of the chargino, with masses (in this case) below $m_W$. Therefore, the top-dilepton data sample is essentially distinct from the stop-dilepton sample. Such a distinction is well quantified by the "bigness" parameter $B = |p_T(\ell^+)| + |p_T(\ell'^-)| + |\not\!E_T|$ of Ref. [7]. Another distinction between the two sources of dileptons is the b-jets, which are probably softer in the decay $\tilde t_1 \to b\chi^\pm_1$ (for light top-squarks) compared with those from $t \to bW$. The above discussion suggests that the CDF top-dilepton data sample should be carefully studied to see if softer stop-dileptons are present: an important new lower bound on the top-squark mass may follow. However, detailed simulations of the stop-dilepton signal and a re-analysis of the top-dilepton data are required before drawing more concrete conclusions.
We also note that an analogous ratio to Eq. (6) applies in the ℓ+jets channel, with the relevant branching fractions shown in Fig. 2. In this case, the top-squark ℓ+jets events still have softer b-jets and a softer lepton.
The light top-squarks which may be relevant for the top-quark and top-squark searches at the Tevatron (i.e., $m_{\tilde t_1} \lesssim 100$ GeV) entail a light supersymmetric spectrum, as can be seen from Fig. 1. For $m_{\tilde t_1} < 100$ GeV, we get the corresponding upper limits shown in Table 2. We now explore the possibilities for direct detection of these light sparticles at the Tevatron and LEPII.
• Tevatron. One could detect these light sparticles in three ways:
-The trilepton signal in $p\bar p \to \chi^\pm_1\chi^0_2 X$ is the most promising avenue for detection of weakly interacting sparticles at the Tevatron [20,21], as evidenced in the context of SU(5) × U(1) supergravity in Ref. [22]. The leptonic chargino and neutralino branching fractions are given in Fig. 2, and the trilepton rate at the 1.8 TeV Tevatron is given in Fig. 3, where we indicate by a dashed line the present CDF upper limit [23] and by a dotted line the expected reach by the end of Run IB (with ∼ 100 pb$^{-1}$ of accumulated data). This reach corresponds to $m_{\chi^\pm_1} \lesssim 80$ GeV $\leftrightarrow m_{\tilde t_1} \lesssim 130$ GeV. Therefore, the light sector of this model, that relevant to top-quark searches, could be definitively falsified in the next few months.
-Direct $\tilde t_1$ pair production at the Tevatron has been shown recently [7] to be sensitive to $m_{\tilde t_1} \lesssim 100$ GeV by the end of Run IB, provided the chargino leptonic branching fraction is taken to be ∼ 20%. For the chargino branching fractions in our model (∼ 40%, see Fig. 2) the reach through the stop-dilepton channel is extended to $m_{\tilde t_1} \lesssim 130$ GeV.
-The standard squark-gluino searches may also be able to reach up to $m_{\tilde q} \approx m_{\tilde g} \approx 310$ GeV with the Run IB data.
• LEPII.
-The lightest Higgs boson should be easily detectable through the standard process $e^+e^- \to Z^* \to Zh$. For $m_h \lesssim 70$ GeV (from Table 2), we find a cross section in excess of 0.92 pb, which is much larger than the expected sensitivity limit of 0.2 pb for a 3σ effect at 500 pb$^{-1}$ [25]. In fact, a 0.92 pb signal corresponds to a significance of 6.2σ at 100 pb$^{-1}$ (see the scaling check after this list).
-The light top-squark may also be produced directly via $e^+e^- \to \tilde t_1\bar{\tilde t}_1$ through s-channel γ, Z exchange, and be probed up to $m_{\tilde t_1} \approx \sqrt{s}/2 \approx 100$ GeV.
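The quoted significances can be checked with the simple assumption that the significance of a fixed-cross-section signal scales as σ√L, anchored to the 0.2 pb, 3σ, 500 pb⁻¹ benchmark given above; the scaling law itself is an assumption of this sketch, not a statement taken from the paper.

# Quick consistency check of the quoted Higgs-signal significances, assuming
# significance scales as (signal cross section) * sqrt(luminosity). The
# 0.2 pb / 3 sigma / 500 pb^-1 benchmark is taken from the text.
import math

def significance(cross_section_pb, luminosity_pb,
                 ref_sigma_pb=0.2, ref_significance=3.0, ref_lumi_pb=500.0):
    scale = (cross_section_pb / ref_sigma_pb) * math.sqrt(luminosity_pb / ref_lumi_pb)
    return ref_significance * scale

print(round(significance(0.92, 100.0), 1))  # ~6.2, matching the quoted value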
In summary, we have discussed the prediction of a light top-squark in a string-inspired one-parameter SU(5) × U(1) supergravity model. This sparticle ($\tilde t_1$) may be readily pair-produced at the Tevatron and, if $m_{\tilde t_1} \lesssim 130$ GeV, even be observed with the present run accumulated data. Top-squark production may also be an important source of sought-for top-quark signatures in the dilepton and ℓ+jets channels. Therefore, a re-analysis of the top data sample in the presence of a possibly light top-squark appears necessary before definitive statements concerning the discovery of the top quark can be made. Another prediction of this model is a direct link between the light top-squark and a light supersymmetric spectrum, which can certainly be searched for at the Tevatron through trilepton and squark-gluino searches, and at LEPII through direct $\tilde t_1$ pair-production (for $m_{\tilde t_1} \lesssim 100$ GeV) and via chargino and Higgs-boson searches.
|
2014-10-01T00:00:00.000Z
|
1994-06-08T00:00:00.000
|
{
"year": 1994,
"sha1": "496ce407f60e3e7e9d44af25f8f14cd22b95599f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9406254",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "038b2320eb502121276a13d427f1b8109bd36971",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
239300061
|
pes2o/s2orc
|
v3-fos-license
|
The effect of Uchinsk reservoir coastal landscape components on the migration of pollutant elements
Landscape components were studied on the territory of the Volga water management system by the method of landscape-geochemical profiling. The content of chemical elements in the landscape components was determined, sources of pollution were identified, and the pathways of migration of elements were studied. The studies of the ecological state of the coastal zone of the Uchinsk reservoir were carried out in summer of 2020. Plant communities were studied, and soil sections were described. It was established that forest biogeocenoses of the water protection zone of the reservoir do not have noticeable anthropogenic disturbances, there are windblow areas, lesions by the bark beetle-typographer characteristic of Moscow region. Due to the grass and shrub cover in forest biocenoses, subsurface runoff predominates, forest soils serve as an effective barrier that retards the migration of heavy metals. When analyzing the distribution of chemical elements in the soil profile and bottom sediments, their different origin was revealed; for example, phosphorus and potassium enter with waters of the Volga spring, heavy metals enter with the soil runoff, while the accumulation of manganese increases with an increase in soil acidity.
Introduction
The Uchinsk reservoir of the Moscow Canal is a source of water supply for Moscow, and the quality of water supplied depends on the state of its water area and adjacent catchment areas [1,2,3]. Various pollutants can enter with surface runoff from the areas adjacent to the reservoir and as a result of migration of pollutants entering the catchment area with the transfer of air masses. Pollutants can enter from the Volga spring, the runoff of small rivers flowing into the reservoir [4,5].
The water protection zone of the Uchinsk reservoir belongs to specially protected areas; there are no industrial and agricultural activities in this area; the only local source of anthropogenic impact is low-rise residential buildings in the water protection zone and the recreational use of the territory adjacent to the reservoir. Thus, pollutants can enter with waters of the Volga source, surface runoff, primarily coming with the transfer of air masses from distant sources, and as a result of the recreational use of the territory adjacent to the reservoir.
In order to reveal the possible influence of biogeocenoses of the water protection zones on the migration paths and the distribution of chemical elements, topoecological studies were carried out in the water protection zone. Three typical biocenoses of the water protection zone were identified: meadows, mixed birch-aspen forests, and spruce forests; within typical plots, topoecological profiles were established for geobotanical descriptions, descriptions of soil sections and subsequent chemical analysis of soil samples. To assess the migration paths of chemical elements, samples of bottom sediments were taken in the water area adjacent to the topoecological profiles, and chemical analysis was carried out on the moss Pleurozium schreberi (Schreber's moss), which is used as an indicator of pollution input with the transfer of air masses.
An important role in reducing soil and environmental pollution belongs to geochemical barriers [3,4], where there is a sharp change in the intensity of migration of chemical elements and their accumulation. Various geochemical barriers with a thickness of several millimeters and centimeters are formed in soils, although the thickness of the soil is calculated in centimeters and meters. The acidic or slightly acidic conditions of the soil environment in the humus horizon are replaced by neutral or alkaline conditions in the illuvial horizon. An increase in the pH values is associated with acidic leaching of cations from the upper horizon and their partial accumulation in the lower one. This means that the presence of an acidic environment in the upper soil layer causes the appearance of an alkaline one in the lower one. Cations concentrated in the lower soil layers take part in the formation of the alkaline medium. For example, as a result of biogenic accumulation, the upper soil horizons of the landscapes of forest catenas are enriched in organic matter, nitrogen, calcium, phosphorus, sulfur, and often zinc, copper, nickel, cobalt, and tin. Silicon accumulates in the eluvial horizon and iron and aluminum are removed in the form of chelates. At the border of horizons A2 and B, there are several barriers: alkaline, sorption and even oxygen, where iron, aluminum, manganese, copper, vanadium, nickel, zinc, cobalt are deposited. Iron and manganese accumulate in the B1 and B2 horizons of sod-podzolic soils.
In the subordinate landscapes of hydromorphic catenas, plants receive nutrients from the atmosphere and with the solid and liquid runoff from the eluvial landscapes located above. In acidic taiga landscapes, leaching and removal of mobile elements from soils occur; they partially accumulate on gley and sorption geochemical barriers of subordinate landscapes (floodplains, terraces, depressions, marginal zones of bogs). The components of their landscapes are enriched with calcium, phosphorus, potassium, iron, manganese, cobalt, boron, etc., which contributes to the quality of products. Mobile calcium compounds cause an alkaline (pH up to 8 in the upper soil horizon) reaction of the medium and saturation of the absorbed complex with calcium and magnesium. A characteristic element of these landscapes is iron, which plants accumulate and it actively migrates in waters and soils, concentrating in their upper horizons on the oxygen barrier. Oxidizing and alkaline conditions promote the precipitation of iron, and acidic and reducing conditions contribute to the dissolution of its compounds. In comparison with sandy and sandy loam soils, loamy soil-forming rocks sorb chemical elements from waters more efficiently.
Changes in the redox conditions also contribute to the formation of geochemical barriers. In water bodies, flooded soils and bottom sediments that accumulate elements serve as geochemical adsorption barriers. Under the influence of flooding, significant changes occur in the chemical and physicochemical properties of soils. The reaction of water and salt extracts of soddy-podzolic soils flooded by the reservoir shifts towards the more neutral in comparison with similar soils of the reservoir bank. There is a decrease in hydrolytic acidity in flooded soils and an increase in the amount of absorbed bases and base saturation in horizon A to 62-82%. A decrease in the content of exchangeable aluminum (to 0.40-0.32 mg per 100 g of soil) compared with that in coastal soils (2.47 mg per 100 g of soil) was revealed. An increase in mobile forms of phosphorus, potassium and ammonium nitrogen was found during flooding, with a decrease in humus reserves in the upper soil horizons due to its biogeochemical consumption under anaerobic conditions. In the gley medium, the transition of ferric iron to the bivalent form is activated. There is a tendency for the amount of ferrous iron in the upper soil horizons to increase (up to 209 mg FeO per 100 g of soil).
Materials and methods
Studies of the biocenoses were carried out by the method of landscape-geochemical profiling, in which the profiles are laid in the direction of the flow of substances, from the autonomous positions to the subordinate ones, and are accompanied by sampling of landscape components [1,4]. Three topoecological profiles were laid from the water's edge to 100 m. The location of each profile was determined by the most typical vegetation. Test plots were located at distances of 0-5, 15-25 and 75-100 m from the water's edge. Geobotanical descriptions and descriptions of soil sections were made at the sites; average samples were taken along the soil horizons for the subsequent chemical analysis. For a comparative analysis of the chemical composition and identification of the migration paths of chemical elements, samples of bottom sediments were taken in the water area of the reservoir opposite the topoecological profiles; samples of Schreber's moss were taken as an indicator capable of accumulating heavy metals entering with air masses [1]. The chemical analysis of samples was carried out in the laboratory of the Faculty of Geography, Moscow State University, using generally accepted methods. To take samples of bottom sediments in the water area, the SOI (State Oceanographic Institute) tube was used.
Results and Discussion
When examining the coastal territory of the Uchinsk reservoir, three profiles were laid. The herbage of the first profile is dominated by awnless brome and wheatgrass, with an admixture of meadow geranium, meadow cornflower and St. John's wort (Hypericum perforatum). The second profile was laid in a mixed birch-aspen-spruce forest with an insignificant admixture of pine and pedunculate oak on sod-podzolic medium loamy soil. The first layer of the stand is represented by birch and aspen, having a height of 15-20 meters and a trunk diameter of 15-35 cm. The second layer consists of birch and aspen, with a slight admixture of spruce; the height of the trees is 5-12 meters and the trunk diameter is 5-10 cm. The undergrowth is also represented by birch and aspen with an admixture of spruce with a height of 0.5-2.5 m.
In the herbage, awnless brome prevails; there are also several species of sedges, willow-weed and stinging nettle. The third profile was laid in the sorrel spruce forest. It was revealed that willow stands with a width of 3-5 m predominate directly at the water's edge on the third profile, and reed thickets prevail in shallow water. Spruce forests develop along the willows with a gradual increase in relief, represented by several associations with a predominance of sorrel spruce forest and local areas of raspberry spruce forest. The soils along the entire length of the profile are podzolic, medium and heavily loamy. The first layer of the stand consists of Norway spruce with a height of 30-35 m and a trunk diameter of 40-100 cm. The second layer of the stand also consists of spruce; the height of the trees in this layer is 15-25 m and the trunk diameter is 15-30 cm. The undergrowth is poorly developed and represented mainly by mountain ash, 0.50-2.50 m high. Warty euonymus and forest honeysuckle are occasionally found. Male fern (Dryopteris filix-mas) reaches a height of 40-50 cm. Of the rarer plants, there is spring combo, confined to the edges and windows in the stand.
There are also Kashubian buttercup, yellow archangel, European wild ginger and other species. Regeneration is mainly of aspen and spruce. The soil surface is covered with dry leaves and needles. There is no wetting of the soil surface. Mosses (Schreber's moss) are found only at the base of tree trunks and on dead wood. On the test plots, soil sections were made, which revealed the presence of soddy-podzolic soils on the profile with meadow vegetation and the profile laid in a mixed birch-aspen forest; typical podzolic soils were determined on the third profile laid in a spruce forest. A chemical analysis of soil samples was carried out along the horizons for all three profiles from the water's edge to a distance of 75-100 meters. Data on the chemical composition of soil samples, bottom sediments and moss (Schreber's moss) taken on the third profile are presented in Table 1. Table 1 demonstrates the dependence of the distribution of chemical elements on the type of biogeocenosis characteristic of each of the profiles and on the distance from the water's edge within the profile. The acidity index pH in soil samples depends on the type of biogeocenosis and on the soil horizon, which is due to the patterns of migration of elements and biogeochemical processes in the soil. The type of plant community and its species composition have the greatest influence on the pH value. The most acidic soils (pH = 5.4-5.84) are characteristic of the sorrel spruce forest, while soils close to neutral are present on the profile with meadow vegetation.
In the mixed birch-aspen forest (the second profile), the soils are slightly acidic (pH = 6.1-6.4). On the meadow profile, the reaction is neutral (7.0-7.3), which is typical of soddy-podzolic soil. For all three topoecological profiles, regardless of the soil process and soil acidity, the maximum content of P2O5 and K2O is observed in the soil at the water's edge in the zone of development of coastal aquatic vegetation, characterized by periodic flooding with fluctuations in the reservoir level and constant high moisture content. The content of phosphates in bottom sediments is 15-20% less than in the soil at the water's edge, while in soils that are not constantly affected by reservoir water, at a distance of 15 or more meters from the water's edge, the content of phosphorus compounds is 5-7 times lower.
Similar data are also typical for potassium compounds: maximum concentrations are observed in the zone of constant flooding by reservoir water; along the profile, the content of potassium salts decreases by 40-50% and remains constant throughout the profiles, and the content of potassium salts in bottom sediments is 4-5 times less than in soils at the water's edge. The nature of the distribution of phosphorus and potassium salts along all three profiles, characterized by different types of vegetation and different soil processes, is practically the same, which allows us to assume the supply of these elements from sources located upstream of the reservoir. At the water's edge, in the zone of willow and coastal vegetation, there is a geochemical barrier that assimilates these elements [6,7]; phosphorus and potassium compounds are also partially assimilated by aquatic and coastal aquatic vegetation and, when these plants die off, accumulate in bottom sediments.
Comparison of the content of heavy metals in soils of the topoecological profiles with their content in Schreber's moss shows that the cadmium content in the moss is 2 or more times higher than in the soil samples, and the zinc content is 2-4 times higher. Since Schreber's moss receives mineral nutrition mainly with precipitation, it can be assumed that the main source of zinc and cadmium is air masses [8,9].
The content of cadmium remains constant in the soil samples for all profiles, while its content in bottom sediments is three times lower than in the soil samples; this indicates weak binding of cadmium by soils of all types and its constant migration into the reservoir water, with no accumulation of cadmium in bottom sediments. Similar behavior is typical of zinc. Lead and manganese in Schreber's moss are found in concentrations less than or equal to those found in the soil profile in all types of biocenoses. The distribution of lead and manganese in the soil is almost uniform in all topoecological profiles and does not depend on the type of soil or biocenosis, which suggests that their source is local soils; these chemical elements are tightly bound by geochemical barriers and do not migrate with the soil runoff.
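The source attribution above rests on a simple comparison: elements whose concentration in the moss clearly exceeds that in the underlying soil are attributed to atmospheric transfer, while elements at or below soil levels are attributed to local soils. A minimal Python sketch of that comparison is given below; the concentration values are placeholders for illustration, not measurements from Table 1.

# Minimal sketch of the moss-to-soil comparison used to attribute element
# sources. Concentrations are placeholder values (mg/kg), not Table 1 data.
moss_mg_kg = {"Cd": 0.6, "Zn": 120.0, "Pb": 12.0, "Mn": 450.0}
soil_mg_kg = {"Cd": 0.25, "Zn": 40.0, "Pb": 14.0, "Mn": 500.0}

ATMOSPHERIC_THRESHOLD = 2.0  # moss/soil ratio suggestive of airborne input

for element, moss_value in moss_mg_kg.items():
    ratio = moss_value / soil_mg_kg[element]
    source = "air masses" if ratio >= ATMOSPHERIC_THRESHOLD else "local soil"
    print(f"{element}: moss/soil = {ratio:.1f} -> likely source: {source}")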
Vegetation plays an important role in the retention of elements and suspended matter in spring waters and runoffs. The relationship between plants and soils determines the biological cycle. It includes two opposite processes: production and destruction. The production process is the basis for the plant communities and an important driving force in the development of biogeocenoses. This process is important for the circulation of chemical elements in the biosphere, the dynamics of CO2 and for managing the production process in agrocenoses (130-162 days). Agricultural landscapes are characterized by annual alienation with a harvest of a certain amount of elements of mineral nutrition of plants. Alienation of substances in meadow landscapes occurs as a result of hay-making, cattle grazing and varies significantly depending on the height of the grass stand. Plants react differently to soil fertility and supply of nutrients. The reserves of ash elements in the aboveground biomass of cultivated plants are 1.1-5.0 times less than in meadows, which is due to a decrease in the mass of roots of cultivated plants compared to that of meadows.
The analysis of the ash from plant cuttings indicates a change during the growing season in the content of elements in the range from 26.23 (anthropogenic landscapes) to 243.47 (meadow) kg/ha. The maximum concentrations of silicon (up to 77.00 kg/ha), sulfur (up to 10.90 kg/ha), manganese (up to 0.53) and sodium (up to 4.16) were found in the grass stands of meadow landscapes, and those of calcium (up to 63.62 kg/ha), magnesium (up to 10.10) and iron (up to 1.01) in the anthropogenic landscapes. It is necessary to pay attention to the autumn accumulation of silicon and other elements in plants of natural meadow and cultivated landscapes, with a simultaneous decrease in the concentration of potassium due to its outflow into the roots. The maximum of phosphorus and potassium was revealed in the mowing of fescue with timothy grass. Clover accumulates a large amount of potassium (up to 60 kg/ha) in autumn. Magnesium is consumed by plants throughout the growing season until the aging phase. It should be noted that plants serve as biogeochemical barriers in the retention of elements and suspensions from spring waters and effluents. The maximum of elements was revealed in autumn.
Conclusion
Different origins of the chemical elements entering the soil, vegetation, water and bottom sediments of the reservoir were revealed. The biogenic elements phosphorus and potassium enter with the water of the Volga spring and the local runoff of small rivers and are actively bound by soils in the zone of coastal and aquatic vegetation. To a lesser extent, phosphorus and potassium compounds accumulate in bottom sediments formed during the dying off of aquatic vegetation and phytoplankton. With fluctuations in the hydrological regime of the reservoir, there is a likelihood of leaching of the elements accumulated during the dying off of aquatic vegetation and in soils in the immediate vicinity of the water's edge. The regular removal of coastal and aquatic vegetation can reduce the accumulation of nutrients. The study of the distribution of heavy metals over the soil profiles with different types of vegetation did not reveal obvious differences in their content depending on the type of soil, soil acidity or plant community. Metals such as manganese and lead are almost evenly distributed over all the profiles; a comparison of their content in the soil and in Schreber's moss did not reveal any input from external sources. On the contrary, the content of zinc and cadmium in Schreber's moss is 2-4 times higher than in the soils of all three profiles, with their minimum content in bottom sediments. This suggests a constant supply of zinc and cadmium with the transfer of air masses and the absence both of geochemical barriers capable of binding these elements in soils and of their binding by aquatic vegetation and phytoplankton. In the absence of damage to the vegetation cover, there is no surface runoff, and all heavy metals enter the soil profile and are either bound by it, like lead and manganese, or migrate into the reservoir with the subsurface runoff. A possible advantage of forest biocenoses is their ability to reduce wind speed in the surface layer and their large absorption surface, which makes it possible to recommend using forests as biogeochemical barriers.
|
2021-10-22T20:07:13.701Z
|
2021-10-01T00:00:00.000
|
{
"year": 2021,
"sha1": "bbf6bed9929af0bd8fae34fc66a16df24af16dcd",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/867/1/012086",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bbf6bed9929af0bd8fae34fc66a16df24af16dcd",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
245568743
|
pes2o/s2orc
|
v3-fos-license
|
Origin and evolutionary analysis of the SARS-CoV-2 Omicron variant
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has evolved rapidly into new variants throughout the pandemic. The Omicron variant has more than 50 mutations when compared with the original wild-type strain and has been identified globally in numerous countries. In this report, we analyzed the mutational profiles of several variants, including the per-site mutation rate, to determine evolutionary relationships. The Omicron variant was found to have a unique mutation profile when compared with that of other SARS-CoV-2 variants, containing mutations that are rare in clinical samples. Moreover, the presence of five mouse-adapted mutation sites suggests that Omicron may have evolved in a mouse host. Mutations in the Omicron receptor-binding domain (RBD) region, in particular, have potential implications for the ongoing pandemic.
Introduction
The current COVID-19 pandemic is a global human health crisis caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). On 26 November 2021, the World Health Organization (WHO) designated the SARS-CoV-2 variant B.1.1.529, named Omicron, as its fifth variant of concern (VOC). This decision was based on the evidence presented to health officials and researchers that Omicron had numerous mutations with potential implications for the ongoing pandemic. The Omicron variant has now been identified globally, 1 including countries throughout Asia, Africa, Europe, and North America.
The original wild-type SARS-CoV-2 strain likely originated in a bat host. [2][3][4] Initially, pangolins were thought to be the source of spillover to humans, but they may have been infected by other animal species. 5 Since the outbreak of COVID-19, several countries have reported infections of SARS-CoV-2 in animals. Human-to-animal transmission has been observed in pets, farmed animals, and animals held in zoos, in addition to free-ranging wild animals. 6,7 For example, infections under natural conditions have been reported in pet dogs 8 and cats, 9 in farmed mink 10 and ferrets, 11 and in tigers, lions, snow leopards, pumas, and gorillas at zoos. 12 Most diseased animals are hypothesized to have been infected through close contact with COVID-19-positive human patients. However, no compelling evidence currently shows that any domestic animal can readily transmit SARS-CoV-2 to other animals, including humans. Few animal cases have shown the potential for further zoonotic and anthroponotic viral transmission. Nevertheless, infection in domestic and wild animal species has possible implications for public health.
SARS-CoV-2 enters host cells via the interaction of spike-like proteins (S proteins) on the viral surface with the host cell entry receptor angiotensin-converting enzyme 2 (ACE2). 2 Some variants that have mutations in the receptor-binding domain (RBD) region of the S protein are VOCs because they are potentially associated with enhanced transmission, pathogenicity, and/or immune evasion. 13 Although the initial wild-type strain of SARS-CoV-2 does not infect mice, mouse-adapted SARS-CoV-2 strains have been identified. Several mouse-adapted strains have mutations located in the RBD region, enhancing interactive affinities with mouse ACE2 (mACE2) 14 to facilitate efficient viral replication in this host. A mouse-adapted strain at passage 6 (MASCp6), which has an N501Y mutation, was shown to have increased infectivity in the lung during serial passaging in BALB/c mice. 15 Another study showed that three SARS-CoV-2 VOCs, namely B.1.1.7 and two other N501Y-carrying variants, B.1.351 and P.3, can infect mice. 16 In this study, we constructed a phylogenetic tree of all known VOCs and variants of interest (VOIs). The results showed that the Omicron variant was not present on an intermediate evolutionary branch, suggesting that it may have evolved in a non-human host.
Analysis of Omicron mutation data revealed a high number of mutations, that these mutations are concentrated in the S protein (specifically the RBD region), and that Omicron has five mouse-adapted mutation sites. Together, the data suggest that Omicron may have evolved in a mouse host.
Data collection
We downloaded a representative set of SARS-CoV-2 genomes from individuals infected during the COVID-19 pandemic from the GISAID database. 17 The genomes had complete metadata, including patient age and sex and the year and country in which samples were collected. These data were used to test associations between variation in SARS-CoV-2 genomes and available epidemiological metadata.
Mutation analysis
The complete genome of SARS-CoV-2 isolate Wuhan-Hu-1 (NC_045512.2) was used as the reference genome, 18 and mutations in all other samples were compared with this reference isolate. Detected mutations were confirmed with Integrative Genomics Viewer (IGV) and annotated with the SnpEff program. 19
Construction of a phylogenetic tree with full-length genomic sequences
The full-length genomic sequences of VOCs and VOIs used in this analysis included 30 each of the Alpha, Beta, Eta, Iota, Mu, Kappa, Zeta, Theta, Epsilon, and Omicron variants. There were also 28 Gamma, 98 Delta, and 29 Lambda variant genomes included. All 455 genomes were aligned using MAFFT v7.31023. 20 The aligned sequences were converted to the phylip file format with Clustal W, 21 and maximum likelihood (ML) trees were then constructed in RAxML v8.2.12 22 with 100 bootstrap replicates. The time-scaled phylogenetic trees were constructed using NextStrain 23 and Treetime. 24
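For readers who wish to assemble a comparable workflow, the Python sketch below chains the same tools through subprocess calls. It assumes the mafft and raxmlHPC (RAxML v8) executables are installed and on the PATH; the file names are placeholders, and the RAxML options shown (rapid bootstrapping under a GTR+GAMMA model with 100 replicates) are reasonable defaults rather than the exact settings used in this study.

# Sketch of an alignment + maximum-likelihood tree workflow similar to the
# one described above. Tool options and file names are assumptions, not the
# authors' exact commands.
import subprocess

GENOMES = "sarscov2_variants.fasta"   # placeholder: genomes downloaded from GISAID
ALIGNMENT = "aligned.fasta"

# 1. Multiple sequence alignment with MAFFT (automatic strategy selection).
with open(ALIGNMENT, "w") as out:
    subprocess.run(["mafft", "--auto", GENOMES], stdout=out, check=True)

# 2. ML tree with RAxML v8: rapid bootstrap analysis plus best-tree search
#    (-f a), GTR+GAMMA model, 100 replicates, fixed random seeds.
subprocess.run([
    "raxmlHPC", "-f", "a",
    "-s", ALIGNMENT,
    "-n", "variant_tree",
    "-m", "GTRGAMMA",
    "-p", "12345", "-x", "12345",
    "-N", "100",
], check=True)
# Expected outputs include RAxML_bestTree.variant_tree and
# RAxML_bipartitions.variant_tree (bootstrap support on the best tree).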
High number of mutations
We calculated the average number of mutations in the five VOCs circulating globally and found that the Omicron variant has significantly more mutations than any other variant currently in circulation (Table 1). This observation suggests that the environment in which Omicron evolved may differ from other known VOCs that have infected healthy human hosts. The Omicron variant likely evolved in an immunocompromised patient, although it is possible that this variant also evolved in an animal host.
Key mutation positions
The RBD region recognizes ACE2, the host receptor that binds to the viral S protein. 26 Mutations in the RBD region may increase the binding affinity and viral infectivity. Furthermore, most vaccineinduced neutralizing antibodies and antibody treatments target the RBD. The Omicron variant has at least 15 mutations in the RBD region, including mutations at Q493 and Q498 (Fig. 1), which are especially concerning to public health experts. Studies have shown that mutations at these two sites are related to the infectivity of animals. In 2021, the Jin research team showed that strains with the Q493K and Q498H mutations have significantly enhanced affinity toward mACE2. 14 In a study of New York sewer samples published in July 2021, 27 researchers found many variants with the Q493K and Q498Y mutations, which were rare in clinical samples. At that time, only three reported strains of SARS-CoV-2 had the Q498H mutation, and none had the Q498Y mutation. This study showed that by July 2021, the Q498 mutation had accumulated in large numbers of animal hosts living in the sewers of New York, and the authors discussed the possibility of SARS-CoV-2 spreading between non-human animal hosts. A CSIRO study additionally identified seven key mutation sites potentially related to mACE2 binding affinity. In the S protein, these sites are K417, E484, F486, Q493, Q498, P499, and N501. [28][29][30][31] We compared key mutations in 13 mouse-adapted strains with the Omicron variant (Fig. 2). The results showed that the Omicron variant contains mutations at five key sites of viral S protein: K417, E484, Q493, Q498, and N501. Notably, another strain had mutations at the same five sites, the IA-501Y-MA-30 strain, which was obtained from mouse lung samples after 30 passages of the IA-501Y strain. 32 These results suggest that the Omicron variant may have evolved in a mouse host.
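As a simple illustration of this site-level comparison, the Python snippet below checks a list of spike substitutions against the seven key RBD positions named above. The Omicron substitutions listed are an illustrative subset rather than a complete profile; in practice a variant's full mutation list would come from a curated source such as GISAID.

# Illustrative check of spike substitutions against the seven key RBD sites
# named in the text. The mutation list is an example subset, not a complete
# or authoritative Omicron profile.
import re

KEY_RBD_SITES = {417, 484, 486, 493, 498, 499, 501}

omicron_spike_subset = ["K417N", "E484A", "Q493R", "Q498R", "N501Y"]

def mutated_key_sites(substitutions):
    """Return the key RBD positions touched by entries like 'K417N'."""
    positions = set()
    for sub in substitutions:
        match = re.match(r"^[A-Z](\d+)[A-Z]$", sub)
        if match:
            positions.add(int(match.group(1)))
    return sorted(positions & KEY_RBD_SITES)

print(mutated_key_sites(omicron_spike_subset))  # [417, 484, 493, 498, 501]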
Phylogenetic analysis of VOCs and VOIs
Despite a large number of mutations in Omicron, no evidence was found in known public databases to suggest that these mutations slowly accumulated over time. Additionally, phylogenetic trees showed no intermediate branches of evolution, which is a very surprising result. Starting in August 2021, the Delta variant was dominant globally, and until November 2021, 99.6% of all collected specimens causing new infections were identified as Delta (Fig. 3A). If Omicron evolved from a strain of the Delta variant, such as AY.4, AY.23, or AY.46 (the dominant variants in Europe, Asia, and Africa, respectively), they would share a common mutation profile. However, analysis of data from GISAID showed that the Omicron variant differed from each of these strains and did not evolve from the Delta variant (Fig. 3B). The phylogenetic analysis strongly indicates that the Omicron variant forms a monophyletic group with the Gamma variant as a sister group, and the Omicron group has an extremely long branch length. The time-scaled phylogenetic tree shows that the Omicron and Gamma lineages likely diverged in the first half of 2020. This supports the hypothesis that Omicron may have evolved in a non-human animal species. After accumulating many mutations in the animal host, the altered coronavirus was transmitted back to humans by reverse zoonosis.
The emergence of the Omicron variant indicates that surveillance of SARS-CoV-2 variants should be conducted in economically underdeveloped countries and in the environment to avoid the continuous emergence of new variants of unknown origin. Understanding the threat posed by the Omicron variant will require researchers to gather and analyze a great deal more data in a brief period. Determining the origin of Omicron requires surveillance of animals, especially rodents, because they may have come into contact with humans carrying a strain of the virus with adaptive mutations. Future work should focus on SARS-CoV-2 variants isolated from other wild animals to investigate the evolutionary trajectories and biological properties of these variants both in vitro and in vivo. If Omicron is determined to have been derived from animals, the implications of it circulating among non-human hosts will pose new challenges in the prevention and control of the epidemic.
Gulf of Mexico Seafood Harvesters: Part 3. Potential Occupational Risk Reduction Measures
1. Background: Fishers face many occupational hazards that include a high risk of fatal and nonfatal injuries and a variety of adverse health effects. Our purpose is to provide an overview of potential countermeasures for the control of hazards that threaten the health and safety of Gulf of Mexico (GoM) fish harvesters. 2. Method: Search terms were used to identify relevant literature; two previous reviews regarding injuries and health risk factors also inform this review. 3. Results: Countermeasures against these hazards include winch guards, lifting devices, job redesign, non-slip decks and vessel stability controls, as well as using personal flotation devices, wearing gloves and high-friction footwear, increasing sleep time and using vessel motion to assist lifting. Knowledge about secondary prevention (such as rescue, first aid and making mayday calls) is also important, as is outreach that incorporates other fishers' experiences with innovations. Fatigue and lack of sleep contribute to vessel disasters and injury-related errors. 4. Conclusions: The prevention of injuries and diseases among GoM fishers depends on a combination of focusing on work processes, instilling a broader safety culture, applying engineering controls, identifying and sharing fisher innovations, promoting fall overboard prevention and protection and providing culture-based incentives, training and narrative outreach.
Introduction
This review is the third in a series of descriptive literature reviews and it addresses the prevention of a range of potential injuries and adverse health effects among US Gulf of Mexico (GoM) fish harvesters. The first review described injury risk factors among fish harvesters that included falls overboard (FOB), slippery and inherently unstable work platforms, working alone, not wearing personal flotation devices (PFDs), vessel casualties, lack of response time to crises, lack of vessel repair and maintenance, harsh weather, gear and line contact including winch entanglements, fatigue, lack of a safety culture and onshore hazards, including boarding and debarking the vessel. It also addressed fatigue as a risk factor for human error regarding vessel disasters and injuries [1]. In our second review, we identified risk factors for adverse health effects other than traumatic injuries, which included musculoskeletal disorders, poisoning from aquatic animals, dermatitis, cancer, ocular disorders, hearing loss and respiratory problems as well as diving hazards [2]. For a geographical perspective of the GoM, Figure 1 shows a map of the GoM and locations of fish harvester fatalities over the period 2010-2014, which represents the most severe malady facing these workers. A vessel disaster is defined as a sinking, capsizing, grounding, fire, or other event that forces the crew to abandon ship [3].
The purpose of this review is to identify and describe potential countermeasures for the control of hazards that threaten the occupational health and safety of US Gulf of Mexico (GoM) fish harvesters based on an overview of the literature. Literature from outside the GoM informs countermeasures that are potential interventions for protection of fish harvesters in the Gulf. The audience of this analysis is researchers and interveners for the safety of fish harvesters.
Method and Materials
This normative literature review identifies studies that inform our broader study, Occupational Health and Safety of Gulf Seafood Workers Project, based on three health priorities: (1) severity of the individual case (e.g., fatality); (2) frequency of the condition (e.g., back pain that is frequently reported) and (3) preventability, (e.g., PFD use to prevent drowning). We consulted several databases including google scholar and PubMed as well as the authors' files. Search terms reflected occupational safety and health in the fishery sector that included in combination (string search); safety, injuries, health, fishing, aquaculture, engineering, drowning, Gulf of Mexico and several specific terms such as ocular hazards and solar radiation. We also consulted a NIOSH bibliography (n = 156) [4] and Proceedings of the International Fishing Industry Safety and Health Conferences of 2000 and 2003 (n = 43 and 40, respectively) [5,6], Recent articles were also identified and provided by contacts through ResearchGate. Selection criteria were (1) direct or indirect relevance to GoM fisher safety and health; (2) recent investigations that build on earlier investigations and (3) research designs that can inform our study. Exclusion criteria were studies that did not address interventions regarding occupational safety and health of fish harvesters. Typically, we chose literature that was published since 2002 with few exceptions. We selected 52 articles for review as listed in the Appendix A.
Results
Our results start with studies from within the GoM followed by safety culture and how it fits into the fishing culture. Next, work processes are explained as a way to understand where the risks occur in order to eliminate or abate hazards. Then the following topics are discussed: potential engineering controls for preventing falls overboard (FOB), machine and line entanglements, vessel instability, musculoskeletal hazards, falls on vessels and injuries and illnesses in mariculture. Following engineering controls, personal protective equipment, how incentives have been used, the need for training and the importance of using stories through narrative outreach are addressed.
Gulf of Mexico Interventions
"Culture refers to a way of life of a group of people: the patterns of behavior among this group, their beliefs, values and symbols they accept, generally without thinking of them." -Ann K. Carruth et al., 2010 [7] Fisher safety and health research in the Texas and Louisiana areas of the GoM was started by Levin et al. in 2004, an area where many Vietnamese shrimpers work. The study trained 535 fishers how to signal their presence to approaching vessels by horn and how to radio mayday calls in English and used the Vietnamese language to overcome a barrier in communicating safety messages to immigrant fishers. In addition to overcoming the language barrier, hands-on training used a simulated vessel bridge and targeted captains for the training [8].
In 2005, a 30-question survey was administered to 133 fishers along the Texas coast, 59% of whom primarily fished for shrimp. Subjects included Asians (n = 76; 57%) and Hispanics (n = 35; 26%). Familiarity with safety equipment varied from a high of 91% of respondents knowing how to don a personal flotation device (PFD) to 62% of respondents knowing how to activate an electronic emergency beacon, 50% knowing how to deploy a survival craft, 47% having an awareness of machine hazards and 44% knowing chemical and preservative hazards. Of the respondents, 81% believed alcohol consumption at sea can cause accidents. Fifty-nine percent of respondents rated the work as very safe to neutral; 70% of the shrimpers held this belief despite a high fatality rate. Shrimpers (59%) had a higher participation rate than non-shrimpers (18%) in annual training. They received language-appropriate training and instruction focused on risk factor awareness, especially about the use of survival crafts and machine-related hazards [9].
A focus group-based study of safety culture among Vietnamese GoM fishers was conducted that included adult family members of the fishers, specifically women, who treat illnesses and injuries of family members and uniquely express cultural memories of ethnic groups. Key results about the safety culture on vessels included (1) essential leadership skills of the captain; (2) deck hands learning through on-the-job training and (3) the inclusion of family members as stakeholders [7].
Based on focus group feedback, interventions were launched in the GoM in 2008 to address fatigue, machinery/winch and hearing hazards. Asian fishers predominated in the pretest survey (n = 217) in 2008 and post-test survey (n = 206) in 2012. Interventions included training and safety messaging with signs and written information. For hearing protection, signs were attached to engine room doors with protective ear muffs placed on door hangers as shown in Figure 2. Signs regarding machinery were placed near the vessel winches and a checklist was posted on signs reminding the fishers to rest to counter fatigue. The intent for action or adoption in three groups was high at 82%, 95% and 95%, respectively [10]. Winch entrapment was identified as a major hazard aboard shrimping vessels in the GoM [11]. Countermeasures to this hazard for the GoM were informed by a previous study in Alaska in which winch entanglements on purse-seine vessels related to the deployment and retrieval of fishing gear. The off-switch was out of reach of an entangled fisher; thus, the research engineers designed an emergency stop button on top of the electrohydraulic-powered winch to stop the power instantaneously. Rather than using hydraulic power, many winches used on fishing vessels in the GoM are powered mechanically by a power take-off system connected directly to the vessel engine [12]. Thus, research engineers designed and fabricated guards, and three vessel owners in the GoM agreed to test them. One design is shown in Figure 3 [13]. The designs underwent their third test to improve endurance and acceptance in 2017 [14]. In another study, 300 GoM fishers were trained on ergonomics for deck work where there is limited physical space, unstable work platforms and a wide variety of tasks and vessel layouts. The fishers (98%) reported the highest value of the training was learning stretching exercises and 35% reported a high value for learning about lifting guidelines and body mechanics. The subjects suggested that the motion of the vessel could be used to assist in lifting or lowering loads [15,16].
An additional study addressed PFD acceptance with 9 captains and 24 deckhands on Texas and Louisiana vessels. Each subject was asked to wear three different PFDs for a minimum of three hours while shrimping. Afterwards, the respondents reported the suspender as the least constrictive to movement as compared to the inflatable belt and ski belt [17]. More detail regarding PFDs will be addressed under Section 3.5.
Safety Culture
" . . . safety can be defined as that state for which the risks are judged to be acceptable." -Fred A. Manuele and Bruce W. Main, 2002 [18] A gap analysis of fisher health concluded that cultural issues of total health and how the working conditions affect personal behavior risk, such as smoking and alcohol consumption (e.g., risk coping). More broadly, safety culture as part of fishing culture depends on acceptable risk by the fishers and more specifically, depends on the vessel captains [19]. Table 1 lists several studies and findings related to safety and fishing culture. Table 1. A list of studies and findings related to the safety culture in the fishing culture, n = 9.
Study: Acheson interviewed 12 British Columbian fishers in Canada from a variety of vessel types who had experience with an injury or participated in an incident [20].
Findings: Fishers downgrade risk by not reporting work injuries. Denial and trivialization of danger is part of the occupational culture. The author suggests fisher participation in which they discuss their own experience through the lens of risk assessment and their own injuries and near misses.

Study: Törner and Eklöf investigated fishers' attitudes towards risks and attempted to enhance a sense of risk control in their work. They began with a questionnaire to 92 subjects followed by two discussion groups of two and three crews that met six times over eight months [21].
Findings: The fishers described 43 incidents and reported that technology was at fault in 34 (79%) of the incidents, deficient work organization was at fault in 5 (12%) cases and 4 (9%) were caused by an individual's actions. Common factors included weather (n = 16; 37%), deficient routines (n = 10; 23%) and faulty equipment (n = 8; 19%). One conclusion was that slipping was a risk factor, which was underreported since it became a norm.

Study: Marshall et al. focused on 215 commercial fishers in small and medium scale operations in North Carolina. Most fishers were self-employed or working in small operations [22].
Findings: A principal concern was making a "day-to-day living" and a general theme included a reluctance to seek medical care and concerns regarding regulations. They likely lacked job-related health insurance or workers' compensation coverage.

Study: Bezerra et al. conducted a study of 19 Brazilian fishers that included an evaluation of clinical, histological and immunological effects of chronic exposure to ultraviolet radiation from the sun [23].
Findings: Fishers stay with their occupation longer than other workers and they start as adolescents based on family tradition or contacts with other fishers. They concluded that fishers learn through experience whereas workers in other occupations require prior training and thus start work later in life.

Study: Bye and Lamvick surveyed 487 employees and interviewed 45 subjects from fishing and other vessels in Norway to determine the relationship of risk perception to countermeasures [24].
Findings: Small fishing vessels, 35 feet or less in length, were found to be the most dangerous, but 71% of injuries were never reported. Several subjects stated that their work was dangerous but they rarely discussed this danger among crew members. Defects on the vessel could be easily fixed but remained unrepaired. Fatalism ruled: 13% did not know how to swim, 41% said they rarely donned a PFD and 34% rarely used a safety line on board. Donning a PFD could lead to ridicule by the crew and fishers on passing vessels.

Study: Brooks examined safety management and culture in 2002 in a small Australian fishing town [25].
Findings: The simple shared mission was to catch quality fish and sell them to buyers and the skipper's word was "law." Family continuity over time in the business and independence was important, risk taking was enculturated and experience was more important than training. A new deckhand typically experienced 10 near misses on their first day on the job, which dropped dramatically within a week. They assumed that humanity is subservient to the powerful sea. The workers grasped the majority of physical risks well but failed to design, plan, or implement risk reduction measures. The challenge is to build a learning culture with actions based on risk perceptions.

Study: Hávold investigated safety measures among 9 Norwegian fishers based on initial conditions [26].
Findings: These conditions included the fishery type, the type and size of vessel and the fisher's time-based experience. He identified the experience of injuries or near-misses as having a learning effect.

Study: Christiansen and Hovmand reported on interventions to protect workers in the Nordic nations [27].
Findings: Nordic fishers rated written materials, guidelines and information low and preferred direct dialogue, which appears to have a greater impact on safety. They stated that an effective safety culture requires the skipper to take the lead.

Study: Murray et al. interviewed 55 fishers from small communities in Newfoundland, Canada about factors related to injuries and fatalism preceded by high threat perception and anxiety [28].
Work Process Risk Factors
Job hazard analysis is a work process approach for identifying and controlling hazards in aquaculture [29]. In addition, this analysis technique was used for commercial crab fishing to (1) describe steps in a job task; (2) evaluate the risk of an injury in each step; and (3) recommend alternatives to reduce risk for each step in the task [30].
A similar method has been developed for coding of fishing operations and work processes, such as in seining and trawling. An example of work tasks in sequence is: preparing gear and nets (the second most dangerous task) → setting gear → hauling gear (the most dangerous task) → handling fish [31]. This system was adapted to non-fatal injuries for the Alaska commercial fishing industry based on United States Coast Guard (USCG) data. Across the different vessel types-gillnet, seine, longline, pot/trap and trawl-the principal work processes were traffic on board, shooting/setting the gear, hauling the gear, handling the gear, processing the catch, handling frozen fish, working in the engine room, mooring, off-duty and other (e.g., firefighting, maintenance, repair) [32].
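A job hazard analysis built on this work-process coding can be represented as a simple table of steps, hazards and candidate controls, as sketched below. The steps follow the sequence cited above; the hazard and control entries are illustrative examples drawn loosely from the countermeasures discussed in this review, not from a specific published analysis.

```python
# Sketch of a job hazard analysis (JHA) table for a generic trawl operation.
# Steps follow the sequence cited in the text; hazards and controls are illustrative.
from dataclasses import dataclass

@dataclass
class JhaRow:
    step: str
    hazard: str
    control: str

jha = [
    JhaRow("Preparing gear and nets", "Line entanglement, strains from lifting", "Stow lines in boxes; use lifting aids"),
    JhaRow("Setting gear", "Caught in running line, fall overboard", "Stand clear of running gear; wear a PFD"),
    JhaRow("Hauling gear", "Winch entanglement, struck-by loads", "Winch guard and emergency stop; keep deck clear"),
    JhaRow("Handling fish", "Slips, repetitive lifting, bites/stings", "Non-slip deck and footwear; gloves; job rotation"),
]

for row in jha:
    print(f"{row.step:24s} | hazard: {row.hazard:40s} | control: {row.control}")
```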
Engineering Control
"The most effective means of preventing and controlling occupational injuries, illnesses and fatalities is to "design out" hazards and hazardous exposures from the workplace." -Paul A. Schulte et al., 2008 [33] Design engineers follow a hierarchy of controls by first eliminating the hazard; failing this, next they guard against the hazard and last warning against the hazard [34]. Elimination of or guarding against the hazard are considered passive controls that do not rely on the potential victim's action, whereas warning-the "be careful" approach-is considered as an active control that depends on human action [35]. Passive controls include eliminating or guarding against hazards.
In 2014, a study reported decreases in commercial fisher fatality rates in northern climates from the 1980s in all countries except Great Britain. There, factors for the increased fatality rate included lone fishers, the high risk of pot fishing, financial pressures and unseaworthy vessels [36]. The retirement of older vessels for new ones likely contributed to the overall decline in fatalities elsewhere [37].
Fall Overboard and Entanglement Control
Potential engineering and administrative controls for FOBs are shown in the left column of Table 2 [38]. The right column of the table shows potential protections against and escape options from line entanglements [39].
Vessel Stability
Vessel instability is a recurring theme and fishers have reported that many vessels do not have roll stabilization tanks. Roll stabilization tanks reduce the lateral motion with a tank of water and a central baffle to counter sideways motion of the vessel [27]. Another control against roll is a pair of paravanes drawn through water along the sides of the vessel, preferably from the end of the extended outrigger on a double rig trawl vessel (as shown in Figure 4) [40,41].
NIOSH engineers also addressed the risk of flooding of the vessel with three designs. One design monitors hatch and door openings to ensure they are closed in rough seas. Another design, a Flood-Rate Monitor, tracks the status of water entry into holds and activates pumps to expel excess water. The captain and crew can monitor the flood level risk from a wheelhouse display. The sloshing effect of water in the hold of the vessel can affect the stability (roll), but debris can clog float monitoring devices, rendering them ineffective in monitoring changing water levels in the holds. To counter this problem, engineers designed a "Slack Tank Monitor" to measure underwater pressure differences and the sloshing effect in the hold [12].
Ergonomics
Regarding musculoskeletal disorders, engineering interventions focused primarily on reducing stress from lifting. One engineering design alleviated ergonomic stress among crabbers in North Carolina [42]. The investigators developed two engineering controls, a crab pot ramp and a crab pot boom, as shown in Figure 5. In tasks such as stacking traps in crab harvesting, icing fish in gillnetting and sorting fish in trawling, ergonomic interventions need to make the best use of the limited space and adjust the work processes to require as little unnecessary lifting as possible [43]. In addition, Nordic fishers have expressed concerns about fatigue, lifting heavy boxes with fish and a need for a box lifter [27].
Fatigue is recognized as a combination of environmental and personal risk factors; one factor is high workloads with frequent manual handling and use of heavy equipment in a wet, slippery and dynamic environment. These conditions affect the fisher's posture, performance, workload, rest and sleep. A study examined self-reported fatigue among 270 Danish fishers (28% response rate) based on five dimensions: general fatigue (tiredness), physical fatigue (the physical experience of feeling tired), reduced activity (reduced motivation to start an activity), reduced motivation and mental fatigue (e.g., reduced concentration). General fatigue was the dominant factor and was closely associated with physical workload. Subjects reported mental fatigue as the least affected of the five factors. Differences were found across vessel types and days at sea and were related to the role of automation in reducing fatigue. Trawlers, which were larger vessels with more automation, scored lower for fatigue, while seiners scored the highest in general fatigue among the different vessel types [44]. Thus, ergonomics that reduces workload through automation is a factor in reduced general fatigue.
Fall Prevention
A number of studies have identified slips on the deck as the initial event leading to an injury. Slip resistance of deck surfaces and footwear under different conditions has been evaluated. Even though non-slip coatings are put on steel deck floors, they can be compromised on fishing vessel decks by water, ice, snow, oils and fish/shellfish body tissues and fluids. These compromises may be compounded by sea conditions on deck angles and motions and further affected by the fisher's footwear and dynamics of gait and movement. Three combinations provided good protection against falls: (1) traditional footwear and non-skid paint, (2) grit-impregnated boots and (3) foamed polyurethane paint [45].
Near Shore Mariculture Innovations
Clam mariculture involves both offshore and onshore work and the onshore processes may present unique indoor risks. The type of hazards and interventions present in clam mariculture are listed in Table 3. These observations show the value of practical active and passive solutions by clam farmers to protect workers against hazards [46].
Personal Protective Equipment: Personal Flotation Devices and Eye UV Protection
"For example, one fisherman told stories about how some of his colleagues from other vessels, in a joking manner, were shouting 'dog' to him when he chose to make use of a safety line during periods with rough sea." -Rolf Bye and Gunnar M. Lamvick, 2007 [24] In 2009, Turner et al. evaluated PFD acceptance in Great Britain (including among fishers) with an aim to increase PFD usage by recreational boaters. The highest PFD use was among kayakers. Using stages of change theory (hazard appraisal → decision making → initiation → adherence), they concluded that the majority of individuals who fail to wear PFDs are in the hazard appraisal and decision-making stages. Therefore, the challenge in encouraging PFD use is to advance people into the adherence stage. The authors recommended that campaigns to increase PFD use provide personal experience of the initial phases of cold water immersion [47]. The four stages leading to death after a FOB in cold Alaska waters are as follows: cold shock → swimming failure → hypothermia → post-recue collapse [48]. Regarding swimming ability, Bye and Lamvick referred to a 2004 study in which 13% of fishers did not know how to swim [24]. In British Columbia, Brooks et al. found swimming failure was the underlying cause of 5.4% of 128 drownings of commercial fishers over the 1976-2002 period [49], and Marshall et al. found from self-reports of 215 North Carolina fishers in their study that 3.3% did not know how to swim and 31.2% had little to adequate swimming ability [22].
Alaskan fishers reported the following complaints when wearing a PFD: PFD weight, bulk, chafing, constriction and interference with work; however, some designs were considered comfortable and fishers on different vessel types had different preferences for the six PFD designs evaluated [50,51]. In another study in Alaska, 146 subjects wore different types of PFDs while fishing over 30 days. The subjects rated the inflatable rubberized neoprene suspender the highest on the 30th day but preferences varied, likely due to differences in fishing equipment, on-deck activities and weather conditions. The authors suggested PFD use must be tailored to the specific vessel types [52]. Table 4 shows the subjects who represent the different gear types, including Dungeness crabbers, their problems with PFD use and their preferences for particular types after working with the different designs. Preferences narrowed to four types of PFDs, two of a foam type and two of an inflatable type, as shown in Figure 6 [52].
• Regatta raingear with built-in foam flotation: light weight, does not limit motion, does not interfere with work, easy to keep clean, non-chafing
• Stearns Inflatable Suspenders: does not snag on gear, easy to keep clean, does not limit motion
• Mustang Inflatable Suspenders (MD3188): does not snag on gear, easy to put on, comfortable to wear (not tight or bulky), does not constrict motion or interfere with work, non-chafing
• Stearns Foam Work Vest: non-chafing, light weight, comfortable
Source: National Institute for Occupational Safety and Health, Publication Number 2013-131 [52].
In a Massachusetts study, fishing captains and sternmen reported rarely donning PFDs. Reasons for PFD disuse were discomfort, risk acceptance, social stigma and anticipating a lingering death when a FOB is unnoticed. Discomfort factors included bulkiness, interference with work and a higher likelihood of entanglement. The social stigma factor centered on a norm of regular disuse, that when PFDs are used they appear "strange," and coworker perceptions that PFD use indicates inexperience. Recommendations included improving PFD designs and trials for worker satisfaction, making designs more socially acceptable, increasing confidence of a rescue in the event of a FOB through the use of personal locator beacons and using an Emergency Position Indicating Radio Beacon (EPIRB) to alert the USCG of a sinking vessel and the need and location for a rescue [53].
A similar study is underway in the Northeast lobster fishery to redesign the PFD for acceptance by dealing with discomfort, interference with work and potential safety hazards (e.g., entanglements). In January 2017, 80 lobster fishers started wearing eight different PFD models to provide feedback on design changes [54].
Ultraviolet (UV) radiation from the sun has been associated with cataract formation and other eye disorders. In 1988, a study was conducted of the use of wide brimmed hats for reducing UV exposure. They measured UV exposure to 81 subjects with and without wide-brimmed hats over a 6-month period, totaling 178 samples. The subjects harvested oysters and crabs. Brimmed hats provided significant eye protection from UV exposure across all subjects as shown in Figure 7 [55].
Incentives
A 1971 USCG study concluded that a regulated fishing safety program could prevent 72% of fisher fatalities but it determined that such a program would create an unsustainable financial hardship on the industry [56]. An Individual Fishing Quota (IFQ) system is based on the amount of fish caught over a longer period of time than a fishing derby (seasonal closures) and therefore, reduces the amount of time spent fishing when fatigued or fishing when conditions are dangerous. Under the IFQ system, fishers were free to choose not to fish during dangerous high winds and simply catch their quota later when conditions are safer.
An IFQ system based on the amount of fish mitigates these risks. As an example, under an IFQ system on the US West Coast, fishing on high wind days was reduced by 79%. High winds correlate with higher waves and stormy conditions. The USCG saw an 83% decrease in incident rate in this fishery after the IFQ system was used rather than season compression [57]. A study examined data in the GoM where a shift was made from the seasonal closures to IFQs for two reef fish, grouper and red snapper. Data from before and after the switch for these fisheries showed a 19% drop in the fatality rate after the change and associated this decrease with taking less risky trips during adverse weather conditions [58].
Törner et al. launched a program for promoting safety measures on 101 Swedish fishing vessels. They used a cost-benefit approach for the business, the victim and co-workers as an incentive for the adoption of safety or ergonomic measures. Inspections identified 36 deficiency types totaling 1427 specific deficiencies. In a 6-month follow-up, 160 (11%) safety measures had been adopted to mitigate the deficiencies [59].
Training
Training has been used to develop a culture of safety among Massachusetts fishers. A recent tragedy, community leadership and boat captain encouragement facilitated training attendance. About 700 fishers attended two-week hands-on training courses [60].
A safety training program in Alaska was found to be effective when training rates were compared between decedents and survivors of vessel disasters. However, survival skills eroded over time; thus refresher courses are needed, and monthly drills are a way to maintain survival skills [61]. Safety training received a boost from the National Oceanic and Atmospheric Administration's (NOAA) fishery observer program. NOAA observers monitor catches onboard the vessels while fishing occurs [62]. Observers are not members of the crew but are, nonetheless, at risk of dangers on the vessel and undergo survival training. In addition, the USCG inspects the vessels and conducts monthly emergency drills for compliance with regulations for the safety of the observer and, indirectly, for the crew [63].
Narrative Outreach
"This approach basically argues that human beings are natural storytellers and that the exchange of stories permeates our everyday social interaction" -Michael Murray, 2000 [63] Incorporating cultural aspects of commercial fishing into the educational process was found to authenticate the learning experience. The need is for training that is focused, exploratory and interactive with less emphasis on the instructor providing knowledge. The aim is to connect new knowledge with lived experience facilitated by the use of narrative [64].
Interviews of more than 40 Newfoundland fishers in Canada produced narratives about their perceptions of incident causes, possible means to prevent injuries, descriptions of resulting fatalities and/or other adverse events and their perceptions of potential safety measures. The fishers were eager to recount their tales, and the authors concluded that narratives, rather than didactic instruction, provided a teaching opportunity for action [63].
Discussion
Our two previous reviews identified studies that examined safety and health risks faced by fish harvesters. This third review evaluates countermeasures. In combination, these three reviews focus on three criteria: (1) frequency of the condition; (2) severity in the individual case and (3) preventability. Below, we discuss the limitations and implications of this review.
Limitations
This is a narrative review and not a systematic review. Thus, the findings are generalized and descriptive and do not necessarily reflect statistically significant results or countermeasures. Another limitation is the recognized serious underreporting of nonfatal injuries and illnesses across all vessel types, but more so among small, sometimes single-operator, fishing vessels. In addition, much of the literature we cited is from outside of the GoM; nevertheless, elements of that literature inform our research on GoM fishers through cross-regional patterns and investigation and intervention methods. This review included non-refereed journal articles, but the government documents and conference proceedings reviewed provide important information regarding innovative countermeasures that may lead to studies of effectiveness and the eventual reduction in occupational injuries and diseases [65].
The Implications for Countermeasures
GoM fishers face many risks to health and safety that may result in adverse outcomes: traumatic injuries, FOB and onboard injuries, back disorders, traumatic eye injuries and ocular exposures to ultraviolet radiation, hearing loss, breathing and skin effects and fatigue-related injuries [9]. The fishing community accepts risk as natural and normal, conditioned from early in life as a birthright, and views fishing as an economic necessity. Independence from authority is valued. Experience drives safe behavior and the captain controls the vessel. The challenge is to change risk acceptance into less risky personal behaviors through education and investment. A summary of countermeasures is listed in Table 5. FOB-related fatalities are the most severe and a frequent safety problem in the GoM, followed in prevalence by vessel disasters. The literature addressed countermeasures to deaths from FOB by encouraging fishers to wear PFDs, to avoid working alone on the deck, to wear personal locator beacons that signal the helmsman of an overboard fisher, to use ladders for boarding the vessel and to improve swimming ability. Personal locator beacons need to be evaluated for killing the motor on a lone fisher's skiff in the event of a FOB, alerting the helmsman of a FOB on vessels and alerting the USCG of a FOB. Much research was conducted about acceptability of PFDs in the Northern States, but regarding the GoM, a factor is the discomfort of wearing PFDs in warm climates. Another barrier is the stigma of wearing PFDs as not manly, but the rough and tough look of the NIOSH Alaska cohorts wearing them could help to overcome this perception. This stigma is also affected by the anticipation of a slow death after a FOB when lost at sea, which personal locator beacons could help avoid.
Clutter on deck poses trip hazards. Installing and using boxes to hold the ropes and lines could abate this hazard. Guards have been designed to cover winches to prevent entanglements, but their use awaits acceptance. Regarding vessel foundering, the attachment point on the vessel can be changed so that a snagged net does not pull the end of the vessel underwater and sink it. Roll stabilizers such as paravanes can counter wave motion, as do roll stabilization tanks. Innovations in monitors for open hatches and for the forces of sloshing water in the holds can help counter foundering of the vessel.
Environmental factors, including the need for major wave prediction [66] to prevent weather-related vessel disasters [67,68], deserve more attention in the literature. Wind has been identified as a predictor for dangerous weather. A shift in some fisheries from time-limited fishing "derbies," held no matter the weather, to an IFQ system made it possible to pick fair weather for fishing and not overload the vessel because of time demands [69]. While there are other risk factors, such as noise, work exhaustion and sea-sickness, that are associated with fatigue, the principal intervention for fatigue reduction is to increase the quality and quantity of sleep [70].
The Haddon matrix offers a way to bring the environment into the picture. It is a qualitative epidemiology approach to organize the information useful for attacking the safety and health hazards associated with GoM commercial fishing and aquaculture hazards [71,72]. Table 6 shows a simulation of the matrix [73,74].
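To illustrate how the Haddon matrix organizes this information, the sketch below fills a small phase-by-factor grid with example entries for a fall-overboard scenario. The cell contents are illustrative and are not reproduced from Table 6.

```python
# Sketch of a Haddon matrix for a fall-overboard (FOB) scenario.
# Rows are injury phases, columns are factors; cell entries are illustrative examples.
phases = ["Pre-event", "Event", "Post-event"]
factors = ["Host (fisher)", "Agent/vehicle (vessel, gear)", "Environment"]

matrix = {
    ("Pre-event", "Host (fisher)"): "PFD worn, adequate sleep, swimming ability",
    ("Pre-event", "Agent/vehicle (vessel, gear)"): "Rails, clear deck, winch guards",
    ("Pre-event", "Environment"): "Avoid fishing in high winds (IFQ flexibility)",
    ("Event", "Host (fisher)"): "Personal locator beacon activates",
    ("Event", "Agent/vehicle (vessel, gear)"): "Engine kill switch for a lone operator",
    ("Event", "Environment"): "Cold-shock risk in cold water",
    ("Post-event", "Host (fisher)"): "First aid, treatment for hypothermia",
    ("Post-event", "Agent/vehicle (vessel, gear)"): "Reboarding ladder, mayday call, EPIRB",
    ("Post-event", "Environment"): "USCG rescue response time",
}

for phase in phases:
    for factor in factors:
        print(f"{phase:10s} | {factor:30s} | {matrix[(phase, factor)]}")
```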
Nonfatal Injuries
The deck is the dominant location of nonfatal injuries, and across much of the literature, sources of the most serious injuries are falls, machinery (e.g., winch entanglements) and struck-by injuries. Line entanglement is another serious hazard on the deck. Some interventions to control nonfatal injuries are similar to those for fatality prevention, such as FOB protection and placing guards on the winches or permanent guides for the line. Keeping the deck clear to avoid trips, dealing with slippery surfaces, wearing appropriate footwear and keeping potential moving and falling objects secure are also important preventive measures.
Musculoskeletal Disorder Hazards
Many studies around the world address the problem of musculoskeletal disorders associated with fishing. The universal problem is back pain, especially in the lumbar region. Other body parts affected include the shoulders, hands and wrists, and elbows and knees. Finding ways to reduce heavy lifting is critical. Researchers suggested capitalizing on the inventiveness of fishers by teaching them the principles of ergonomics to enable them to innovate practical ways to make work easier and more efficient. Engineering designs or fisher innovations can provide alternatives to manual lifting.
Biological Hazards
Avoiding animal bites and stings is important as is being able to identify and avoid contact with marine venomous animals such as stingrays, the Portuguese man-of-war and certain jellyfish and marine snails. Immediate treatment for some toxic stings is critical, and informing emergency responders and crew members of anti-venom administration, anaphylaxis treatment and surgery as well as long-term treatment is important [75].
Skin and Eye Hazards
Skin hazards include some of the biological hazards, such as allergic contact dermatitis. Irritant contact dermatitis is another skin condition. Gloves are a good protection against both hazards. Skin cancer is an obvious hazard but, surprisingly, people who are exposed continually over a long period develop protective barriers to cancer called photoprotection [23]. However, sunburn needs to be avoided (e.g., among new deckhands) with sunscreen and clothing, and the lips and eyes need protection, such as with wide-brimmed hats.
Hearing Hazards
Noise from vessel engines is a uniform hazard for all motorized fishing vessels around the world. Traditional hearing protectors should be worn near running engines. Dampening some of the engine vibration may be helpful and noise dampening needs to be considered in the sleeping quarters [76]. Carbon monoxide exposure should be avoided. Engine maintenance may reduce noise levels.
Breathing Hazards
The principal respiratory hazard to fishers was thought to be protein aerosols emitted from fish, but work on deck provides protective natural ventilation. Gutting fish (and perhaps deheading shrimp) off the deck also poses a respiratory hazard. Other respiratory hazards include exposure to engine exhaust, seafood preservation materials and anti-fouling applications on the vessels. Divers' air sources need to be kept clear of vessel exhaust [77].
Work Process Risk Factors
As described earlier, the general steps in the fishing process are (1) preparing gear and nets; (2) setting gear; (3) hauling gear and (4) handling fish [36]. Building a flowchart of sub-processes aids in identifying high-hazard tasks. In effect, it is a logic diagram and illustrates that much of the work is repetitive (e.g., hauling pots or removing shrimp heads). Safety professionals use job hazard analysis to define the sequence of tasks in a job, the hazards associated with each task and possible interventions to prevent or protect against each hazard. For ergonomists, this approach is critical for understanding the combination of stresses on a worker.
Safety Culture
Research into the fishing culture indicates that, in stark contrast with the injury data, fishers believe their work is safe, even though they are aware of the hazards. Moreover, they learn by experience. Several studies made the point that the captain's word is law at sea and the captain is an opportune change agent for a commitment to and competency for safety. An intervention strategy suggested by some researchers was to use a narrative rather than a didactic approach for training that emphasizes the use of stories to teach. In some studies, the fishers valued hands-on training over classroom training and some fishers expressed a desire for using pictures for training. More attention is needed to consider the members of the whole family in any learning agenda like the Fishing Partnership Program in Massachusetts [78].
Conclusions
Fatalities are the most severe injury experienced by the GoM fish harvester community, and the most frequent causes of these fatalities are FOB and vessel disasters. The principal intervention for FOB is donning PFDs, and for vessel disasters it is warning other vessels of potential collisions. Behavioral barriers diminish PFD use, but current studies aim to resolve these barriers. Personal locator beacons to alert the helmsman and the USCG or to stop the engine have promise to protect lives but need evaluation, perhaps by NIOSH, as personal protective equipment. No study of nonfatal injury severity and frequency has been conducted in the GoM, but the USCG documents cases by the Abbreviated Injury Scale, which could provide insight into both measures. These injuries include slip and fall, entanglement and struck-by injuries, which occur predominantly while working with gear on the deck. Interventions include providing winch guards and non-slip surfaces and footwear.
The literature was rife with cases of musculoskeletal disorders, and the interventions included ergonomics-based job redesign and training. Another frequent health effect was injury and poisoning from animal contact. Gloves protect against injury and contact dermatitis. Wearing ear plugs or muffs provides protection against the most prevalent source of hazardous noise, the vessel engine. Protecting the eyes, lips and skin from sun rays is important to prevent sunburn and cancer. Natural ventilation on deck protects fishers from respiratory hazards; in enclosed areas, exposure to engine exhaust and protein aerosols poses hazards to the lungs, thus good ventilation is important.
Fishers accept risks as normal. Thus, the challenge is to improve fisher competence and commitment to improve the safety culture, particularly among vessel captains. Involvement of fishers in identifying hazards and interventions to improve the safety culture is critical. Researchers point to using captains' practical innovation capacity as an intervention by teaching them ergonomic principles. This involvement also includes identifying interventions in place to be shared more broadly and encouraging adoption of both active and passive controls. Enculturating a safety checklist with the fishers that would include obvious protections (e.g., zero-alcohol tolerance, no open footwear) may help.
Absorption of angular momentum by black holes and D-branes
We consider the absorption of higher angular momentum modes of scalars into black holes, at low energies, and ask if the resulting cross sections are reproduced by a D-brane model. To get the correct dependence on the volume of the compactified dimensions, we must let the absorbing element in the brane model have a tension that is the geometric mean of the tensions of the D-string and an effective stringlike tension obtained from the D-5-brane; this choice is also motivated by T-duality. In a dual model we note that the correct dependence on the volume of the compact dimensions and the coupling arise if the absorbing string is allowed to split into many strings in the process of absorbing a higher angular momentum wave. We obtain the required energy dependence of the cross section by carrying out the integrals resulting from partitioning the energy of the incoming quantum into vibrations of the string.
Introduction.
With the development of string theory and the ideas of duality, there has been considerable progress in our understanding of black holes. Following suggestions of Susskind, the number of string theory states at weak coupling has been found to agree with the number of states expected from the Bekenstein entropy of the hole that would form at strong coupling [1][2][3][4]. Further, it was found that the rates of absorption and emission of minimal scalars computed at weak coupling matched the Hawking radiation rates expected from the black hole at strong coupling [5][6]. This calculation has been extended to emission of charged quanta [7], to higher orders in the energy of the incident quantum under certain conditions [8], and to nonminimal 'fixed' scalars [9].
To make contact with the black hole information paradox, we need to understand the absorption of quanta that are small in size compared to the horizon, so that they can fall into the horizon through a reasonably localised direction. This implies that we understand the absorption of higher angular momentum modes, since a wavepacket that is to be localised in the angular directions must be composed of several components of angular momentum l. It is also important to understand the absorption at wavelengths small compared to the size of the horizon, since that too is required to well localise the infalling quantum.
In this paper we discuss the absorption of low energy higher l modes for minimal scalars, by the classical black hole and by the D-brane model of the hole. For the case l = 0 it had turned out to be adequate to use a model where a D-string absorbed an incoming scalar by converting its energy into vibration modes on the D-string [5]. An equivalent result was obtained in the S-dual model where the absorbing element was an elementary string, and the absorption amplitude was computed using standard perturbative string theory [6].
But there are difficulties with naively extending these models to the absorption of quanta with higher l. Let us consider the model of the 4+1 dimensional black hole introduced in [4]. The spacetime has total dimension 10, of which 5 space dimensions are compactified on a 5-torus T 5 = T 4 × S 1 . The S 1 is the direction in which the absorbing string is wound, while the 5-branes in the model wrap all over the T 5 . Let the volume of the T 4 be V 4 , and the length of the S 1 be L.
The incident quantum is expected to convert its energy into modes that travel in the direction of the circle S^1. As the angular momentum l of the incident quantum is increased, we expect that more and more such (fermionic) modes will be created. But if these modes are vibrations of the D-string, then the absorption calculation will be confined to the vicinity of this string, and will not be sensitive to the volume V_4 that is available transverse to the D-string. Thus the V_4 dependence of the cross section will not change with l. But the classical cross section does depend on V_4; this dependence is ∼ V_4^{-1-l}.
If we use a dual model where the absorbing element is an elementary string, and consider the absorption process as a fundamental string interaction diagram, then we see that the incoming massless quantum can create no more than 4 new fermions in the final state, if we use the three point tree vertex. This is because the world sheet conformal theory is a free theory, and the massless scalar has at most two fermionic oscillators to contribute to each of the left and right sides. But we need a number of fermions that increases without bound with increasing l. If we allow loops, then we get additional powers of g^4 in the cross section for every extra loop, while the classical cross section is seen to increase by powers of g^2 as l increases by one unit.
In this paper we do the following: (a) We observe that in the D-brane model we get the correct dependence of the absorption cross section on V 4 if we let the absorbing element be a long string with a tension that is the geometric mean of the tension of the D-string and the effective tension obtained for vibrations of the 5-D-brane which travels in the long direction S 1 . We note that such a choice is also T-duality symmetric.
We also note that in the dual elementary string model, the correct dependence on V 4 and g is obtained if we allow the initial string to split into l + 1 strings when absorbing a quantum of angular momentum l. The details of the amplitude calculation are however not very clear in such a model.
(b) The energy of the incoming quantum is expected to be shared between a pair of bosonic quanta and 2l fermionic quanta, travelling in the direction of the circle S 1 . We carry out the integrals over momenta, and obtain the energy dependence that is required by the classical cross section.
Recently the absorption of higher angular modes has been considered for 3-branes [10] and for the 4+1 dimensional black hole with three charges through an effective conformal theory [11]. There now exists a large number of results pertaining to the comparison between black holes and D-branes. The behavior of the D-brane as a black body was discussed in [12], where it was shown that emissions of quanta are proportional to the classically expected emissions. The issue of higher orders in coupling was discussed in [13]. Comparisons of brane and classical absorption were discussed in [14][15].
The plan of this paper is as follows. In section 2 we discuss the classical cross section.
In section 3 we discuss the issue of V 4 dependence in the D-brane model. Section 4 discusses dependence on other parameters of the model. In section 5 we discuss a possible description in the dual model where we use the elementary string Polyakov amplitudes. In section 6 we discuss the energy dependence of the amplitudes. Section 7 is a discussion.
The classical absorption cross section.
The metric of the 5-dimensional hole is the standard three-charge black hole metric. We will be in the region of parameter space where r_p ∼ r_0 << r_1, r_5. Thus only the momentum-antimomentum excitations on the string are excited; the excitations of string-antistring pairs and 5-brane-anti-5-brane pairs are suppressed.
We will consider the absorption of a graviton that is a scalar from the 5-dimensional point of view. This is a minimally coupled scalar, and satisfies the free wave equation on the 5-dimensional black hole metric. The absorption probability for a spherical wave of angular momentum l was computed in [16][11], in the limit where r_1, r_5 >> r_0, r_p, in terms of the area of the horizon A_H and the temperature of the black hole. The left and right temperatures are [8] T_L = r_0 e^{σ_p}/(2π r_1 r_5), T_R = r_0 e^{-σ_p}/(2π r_1 r_5). (2.7) The absorption cross section for angular momentum l follows (see Appendix C for details); we denote the result for l odd by (2.9) and for l even by (2.10). As a check we note that for l = 0, ω → 0, we obtain σ = A_H, in accordance with the universal form of the low energy cross section for minimal scalars [17].
The behavior of the classical cross section.
In terms of microscopic variables, we have for the D-brane model [18] the standard expressions for r_1, r_5 and r_p in terms of the charges and moduli. Here n_1, n_5 are the numbers of D-strings and D-5-branes respectively. V_4 is the volume of the 4-torus transverse to the direction in which the D-string is wound. L^{(S)} is the string length, defined so that under T-duality a circle of circumference A L^{(S)} goes to a circle of circumference A^{-1} L^{(S)}. g = e^φ is the elementary string coupling. The tension of the elementary string is T^{(S)} = 2π (L^{(S)})^{-2}; the D-string and the 5-D-brane carry the corresponding Dirichlet-brane tensions, which come with an additional factor of 1/g. The cross section (2.9), (2.10) is seen to depend on V_4 as σ_l ∼ V_4^{-1-l}. (3.5) Suppose we have a bound state of 5-D-branes and D-strings. In the effective string model of [19] the effect of the 5-D-branes can be taken into account through a fractionation of the tension of the D-string (3.6). This model was motivated by performing a duality on the case studied in [20]. In the latter case it was shown by using S-duality that the momentum modes that travel on a D-string reproduce the V_4^{-1} dependence of the cross section, which is indeed appropriate to the l = 0 cross section. But for the case of l > 0 we would continue to find the V_4^{-1} dependence, since the quanta created on the D-string would not see the size of the transverse T^4. This is in contradiction with (3.5).
Duality considerations, and the 'mean string'.
Let us take one 5-D-brane bound to one D-string. Consider a situation where the length L of the circle on which the D-string is wrapped is very long compared to the sides of the compact torus perpendicular to the D-string, which we take to be of order V_4^{1/4}. The 5-brane is wrapped on T^4 × S^1, so it also sees the length L. Further, in this case we can consider excitations of wavelength long compared to V_4^{1/4}. There are two kinds of such excitations that we can naively see in this system. If we oscillate the D-string, we get vibrations on a string with tension T_1. If we oscillate the 5-D-brane, then we expect that this will behave as a string with tension T_2. If we perform four T-dualities, in the four directions of the torus T^4, then the D-string will become a 5-D-brane and the 5-D-brane will become a D-string, with a new coupling and a new volume of T^4. The 5-D-brane becomes a D-string, whose tension agrees with (3.11) as it should. The original D-string becomes a 5-D-brane, so the effective tension for the long wavelength modes considered here is again of the 5-brane type. Thus the two tensions T_1, T_2 get interchanged under T-duality. From the point of view of the noncompact spacetime, both tensions T_1, T_2 that appear are on a symmetrical footing.
Thus it is not natural to choose either as the effective tension for the vibrations that are excited on the system. We take instead the geometric mean of the two tensions, T_m = (T_1 T_2)^{1/2}. We will have 4n_1n_5 bosonic degrees of freedom on this string, together with their 4n_1n_5 fermionic superpartners. Let us call the string with this tension the 'mean' string to differentiate it from the string with tension given by (3.6), which is usually termed the 'effective string'.
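As a rough sketch of what this mean tension looks like, one can combine the convention T^{(S)} = 2π (L^{(S)})^{-2} with the textbook Dirichlet-brane tensions; the explicit expressions below are an assumption made here for illustration, not a quotation of this section's own formulas:

T_1 = T_{D1} = \frac{2\pi}{g\,(L^{(S)})^{2}}, \qquad T_2 = T_{D5}\,V_4 = \frac{2\pi\,V_4}{g\,(L^{(S)})^{6}}, \qquad T_m = \sqrt{T_1 T_2} = \frac{2\pi\,\sqrt{V_4}}{g\,(L^{(S)})^{4}}.

Under the four T-dualities, with V_4 \to (L^{(S)})^{8}/V_4 and g \to g\,(L^{(S)})^{4}/V_4, one checks that T_1 \leftrightarrow T_2 while T_m is unchanged, and the scaling T_m \propto \sqrt{V_4} is presumably what feeds the extra powers of V_4 into the cross section for higher l.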
We can still have the case that the ∼ n_1 n_5 degrees of freedom, in a certain domain of parameters, give 4 bosonic and 4 fermionic degrees of freedom on a circle of length n_1 n_5 L, just as was the case for the effective string model. But note that the tension T_m does not give in any simple way the mass of either the D-strings or the 5-D-branes in the system.
It is an effective parameter for the excitations of the bound state of the D-string and the 5-D-brane.
Disc diagram calculations.
When the incoming scalar is absorbed in the brane model, we expect that there is one bosonic excitation created on each of the left and right sides; these bosons carry the spins of the scalar. There are also l fermions on each side, for absorption of angular momentum l.
(Some details of the group theoretic structure of partial waves are given in Appendix B. The above kinds of excitations were also involved in the effective conformal theory description used in [11].) Again note that the l = 0 case is not altered, since the change in the normalisation of the two open strings is compensated by the change in the volume of the interaction region, which also appears in the amplitude.
Obtaining the V 4 dependence.
In any of the above ways of taking into account the effective tension for the vibrations, we thus get the desired V_4 dependence in each case.
In the classical cross section, we note that with regard to the g and L dependence, σ_l ∼ g^{2l+2}, with no additional dependence on L. Further, with regard to the dependence on the number of 1-branes and 5-branes, (r_1 r_5)^2 ∼ n_1 n_5, so σ_l ∼ (n_1 n_5)^{l+1}. (4.4) Thus σ_l ∼ g^{2l+2} (n_1 n_5)^{l+1}. (4.5) All these dependences are seen to result if we assume that we have ∼ n_1 n_5 degrees of freedom on a very long string. The local nature of the interaction says that the cross section is not sensitive to the length L of the string, apart from the l-independent factors that were found in [6], which gives ∼ g^{2l+2} in the cross section. (These dependences on g and n_1 n_5 were noted in [11].) Thus note that here we seem to need that the different strands of the string interact locally with each other, and the essential physics is not contained in just the vibrations of one long effective string.
The dual model.
In [6] we had seen that the leading term in the absorption of minimal scalars could be reproduced from a calculation where the absorption of the incident scalar by the string present in the black hole model was viewed as a three point vertex of ordinary string theory. For convenience we use the S-dual model to the brane model used in the preceding sections, though the same method could be applied to either model with suitable changes of string tensions and couplings. We wish to see if some fundamental string interaction vertex reproduces the V_4 and g dependences required by the classical cross section.
We consider the S-dual model where the black hole is composed of solitonic 5-branes, elementary strings, and momentum along the elementary strings. In this case, with regard to the V_4 and g dependence, σ_l ∼ g^{2l+2} V_4^{-(l+1)}. (5.4) The dependence in (5.4) is the same as that in the D-brane model.
Let us postulate that when the incoming scalar is absorbed, the initial string bound to the 5-brane splits into a total of l + 1 strings, all bound to the 5-brane. (Thus for the case l = 0 we have just one string in the final state, as was the case in [5].) The total number of strings involved in the interaction is l + 3, because the initial state had a massless scalar and the initial string bound to the 5-brane. Each string has a normalisation factor V_4^{-1/2}, and the amplitude also has a factor V_4 from the volume of the interaction region. The amplitude thus scales as V_4 (V_4^{-1/2})^{l+3} = V_4^{-(l+1)/2}, and the cross section contains the square of this, σ_l ∼ V_4^{-(l+1)}, which agrees with (5.4).
Note that we have assumed here that the volume V_4 is small, and the wavelength of the incoming scalar is large, so that there is no energy to excite momentum modes of the strings in the directions of the torus T^4. In the opposite limit, where such momentum modes are in fact continuous, we would have a sum over modes Σ_n ∼ V_4 ∫ d^4k for each string in the final state, and we would obtain σ_l ∼ V_4^{-1} for all l, which is not in agreement with (5.4).
Now note that the amplitude depends on g as g^{l+1}, for a tree vertex involving l + 3 closed strings, which gives g^{2l+2} in the cross section, which also agrees with (5.4). Thus we get the powers of both V_4 and g to agree at the same time, which a priori need not have been the case.
Spin dependence.
Let us see how the strings on the 5-brane world volume carry the angular momentum of the 4+1 dimensional transverse space. The rotation group of the transverse space is SO(4). The string is confined to the 5-brane, and its low energy bosonic excitations are thus vibrations in the compact directions, labelled by an index i = 6, 7, 8, 9 for the 4 directions in the 5-brane transverse to the string. If we quantise the string by an NSR prescription, we would take fermions ψ^i, i = 6, 7, 8, 9, and it is not immediately clear where the angular behavior in the directions X^1, X^2, X^3, X^4 would come from.
But we can rigorously prove that the ground states of a string bound to a 5-brane can carry spin for the directions X^1, X^2, X^3, X^4. By a sequence of S-dualities and T-dualities in the compact directions, we can map the D 5-brane bound to a D-string to a D-string carrying, say, a left moving momentum mode. The 5-brane has been transformed to the D-string, and the D-string has been transformed to the momentum mode. But the momentum mode can be one of 8 bosonic and 8 fermionic states, which were described in [20]. An analysis of the spin properties of the string bound to the 5-brane can be found in [21]. (Since the string in [21] was quantised while ignoring the fact that it was not a critical string, one may argue that this is not a rigorously correct derivation of the degrees of freedom. But the argument of duality in the above paragraph shows rigorously that the obtained spin properties of the ground state are correct.) There is indeed a bosonic vector state, and fermionic states that are spinors, for the transverse SO(4). Thus the ground states can be written as tensor products of left and right moving ground states. Here |α, k_L⟩ are the two left moving bosonic ground states, while |a, k_L⟩ are the two left moving fermionic ground states. Overall we get 16 ground states, of which 8 are bosonic and 8 are fermionic, just as expected from the above argument through duality.
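The counting implied by the last statement is simple tensor-product arithmetic:

(2_B \oplus 2_F)_L \otimes (2_B \oplus 2_F)_R = (2\cdot 2 + 2\cdot 2)_B \oplus (2\cdot 2 + 2\cdot 2)_F = 8_B \oplus 8_F,

with the bosonic states coming from the boson⊗boson and fermion⊗fermion combinations and the fermionic states from the mixed combinations, for 16 ground states in all.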
The vibrations of the string are the following. There are bosonic modes X^i, i = 6, 7, 8, 9 that can travel left or right on the string. There are fermionic modes λ^α_a which travel left on the string and λ̄^β_b that travel right on the string. Note that the ground states of the string on the left side, say, carry either the index a or the index α, while the travelling fermionic modes carry two indices α, a.
To get the spin required of the final state, we postulate that each time a new closed string is produced by splitting, one left moving and one right moving fermion wave is produced on the initial string. The new closed string is taken to be in its ground state, with polarisations given by (a, ȧ), so that there is no spin of the transverse SO(4) carried by this string. One possible form of the interaction, for the case l = 1, is a vertex in which the λ′ refer to the polarisations of the new closed string in its ground state, and the λ are the fermionic waves that are created on the (long) initial string during the process of absorption of the scalar. (The spatial momentum p_i of the incident scalar has components only in the transverse directions, since we are considering neutral scalars.) The details of such interactions are, however, not clear. In particular, normalisation factors suggest that the new strings that are produced will have small winding number; also, such strings will prefer to be in their ground states because they cannot support very low energy excitations. Summing over winding numbers may give rise to additional logarithms, not present in the classical cross section (2.9), (2.10).
Sources of ω dependence.
We find ω dependence of the absorption cross section from the following sources: (a) As explained in [9], the absorption cross section is not given by Γ(ω), the absorption when unit flux is incident, but by the net absorption (6.1). The reason is that while the system can absorb from the incident flux, it can also radiate at the same time, and the absorption cross section only measures the net amount of absorption. Thus we will apply (6.1) to find σ(ω) after computing Γ(ω); the steps below pertain to the calculation of the latter quantity.
(b) The amplitude contains a factor of the energy |ω_1| of each boson that is produced, and also the normalisation factor for the boson, which is |ω_1|^{-1/2}. (c) The incoming scalar contributes a normalisation factor ω^{-1/2} in the amplitude, which gives ω^{-1} in the cross section.
(d) The excitations on the string have a left temperature T_L = β_L^{-1} and a right temperature T_R = β_R^{-1}. The incoming quantum interacts with the string through a vertex that involves one boson and l fermions on each side, when the angular momentum absorbed is l. These bosons and fermions can either be added to the initial state of the string or can be absorbed from the initial state. The analysis of weight factors for these two cases was carried out for bosonic excitations in [9]. We repeat such an analysis for our case here.
Consider either the left or the right set of variables, and let the inverse temperature be denoted by β. The distribution function for bosons is ρ_B(ω) = (e^{βω} − 1)^{-1} and for fermions is ρ_F(ω) = (e^{βω} + 1)^{-1}. If the boson appears in the final state then ω > 0 and the weightage factor is 1 + ρ_B(ω) = −ρ_B(−ω). If the boson was absorbed from the initial state then ω < 0 and the weightage factor is ρ_B(−ω). Note that the weight factor from part (b) above is always positive; as a consequence the two cases of the boson being in the final and in the initial state can be combined into a single integral over −∞ < ω < ∞ with integrand −ω ρ_B(−ω). Similarly, a fermion in the final state has ω > 0 and a weight 1 − ρ_F(ω) = ρ_F(−ω). A fermion in the initial state has ω < 0 and a weight ρ_F(−ω). The two cases can thus be combined into a single integral over −∞ < ω < ∞ with weight ρ_F(−ω). (e) We have on each of the left and right sides the energy conservation delta function δ(ω/2 − Σ_{i=1}^{l+1} ω_i), where ω_1 is the energy of the boson and ω_i, i = 2, …, l+1 are the energies of the fermions.
Note that we are considering the absorption of neutral quanta, so half the energy ω goes to left movers and half to right movers.
(f) We assume that the absorption vertex has a factor ω/2 for each pair of fermions (one left and one right) that are involved in the interaction. The factor 1/2 is added for convenience; since we are not computing the actual numerical amplitude here it is of no real significance. But the factor ω can be seen in the disc amplitude. In a Green-Schwarz formalism, the fermions either have the form ∼ S, or the form ∼ (∂X)S. The term ∂X is contracted with the factor e^{ikX} in the incident scalar vertex, and gives a factor |k| = ω.
The absorption of angular momentum l needs l fermion vertices of each type, giving (ω/2)^l in the amplitude, and thus (ω/2)^{2l} in the cross section.
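Collecting the factors from (b)-(f), the generic structure of the rate for the l-th partial wave can be sketched as follows; the overall normalisation is not fixed by this listing, so the displayed form is a reconstruction from the ingredients above rather than a quoted formula:

\rho_B(\omega) = \frac{1}{e^{\beta\omega}-1}, \qquad \rho_F(\omega) = \frac{1}{e^{\beta\omega}+1}, \qquad 1+\rho_B(\omega) = -\rho_B(-\omega), \qquad 1-\rho_F(\omega) = \rho_F(-\omega),

\Gamma_l(\omega) \;\propto\; \frac{1}{\omega}\left(\frac{\omega}{2}\right)^{2l} \prod_{A=L,R} \int_{-\infty}^{\infty} d\omega_1\cdots d\omega_{l+1}\; \delta\Big(\frac{\omega}{2}-\sum_{i=1}^{l+1}\omega_i\Big)\, \big[-\omega_1\,\rho_B^{(A)}(-\omega_1)\big] \prod_{j=2}^{l+1}\rho_F^{(A)}(-\omega_j),

with the left and right distribution functions evaluated at the temperatures T_L and T_R respectively; σ(ω) then follows by applying (6.1).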
Example: l = 1
We have one boson and one fermion on each of the left and right sides. Following steps (b)-(f) above, we obtain an expression for Γ(ω) in terms of the integral J_BF defined in Appendix A, where we have noted the temperature dependence of the distribution functions. Using (6.1) we then get the corresponding contribution to σ(ω).
Example: l = 2
We have one boson and two fermions on each of the left and right sides. Following steps (b)-(g) above we obtain the analogous expression (6.10), and using (6.1) we thus obtain the corresponding contribution to σ(ω).
General form of ω dependence.
For l odd and for l even we obtain expressions carrying an overall factor of 1/4, and we see that these dependences on ω agree with the dependences required by the classical cross section (2.9), (2.10).
Discussion.
We have seen that to have the classical absorption agree with the brane models, we need to obtain a dependence on V 4 (the volume of the compact torus perpendicular to the string) in the brane model. While this may show up in different ways in different treatments of the string dynamics, it is possible that these differences are due to the different coupling regimes appropriate to these calculations, and not to an error in either description. Recently it has been shown that there is 'stringy dynamics' in all higher branes, in some domain of parameters [22].
The issue of V_4 dependence may appear in other calculations, for example that for the fixed scalars in [9]. In this calculation it was assumed that r_1 = r_5, which is equivalent to choosing a particular value for V_4. It would be interesting to see the details of the agreement when r_1 ≠ r_5.
Regardless of the details of the absorption, we note that the desired ω dependence arises from a partitioning of the energy of the incoming scalar into the energy of a certain number of momentum modes, with this number being determined by the angular momentum of the partial wave that is absorbed. The calculation also provides naturally the factorials present in the relative cross sections for different l (though since we have not computed the actual disc amplitudes themselves, we cannot know that there will be no other factorials from that source.) The argument using an 'effective conformal theory' carried out in [11] yielded an ω dependence that was the desired one, but there was no known way to normalise the amplitude. The energy dependence calculation of the effective conformal theory calculation is plausibly equivalent to summing over ways of sharing energy between l + 1 quanta of the left and right sides when the quanta are at temperatures T L , T R respectively, since one has to evaluate correlators of free fields at the appropriate temperatures.
In [16] it was noted that when we are working in a domain r 5 >> r 1 , r p , r 0 then the absorption of the lth partial wave becomes significant when ωr 5 = l + 1. In particular the l = 1 partial wave is significantly absorbed starting at the energy where the first massive mode of the effective string can be created. Creation of a new string state would bring in the required factor of V 4 as we have seen. It would be interesting to connect the present calculations to this domain of parameters where r 1 is also small, and so winding modes and momentum modes play a more symmetric role.
It may be thought that the agreement of cross sections for D-branes at low energy with the black hole cross sections implies that for low energy quanta we understand the mechanism by which the Hawking paradox is to be resolved in string theory. This is not the case. A black hole geometry takes an incident low energy quantum and, in the Schwarzschild coordinate system, converts it to a high frequency mode close to the horizon. This high frequency mode is then eaten by the hole. Starting with a quantum of even lower energy just means that we have to follow the mode closer to the horizon before we see it attaining a short wavelength. (Of course most of the low energy quanta escape falling into the hole, but the cross section we compute relates to those that are in fact swallowed by the hole in the above fashion.) Equivalently, if we study the deflection of a particle trajectory by the gravitational field of the hole, and consider the deflection to be expanded in powers of the mass of the hole, then we would see a divergence of the series when the impact parameter approaches the value where the particle will be swallowed by the hole. When we study the D-brane calculation we need to understand whether or not such a divergence occurs, when the number of branes and the coupling are increased to a value large enough to give a classical sized black hole. In the classical calculation taking low energy while keeping other parameters fixed simply pushes the growth of the perturbation series towards higher terms in the series; thus there may be a similar phenomenon for the D-brane calculation as well.
If string theory is to resolve the Hawking paradox, then we either need to see that the effective size of the solitonic bound state is comparable to the horizon size, so that there is really no black hole, or we need to see that loops of virtual quanta in a theory with strings and higher dimensional branes are quite different from loops in particle theory, and give nonlocal effects that take information from near the singularity and send it out with the Hawking radiation. The latter is equivalent to finding a length scale in string theory that is not the Planck length but a length that grows with the number of branes involved.
Thus it appears that the agreements found between cross sections of branes and for black holes are to some extent both mysterious and interesting, and provide a strong suggestion that the black hole paradox may actually be resolved in string theory. One needs to better understand the bound states of many branes at strong coupling; for the non-BPS interactions that are involved the values of moduli also affect the nature of energy levels and interaction properties [23] [16].
Appendix A. The basic integrals.
Let us define two basic integrals, with the subscripts B and F standing for Bose and Fermi type distributions respectively. These integrals can be calculated in the following way. We take a contour in the complex x plane, running along the real x-axis, and backwards along the line shifted by 2πi.
Consider the integral Î taken around this contour. The segments at ℜx = ±∞ do not contribute, so Î reduces to an integral in which x runs along the real line. There is one pole inside the contour, at x = iπ − log A.
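As an illustration of this contour technique (with an integrand chosen here purely for illustration, not the paper's own I_B or I_F), take, for 0 < a < 1 and A > 0,

I = \int_{-\infty}^{\infty} dx\, \frac{e^{a x}}{A e^{x} + 1}.

On the upper edge of the rectangle the denominator is unchanged while the numerator picks up a factor e^{2\pi i a}, so the contour integral equals (1 - e^{2\pi i a}) I. The single pole at x = i\pi - \log A has residue -e^{i\pi a} A^{-a}, whence

(1 - e^{2\pi i a})\, I = -2\pi i\, e^{i\pi a} A^{-a} \quad\Longrightarrow\quad I = \frac{\pi\, A^{-a}}{\sin \pi a},

which shows how the pole at x = i\pi - \log A controls the A dependence of such Fermi-type integrals.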
|
2014-10-01T00:00:00.000Z
|
1997-04-23T00:00:00.000
|
{
"year": 1997,
"sha1": "949ebdcc8d5cc7388572d0a227068765343d92e8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9704156",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "949ebdcc8d5cc7388572d0a227068765343d92e8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
119296562
|
pes2o/s2orc
|
v3-fos-license
|
Large Deflection of Cantilever Rod Pulled by Cable
The article discusses six problems which can arise in the determination of the equilibrium configuration of an elastic cantilever rod pulled by an inextensible cable. The discussions are illustrated with graphs of equilibrium shapes and tables providing some reference numerical values.
Introduction
This paper is mainly motivated by Yau's [1] recent article in which he provided a solution of the large deflection of a guyed column pulled by an inclined cable based on an elliptic integrals description of the deformed cantilever. In particular, he considers the case when the cable length and distance of the cable support point are given and where the tension in the cable and cable inclination angle are unknowns.
[We refer the reader to the Yau article for a discussion of the importance of the study of cable supported structures and for further references.] While these problems were already considered as early as the end of the 19th century by Saalschütz [2] in his book on elastic rods, where he also used elliptic integrals for the formulation of the problem, it seems that, as also observed by Yau, there are relatively few articles that consider the problem analytically.
The aim of this short article is to provide an alternative solution of the problem in terms of Jacobi elliptic functions. As Saalschütz and Yau both do, we will reduce the problem to the solution of a system of two nonlinear equations; however, we will give the solutions and numerical examples of all six possible problems that can result from these equations.
Formulation of the problem
We consider an initially straight, inextensible elastic cantilever rod pulled by an inextensible cable supported at points A and B, where point A is the cantilever tip point and B is a fixed space point (see Figure 1). As was shown by the present author [3], the shape of the base curve of a deformed cantilever can be described by parametric equations x(s), y(s) expressed in terms of elliptic functions. Here K and E are complete elliptic integrals of the first and second kind, sn, cn and Z are the Jacobi sine, cosine and zeta elliptic functions, F is the applied force, and EI is the bending stiffness of the rod. As is seen from these equations, the shape of the cantilever is completely determined once k (or equivalently φ_0) and ω are known. Once they are known we can calculate α by Eq (3)_1 and then determine the cantilever shape by Eqs (2) and (1). Two equations from which these parameters can be calculated follow from the geometry of the problem, where a is the distance between the cantilever clamped end and the cable support at B. By using the double-angle trigonometric identities and relation (3)_1 we can eliminate α from (6). On the other hand, the coordinates of the cantilever tip point A in the rotated coordinate system are obtained from equations (2) by setting the arc length coordinate to its value at the tip. Equating these with (8) yields the two relations (11) and (12), which connect four parameters: ω, k, a and L. We can thus choose two of them as given and the remaining two as unknowns, and so solve one of six possible problems, which will be discussed in the next section. However, before proceeding we will present some special solutions of the system (11) and (12) on the basis of the properties of Jacobi elliptic functions.
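Since the shape is built entirely from K, E, sn, cn and the Jacobi zeta function Z, these quantities are straightforward to evaluate numerically. The short sketch below, in Python with SciPy, shows one possible way to do so; the function names and the use of the parameter m = k^2 follow SciPy conventions, not the notation of this paper.

from scipy.special import ellipk, ellipe, ellipj, ellipeinc

def jacobi_zeta(u, m):
    """Jacobi zeta function Z(u|m) = E(am(u|m), m) - u*E(m)/K(m)."""
    sn, cn, dn, am = ellipj(u, m)           # Jacobi sn, cn, dn and amplitude am
    return ellipeinc(am, m) - u * ellipe(m) / ellipk(m)

# Example: evaluate the elliptic quantities for a given modulus k and argument u.
k = 0.6
m = k**2                                    # SciPy works with the parameter m = k^2
u = 1.2
sn, cn, dn, am = ellipj(u, m)
print("K =", ellipk(m), " E =", ellipe(m))
print("sn =", sn, " cn =", cn, " Z =", jacobi_zeta(u, m))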
Case when ω = 0
There is no force acting on the cantilever. As expected, the results correspond to the geometry of an undeformed cantilever (see Figure 1). Note that in this case α = φ_0.
Case when ω = 2nK. Consequently, by Eq (5) we have in this case x_0 = L and y_0 = a. In words: the cable takes a horizontal position.
The corresponding configuration is shown in Figure 3.
Numerical examples
In this section we will give examples of solutions of the six problems which can be derived from the system (11)-(12).
Case when φ_0 and ω are given. In this case a and L are explicitly given by Eqs (11) and (12). Because for a given tip angle φ_0 both a and L are one-valued functions of the load parameter ω, for the given data we have only one possible solution. Examples of graphs of a and L as functions of ω for given φ_0, and examples of equilibrium shapes, are shown in Figure 4. Some numerical values corresponding to the deformed cantilever shapes in the figure are given in Table 1. Further equilibrium shapes are shown in Figure 6 and some numerical values are presented in Table 2 (Figure 5 enumerates the shapes corresponding to the cases given in Table 2, and Table 2 lists the numerical values corresponding to the shapes shown in Figure 6). Case when a and L are given. In this case Eqs (11)-(12) become a system of two transcendental equations for the unknowns φ_0 and ω. Solving the system presents no numerical difficulty; however, since the system has multiple possible solutions, contour plots of the equations like that presented in Figure 7 are helpful for providing the initial guess. Examples of cantilever equilibrium configurations for two cases are presented in Figure 9. Some reference numerical values are provided in Table 3, and in Table 4 the comparison between the results for the case a = 0.5 and L = 1 given by Saalschütz [2] and the present solution is illustrated. For practical purposes it is interesting to know the relation between the load parameter ω and the cable length for a given a. To obtain this dependence, the system (11)-(12) was, for a given a, repeatedly solved with decreasing length L, starting from the initial length. The graph of ω as a function of ε for various values of a is shown in Figure 9. In the figure we can observe a strong nonlinear dependence between ω and ε. We note also that when a ≥ 1 the relative length has an upper limit. Practically, this limit can never be reached, since the cantilever can never be completely directed vertically.
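A minimal numerical sketch of the case with a and L given is shown below, assuming that two residual functions eq11 and eq12 implement Eqs (11) and (12) of the paper (they are placeholders here, supplied by the user, since the explicit right-hand sides are not reproduced in this text). A coarse grid scan plays the role of the contour plots above in supplying initial guesses, after which a standard root finder refines each candidate; each accepted root corresponds to one of the multiple equilibrium configurations for the given a and L.

from itertools import product
import numpy as np
from scipy.optimize import fsolve

def solve_for_phi0_omega(eq11, eq12, a, L, phi0_seeds, omega_seeds, tol=1e-9):
    """Collect the (phi0, omega) roots of eq11 = eq12 = 0 for given a and L.

    eq11, eq12: callables eq(phi0, omega, a, L) returning the residuals of
    Eqs (11) and (12); placeholders to be provided by the user.
    """
    def residuals(x):
        phi0, omega = x
        return [eq11(phi0, omega, a, L), eq12(phi0, omega, a, L)]

    roots = []
    # Coarse scan of the (phi0, omega) plane to seed the root finder.
    for seed in product(phi0_seeds, omega_seeds):
        x, info, ier, _ = fsolve(residuals, seed, full_output=True)
        if ier == 1 and np.max(np.abs(info["fvec"])) < tol:
            # Keep only roots not already found: several seeds converge to the
            # same solution, and the system generally has several distinct roots.
            if not any(np.allclose(x, r, atol=1e-6) for r in roots):
                roots.append(x)
    return roots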
Conclusions
We presented a new solution of the problem of determining the equilibrium configurations of a cantilever pulled by a cable. The problems were formulated as two relations connecting the load parameter ω with the geometry parameters a, L and φ_0. We show that the obtained relations lead to six possible problems, where only the problem with given ω and φ_0 has a unique solution. All other pairs of data yield multiple possible solutions. The numerical values given for various types of problems can be used as reference values for solutions obtained by other methods.
|
2019-04-13T07:22:15.070Z
|
2013-11-18T00:00:00.000
|
{
"year": 2013,
"sha1": "54397b26fea5bf60fe84a828ccdbe2824909a942",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.apm.2014.10.073",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "54397b26fea5bf60fe84a828ccdbe2824909a942",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
254707373
|
pes2o/s2orc
|
v3-fos-license
|
THE EFFECT OF INFORMATION TECHNOLOGY, WORK DISCIPLINE AND WORK MOTIVATION ON THE PERFORMANCE OF STATE CIVIL APPARATUS IN MENUI DISTRICT, MOROWALI REGENCY
This study aims to determine how much influence information technology, work discipline and work motivation have on employee performance at the Menui Islands sub-district office. The analysis used inferential statistics with a multiple linear regression model, with the respondents taken as a census. This study used a questionnaire given to 23 employees of the District Office of the Menui Islands. The data were then analyzed using quantitative descriptive methods. The results showed that information technology, work discipline and work motivation simultaneously had a positive and significant effect on employee performance. Partially, each variable also has a positive and significant effect on the performance of the employees of the Menui Islands sub-district. After the data were processed, the resulting value of 0.972 indicates a strong positive relationship. The contribution of the influence of information technology, work discipline and work motivation on employee performance is 97.2%, and the remaining 2.8% is influenced by variables not examined. Hypothesis testing proved the effect of information technology, work discipline and work motivation on employee performance at the Menui Islands sub-district office.
INTRODUCTION
In today's modern era, with sophisticated information systems, employees who serve the community in the public sector are needed to produce the best possible service. Managing a government agency well is always full of challenges arising from the situation and the conditions of the work environment. Government agencies are required to have information technology. The development of socio-cultural science and technology requires State Civil Apparatus employees to work appropriately and quickly in serving the community. With technology entering the work environment, the State Civil Apparatus is required to understand and be able to operate technology so that goals can be achieved. The activities of government agencies will run well if the agency has human resources with the knowledge and skills to manage it. However, in reality bureaucratic services still reflect a performance of the State Civil Apparatus (ASN) that is low and irresponsible, and services that are still traditional, with the result that people complain about the services provided by the sub-district government. Even so, there are governments that pay attention to the quality of services to meet the needs of the community by implementing e-governance, or electronic governance, that is, technology-based public services that make it easier for the government to serve the community.
The purpose of this study was to determine and analyze the effect of the use of information technology, work discipline and work motivation on the performance of the State Civil Apparatus in the Menui Islands. Knowing and analyzing the effect of the use of information technology on the performance of the State Civil Apparatus in the Menui Islands. Knowing and analyzing the effect of work discipline on the performance of the State Civil Apparatus in the Menui Islands. And to find out and analyze the effect of work motivation on the performance of the State Civil Apparatus in the Menui Islands.
The benefits that can be taken from this research are that it can be a reference, increase knowledge, can add to the STIE Six Six Kendari library collection and become a reference for anyone who wants to do research on the same topic. For the State Civil Apparatus, this research is expected to be a reference material in motivating the State Civil Apparatus (ASN) to be more disciplined in their work in order to improve the performance of government agencies in the sub-district office in the islands, especially the State Civil Apparatus in a government of the Menui Islands District.
METHOD
Based on the objectives to be achieved, this research includes explanatory research, namely the type of research that aims to determine the effect of two or more variables (Sugiyono, 1999). In this case, to test and analyze the effect of motivation, welfare and the relationship between village officials' performance in improving the performance of village officials in the Menui Islands subdistrict. According to Arikunto (2006) that the research method is the method used by researchers in collecting research data. The research method is one of the important steps in a scientific research. The method or research method is a tool to achieve the objectives and the quality of research is largely determined by the method or method used. This study uses a descriptive type of research with a quantitative approach. Quantitative research methods, as stated by (Erwanto et al., 2012) are: "Research methods based on the philosophy of positivism, are used to examine certain populations or samples, data collection uses research instruments, data analysis is quantitative/statistical, with the aim of testing hypotheses has been established".
According to (Sugiyono, 2012), descriptive research is research conducted to determine the value of one or more independent variables without making comparisons or connecting them with other variables. The object of this research is the performance of the State Civil Apparatus in the Menui Islands District. The population of this study was all 23 employees of the State Civil Apparatus in Menui District. In this study, the researchers used 2 kinds of variables, namely: 1) the dependent variable used in this study, which was the performance of village officials; and 2) the independent variables used in this study, which are motivation, welfare and discipline.
The author takes as the object of research the State Civil Apparatus in the Menui Islands District. Data collection in the study in the Menui Islands District used 3 methods, described as follows: 1. Observation: data collection carried out in the Menui Islands District. 2. Interviews: conducting direct questions and answers with the village head and the members of the State Civil Apparatus authorized to be interviewed. 3. Questionnaires: distributing questionnaires, or lists of written statements arranged in a structured manner, to obtain information about the variables studied (motivation, welfare and discipline against the performance of the State Civil Apparatus).
RESULTS AND DISCUSSION A. Respondent Characteristics 1. Characteristics of Respondents Based on Age
In accordance with the data obtained from data identification, the majority of employees at the Menui Islands District Office, Morowali Regency are in the age category of 45 years and over; more details can be seen in the corresponding table. Based on the identification of the data above, it can also be seen that the majority of employees at the Menui Islands sub-district office are aged 31 years and over. This is a productive age for employees, which benefits the Menui Islands District office by allowing the employees' work potential to be maximized, so that every piece of work can be completed with quality and on time.
Characteristics of Respondents by Gender
The composition of employees at the Menui Islands sub-district office by gender can be seen in table 2. Based on the identification of the data carried out, the employees of the Menui Islands sub-district office were dominated by men. This is because of the nature of the duties and responsibilities of the Menui Islands sub-district office, so that the role of male employees is very much needed by the organization. In addition, male employees have a greater stake in making decisions because of their responsible nature. Male employees basically tend to have work spirit, tenacity and persistence in carrying out their work. Male employees are also considered to have a very high level of mobility at work.
Characteristics of Respondents Based on Education Level
The distribution of employees at the Menui Islands Sub-District Office by education level can be seen in table 3. It can be seen that 12 employees of the Menui Islands sub-district office have a high school education, 10 respondents have a bachelor's degree, and 1 respondent has a master's degree. The largest part of the workforce thus consists of employees with high school education, who together with the more highly educated employees carry out work communication and counseling and provide each other with knowledge in the implementation of the work.
Characteristics of Respondents Based on Working Period
Distribution of employees of the Menui Islands Sub-district Office according to years of service can be seen in table 4. Thus, it can be said that most of the employees of the Menui Islands Sub-District Office have a sufficient period of service so that they are expected to have high performance. Employees with a long service period of course have experience in dealing with every work problem. For this reason, employees with quite a lot of work experience with low tenure have the potential to improve employee performance.
B. Descriptive Research Results Variables
The descriptive analysis of the research variables aims to interpret the frequency of respondents' answers from the data that has been collected. In this study, respondents' answers were categorized into five categories using a Likert scale. Each scale has a gradation of assessment from very negative to very positive, which is included in the answer choices of the questionnaire. In order to give meaning to the empirical assessment of the variables, this research adopts the principle of weighting proposed by (Sugiyono, 2005). The average value of the weighting, or score, of respondents' answers is classified within the range of the value category scale presented in table 5. Table 5 shows the categorical meaning used in interpreting the results of this research based on the scores of respondents' answers. The basic reason is that respondents are given the freedom to give an objective assessment based on what they see, hear and feel while being employees of the Menui Islands sub-district office. The description of the data and the respondents' responses regarding the four latent variables, examined using the average scores of the respondents' statements, can be described as follows:
Information Technology Variables
Based on the research results obtained through the questionnaire, the objective conditions for the information technology variable referred to in this study were measured by 2 (two) indicators, namely (1) benefits, which include: making work easier, usefulness, and increasing productivity; and (2) effectiveness, which includes: enhancing effectiveness and developing job performance. The respondents' responses to these indicators can be seen in table 6 below:
Frequency of Respondents' Answers (f) and Percentage (%)
Table 6 above shows that information technology is very useful for employees at the Menui Islands sub-district office and is in the good category, as indicated by the average score for information technology of 3.91; this shows that the respondents feel helped by the information technology at the Menui Islands sub-district office.
The effectiveness indicator, reflecting work that is getting easier and of higher quality, has the highest average score of 3.94, or the good category. This is because most of the employees at the Menui Islands sub-district office are able to complete work on time, effectively and efficiently in serving the community.
Meanwhile, the indicator with the lowest average score is usefulness, with a score of 3.87. This is because there are still some employees who have not mastered the use of software such as Microsoft Excel. Based on the information above, the Menui Islands sub-district office needs to improve employees' knowledge and ability in operating computer-based information technology systems.
Work Discipline Variables
Based on the research results obtained through the questionnaire, the objective conditions for the work discipline variable referred to in this study were measured by 6 (six) indicators, namely (1) attendance, (2) punctuality, (3) adherence to office rules, (4) employees wearing office uniforms, (5) adherence to work standards, and (6) working ethically. The responses to the work discipline variable indicators can be seen in table 7 below. Table 7 shows that in general the work discipline at the sub-district office is in the good category, as indicated by the average score for work discipline of 3.81.
The indicator of obedience to office rules has the highest average score of 3.93, or the good category; this is because most of the employees at the sub-district office in the islands obey the office rules. Meanwhile, the indicators with the lowest average score are employees wearing office uniforms and attendance, with a score of 3.70. This is because there are still some employees who have not complied with the provisions on the use of attributes and on the use of clothing; especially on Thursdays there are still various motifs, and some even wear clothes as if going to a party or on a trip. Apart from that, several employees were often late and lazy about going to the office during work days.
Based on the information above, the Menui Islands sub-district office needs to increase compliance with the use of official clothes or office uniforms on the days designated for them, and attendance in particular needs to be improved on the working days regulated by the rules of the Menui Islands sub-district office.
Variables of Work Motivation
Based on the research results obtained through the questionnaire, the objective conditions for the work motivation variable referred to in this study were measured by 6 (six) indicators, namely: (1) remuneration, (2) working conditions, (3) work facilities, (4) work performance, (5) recognition from superiors and (6) the work itself. The respondents' responses to these indicators can be seen in table 8 below. Table 8 shows that in general the work motivation at the Menui Islands sub-district office is good, as indicated by the average score for work motivation of 3.65.
The working conditions indicator has the highest average score of 3.78, or the good category; this is because most of the employees at the Menui Islands sub-district office already have a good and pleasant work environment.
Meanwhile, the indicator that has the lowest average score is remuneration with a score of 3.59. This is because there are still some honorarium employees who are not satisfied with the current salary they receive at the Menui Islands sub-district office.
Based on the information above, the Menui Islands sub-district office needs to pay attention to the distribution of salaries for honorarium employees in the Menui Islands subdistrict so that employees are motivated in their work.
Employee Performance Variables
Based on the research results obtained through the questionnaire, the objective conditions for the performance variable referred to in this study were measured by 3 (three) indicators, namely (1) work results, (2) work behavior, and (3) personal characteristics. The responses to the performance variable indicators can be seen in table 9 below. Table 9 shows that in general the performance of employees at the sub-district office in the archipelago is in the good category, as indicated by the average score for employee performance of 3.75.
The work behavior indicator has the highest average score of 3.87, or the good category; this is because most of the employees at the Menui Islands sub-district office are able to work together responsibly and are able to adapt when faced with new things such as new technology. Meanwhile, the indicator with the lowest average score is attendance, with a score of 3.63. This is because there are still some employees who simply let go of their responsibilities, show no commitment to the work, and lack the self-awareness that work delegated to them is a responsibility that must be completed.
Based on the information above, the Menui Islands sub-district office needs emphasis from the leadership to complete the work in accordance with the duties and responsibilities so that the expected results can be maximized in accordance with the predetermined targets.
C. Results of Hypothesis Analysis and Testing 1. Simultaneous Regression Model Testing Results
To prove the research hypothesis proposed in this study, the multiple linear regression method was used, with the results of the analysis as follows. Based on the results of the calculations in table 10, the following explanation can be put forward: a. The Fcount value of 221.415 with a significance value of Fsig = 0.000 (Fsig < 0.05) means that statistically the information technology (X1), work discipline (X2) and work motivation (X3) variables simultaneously (together) have a significant effect on employee performance (Y) at the 95% confidence level. b. The R value (correlation coefficient) of 0.986 indicates that the relationship between the information technology (X1), work discipline (X2) and work motivation (X3) variables and employee performance (Y) is very close. This relationship is statistically very strong, as stated by Sugiyono (1999:216), who classifies a relationship of 0.80 to 1.000 as very strong. c. The R-Square value of 0.972 indicates that the magnitude of the direct influence of the information technology (X1), work discipline (X2) and work motivation (X3) variables on employee performance (Y) at the Menui Islands sub-district office is 97.2%. The remaining 2.8% is influenced by variables outside this study. Therefore, the resulting regression model can be said to be a "fit" model, or a good predictor model, for explaining the influence of information technology, work discipline and work motivation on the performance of the sub-district office employees in the Menui Islands. On this basis, the resulting regression model for explaining the influence of information technology, work discipline and work motivation on employee performance at the Menui Islands sub-district office can be stated as follows: Y = 0.279X1 + 0.325X2 + 0.419X3 + 1.328, where: Y = employee performance; b1 = 0.279; X1 = information technology; b2 = 0.325; X2 = work discipline; b3 = 0.419; X3 = work motivation; e (standard error) = 1.328.
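The simultaneous and partial tests reported above can be reproduced in a few lines once the 23 questionnaire responses are available in tabular form. The sketch below uses Python with statsmodels; the file name and column names are assumptions for illustration only.

import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per respondent with averaged indicator scores;
# the file name and column names are assumptions, not the paper's data.
df = pd.read_csv("menui_questionnaire.csv")
X = sm.add_constant(df[["information_technology", "work_discipline", "work_motivation"]])
y = df["employee_performance"]

model = sm.OLS(y, X).fit()

print(model.rsquared)                # R-Square (0.972 in the paper)
print(model.fvalue, model.f_pvalue)  # simultaneous (F) test, cf. Fcount = 221.415, Fsig = 0.000
print(model.params)                  # constant and coefficients b1, b2, b3
print(model.pvalues)                 # partial (t) tests, cf. sig. = 0.007, 0.009, 0.000

The multiple correlation R reported as 0.986 is simply the square root of the R-Square value.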
Partial Regression Model Test Results
The results of the regression analysis in table 5.10 above can be interpreted as follows: a. The effect of variable X1 (information technology) on Y (employee performance) obtained a sig. value of 0.007, which is smaller than α = 0.05. Therefore, the information technology variable (X1) partially has a significant effect on employee performance (Y). On this basis, the information technology variable (X1) can be included as one of the estimating variables for the performance of the staff of the Menui Islands sub-district office. b. The effect of variable X2 (work discipline) obtained a sig. value of 0.009, which is smaller than α = 0.05. Therefore, the work discipline variable (X2) partially has a significant effect on employee performance (Y). On this basis, the work discipline variable (X2) can be included as one of the estimating variables for the performance of the staff of the Menui Islands sub-district office. c. The effect of variable X3 (work motivation) obtained a sig. value of 0.000, which is smaller than α = 0.05. Therefore, the work motivation variable (X3) partially has a significant effect on employee performance (Y). On this basis, the work motivation variable (X3) can be included as one of the estimating variables for the performance of the employees of the Menui Islands sub-district office.
Hypothesis Testing
The first hypothesis proposed in this study is that information technology, work discipline and work motivation have a positive and significant effect on employee performance at the Menui Islands sub-district office. This hypothesis was tested using simultaneous regression testing, which gave a significance of 0.000, smaller than α = 0.05. Therefore, overall or jointly, the information technology (X1), work discipline (X2) and work motivation (X3) variables have a significant effect on employee performance (Y) at the Menui Islands sub-district office. On this basis, the first hypothesis proposed earlier can be accepted because it is proven true.
The second hypothesis proposed in this study is that information technology has a positive and significant effect on employee performance at the Menui Islands District office. This hypothesis was tested using partial regression testing, which gave a significance value of 0.007, smaller than α = 0.05. Therefore, partially, the X1 variable (information technology) has a significant effect on the performance of employees at the Menui Islands sub-district office. On this basis, the second hypothesis proposed earlier can be accepted because it is proven true. The third hypothesis proposed in this study is that work discipline has a positive and significant effect on employee performance at the Menui Islands sub-district office. This hypothesis was tested using partial regression testing, which gave a significance value of 0.009, smaller than α = 0.05. Therefore, partially, the X2 variable (work discipline) has a significant effect on the performance of employees at the Menui Islands sub-district office. On this basis, the third hypothesis proposed earlier can be accepted because it is proven true.
The fourth hypothesis proposed in this study is that work motivation has a positive and significant effect on the performance of the employees of the Menui Islands sub-district office. This hypothesis was tested using a significance value of 0.000, smaller than α = 0.05. Therefore, partially, the X3 variable (work motivation) has a positive and significant effect on the performance of employees at the Menui Islands sub-district office. On this basis, the fourth hypothesis proposed earlier can be accepted because it is proven true.
The Influence of Information Technology, Work Discipline and Work Motivation on the Performance of the Menui Islands District Office Employees
Based on the results of the data analysis in this study, the regression coefficient values show a positive and significant effect of information technology, work discipline and work motivation on employee performance at the Menui Islands sub-district office. This shows that information technology, work discipline and work motivation can improve employee performance.
The results of this study are also in line with the opinion expressed by Robins (1996:152), who states that the level of performance greatly depends on the ability of the employee, such as the level of education, knowledge, and experience; the higher these are, the higher the level of ability. Thus, low levels of education, knowledge, and experience will have a negative impact on employee performance.
The results of this study are in line with the opinion expressed by (Robins et al., 2009), which reveals that employees who have a high level of involvement identify strongly with, and really care about, the field of work they do. Someone who has high work involvement becomes absorbed in the work he or she is doing. High levels of job involvement are associated with Organizational Citizenship Behavior and performance. In addition, high levels of engagement can reduce employee absenteeism.
The results of this study are in line with the opinions expressed by (Cahyono, 2005) and (Miles & Sunstein, 2006), which state that one of the factors that affects performance is motivation, where motivation is a condition that moves a person to try to achieve goals or the desired result. (Y. Rivai et al., 2004) shows that the stronger the work motivation, the higher the employee's performance. This means that every increase in employee work motivation provides a significant increase in employee performance in carrying out their work.
The Effect of Information Technology on Employee Performance
Based on the results of partial regression model testing, it is known that information technology has a positive and significant effect on employee performance at the Menui Islands sub-district office. This can be explained by the fact that each of the indicator items used to measure information technology is a factor that determines employee performance at work.
The results of this study are in line with the opinion expressed by (Muzakki et al., 2016), which revealed that ease of use of IT can improve employee performance. Ease of use of IT, such as being easy to understand and learn, controllable, clear and understandable, flexible, and easy to become skilled with, has a positive influence on employee performance. If ease of use of IT is implemented properly and appropriately, it will support employee performance optimally.
The results of this study indicate that the benefits of IT have a significant effect on employee performance. This shows that the benefits of IT are able to improve employee performance: benefits such as working faster, better performance, increased productivity, more effective work, easier work, and general usefulness have a positive effect on employee performance. If the benefits of IT are implemented properly and appropriately, they will support employee performance optimally.
Then the results of this study are also in line with the opinion expressed by (Fajri, 2011), which states that the use of information technology has a positive influence on employee performance. This means that the ease of use and the suitability of the task to the software used affect the speed at which an employee works, which in turn affects the accuracy of the work. Suitability of tasks to the employee's expertise in using technology will improve the technical ability of employees. Employees who have expertise according to their field of work will find that it greatly affects their work. This will also affect the resilience of employees in solving problems. Conditions that facilitate the use of adequate information technology will increase employee creativity, because curiosity about something new grows when adequate facilities are available.
The Effect of Work Discipline on Employee Performance
Based on the results of the partial regression model testing, it is known that work discipline has a positive and significant effect on employee performance at the Menui Islands sub-district office. This can be interpreted in the sense that every time there is an increase in the quality of work discipline, it will be followed by an increase in employee performance.
Based on the test results, it can be seen that work discipline is the most powerful variable in influencing employee performance at the Menui Islands sub-district office. The basic reason and in accordance with the facts that happened was that the staff of the Menui Islands sub-district office had work discipline that obeyed the rules and had good performance.
The results of this study are in accordance with the theory put forward by (Mayulu & Sutrisno, 2010), which states that work discipline can be seen as something that has great benefits, both for the organization and for employees, so that optimal results are obtained.
The results of this study are in line with previous research conducted by (Septiasari, 2017) with the title The Effect of Work Discipline on Employee Performance at the Department of Industry, Trade and Cooperative Business and Micro, Small and Medium Enterprises of East Kalimantan Province in Samarinda (Secretariat Sector and Industrial Sector), which explains that work discipline has a significant effect on the employee performance variable, in terms of punctuality (arriving at and leaving work), compliance with regulations, work responsibilities, and carrying out duties and obligations.
The Effect of Work Motivation on Employee Performance
Based on the results of partial regression model testing, it is known that work motivation has a positive and significant effect on employee performance at the Menui Islands sub-district office. It can be interpreted that the higher the work motivation, the higher the performance of employees at the Menui Islands sub-district office. This is very logical, because employees assume that their needs are in accordance with what they get as an employee of the State Civil Apparatus (ASN). Motivation arises because of a need and therefore the action is directed towards the achievement of certain goals. If the goal has been achieved, satisfaction will be achieved and it tends to be repeated, so that it is stronger and more stable.
Based on the test results, it can be seen that work motivation is the most powerful variable in influencing employee performance at the Menui Islands sub-district office. The basic reason, in accordance with the empirical phenomenon, is that employees at the Menui Islands sub-district office have good work motivation. This can be seen from the motivation employees obtain both from within themselves and from co-workers and other parties, through the need to increase knowledge, expertise, cooperation, and independence at work, with the aim of meeting work performance needs and the trust of leaders who delegate authority to complete work.
With the emergence of enthusiasm for their work, it is hoped that employees will be motivated: to achieve what they want, they must try to do their best. A person's performance or achievement depends on that person's motivation for the work done. The higher a person's motivation to do the job, the higher the level of performance. Conversely, the lower a person's motivation to do a job, the lower the level of performance.
The results of this study are in line with the opinions expressed by (Cahyono, 2005) and (Judges, 2006), which state that one of the factors that affects performance is motivation, where motivation is a condition that moves a person to try to achieve goals or the desired result. (V. Rivai, 2016) shows that the stronger the work motivation, the higher the employee's performance. This means that every increase in employee work motivation is accompanied by an increase in employee performance in carrying out their work.
The results of this study are also in line with the opinion expressed by (Porter, 1991) which states that one of the things that is closely related to how to change poor work performance or maintain good performance is to increase work motivation. Employees who have high work motivation will produce good performance, they will be more involved in all aspects of their work and will also be easier to work with to achieve goals. The results of this study are also in line with the results of previous research conducted by (Listianto & Setiaji, 2005) which states that work motivation has a significant effect on performance. The higher the employee's work motivation, the higher the employee's performance will be.
CONCLUSION
Information technology, work discipline, and work motivation have a positive and significant effect on the performance of the staff of the Menui Islands sub-district office. Changes in information technology, work discipline, and work motivation have a positive and significant effect on employee performance. The greater the mastery of information technology and the higher the work discipline and work motivation, the higher the performance of employees at the Menui Islands District Office.
Work discipline has a positive and significant effect on employee performance at the Menui Islands District Office. Changes in work discipline are positive and significant in increasing employee performance. The better the work discipline, the higher the employee performance at the Menui Islands District Office. Work motivation has a positive and significant effect on employee performance at the Menui Islands District Office. Changes in work motivation are positive and significant in increasing employee performance. The better the work motivation, the higher the employee performance at the Menui Islands District Office.
Flow control over a NACA 0012 airfoil using dielectric-barrier-discharge plasma actuator with a Gurney flap
Flow control study of a NACA 0012 airfoil with a Gurney flap was carried out in a wind tunnel, where it was demonstrated that a dielectric-barrier-discharge (DBD) plasma actuator attached to the flap could increase the lift further, but with a small drag penalty. Time-resolved PIV measurements of the near-wake region indicated that the plasma forcing shifted the wake downwards, reducing its recirculation length. Analysis of wake vortex dynamics suggested that the plasma actuator initially amplified the lower wake shear layer by adding momentum along the downstream surface of the Gurney flap. This enhanced mutual entrainment between the upper and lower wake vortices, leading to an increase in lift on the airfoil.
Introduction
The dielectric-barrier-discharge (DBD) plasma actuator has received much attention over the last two decades due to its unique advantages over traditional actuators. It usually consists of an exposed electrode and an embedded electrode, separated by a dielectric sheet. The electrodes are energized at high voltage and frequency, causing the air over the embedded electrode to ionize which induces a wall-jet flow (Jukes et al. 2006). The DBD plasma actuator can be rapidly turned on and off as required.
The plasma actuator has been used to improve the aerodynamics of an airfoil, either by placing it near the leading edge as a separation control device (Post and Corke 2006; Sosa et al. 2007; He et al. 2009) or near the trailing edge as a plasma flap (He et al. 2009; Little et al. 2010). Both methods can result in an increase in lift coefficient. He et al. (2009) proposed a concept of "virtual section shape" using the plasma actuator, while Okita et al. (2008) used the DBD plasma actuator as a vortex generator to control flow over a NACA 0024 airfoil. More details on the DBD plasma actuator can be found elsewhere (Moreau 2007; Corke et al. 2009, 2010). The Gurney flap is a small flat plate attached to the trailing edge on the pressure side of an airfoil, which enhances the aerodynamic performance of aircraft, wings and high-lift devices (Wang et al. 2008). Li et al. (2002, 2003), Lee and Ko (2009) and Lee (2010) applied the Gurney flap to control a NACA 0012 airfoil. They all concluded that the Gurney flap could increase the lift coefficient, where the Gurney flap increases the effective camber of the airfoil to enhance the lift performance. However, there is an inevitable drag penalty associated with this lift enhancement. Therefore, the Gurney flap would be more useful if it could be stored during cruise. Traub et al. (2004) carried out a wind tunnel study of a NACA 0015 airfoil with a jet slot located at 2% chord upstream of the trailing edge. The jet Gurney flap with a 0.68% momentum coefficient resulted in lift and momentum increases equivalent to a 0.75% chord Gurney flap. Moreover, the power required by the jet flap was less than the power loss due to the drag penalty of the conventional Gurney flap at low angles of attack. Traub and Agarwal (2008) further undertook an investigation into the Gurney flap in conjunction with a jet flap at low Reynolds numbers. They found that the jet forcing further increased the lift coefficient given by the Gurney flap alone. Recently, Zhang et al. (2009) numerically investigated the effect of a "plasma Gurney flap" on the aerodynamic characteristics of a NACA 0012 airfoil, where the plasma actuator was attached to the blunt trailing edge of the airfoil. Their results indicated that the "plasma Gurney flap" increased the lift and nose-down moment of the airfoil in a similar way to the conventional Gurney flap. It was also shown that the Kármán vortices were weakened near the trailing edge, which reduced the drag and thus increased the lift-to-drag ratio.
The aim of the present investigation is to obtain a greater lift enhancement with less drag penalty by combining a DBD plasma actuator with a Gurney flap. Here, a possibility of adopting the DBD plasma actuator to enhance the lift on a NACA 0012 airfoil was experimentally investigated by attaching it to the Gurney flap. A dynamic force balance was used to measure the lift and drag forces, while a time-resolved PIV system was employed to measure the velocity field in the near-wake region to investigate the vortex dynamics associated with the lift enhancement.
Experimental set-up
The experiment was conducted in a low-speed wind tunnel with a test section of 1.5 m × 0.3 m × 0.3 m at the University of Nottingham. The test model used was a NACA 0012 airfoil with the chord length c = 100 mm and the span b = 250 mm, giving the aspect ratio b/c = 2.5. Tests were carried out at three free-stream velocities of U_∞ = 3.0, 4.3 and 5.3 m/s, corresponding to Reynolds numbers Re = 20,000, 28,000 and 35,000, respectively, based on the airfoil chord length. End plates of 300 mm × 200 mm in the streamwise and vertical directions were used to improve the two-dimensionality of the flow field.
The Gurney flap was attached to the airfoil perpendicular to the bottom surface at the trailing edge, as shown in Fig. 1. Three different Gurney flap configurations incorporating DBD plasma actuators were tested, of height h = 3.0 mm (3.0%c), 4.5 mm (4.5%c) and 7.0 mm (7.0%c), as listed in Table 1. For the 7.0 mm Gurney flap, for example, the plasma actuator consisted of a 2.5-mm-wide upper copper electrode with a 4.5-mm-wide lower copper electrode (see Fig. 1). The thickness of both copper electrodes was 17 μm and the plasma actuator spanned the central 220 mm of the airfoil. The Gurney flap was constructed from 250-μm-thick Mylar sheet, which also served as the dielectric for the DBD actuator. The lower electrodes of the plasma actuators were covered by an insulating tape to prevent plasma discharge on the upstream side of the Gurney flap. It was not possible to produce Gurney flaps less than 3.0 mm in height since an arc would occur between the upper and lower electrodes around the tip of the flap, preventing the formation of stable DBD plasma.
A two-component dynamic force balance was mounted on the side wall of the test section to measure the time history of lift and drag coefficients on the airfoil. The balance consisted of two parallelograms arranged in an "L" shape. Each parallelogram was instrumented with four strain gauges, which were wired to form two Wheatstone bridges. The bridge input voltage was supplied from two Fylde FE-379-TA transducer amplifiers, while the bridge output was recorded at 2 kHz using an IoTech 488/8SA analog-to-digital converter and stored on a computer. Care was taken to shield the strain gauges and transducer amplifiers from the RF noise emitted by the DBD by using copper Faraday cages and shielded cables. This reduced the noise pickup to less than ±4 μV (±1 μN). Further details of the force balance can be found elsewhere (Jukes and Choi 2009a, b, c). The airfoil model was mounted on the force balance through a rod located at the centerline, 25%c from the leading edge. Force calibration was performed by attaching precision weights to the supporting rod vertically for lift force and horizontally via a low-friction pulley system for drag force. The accuracy of force measurements was better than ±0.01 N and the angle of attack for the airfoil could be set within ±0.25°. Time-resolved particle image velocimetry (PIV) from Dantec Dynamics was used to measure the velocity field in the airfoil wake. This system consisted of a Phantom V12.1 high-speed camera, a Litron LDY302-PIV 100 W Nd:YLF laser, a seeding particle generator and a computer. The seeding particles used here were 1-μm-diameter droplets of olive oil. For the flow around the airfoil, the field of view was approximately 70 mm × 40 mm, with 1,280 × 800 pixels resolution in the streamwise and vertical directions, respectively. The time delay between laser pulses was typically 50-95 μs, where the timing was controlled within Dantec Dynamic Studio v3.0 and Dantec timing hardware. This typically resulted in a particle displacement of around 6 pixels in the free-stream region (about 20% of an interrogation area). The sampling frequency of the camera was 2 kHz, and 4,000 image pairs were recorded continuously for each case. This included 1 s of data without plasma control and 1 s of data with plasma control (1.3 and 0.7 s of data, respectively, for the 7.0%c height Gurney flap).
Dantec Dynamic Studio v3.0 was used to calculate the velocity fields from the acquired data. The interrogation window was 32 × 32 pixels with 50% overlap in both the streamwise and vertical directions. Velocity vectors were computed using a recursive cross-correlation technique (adaptive correlation with local median filter). Vectors were validated using local and median filters by calculating the deviation from the surrounding vectors. This always resulted in less than 5% erroneous vectors, which were replaced using interpolation of the surrounding vectors. The origin of the coordinate system used here is located at the trailing edge of the airfoil at 0° angle of attack, with the x and y axes pointing in the streamwise and vertical directions, respectively (see Fig. 1). The plasma actuator was driven sinusoidally at high AC voltage and high frequency using a PSI PG1040F power supply, with excitation voltage E and excitation frequency f as listed in Table 1. In order to quantify the plasma forcing magnitude, the force induced by each plasma actuator was calculated based on the momentum theory (Jukes and Choi 2009b). Here, a control volume 18 mm wide at 15 mm downstream of the upper electrode edge was chosen in quiescent air. The thrust per unit width is equal to the total momentum flux across this volume. Therefore, the plasma-induced force can be calculated by integrating the velocity. Figure 2 shows the change in the momentum flux M with time, which quickly settles to a quasi-steady value after 0.2-0.3 s. The initial overshoot is due to the formation of the starting vortex. It is shown that the 7.0%c plasma actuator induced the greatest momentum flux, while the 3.0%c plasma actuator had the smallest due to the difference in E.
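As an illustration of the momentum-theory estimate described above, the sketch below integrates the momentum flux ρu² of a wall-jet profile across the control volume to obtain the thrust per unit width; the density value and the velocity profile are hypothetical placeholders, not measured data.

```python
# Minimal sketch (hypothetical data): thrust per unit width from the momentum
# theory, i.e. the integral of rho*u^2 across a wall-normal profile measured
# in quiescent air. The profile below is an invented wall-jet shape.
import numpy as np

rho = 1.2                                        # air density, kg/m^3 (assumed)
y = np.linspace(0.0, 0.018, 100)                 # wall-normal positions, m (18-mm-wide control volume)
u = 2.0 * (y / 0.002) * np.exp(1 - y / 0.002)    # hypothetical wall-jet velocity profile, m/s

M = np.trapz(rho * u**2, y)                      # momentum flux per unit width, N/m
print(f"plasma-induced thrust per unit width M = {M * 1e3:.1f} mN/m")
```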
The momentum coefficient of the plasma jet, defined by C_μ = 2M_m/(ρU_∞²c), is given in Fig. 3. This shows that the momentum coefficient C_μ takes a value between 0.04 and 1.39%, where the largest momentum coefficient is provided by the 7.0%c plasma actuator. [Actuator configurations: h/c = 3.0%, E = 6.9 kV p-p, f = 19.8 kHz; h/c = 4.5%, E = 9.8 kV p-p, f = 17.8 kHz; h/c = 7.0%, E = 9.8 kV p-p, f = 18.5 kHz.]

Results and analysis
Aerodynamic forces
The instantaneous lift and drag coefficients obtained from dynamic force balance measurements are shown in Fig. 4 as an example of the control effect of plasma forcing. When the plasma actuator is turned on, both the lift and drag coefficients are increased, by about 35 and 30%, respectively. Figure 5 shows the time-averaged lift and drag coefficients of a NACA 0012 airfoil versus angle of attack α at Re = 20,000, showing that the lift coefficient is increased with an increase in the Gurney flap height. When the plasma actuator is turned on, the lift coefficient is increased for all angles of attack tested here. The plasma forcing with C_μ = 1.15% on a 4.5%c Gurney flap seems to achieve a lift coefficient comparable to a 7.0%c Gurney flap without plasma control. The plasma forcing with C_μ = 1.39% on a 7.0%c Gurney flap shifts the lift coefficient upwards by about 0.15 for all angles of attack. Therefore, the DBD plasma actuator on the Gurney flap acts to increase the equivalent flap height. However, the drag coefficient also increases with the Gurney flap height (Fig. 5b). The additional lift produced by the present plasma forcing is similar to the results obtained with the jet flap (Traub et al. 2004; Traub and Agarwal 2008), the plasma flap (He et al. 2009) and the plasma Gurney flap (Zhang et al. 2009).
Based on thin-airfoil theory, Liu and Montefort (2007) proposed that the lift coefficient increment ΔC_L produced by Gurney flaps should be proportional to the square root of the flap height. On the other hand, the drag coefficient increment ΔC_D produced by Gurney flaps should be linearly proportional to the flap height (Greenblatt 2011). These predictions were confirmed by Bechert et al. (2000) and Yu et al. (2011), who carried out wind tunnel tests and numerical simulations of Gurney flaps, respectively. Figure 6a compares the present data on the NACA 0012 to show ΔC_L as a function of h/c at zero angle of attack, indicating that the lift increment by Gurney flaps is given by ΔC_L = 1.5√(h/c). Also shown in Fig. 6a are the data obtained by Bechert et al. (2000) on an HQ17 airfoil, which can be represented by ΔC_L = 3.2√(h/c). The difference in the proportionality constant is due to the difference in the airfoil type as well as the Reynolds number (Greenblatt 2011). The drag increments ΔC_D by Gurney flaps at zero angle of attack are shown in Fig. 6b as a function of h/c, suggesting that a linear relationship given by ΔC_D = 0.6(h/c) is valid for a wide range of Reynolds numbers (Re = 20,000–1,000,000) and non-dimensional flap heights (h/c ≤ 7.0%), even for different airfoil types. The lift coefficient increment ΔC_L by plasma forcing is shown in Fig. 7 as a function of the plasma momentum coefficient C_μ, indicating that the relationship can be represented by ΔC_L = 1.5√C_μ. This is in good agreement with the theoretical prediction for a jet flap by Siestrunck (1961), suggesting that the mechanism of lift increment by plasma forcing over a Gurney flap is similar to that of a jet flap. A comparison of the data given in Figs. 6a and 7 suggests that plasma forcing with C_μ = 1% has an effective Gurney flap height increment equivalent to h/c = 1%.
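A short numerical check of the two scaling relations quoted above: since ΔC_L = 1.5√(h/c) for the Gurney flap and ΔC_L = 1.5√C_μ for plasma forcing on this airfoil, equal values of h/c and C_μ give equal lift increments, which is the flap-height equivalence stated at the end of the paragraph. The snippet is purely illustrative.

```python
# Illustrative check of the scaling laws quoted above (NACA 0012 data):
# dCL_flap(h/c) = 1.5*sqrt(h/c) and dCL_plasma(C_mu) = 1.5*sqrt(C_mu),
# so C_mu = 1% gives the same lift increment as a flap-height increment of h/c = 1%.
from math import sqrt

def dCL_flap(h_over_c):
    return 1.5 * sqrt(h_over_c)

def dCL_plasma(C_mu):
    return 1.5 * sqrt(C_mu)

print(dCL_flap(0.01), dCL_plasma(0.01))   # both 0.15
```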
Characteristics of the wake

Figure 8 shows the time-averaged velocity distribution in the near-wake region of the Gurney flap without and with plasma control for α = 2°, h/c = 7.0%, Re = 20,000, C_μ = 1.39%. Time averaging was performed over 2,200 and 1,400 instantaneous snapshots for the plasma-off and plasma-on cases, respectively. Figure 8 indicates that additional plasma forcing makes the recirculation length behind the Gurney flap much shorter. Here, the recirculation length, l_r, was measured as the distance from the Gurney flap to the saddle point, as shown in Fig. 8c, d, which is summarized in Fig. 9. This figure clearly shows that the recirculation length is reduced by plasma forcing, where the reduction becomes greater with an increase in the pre-stall angle of attack. Zhang et al. (2009) and Little et al. (2010) also observed a reduction in the length of the recirculation region by the plasma Gurney flap and the plasma flap, respectively.
The time-averaged streamwise velocity distribution in the wake is shown in Fig. 10. This indicates that Gurney flap control turns the wake downwards, which is much greater with additional plasma control. According to Lee and Ko (2009), the downward turning of the near wake increases the suction pressure over the Gurney flap, leading to an increase in the lift coefficient of the airfoil. It should be noted that there is a greater velocity defect in the wake, and therefore a greater drag, with the Gurney flap and plasma control. However, as shown in Fig. 10, the velocity defect recovers rapidly with additional plasma forcing. Figure 11 shows the distribution of turbulent kinetic energy behind the 7%c Gurney flap without and with plasma at α = 2°. This shows that the turbulent kinetic energy in the near-wake region is increased by plasma forcing, where the energy peak is moved much closer to the Gurney flap. However, the turbulent kinetic energy quickly reduces downstream, reflecting an initial increase in the velocity deficit, followed by a reduction further downstream (see Fig. 10). The wake vortex dynamics of the near-wake region that bring about these changes in turbulent kinetic energy will be discussed further in the "Wake vortex dynamics" section. Figure 12 shows the streamwise distribution of the wake half-width b_1/2 obtained from the velocity deficit profiles given in Fig. 10. Here, the wake half-width is defined as the distance across the wake at which the velocity defect becomes a half of its maximum value. It is shown that the minimum wake width position is shifted closer to the Gurney flap by the DBD plasma actuator. Here, the minimum half-width location corresponds to the end of the recirculation region. Our results, therefore, suggest that the recirculation region becomes much shorter and narrower with additional plasma forcing on the Gurney flap.
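As a small illustration of the wake half-width definition just given, the sketch below finds b_1/2 from a velocity-defect profile; the Gaussian profile and free-stream velocity are hypothetical placeholders.

```python
# Minimal sketch (hypothetical profile): wake half-width b_1/2, taken here as the
# distance across the wake between the two points where the velocity defect
# falls to half of its maximum value, following the definition quoted above.
import numpy as np

U_inf = 3.0                                   # free-stream velocity, m/s (assumed)
y = np.linspace(-0.02, 0.02, 401)             # vertical coordinate, m
u = U_inf - 1.0 * np.exp(-(y / 0.004)**2)     # hypothetical wake velocity profile, m/s

defect = U_inf - u
inside = y[defect >= 0.5 * defect.max()]      # region where the defect exceeds half its maximum
b_half = inside.max() - inside.min()
print(f"wake half-width b_1/2 = {b_half * 1e3:.2f} mm")
```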
Using a technique developed by von Ellenrieder and Pothos (2008) and Godoy-Diana et al. (2009), the wake deflection angle θ between the free-stream direction and the line of minimum mean streamwise velocity (the maximum defect velocity) is obtained, as shown in Fig. 13. These results are summarized in Fig. 14, showing that the airfoil wake is shifted downwards by plasma forcing by up to Δθ = 3°.
[Fig. 9: summary of the recirculation length versus angle of attack at Re = 20,000, for h/c = 7.0% with plasma off and with plasma on (C_μ = 1.39%).]
Wake vortex dynamics
The Gurney flap decreases the dominant shedding frequency of the wake vortex. This reduction becomes even greater when the plasma forcing is applied. The result for α = 2°, Re = 35,000 is shown in Fig. 15, depicting that the dominant shedding frequency is reduced by about 27 and 32% by the Gurney flap without and with plasma forcing, respectively. It is worth noting, however, that the dominant shedding frequency does not change within the near-wake region. The reduction in the power spectrum peak indicates that there might be a possible noise reduction by the plasma control (Kuo and Sarigul-Klijn 2010).
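A minimal sketch of how a dominant shedding frequency can be extracted from a time-resolved velocity signal with Welch's power spectral density is given below; the synthetic signal and its peak frequency are placeholders, not the measured spectra of Fig. 15.

```python
# Minimal sketch (synthetic signal): estimating the dominant vortex-shedding
# frequency from a time-resolved velocity signal via Welch's power spectral
# density, as one plausible way to obtain spectra like those discussed above.
import numpy as np
from scipy.signal import welch

fs = 2000.0                                  # PIV sampling frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)
v = np.sin(2 * np.pi * 120.0 * t) + 0.3 * np.random.randn(t.size)  # hypothetical wake velocity

f, Pxx = welch(v, fs=fs, nperseg=512)
f_shed = f[np.argmax(Pxx)]                   # frequency of the spectral peak
print(f"dominant shedding frequency ~ {f_shed:.1f} Hz")
```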
The Gurney flap increases the dimensionless frequency fh/U_∞, while the additional plasma forcing seems to decrease the shedding frequency for α ≤ 4°, as shown in Fig. 16a. Lee and Ko (2009) suggested that the reduction in shedding frequency with an increase in the flap height was due to the increased distance between the two separating shear layers, requiring more time for the opposite shear layer to cross the wake. It has been shown by Yarusevych et al. (2009) that the variation of the non-dimensional shedding frequency of a NACA 0025 airfoil can be dramatically reduced when the lateral distance d between the upper and lower shear layers is used as the length scale. Accordingly, we defined the Strouhal number by fd/U_∞ to show the non-dimensional shedding frequency as a function of the angle of attack in Fig. 16b. It shows that the variation of frequency is indeed reduced to give fd/U_∞ ≈ 0.35 as compared to the scaling based on the flap height, fh/U_∞ (see Fig. 16a). Anomalous data at α = 4° for the 7.0%c Gurney flap are believed to be due to a greater uncertainty involved in determining d, as the field of view for PIV measurements is limited.

[Fig. 19: evolution of the phase-averaged spanwise vorticity field ω_z c/U_∞ superposed with velocity vectors (with 0.8U_∞ subtracted from the streamwise velocity) at four phases 0 (a, b), 0.5π (c, d), π (e, f) and 1.5π (g, h) for the airfoil with Gurney flap without plasma control (left column) and with plasma control (right column) at α = 2°, h/c = 7.0%, Re = 20,000, C_μ = 1.39%. Fig. 20: evolution of the phase-averaged vertical velocity field V/U_∞ at the same four phases and conditions.]

Figure 17a-i shows an evolution of the spanwise vorticity for α = 2°, h/c = 7.0%, Re = 20,000, C_μ = 1.39%. The phase angle of the shedding vortices is indicated in Fig. 17j based on the spanwise vorticity at x/c = 0.2, y/c = -0.1. Figure 17a, b show the formation and detachment of the vortex from the lower shear layer before the plasma forcing is applied. The wake shear layers interact with each other by drawing fluid from the opposite side of the Gurney flap across the wake, thus forming alternating vortex shedding downstream. This is similar to the Kármán vortex street from a circular cylinder (Gerrard 1966; Cantwell and Coles 1983).
When the plasma is actuated (from Fig. 17c forward, as indicated by the red Gurney flap in this and subsequent plots), the momentum is added to the lower shear layer from the downstream side of the Gurney flap, leading to a stronger vortex on this side. Thus, the vortex grows quicker and it reaches across the wake faster to entrain the shear layer from the opposite side of the airfoil. Therefore, the upper wake vortex is enhanced by the entrainment effect of the stronger lower wake vortex. During its formation process, the upper vortex also entrains the lower wake shear layer across the wake close to the Gurney flap (Fig. 17i). This leads to a shorter vortex formation length. Such process is very similar to that observed around a circular cylinder under plasma actuator control, as described by Jukes and Choi (2009b).
It was observed that no matter when the plasma actuator turned on, the lower shear layer was always magnified first. This is followed by the enhanced mutual entrainment between the upper and lower wake vortices. Figure 18 shows a sequence of vortical flow in the near-wake region, where the plasma actuator is turned on as the upper wake vortex is detaching from the shear layer. This is the opposite case to that in Fig. 17, showing a similar control process as mentioned above, although it takes two extra periods before the lower wake vortex is magnified.
For a better understanding of the variations of the flow field induced by the plasma forcing, phase-averaged results based on PIV measurements are given in Figs. 19 and 20. The technique used here to determine the phase angle was similar to that of Kim et al. (2006) and Zhou and Yiu (2006). Here, the spanwise vorticity at x/c = 0.2, y/c = -0.1 was used as a reference signal. Phase averaging was performed over 78 and 45 cycles for the plasma-off and plasma-on cases, respectively. Figure 19 shows the phase-averaged spanwise vorticity field for α = 2°, h/c = 7.0%, Re = 20,000, C_μ = 1.39%. The enhanced entrainment effect between the upper and lower wake vortices due to additional plasma forcing can be clearly seen here. For example, in Fig. 19a, b, the vortex on the upper shear layer has just reached the shear layer on the lower surface so that the vortex formation process of the lower vortex is about to start. With plasma, the lower vortex develops stronger and closer to the downstream side of the Gurney flap (see Fig. 19c, d). The effect of added momentum is especially evident in Fig. 19e, f, where the entrainment of the upper vortex has occurred more rapidly due to the enhanced lower shear layer. Here, it can also be seen that the upper shear layer has been entrained toward the DBD plasma actuator on the Gurney flap. This may be due to the suction effect of the actuator as it accelerates fluid in the negative y-direction (downwards). In fact, the upper shear layer appears to change its direction sharply, diverting vertically down along the rear surface of the Gurney flap (see Fig. 19f at x/c = 0, -0.1 ≤ y/c ≤ 0). As a result, the vortex formation length is reduced from 0.12c to 0.06c, based on the position of the maximum streamwise velocity fluctuation (Zdravkovich 1997).
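As a rough illustration of the phase-averaging procedure described above, the sketch below bins snapshots by the phase of a reference vorticity signal. Using the Hilbert transform to obtain the phase is an assumption made here for illustration; the paper follows the technique of Kim et al. (2006) and Zhou and Yiu (2006), which may differ in detail. All arrays are synthetic.

```python
# Minimal sketch (synthetic data): phase-averaging PIV snapshots using a
# reference signal, here the spanwise vorticity sampled at a fixed point.
# Phase from the analytic signal (Hilbert transform) is an assumption, not
# necessarily the exact procedure used in the paper.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n_snapshots, ny, nx = 2000, 40, 64
fields = rng.standard_normal((n_snapshots, ny, nx))       # hypothetical vorticity snapshots
ref = fields[:, 20, 32]                                    # reference point, e.g. x/c = 0.2, y/c = -0.1

phase = np.angle(hilbert(ref - ref.mean()))                # instantaneous phase in [-pi, pi]
bins = np.linspace(-np.pi, np.pi, 9)                       # eight phase bins
phase_avg = np.stack([fields[(phase >= lo) & (phase < hi)].mean(axis=0)
                      for lo, hi in zip(bins[:-1], bins[1:])])
print(phase_avg.shape)                                      # (8, 40, 64): one averaged field per phase bin
```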
The velocity field in the near-wake region is also changed by the additional plasma forcing. This is shown in Fig. 20, where the vertical velocity magnitude is increased by plasma control. This explains the reduction in the vortex formation length behind the Gurney flap. Troolin et al. (2006) have pointed out that such an increase in the net negative vertical velocity in the airfoil wake leads to an enhancement of circulation and thus of the lift force.
Conclusions
Flow control around a NACA 0012 airfoil by the DBD plasma actuator on a Gurney flap has been investigated. A dynamic force balance and time-resolved PIV were used to measure the lift and drag coefficients and to study the velocity and vorticity fields. The present results showed that both lift and drag coefficients were increased when the plasma actuator was turned on. They also indicated that an additional plasma forcing on the Gurney flap with a jet momentum coefficient C_μ = 1% has an effective increment in the flap height equivalent to h/c = 1%.
The Gurney flap reduced the dominant shedding frequency of the wake vortex and its power spectral peak. These reductions became even greater when the plasma forcing was applied. The velocity distribution in the near-wake region indicated that the Gurney flap turned the wake downwards, and this turning became much greater with additional plasma control. This can be interpreted as an increase in the suction pressure over the Gurney flap, thereby increasing the lift coefficient of the airfoil. The recirculation region behind the Gurney flap became shorter and narrower with the plasma forcing, which led to a stronger vortex on the lower side. Thus, this vortex can reach across the wake faster to entrain the shear layer from the opposite side of the airfoil. It was observed that, no matter when the plasma actuator is turned on, it is always the lower shear layer that is magnified first by the plasma. It was also found that the negative vertical velocity in the airfoil wake was increased by plasma control, leading to an enhancement of circulation and thus of the lift force.
Urease inhibitors technologies as strategy to mitigate agricultural ammonia emissions and enhance the use efficiency of urea-based fertilizers
Experiments were conducted to evaluate the stability and degradation of NBPT under storage conditions and to quantify urease activity, ammonia losses by volatilization, and agronomic efficiency of urea treated with different urease inhibitors, measured in the field. Experiments included urea treated with 530 mg NBPT kg−1 (UNBPT) in contact with six P-sources (monoammonium phosphate-MAP; single superphosphate; triple superphosphate; P-Agrocote; P-Phusion; P-Policote), with two P-concentrations (30; 70%); the monitoring four N-technologies (SoILC; Limus; Nitrain; Anvol); and the application of conventional urea (UGRAN) or urea treated with urease inhibitors as topdressing in three maize fields, at three N rates. It is concluded that: the mixture of UNBPT and P-fertilizers is incompatible. When MAP granules were coated to control P-release (P-Agrocote), the degradation of NBPT was moderate (approximately 400 mg kg−1 at the end of the storage test). SoILC and Limus solvent technologies extended the NBPT half-life by up to 3.7 and 4.7 months, respectively. Under field, each inhibition technology reduced urease activity, and lowered the intensity of ammonia emission compared to UGRAN by 50–62%. Our results show that the concentration of NBPT is reduced by up to 53.7% for mixing with phosphates. In addition, even with coatings, the storage of mixtures of urea with NBPT and phosphates should be for a time that does not reduce the efficiency of the inhibitor after application, and this time under laboratory conditions was 168 h. The reduction of NBPT concentration in urea is reduced even in isolated storage, our results showed that the half-life time is variable according to the formulation used, being 4.7, 3.7, 2.8 and 2.7 days for Limus, SoILC, Nitrain and Anvol, respectively. The results of these NBPT formulations in the field showed that the average losses by volatilization in the three areas were: 15%, 16%, 17%, 19% and 39% of the N applied, for SoILC, Anvol, Nitrain, Limus and urea, respectively. The rate of nitrogen application affected all agronomic variables, with varied effects in Ingaí. Even without N, yields were higher than 9200 kg ha−1 of grains. The increase in nitrogen rates resulted in linear increases in production and N removal in Luminárias and Ingaí, but in Lavras, production decreased above 95.6 kg ha−1 of N. The highest production in Lavras (13,772 kg ha−1 of grains) occurred with 100 kg ha−1 of N. The application of Anvol reduced the removal of N in Ingaí.
Nutritionally, N stands out as the nutrient most required by crops. It assists in growth and increases plant yields 2 . Owing to tropical conditions, N undergoes several transformations in the system. The most important are the ammonia (NH 3 ) emissions to the atmosphere 3 , particularly when urea is used without any associated technology [2][3][4][5][6] . The losses of NH 3 by volatilization for conventional urea in Brazil range between 20 and 40% of the applied N. The N application rate for maize is usually 100 kg ha −1 , and a ton of urea costs nearly US$ 1200. The NH 3 losses can therefore cost between US$ 54 and 107 per hectare, or between US$ 54,000 and 107,000 on a 1000 ha property.
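A small worked example of the cost estimate above; it assumes urea contains about 45% N, a figure not stated in the text but usual for urea fertilizer.

```python
# Worked example of the volatilization cost estimate above.
# Assumption (not stated in the text): urea is ~45% N by mass.
n_rate = 100.0          # kg N applied per hectare
urea_n_content = 0.45   # mass fraction of N in urea (assumed)
urea_price = 1200.0     # US$ per ton of urea

urea_applied = n_rate / urea_n_content / 1000.0      # tons of urea per hectare (~0.22)
cost_per_ha = urea_applied * urea_price              # ~US$ 267 of urea per hectare
for loss in (0.20, 0.40):                            # 20-40% of applied N lost as NH3
    print(f"{loss:.0%} loss: US$ {cost_per_ha * loss:.0f} per ha, "
          f"US$ {cost_per_ha * loss * 1000:.0f} on 1000 ha")
```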
Technologies that reduce N losses through ammonia and nitrous oxide emissions are strategies to be used in cleaner agriculture [2][3][4][5][6][7] . National policies that focus on good agricultural practices have been implemented as an attempt to minimize the adverse effects caused by volatilization and its environmental consequences. In 2001, the European Union (EU) adopted a directive to mitigate emissions of polluting gases, such as ammonia and nitrous oxide, regulating reductions of 30% of NH 3 for some countries until 2030, relative to 2005 8 .
Thus, researchers and fertilizer industries have studied and created alternatives to reduce greenhouse gas emissions and increase N efficiency. Such alternatives include treating urea with some additive and coating the granules of fertilizers 9,10 .
In the soil, urease activity is essential for urea breakdown and N release, but increased enzymatic activity may cause high N losses in agriculture 11,12 . Thus, some products, like urease inhibitors, can inhibit the activity of the enzyme 6,13 . Currently, this group of technologies includes the association of different inhibitors in the same granule of fertilizer (such as the Limus and Anvol technologies) 13,14 or with biocatalysts (Nitrain technology) 15 . However, there is a lack of information on the mechanisms involved in these technologies, and such knowledge is crucial for their proper functioning in the field.
Studies show that fertilizers treated with NBPT (N-(n-butyl) thiophosphoric triamide) can reduce NH 3 emissions by up to 78% compared to urea 16 . However, the performance of urease inhibitors depends on the type of solvent used in the additive 12 , storage conditions 17 and factors related to the soil system; they are influenced by urease activity 18 , temperature 11 , pH 19 , and soil moisture 20 . The solvents can improve the stability of the molecule and protect it for a longer time in the field 6,[21][22][23] , but this depends on their properties, including pH, absence of water, and dissolution capacity.
Despite being an interesting strategy for mitigating N losses by volatilization, some particularities related to the formulation of urea treated with NBPT in the fertilizer industry still need to be understood and improved. One example is the mixture of NBPT-treated urea and phosphate fertilizers, which is often commercialized in the market. However, such a mixture may present incompatibility in fertilizers based on a mixture of granules. Despite the relevance of this issue, the literature has almost no reports on the mechanisms involved in the possible inefficiency of NBPT under such conditions. Sha et al. 17 evaluated the stability and efficiency of NBPT after storage with diammonium phosphate (DAP). The researchers demonstrated that the free acidity associated with this fertilizer was very harmful to NBPT, promoting fast degradation of the molecule. Thus, studies with other fertilizers are also needed. At the same time, strategies that allow using urea treated with NBPT together with phosphate fertilizers should be evaluated. Such evaluation can be particularly relevant for phosphate fertilizers coated with polymers, which can prevent direct contact between NBPT and the granule of phosphate fertilizer.
Moreover, the storage conditions of urea treated with NBPT should be better explained. It is well known that high temperature, humidity, and product storage time can affect the use efficiency of treated urea in the field, thus affecting the ability of inhibitors to reduce NH 3 losses 24 . Some improvements in urea treatment processes have been proposed, including the association of inhibitors 21,22 and increasing the NBPT concentration in urea.
In summary, this research includes both laboratory studies and studies conducted under field conditions. Considering the many issues discussed about urea treated with NBPT, the hypotheses of the present study were: (1) the contact between NBPT and conventional fertilizers [Monoammonium Phosphate (MAP), Triple Superphosphate (TSP), and Single Superphosphate (SSP)] leads to NBPT degradation; (2) phosphate fertilizers with associated technologies (Agrocote, Policote, and Phusion) can reduce or prevent NBPT degradation; (3) the varying inhibitor formulations can protect the NBPT molecule when urea is stored without mixing with phosphate; (4) the varying inhibitor technologies available on the market can reduce urease activity and mitigate NH 3 losses in maize production areas; (5) the reduction of volatilization losses can influence the N accumulation in the plant at the flowering stage and increase maize yield.
To test our hypotheses, we first evaluated the effect of mixing urea treated with NBPT with conventional phosphate fertilizers and fertilizers with associated technologies on the reduction of the NBPT concentration in urea. Then, the different urease inhibitor technologies present on the market were stored, and the NBPT concentrations were evaluated at 30-day intervals. Lastly, the effects of the different inhibitor technologies were evaluated under field conditions, including their ability to reduce urease activity and volatilization and to influence N accumulation at flowering and maize yield.
Storage compatibility of mixtures between urea treated with NBPT and phosphate fertilizers
Among the technologies for phosphate fertilizers (Agrocote-P AGR , Phusion-P PH , and Policote-P POL ) in mixture with urea + NBPT (UR NBPT ), the best compatibility was observed between UR NBPT and the P AGR fertilizer, mainly in the 30% P proportion. Regardless of the presence or absence of the technology in phosphate fertilizers, the degradation of the NBPT occurs over the storage time, reaching a concentration below 20% in up to 168 h (Fig. 1).
For the mixture of nitrogen fertilizer (NF) UR NBPT and phosphate fertilizer (PF), the concentration of NBPT was linearly reduced according to the storage time. On average, the conventional fertilizers showed NBPT degradation of 86% in the first 24 h for the 70 PF:30 NF mixture. In this group, the highest degradation in 24 h occurred with the triple superphosphate-P TSP (100%) and the lowest with monoammonium phosphate-P MAP (78%). The same fertilizers in the 30 PF:70 NF ratio showed a mean degradation of 66% in the first 24 h, with maximum degradation for triple superphosphate-P TSP (87%) and minimum for MAP (50%).
The degradation of the NBPT mixed with phosphate fertilizers with associated technologies was, on average, 53.7% for the 70 PF:30 NF ratio and 24.3% for the 30 PF:70 NF ratio. The highest degradation in the first 24 h occurred with the P POL fertilizer, both in the 70 PF:30 NF ratio (78.7%) and in the 30 PF:70 NF ratio (47.7%). The lowest NBPT degradation in both ratios occurred with the P AGR fertilizer (21.5% and 4.3%, for 70 PF:30 NF and 30 PF:70 NF, respectively).
Free-acidity of the phosphate fertilizers
The highest acidity was found in the conventional fertilizers (1.1, 0.39, and 0.29% for P TSP , P SSP , and P MAP , respectively). For the fertilizers with associated technologies, the free acidity was 0.24, 0.15, and 0.14%, for P AGR , P PH , and P POL , respectively.
NBPT degradation in urease inhibitors formulations during storage
The urease inhibitor formulations showed a reduction in the concentration of NBPT during the 8 months of storage (Fig. 2). The values (in mg NBPT kg −1 ) found, respectively, before and after 8 months of storage were: 250 and 0 in Anvol; 460 and 115 in Limus; 600 and 7.5 in Nitrain; and 760 and 78 in SolLC, which is equivalent to reductions of about 250, 4, 80 and 9.7 times the initial concentration.
The daily rates of degradation for this same sequence of treatments were: 1.10, 1.89, 2.39, and 3.40 mg kg −1 . It was not possible to detect the NBPT called Duromide (present in Anvol) by liquid chromatography, nor was it possible to quantify the NPPT in Limus.
In this study, the half-life of the technologies was based on the proportion of NBPT applied during the preparation of the solvents, following the instructions of the manufacturer. Thus, the half-lives for each technology were estimated on this basis.
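For illustration, a half-life can be estimated from stored-sample measurements by fitting first-order (exponential) decay to the NBPT concentrations over time. The sketch below uses invented intermediate points whose endpoints follow the Limus values reported above; first-order decay is an assumption, and the study's own calculation may differ.

```python
# Minimal sketch (illustrative data): half-life of NBPT during storage, assuming
# first-order decay ln(c) = ln(c0) - k*t. Endpoints follow the Limus values
# reported above (460 -> 115 mg kg-1 over 8 months); intermediate points are invented.
import numpy as np

t = np.array([0, 30, 60, 90, 120, 150, 180, 210, 240])        # storage time, days
c = np.array([460, 390, 330, 285, 245, 210, 180, 150, 115])   # NBPT concentration, mg kg-1

slope, intercept = np.polyfit(t, np.log(c), 1)   # fit ln(c) = intercept + slope*t
k = -slope                                       # first-order decay constant, per day
half_life = np.log(2) / k
print(f"k = {k:.4f} per day, half-life ~ {half_life:.0f} days (~{half_life/30:.1f} months)")
```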
Urease inhibition technologies and their relationships with the urease activity in soil and ammonia volatilization
Edaphoclimatic conditions during the experiment

The volumes of accumulated precipitation 7 days after nitrogen cover application (DAA) were 28, 42, and 48 mm for the localities of Lavras, Ingaí, and Luminárias (Fig. 3; Table S1). There was no precipitation on the first DAA. Only in Ingaí was the precipitation in the first 4 days higher than 6 mm (22 and 3 mm on days 2 and 3, respectively). The temperature and air humidity in the first four DAA reached the extremes of 32.4 °C (Lavras) and 80.5% (Lavras and Ingaí). High volumes of precipitation were associated with high temperatures (≥ 26 °C) between days 5 and 7, reaching 27.6, 17, and 42 mm in Lavras, Ingaí, and Luminárias, respectively.
Urease activity and ammonia volatilization
There were significant differences in urease activity and daily volatilization among the sources of N. Under field conditions, each inhibition technology reduced urease activity in the soil and lowered the intensity of ammonia emission peaks, compared to granulated urea without NBPT (U GRAN ), by between 50 and 62%.
In Lavras, the initial urease activity (day 0), before the N fertilization, was 1.73 µg NH 4 + g of dry soil h −1 . Even on the first day after fertilization, the urease activity remained constant in all treatments (Table S4). However, the NH 3 losses were high, accounting for 7% of the applied N for U GRAN (Table S5) on the first day, with expressive losses up to the fourth day. The maximum loss occurred at 1.73 days (23.4 kg ha −1 ), when the urease activity was the highest among the treatments (approximately 3 µg NH 4 + g of dry soil h −1 ) (Tables S3 and S5; Fig. 4). The daily losses in the plots that received urea treated with the technologies were below 1.5% of the N applied up to 3 DAA. In this case, the urease activity was approximately 2 times higher for U GRAN . All formulations had maximum losses around the fifth DAA (7.5, 9.4, 3.8, and 4.8 kg ha −1 of NH 3 , for SolLC, Limus, Nitrain and Anvol, respectively) (Table S6). These losses coincide exactly with the highest values of urease activity where the urea treated with NBPT was applied and are mostly higher than those of U GRAN and the control (Table S4). In this area, the urease inhibitors delayed the maximum daily losses (MDL) by 3.8, 3.4, 3.8, and 3.4 days for SolLC, Limus, Nitrain and Anvol (Table S6; Fig. 4).
In Ingaí, the pattern was different from the other areas. The ammonia loss with U GRAN (23 kg ha −1 of NH 3 ) was maximum on the first day after fertilization, also coinciding with the highest urease activity (1.37 µg NH 4 + g of dry soil h −1 ), while in the other treatments the average urease activity was 0.77 µg NH 4 + g of dry soil h −1 (Tables S3 and S6). The treatments that contained urease inhibitors had their MDL between days 5 and 6 after fertilization, in particular the SolLC technology. However, the losses were lower than 1 kg ha −1 , while the urease activity was less than 0.5 µg NH 4 + g of dry soil h −1 on most of the evaluated days. The formulations with urease inhibitors delayed the MDL by 3 to 5 days (Table S6; Fig. 5).
In Luminárias, U GRAN showed a maximum loss at 3.4 days (15.5 kg ha −1 ), although not associated with the highest urease activity. The urease activity in this area did not differ markedly among treatments, but it was higher where U GRAN was applied and in the treatment without N application (Table S4; Fig. 6). However, we observed an increase in urease activity on the fifth day in the treatments with inhibitors, corresponding to 1.96 µg NH 4 + g of dry soil h −1 on average, thus representing the highest average activity up to the fifth day. This increase was associated with the maximum loss in the treatments with inhibitors between days 5 and 6 (Table S6; Fig. 6). The losses accounted for 6, 11, 11, and 11 kg ha −1 , for SolLC, Limus, Nitrain and Anvol, respectively. In this area, the MDL was shifted by 1.8 to 3.5 days in relation to U GRAN .
Agronomical results in the experiment with maize
There was an influence of the rate of N on all agronomical variables (Fig. 8). In the area of Ingaí, however, the effect of the treatments on the NAS varied according to the treatment.
Yields higher than 9200 kg ha −1 of grains were achieved even in the absence of N fertilization. In the treatment that did not receive N, the average yields for the areas of Lavras, Ingaí, and Luminárias were 12,030, 9243, and 13,867 kg ha −1 , respectively, when the N in the plants varied from 90 to 190 kg ha −1 . In addition, increasing rates of N linearly increased the NAS up to 249 kg ha −1 and yield up to 14,808 kg ha −1 in Luminárias; yield up to 11,781 kg ha −1 and N removal up to 203 kg ha −1 in Ingaí; and N removal up to 247 kg ha −1 in Lavras (Fig. 8).
In Lavras, the NAS decreased to less than 181 kg ha −1 when the rate of N exceeded 95.6 kg ha −1 . Despite the N fertilization, the highest yield in this area (13,772 kg ha −1 of grains) was achieved with the N rate of 100 kg ha −1 . Above this rate, results were gradually reduced. At a similar rate (100 kg ha −1 ), the application of Anvol reduced the NAS in Ingaí, while for the other technologies, each kilogram of N provided by U GRAN , SolLC, Limus and Nitrain increased the NAS by 1.14, 0.32, 1.18, and 0.83 kg ha −1 , respectively (Fig. 8).
Storage compatibility of mixtures between urea treated with NBPT and phosphate fertilizers
The results shown in this work confirm the significant impact of the release of phosphoric acid by phosphate fertilizers during storage on the degradation of NBPT in treated urea 17 . The mixtures in this work were not based on the P 2 O 5 concentration but rather on the proportion of fertilizers. We observed that, especially in the 70% P proportion (30 NF:70 PF), the reduction in the concentration of NBPT was more pronounced. It is known that such phosphate fertilizers are more acidic due to the manufacturing process 25 . Consequently, the higher free acidity, especially from the triple superphosphate (1.10%), drastically accelerated the degradation of the NBPT during storage.
The coating of the Agrocote granules probably reduced the contact between MAP and the urea treated with NBPT. Regarding the other fertilizers (Policote and Phusion), the granulation technologies probably did not coat the entire granules, thus leading to a higher degradation of the NBPT when compared to Agrocote. As there was less acidity in the phosphate fertilizers with associated technologies compared to the conventional fertilizers 26 , there was a higher concentration of the inhibitor at the end of the storage.
NBPT degradation in urease inhibitors formulations during storage
The decrease in the concentration of NBPT over 8 months of storage was verified for all the NBPT formulations. This pattern can be explained as a consequence of storage time and storage temperature (25 °C, considered an average for tropical climates), confirming that the stability of the NBPT depends on both factors 27 . Watson et al. 27 highlighted the influence of the conditions during the storage of urease inhibitors, which can cause future loss of efficacy in the field 17 . In addition, the authors emphasize that, even after application to the urea, the NBPT tends to degrade, limiting the lifespan of the additive in the fertilizer.
In addition to the temperature and storage time factors, we observed that the composition of the formulation also tends to be an important factor for the storage of the product before its application in the field.
It was observed that the stability of the fertilizers treated with NBPT formulations depends on the additive used. The positive results of Limus may be related not only to NBPT but also to the higher concentration of NPPT in the formulation, which is less soluble than NBPT and thus grants higher stability during storage 28 . Even with initial concentrations of NBPT lower than those of Nitrain, a higher residual concentration of the Limus inhibitor was obtained at the end of the storage test (residual NBPT of approximately 40% at the end of the test).
In the Nitrain formulation, the associated additive did not promote efficient stabilization during storage, reducing the residual concentration to a value close to that found in Anvol (less than 20%). NBPT concentrations close to what is usually commercialized before field application are important to improve the efficiency of the additive molecule in reducing NH 3 losses when treated urea is applied in the field. Similarly, a proper concentration of the inhibitor is important to achieve an adequate half-life of the fertilizer during storage. We observed this pattern for SolLC.
However, it is important to highlight that the variation in the half-life of the formulations in this study does not determine the efficacy of the inhibitor in the field. So far, we have only discussed the reduction of the NBPT concentration in the urea after the storage time.
Urease inhibitor formulations and their relationships with soil urease activity and ammonia volatilization
In this study, we observed significant impacts on N losses after the application of urea without formulations capable of inhibiting soil urease activity. This probably occurred due to the rapid hydrolysis of urea favored by the wetter soil associated with high temperatures (especially in Lavras and Luminárias), which resulted in the highest urease activity 16,22,29,30.
Considering the 150 kg N ha−1 supplied to the soil via cover fertilization, the following fractions were lost as NH3 by the end of the evaluation cycle: between 3.4 and 22% with SolLC; between 3 and 33.6% with Limus; between 3.2 and 32.5% with Nitrain; and between 4.2 and 25.8% with Anvol. Overall, the formulations were efficient in reducing NH3 losses in relation to U GRAN, mitigating losses by 50-62%. This is explained by the fact that the NBPT molecule blocks the active sites of soil urease, delaying the hydrolytic action on urea for a period; consequently, there is a reduction in N losses to the atmosphere when these formulations are used 31.
Studies report that the period of inhibition of urease activity by NBPT ranges from 3 to 14 days, depending on the concentration of the inhibitor and on edaphoclimatic conditions 16,27,32-34. The Nitrain formulation showed the best performance in reducing N loss in Lavras. We did not find reports in the literature on the use of this formulation in the field. The evaluation of Nitrain, which is associated with a biocatalyst, provided a good assessment of its effectiveness in the field in reducing NH3 loss rates. This formulation reduced NH3 losses by up to 59.6% in relation to U GRAN, even when the climate conditions (humidity > 60% and average temperature > 20 °C) and urease activity were substantially favorable to volatilization during the first week after fertilization.
The occurrence of rains during the experiment in Ingaí was important to incorporate the urea into the soil, allowing a greater reduction of NH3 losses by the formulations when compared to the other experimental areas 27,35. In this area, SolLC was the most efficient formulation in delaying the period of greatest loss in relation to urea. In Luminárias, the same technology exhibited a reduction in NH3 volatilization of up to 60%, as well as a better reduction in urease activity, compared to U GRAN. The results observed in both cases can be explained by the higher concentration of the inhibitor in the formulation, which provided better inhibition of the urease around the granules. When high concentrations of NBPT (1500 and 2000 mg kg−1) are used, the day of maximum loss can be reached around 6 days after the application of N; however, this value depends on the environmental conditions during application 36. Another aspect may be related to the rate of biodegradation of the inhibitor and its persistence in the soil, which was essential to prolong the day of maximum loss 16.
The N accumulation in the maize was close to (in Lavras) or higher than (in Luminárias) the amounts required by the crop (180 to 200 kg ha−1 of N 37). In Ingaí, in the absence of fertilization, the initial levels of N in the plant were lower, and therefore the supply of the formulations (except for Anvol) resulted in significant increases after the application of N rates.
The difference observed in the accumulation of N in the plant between U GRAN and the formulations in Ingaí can be related to the intense rainfall that incorporated the urea into the soil immediately after the beginning of NH3 volatilization, and to the higher N demand in this area.
In addition, the maize cultivated in Ingaí was responsive to the application of N rates, with increases of up to 28% at the maximum dose (150 kg ha−1 of N). This response is explained by the lower availability of N in the soil. Conversely, the absence of responses to urea, with or without the associated formulations, in most agronomic evaluations in the areas of Lavras and Luminárias is explained by the reserves of organic and mineral N built up in the soils under a no-tillage system consolidated for years. This became evident when the control (without N application) surpassed by 2.8 and 3.2 times the mean yield (4214 kg ha−1) of the state of Minas Gerais (Brazil) for the 2020/2021 crop season; this increase represents 131 and 161 bags of grain per hectare in Lavras and Luminárias, respectively, relying only on the constructed and consolidated soil fertility of these areas. The application of nitrogen, regardless of the NBPT formulation used, together with the nitrogen reserves in the soil, resulted in the accumulation of sufficient nitrogen to drive increased productivity. In addition, it is important to note that nitrogen losses by volatilization did not have a direct impact on the agronomic results analyzed in this study. However, without replacing the N extracted from the soil, through adequate N management and the use of technologies, the reserves built over the years can be reduced in the medium and long term. Therefore, it is important to use formulations that reduce N volatilization and contribute to the replenishment of N in the soil of the cultivated area.
Future perspectives
In summary, our results indicate a reduction in the stability of NBPT when it is stored in a mixture with phosphate fertilizers, unless these fertilizers have a good coating that prevents contact with the NBPT. However, even with the coating, it is important to store the fertilizer under optimal conditions and for a period that does not reduce the efficiency of the NBPT after its application in the field.
In addition, when choosing a formulation containing NBPT, we recommend using technologies that ensure the stability of the additive, or opting for a formulation that initially contains a higher concentration of NBPT. This ensures that, when used by the producer, the formulation still presents a minimum effective concentration to reduce nitrogen losses by volatilization.
Our studies also indicate that different urease inhibitor formulations added to urea can increase the efficiency of N use and extend the longevity of the formulations during storage and their stability in the field, although performance depends on the bioedaphoclimatic conditions of the area.
We emphasize the importance of the half-life of the NBPT formulations during storage. This information should be provided to the consumer through the batch manufacturing information, so that it is known when the acquired technology will have its efficiency reduced by the degradation of the NBPT. With such information, consumers could opt for technologies that are more efficient and stable during storage.
Storage compatibility of mixtures between urea treated with NBPT and phosphate fertilizers
Two assays were conducted to quantify the concentration of N-(n-butyl) thiophosphoric triamide (NBPT) in urea treated with NBPT after its storage with phosphate fertilizers. The assays differed in the proportion of N fertilizer (% NF) and phosphate fertilizer (% PF) in the mixture. In assay I, the proportion was 30 NF:70 PF (granulated urea + NBPT and phosphate fertilizer), and in assay II, a 70 NF:30 PF proportion was used. Both assays were set up in a completely randomized design, with six replications, in a (6 × 5) + 1 factorial scheme. The factorial corresponded to the mixture of urea + NBPT (UR NBPT) with six phosphate fertilizers (conventional: monoammonium phosphate, single superphosphate, and triple superphosphate; and the phosphate fertilizer technologies Policote, Phusion, and Agrocote) and five storage times (0, 24, 72, 120, and 168 h), plus an additional treatment with only UR NBPT.
Urea treatment with urease inhibitor technologies
To standardize the urea granules, the samples were sieved to a diameter between 3.35 and 4 mm. Then, they were homogeneously split using a SONDATERRA fertilizer splitter. The technologies added to granulated urea (46% N) were provided by the manufacturers or developed by the Innovations for Fertilizers Research Group. Except for Anvol (a ready-to-use, commercially available formulation), the entire mixing process with granulated urea was performed at the UFLA Laboratory of Innovations for Fertilizers. The mixture was homogenized in a Super Bio Fast benchtop mixer for 5 min at 30 rpm. Specifications of the studied fertilizers are described below. SolLC technology: prepared with a solvent developed by the InnovaFert-UFLA Research Group. After testing several mixtures of solvents, complete dissolution of the product was obtained in a single solvent, herein named SolLC. A known amount of NBPT (P.A.) was added to the SolLC solvent (used at approximately 20% concentration, which can be raised to 40% without crystal formation). The mixture was then subjected to specific laboratory processes, which ensured the complete dissolution of the powder. After this process, the product was colored (1% by weight of dye) and applied to urea at a rate of 3.04 kg t−1.
Nitrain technology: according to the manufacturer, Nitrain's formulation ensures better nitrogen utilization efficiency, boosting crop growth. This formulation was applied at a rate of 1.9 kg t−1.
Anvol technology: developed and patented by Koch Agronomic Services and considered the industrial standard treatment for this study. The Anvol additive consists of 43% NBPT, of which 27% refers to Duromide (an NBPT adduct and the main active ingredient of the product) and 16% to free NBPT, which would be an "immediately available" fraction of the additive 18. The formulation was applied at a rate of 1.5 kg t−1.
Phosphate fertilizers
Phosphate fertilizers (PF) were used in this study to compare their effects on the degradation of NBPT in urea after storage over time. The phosphate fertilizers used in the present study were purchased from a fertilizer store. The same procedures of item 1.1.2 were applied to standardize the diameter of the fertilizer granules. The conventional fertilizers used were (1) monoammonium phosphate (PMAP), (2) single superphosphate (PSSP), and (3) triple superphosphate (PTSP). Fertilizers containing some type of associated technology were (4) Policote Phos (PPOL): MAP fertilizer with anionic polymers (Policote), containing 10% N and 49% P2O5 36; (5) Phusion (PPH): fertilizer with 40% P2O5, macro- and micronutrients, and incorporation of fulvic and humic acids 36; and (6) Agrocote E-max (PAGR): MAP fertilizer coated with a polyurethane polymer named E-max Release Technology™, consisting of 9% N and 47% P2O5 36.
UR NBPT mixture with phosphate fertilizers
The mixtures between phosphate fertilizers and granulated urea treated with NBPT were performed as follows: five grams of each proportion of the mixtures of UR NBPT and phosphate fertilizers (70:30 and 30:70) were placed in glass vials (25 mL). The vials were closed, manually shaken, and stored in a BOD chamber at 25 °C. After 0, 24, 72, 120, and 168 h of storage, the vials were removed from the chamber. Then, urea granules and phosphate fertilizers were separated (Fig. S1).
UR NBPT dissolution after separation of phosphate fertilizers
After separating the granules, UR NBPT was dissolved in ultrapure water, considering the same proportion used for the phosphate fertilizers. An aliquot of 1 mL of this solution was collected and stored in an amber vial. Then, the samples were stored in the freezer. This procedure aimed to preserve the NBPT molecule in the solution until its quantification.
NBPT quantification in urea after storage
The quantification of the NBPT concentration was performed by liquid chromatography (HPLC) on an Agilent HP1100 system with a diode array detector. The quantification followed the method described by the European Committee for Standardization 38.
Calculation of NBPT longevity (half-life)
By definition, the half-life corresponds to the time required to reduce a certain concentration to 50% of the initial value 27. This parameter was calculated for each treatment by isolating the time variable in the fitted degradation model. Further descriptions of the method are presented in the statistical analysis section.
Determination of the free-acidity in phosphate fertilizers
Free acidity was determined by titration, as described in ABNT NBR 5774:2010 39. Two grams of PF were extracted with neutralized acetone and then titrated with a standardized sodium hydroxide solution (0.1 mol L−1), using alizarin as an indicator. The result was expressed as a percentage (of PF weight) of phosphoric acid.
NBPT degradation in urease inhibitor formulations as a function of storage time
The NBPT degradation of additive formulations containing urease inhibitors mixed with granulated urea was monitored during 8 months of storage. Granulated urea (U GRAN, without NBPT) was treated with four NBPT-containing technologies: SolLC, Limus, Nitrain, and Anvol. The resulting mixtures were stored in sealed plastic bags, in triplicate, under controlled conditions of temperature (25 °C) and relative humidity (76%). The initial and residual NBPT concentrations (every 30 days of storage) of each technology were determined by liquid chromatography (HPLC). The half-life calculation also followed the procedures described earlier.
Efficiency of the urease inhibition technologies under field conditions
Three field experiments were conducted to evaluate the efficiency of urease inhibitor formulations for urea and to verify their effects on urease activity, ammonia volatilization, plant nutrition, and maize yield.
Characterization of the experimental areas
The experiments were carried out simultaneously in different areas of Minas Gerais state, Brazil. To represent the geographical distribution of the study areas, a map was generated using ArcGis software version 11.0 (Fig. S2). Experiments I and II were conducted in a Latossolo Vermelho (Oxisols, USDA) in Lavras (21°16′00″ S; 44°57′27″ W) and Ingaí (21°22′08″ S; 44°53′23″ W). Experiment III was conducted in a Cambissolo Háplico (Inceptisols, USDA) in Luminárias (21°29′59″ S; 45°00′26″ W). Tables S2 and S3 list the characterization of the three soils. These locations have a Cwa climate type, subtropical with dry winters and rainy summers, with a mean temperature between 19.6 and 21 °C. Lavras and Ingaí are cultivated under a no-tillage system under consolidation. In turn, the soil in Luminárias was cultivated with eucalyptus and was recently converted to a grain production area. The succession of soybean/corn or soybean/wheat is adopted in all areas. All areas were cultivated with soybean prior to the installation of the experiments.
Experimental design
The experiments were set up in a randomized block design, with three replications, in a 5 × 4 factorial scheme. The factorial consisted of five treatments [untreated granulated urea without NBPT (U GRAN) and urea treated with the urease inhibition technologies SolLC, Limus, Nitrain, and Anvol] and four N rates (0, 50, 100, and 150 kg ha−1).
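For reference, the treatment structure above can be laid out programmatically. The sketch below (in R, the software used for the statistical analyses) generates the 5 × 4 factorial in three randomized blocks; it is purely illustrative and is not the plot plan actually used in the field.

```r
# Illustrative sketch of the 5 x 4 factorial in a randomized block design
# with three blocks (not the original field plan).
set.seed(42)
sources <- c("UGRAN", "SolLC", "Limus", "Nitrain", "Anvol")
rates   <- c(0, 50, 100, 150)   # kg N ha-1

plan <- do.call(rbind, lapply(1:3, function(b) {
  trt <- expand.grid(source = sources, rate = rates)
  trt <- trt[sample(nrow(trt)), ]          # randomize plot order within the block
  cbind(block = b, plot = seq_len(nrow(trt)), trt)
}))
head(plan)
```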
Each experimental unit consisted of five sowing rows, 5 m in length (10 m2). The three central rows were considered the useful plot area (4.5 m2, after discarding one meter at each end).
Maize sowing and management practices during cultivation
Maize was sown in the first season of 2020 (November), using the AG8070 PRO3 hybrid. The sowing spacing was 0.5 m between rows, totaling 70,000 plants ha−1. Potassium fertilization was conducted 15 days before sowing using 90 kg ha−1 of K2O as potassium chloride. At sowing, 300 kg ha−1 of 13-33-00 were applied. Nitrogen fertilization with urea and the urease inhibitor formulations was performed in one topdressing application, after the development of the third pair of maize leaves (V3 vegetative stage, approximately 25 days after emergence). The fertilizers were manually applied at the 50, 100, and 150 kg N ha−1 rates.
All experimental areas received the same management practices. At 45 days after sowing (DAS), a fungicide based on trifloxystrobin (100 g L−1) + tebuconazole (200 g L−1) and an acaricide based on mancozeb were applied at doses of 0.6 L ha−1 and 2 kg ha−1, respectively. At 60 DAS, a second fungicide application was performed, with a product based on picoxystrobin (200 g L−1) + cyproconazole (80 g L−1) at a dose of 400 mL ha−1. Applications were performed on the same day in the three experimental areas.
Monitoring of weather conditions
Weather information from Lavras was provided by the National Institute of Meteorology (INMET). Data from Ingaí and Luminárias were collected at private climatological stations installed in each experimental area.
Determination of urease activity, soil pH, and ammonia volatilization
Soil samples were collected at the 0-2 cm depth in the fertilization line, with a metal spatula. Samplings occurred on the 1st, 2nd, 3rd, 4th, 5th, 6th, 7th, 9th, 11th, 13th, 15th, 21st, and 30th days after the N fertilization, in the plots that received 150 kg N ha−1 and in the control without N.
The soil was sieved to 2 mm to standardize the samples and remove plant remains. Then, soil samples were stored in properly identified plastic bags and kept under refrigeration at 4 °C for up to 2 weeks before the analysis.
Urease activity: the quantification of urease activity in soil is based on the determination of the ammonia released after incubating a soil sample in urea solution 40. It was determined using 5.0 g of soil, 9 mL of Tris THAM buffer (pH 9), and 1 mL of urea solution (0.2 M), incubated at 37 °C for 2 h. Then, 35 mL of KCl-Ag2SO4 solution (2.5 M, 100 mg L−1) were added to stop the reaction. The solution was stirred and left standing for 5 min at room temperature. The volume was completed to 50 mL with KCl and Ag2SO4, followed by stirring. Then, 20 mL aliquots of the supernatant were transferred to digestion tubes, which received 0.2 g of MgO and were taken to the distillation process 41. Finally, the solution was titrated with H2SO4 solution (0.005 M), using a boric acid solution, methyl red, and bromocresol green as indicators.
Soil pH: determined using a combined electrode inserted in a soil suspension with CaCl2 solution (1:2.5 v/v), according to the methodology proposed by Embrapa 42.
Ammonia (NH3) volatilization: the NH3 losses by volatilization were quantified using the PVC semi-open collector method 43. Three PVC tube bases (20 cm in diameter and 20 cm in height) were installed in each plot, at 10 cm from the maize sowing row. The respective treatments were applied inside the PVC bases, proportionally to the base area, without incorporation either mechanically or by irrigation. Soon after, a collector made of PVC (20 cm in diameter and 50 cm in height) was installed on the first base. Two sponges (0.02 g cm−3 density) embedded with phosphoric acid (60 mL L−1) and glycerin (50 mL L−1) were placed in each collector. The sponge in the upper part of the collector aimed to prevent possible contamination of the lower sponge (used to capture the volatilized ammonia).
At each sampling, the sponges were collected and sent for NH3 quantification, and then replaced by new ones. The chambers were rotated among the three bases of the respective treatments. This practice aimed to reduce the spatial variability of ammonia emission and the formation of a microclimate inside the collectors 44. NH3 losses were quantified daily in the first 7 days after fertilization with the N-based technologies (DAFNT), when NH3 losses are often higher 22. After that, samplings were performed on alternate days and on pre-defined dates (coinciding with the soil sampling for quantification of urease activity).
The sponge solution was extracted by filtration using a Büchner funnel and a vacuum pump, after five sequential washes with 80 mL of deionized water. Aliquots of 20 mL were taken from the obtained extract to determine the N content by distillation. The result was converted to NH3 losses (%) per hectare. The accumulated losses were calculated from the sum of days 1 and 2, to which the losses of day 3 were added, and so on, until day 30 41.
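As a simple illustration of the accumulation procedure described above, the R sketch below converts hypothetical per-sampling NH3-N captures into accumulated losses expressed as a percentage of the applied N; the values are invented for demonstration only.

```r
# Minimal sketch: converting per-sampling NH3-N captures into accumulated
# losses (as % of the 150 kg N ha-1 applied). Values are hypothetical.
applied_n   <- 150                                               # kg N ha-1
daily_n     <- c(0.8, 2.1, 3.5, 3.0, 2.2, 1.4, 0.9, 1.1, 0.6, 0.4)  # kg N ha-1 per sampling
daily_pct   <- 100 * daily_n / applied_n                         # loss per sampling, % of applied N
accumulated <- cumsum(daily_pct)                                 # running total across samplings
round(accumulated, 2)
```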
N accumulation in plant shoot
At the flowering stage, three maize plants were collected per plot (one plant in each of the three central rows). Then, they were dried at 65 °C until constant weight, weighed, and ground. The material was then ground again in a Willey mill for better homogenization. In the laboratory, the N content in the plant material was determined after sulfuric digestion. The standard reference material (NIST-1573A, Tomato Leaves), with a concentration of 29.2 g N kg−1, was used to evaluate the accuracy of the N determination method. The results of the NIST analysis showed 100% recovery.
The N accumulation in maize shoot (NAS) was obtained by multiplying the N content by the mass of the three plants. Then, the obtained value was converted to kg ha−1.
Grain yield
Maize cobs of fifteen plants were harvested from the useful area of all plots to estimate grain yield. The cobs were threshed, the grains were weighed, and moisture was quantified. Maize yield was then estimated using the number of plants per hectare. The sample moisture was used to calculate the correction to a moisture of 130 g of water kg−1.
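The correction mentioned above is the usual dry-matter adjustment; a minimal sketch is shown below, assuming the standard formula that rescales field mass to the 130 g kg−1 moisture basis (the numbers are hypothetical).

```r
# Sketch of the usual dry-matter correction to 130 g kg-1 moisture
# (values are hypothetical, not measured data from this study).
correct_yield <- function(yield_field, moisture_g_kg, target_g_kg = 130) {
  yield_field * (1000 - moisture_g_kg) / (1000 - target_g_kg)
}
correct_yield(9500, moisture_g_kg = 180)   # kg ha-1 expressed at 130 g kg-1 moisture
```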
N removal by the grains
After yield determination, grain subsamples were oven-dried at 65 °C until constant weight. They were ground in a Willey mill, and their N contents were determined by the Kjeldahl method with NIST validation (1573A, Tomato Leaves). The N removal by the grain was obtained by multiplying the N content by maize yield, and then the values were converted to kg ha−1.
We confirm that the plant study complies with relevant institutional, national, and international guidelines and legislation. The seeds used were duly registered in the Brazilian Ministry of Agriculture, Livestock and Supply (MAPA) and were acquired in local trade authorized by the competent body RENASEM (National Registry of Seeds and Seedlings).
Statistical analyses
The NBPT degradation was calculated using a nonlinear regression model. This model is indicated to describe a decay pattern according to Eq. (1):

Y_i = α e^(−k t_i) + ε_i,   (1)

in which Y is the NBPT concentration (mg kg−1); t represents the storage time (h) of the fertilizer granules; α corresponds to the initial condition of the plot, that is, the estimate of 100% of the applied amount of NBPT; k is the value indicating the NBPT degradation, which refers to the variation of NBPT losses over time; and ε_i is the random error associated with the i-th observation.
The half-life was estimated based on the Henderson model, in which Eq. (1) was set equal to half of the initial NBPT concentration and the time variable was isolated, giving HL = ln(2)/k, in which HL represents the NBPT half-life. The statistical analyses were performed using the R 3.3.1 software 45.
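A minimal sketch of this fitting procedure is given below, assuming the exponential decay form of Eq. (1) and hypothetical storage data; it uses nls() in R, the software employed for the statistical analyses.

```r
# Minimal sketch (not the authors' original script): fitting the exponential
# decay model of Eq. (1) and deriving the half-life (HL) for one formulation.
# 'nbpt' is a hypothetical data frame: storage time (h) and NBPT concentration.
nbpt <- data.frame(
  time_h     = c(0, 720, 1440, 2160, 2880, 3600, 4320, 5040, 5760),
  conc_mg_kg = c(530, 480, 430, 390, 350, 310, 280, 250, 230)
)

# Nonlinear least squares fit of Y = alpha * exp(-k * t) + error
fit <- nls(conc_mg_kg ~ alpha * exp(-k * time_h),
           data  = nbpt,
           start = list(alpha = 500, k = 1e-4))

k_hat  <- coef(fit)["k"]
hl_hat <- log(2) / k_hat   # half-life: time at which Y falls to alpha / 2

cat("Estimated degradation rate k:", signif(k_hat, 3), "per hour\n")
cat("Estimated half-life:", round(hl_hat), "hours\n")
```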
Analysis of variance was performed for the field data. When the effects of technologies were significant, they were compared using the Tukey test (P < 0.05). Linear or quadratic regression models were adjusted to evaluate the effect of N rates on N accumulation in the plant, grain yield, and N removal by the grains. All analyses were performed using the statistical program SISVAR version 5.7 and the software R 3.3.1 45.
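For illustration, the sketch below reproduces this analysis pipeline on invented data in base R: a two-way ANOVA (technology and block) followed by Tukey's test on the technology means.

```r
# Illustrative sketch (hypothetical data): ANOVA of accumulated NH3 loss by
# technology and block, followed by Tukey's test among technologies.
dat <- expand.grid(block = factor(1:3),
                   tech  = factor(c("UGRAN", "SolLC", "Limus", "Nitrain", "Anvol")))
set.seed(1)
dat$nh3_loss <- c(30, 32, 29, 12, 14, 13, 11, 13, 12, 10, 11, 12, 15, 16, 14) + rnorm(15)

m <- aov(nh3_loss ~ tech + block, data = dat)
summary(m)           # analysis of variance table
TukeyHSD(m, "tech")  # pairwise comparisons among technologies (P < 0.05)
```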
The accumulated ammonia losses were subjected to nonlinear regression analysis using the logistic model Y_i = α/{1 + e^[k(b − daa_i)]} + E_i, in which Y_i is the i-th observation of the accumulated NH3 loss (%), with i = 1, 2, …, n; daa_i is the i-th day after application; α is the asymptotic value, interpreted as the maximum accumulated NH3 loss; b is the abscissa of the inflection point and indicates the day when the maximum volatilization loss occurs; k is a precocity index (the higher the k value, the less time is needed to reach the maximum accumulated loss α); and E_i is the random error associated with the i-th observation, assumed to be independently and identically distributed according to a normal distribution with zero mean and constant variance, E ~ N(0, Iσ²). To estimate the maximum daily loss, that is, the loss rate at the curve inflection point, the equation MDL = (k × α)/4 was used.
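The same logistic model can be fitted with nls() in R; the sketch below uses hypothetical accumulated-loss data and then derives the day of maximum loss (b) and the maximum daily loss MDL = (k × α)/4.

```r
# Sketch under assumed data: fitting the logistic model for accumulated NH3
# losses and deriving the inflection day (b) and the maximum daily loss.
loss <- data.frame(
  daa = c(1, 2, 3, 4, 5, 6, 7, 9, 11, 13, 15, 21, 30),
  acc = c(0.5, 1.5, 4, 9, 15, 20, 24, 27, 28.5, 29, 29.5, 30, 30)
)

logit_fit <- nls(acc ~ alpha / (1 + exp(k * (b - daa))),
                 data  = loss,
                 start = list(alpha = 30, k = 0.8, b = 5))

p   <- coef(logit_fit)
mdl <- (p["k"] * p["alpha"]) / 4   # maximum daily loss (% of applied N per day)

cat("Maximum accumulated loss (alpha):", round(p["alpha"], 1), "%\n")
cat("Day of maximum loss (b):", round(p["b"], 1), "\n")
cat("Maximum daily loss:", round(mdl, 2), "% per day\n")
```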
Figure 1. Variation of NBPT concentration over the storage period for conventional phosphate fertilizers and associated technologies at the ratios of 30 PF:70 NF (A) and 70 PF:30 NF (B), recorded at intervals of 0, 24, 72, 120, and 168 hours of storage.
Figure 2. Concentration and residual NBPT in urea after 8 months of storage.
Figure 3. Climatic data of the experimental areas in the municipalities of Lavras (A), Ingaí (B), and Luminárias (C), during 30 days after the application of N to the soil.
Figure 4. Volatilization of ammonia and soil urease activity 30 days after the application of N fertilizer technologies in Lavras, Minas Gerais, Brazil. Vertical bars indicate the standard error of the mean (n = 3).
Figure 5. Volatilization of ammonia and soil urease activity 30 days after the application of N fertilizer technologies in Ingaí, Minas Gerais, Brazil. Vertical bars indicate the standard error of the mean (n = 3).
Figure 6. Volatilization of ammonia and soil urease activity 30 days after the application of N fertilizer technologies in Luminárias, Minas Gerais, Brazil. Vertical bars indicate the standard error of the mean (n = 3).
Figure 7. Accumulated losses of NH3 by volatilization 30 days after the application of N fertilizers in Lavras (A), Ingaí (B), and Luminárias (C). Means followed by the same letter do not differ according to Tukey's test (P < 0.05). Vertical bars indicate the standard error of the mean (n = 3).
Good response to oral acitretin monotherapy in a case of verrucous carcinoma of sole
A 50-year-old woman presented with a slowly enlarging, slightly painful mass over the right foot for the past 1 year. Her past medical history showed that she had had diabetes for the last 10 years and was currently on oral hypoglycemic drugs with good glycemic control. She treated herself with over-the-counter topical creams and emollients with partial relief. In the last 3 months, she noticed pus discharge from the lesion and was prescribed topical and oral antibiotics with a partial clinical response. Although she noticed a decrease in the amount of pus drainage, there was no apparent decrease in the size of the lesion. She consulted a surgeon, who made a diagnosis of corn, and underwent local excision of the mass with primary closure. However, the lesion recurred after 3 months. She visited another surgeon who performed a second excision of her foot mass, thinking it to be recurrent corn. The excised mass was sent for histopathology and was reported as nonspecific chronic inflammation with no evidence of malignancy. However, the lesion recurred again and she visited a diabetic foot specialist who treated her with regular paring and dressing. A dermatologist's opinion was sought, and a working diagnosis of verrucous carcinoma was made on history and clinical examination. Local examination showed a whitish macerated endophytic growth on the plantar surface along the lateral side [Figure 1]. Scars of previous operative procedures were seen. The previously excised specimen was reviewed, which showed parakeratosis, hyperkeratosis, and endophytic growth with bulbous rete ridges composed of deceptively benign squamous epithelial cells with mild atypia [Figure 2]. These were surrounded by fibrous stroma containing ectatic blood vessels and an inflammatory infiltrate predominantly composed of neutrophils. On clinicopathological correlation, a diagnosis of verrucous carcinoma of the sole was made.
Her complete hemogram and routine serum chemistry was within normal range. X-ray of the foot showed soft-tissue involvement with no bony extension or any damage to tendons.
We treated her with oral acitretin at a dose of 25 mg twice daily for 2 months, with a 60% reduction in the verrucous mass. Later, the dose was reduced to 25 mg once daily until complete resolution of the lesion, which took another 2 months [Figure 3]. Acitretin was then tapered to 10 mg/day for 2 months. Later, it was stopped and the patient was advised to apply adapalene 0.1% gel locally twice a day. There has been no recurrence in the 6 months since the stoppage of acitretin therapy. The patient tolerated the therapy well except for dry lips and generalized xerosis, which were managed symptomatically with liberal use of moisturizers and emollients. Serum transaminases and lipid profile remained within the normal range during the treatment period.
Acitretin is already being used as a chemopreventive agent to keep a check on solid organ transplant-related skin malignancies.[2] Retinoids exert an antiproliferative action on certain types of neoplasm. In our case, the neoplasm responded to monotherapy with oral acitretin at a dose of roughly 1 mg/kg body weight. Kuan et al. documented the use of oral acitretin in multiple verrucous carcinomas.[3] However, other workers disputed the diagnosis of multiple verrucous carcinomas and suggested that the case appears to be one of verrucous psoriasis.[4] How retinoids exert an anticancer effect on cutaneous malignancies is still not clear. Retinoids do not seem to act on known cell cycle pathways, in contrast to traditional chemotherapy drugs. It is plausible that retinoids promote differentiation and lead to clearance of malignant tissue. In conclusion, acitretin monotherapy seems to be effective in the management of cutaneous verrucous carcinoma. It is worth trying acitretin as a treatment for cutaneous verrucous carcinoma before painful excision, and its therapeutic potential needs to be further explored in large case series.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Functional Conservation of DNA Methylation in the Pea Aphid and the Honeybee
DNA methylation is a fundamental epigenetic mark known to have wide-ranging effects on gene regulation in a variety of animal taxa. Comparative genomic analyses can help elucidate the function of DNA methylation by identifying conserved features of methylated genes and other genomic regions. In this study, we used computational approaches to distinguish genes marked by heavy methylation from those marked by little or no methylation in the pea aphid, Acyrthosiphon pisum. We investigated if these two classes had distinct evolutionary histories and functional roles by conducting comparative analysis with the honeybee, Apis (Ap.) mellifera. We found that highly methylated orthologs in A. pisum and Ap. mellifera exhibited greater conservation of methylation status, suggesting that highly methylated genes in ancestral species may remain highly methylated over time. We also found that methylated genes tended to show different rates of evolution than unmethylated genes. In addition, genes targeted by methylation were enriched for particular biological processes that differed from those in relatively unmethylated genes. Finally, methylated genes were preferentially ubiquitously expressed among alternate phenotypes in both species, whereas genes lacking signatures of methylation were preferentially associated with condition-specific gene expression. Overall, our analyses support a conserved role for DNA methylation in insects with comparable methylation systems.
Introduction
DNA methylation is an important epigenetic modification that plays a role in gene regulation in many organisms (Wolffe and Matzke 1999;Jaenisch and Bird 2003;Weber et al. 2007). Although DNA methylation occurs in all three domains of life, its genomic patterns show considerable variation among taxa (Hendrich and Tweedie 2003;Field et al. 2004;Suzuki and Bird 2008). For example, vertebrate genomes exhibit global patterns of methylation, but invertebrate genomes tend to display reduced or minimal levels of methylation (Suzuki and Bird 2008). Moreover, methylation of gene promoter regions in vertebrates leads to transcriptional repression (Wolffe and Matzke 1999;Jaenisch and Bird 2003;Weber et al. 2007;Zemach et al. 2010), but this relationship has not been observed in invertebrates. Instead, methylation primarily targets invertebrate gene bodies (Suzuki and Bird 2008;Xiang et al. 2010;Zemach et al. 2010). These contrasting patterns and effects have traditionally enforced the view that DNA methylation plays a fundamentally different role in vertebrate and invertebrate genomes.
The arrival of genome sequences from multiple insects now makes a greater understanding of the patterns and phenotypic consequences of DNA methylation more tangible (Honeybee Genome Sequencing Consortium 2006; Wang et al. 2006; The International Aphid Genomics Consortium 2010; The Nasonia Genome Working Group 2010; Walsh et al. 2010). Specifically, comparative genomic analysis can be used to determine whether targets of DNA methylation are conserved between taxa. Moreover, the inferred patterns of methylation can be used to test current hypotheses explaining the evolutionary persistence of DNA methylation. For example, it has been hypothesized that gene body methylation may act to minimize spurious transcription patterns (Suzuki et al. 2007;Maunakea et al. 2010), which could explain observations of dense methylation in functionally conserved genes and genes with ubiquitous expression among tissues in invertebrates (Suzuki et al. 2007;Foret et al. 2009;Xiang et al. 2010). It has also been suggested that DNA methylation persists in animals for genomic defense against transposable elements (Yoder et al. 1997, but see Regev et al. [1998]; Simmen et al. [1999]; Suzuki et al. [2007], and Xiang et al. [2010]). DNA methylation may also act as an important mechanism for genomic imprinting, which results in the differential expression of parental alleles (Reik and Walter 2001). Finally, de novo DNA methylation is hypothesized to play an important role in developmental responsiveness to environmental factors and the regulation of phenotypic plasticity, as is apparently the case in the honeybee (Jaenisch and Bird 2003;Kucharski et al. 2008;Maleszka 2008).
The purpose of this study was to determine whether DNA methylation plays a conserved role in divergent insects with comparable DNA methylation systems. We provided insight into this question by comparing and contrasting the evolutionary signatures of DNA methylation in the genomes of the pea aphid, Acyrthosiphon pisum, and the honeybee, Apis (Ap.) mellifera.
Acyrthosiphon pisum diverged from Ap. mellifera more than 300 Ma (Gaunt and Miles 2002;Honeybee Genome Sequencing Consortium 2006), a time frame roughly equivalent to the divergence of modern birds and mammals (Kumar and Hedges 1998). Developmentally, Ap. mellifera undergoes full metamorphosis and possesses morphologically distinct larval, pupal, and adult stages. In contrast, A. pisum develops gradually and does not undergo metamorphosis. However, A. pisum and Ap. mellifera both serve as important models for understanding the evolution and development of phenotypic plasticity (Evans and Wheeler 2001;Brisson and Stern 2006;Honeybee Genome Sequencing Consortium 2006;Brisson 2010;The International Aphid Genomics Consortium 2010).
Specifically, aphids have a complex life cycle that alternates between asexual and sexual development. Asexual females exhibit a wing polyphenism in which they produce either winged or unwinged morphs depending on environmental cues (reviewed in Müller et al. 2001). During the sexual portion of the life cycle, males also produce winged or unwinged morphs. However, morph determination is genetic in males, and thus male wing dimorphism is referred to as a polymorphism (Smith and MacKay 1989). Honeybees, on the other hand, are highly social and dwell in large, predominantly female, colonies (Wilson 1971). Individuals partake in a remarkable division of labor, with a single queen typically dominating reproduction and workers engaged in tasks related to brood rearing, foraging, and colony defense (Wilson 1971). Queen and worker castes are developmentally determined by nutritional factors and exhibit dramatically different anatomy and behavior (Wheeler 1986;Evans and Wheeler 2001). Importantly, both Ap. mellifera and A. pisum show evidence of widespread DNA methylation that is predominantly targeted to genes (Wang et al. 2006;Elango et al. 2009;Walsh et al. 2010). Consequently, patterns of genome methylation in A. pisum and Ap. mellifera can provide considerable insight into the function of gene methylation in insects, in particular, and invertebrates, in general.
In this study, we investigated the conservation of DNA methylation patterns in A. pisum and Ap. mellifera by first testing whether genes with similar functions are targeted by DNA methylation in both species. To achieve this aim, we examined patterns of functional enrichment among genes marked by relatively dense methylation and relatively sparse methylation. We further tested whether shared patterns of functional enrichment among DNA methylation targets are associated with conservation at the sequence level (Suzuki et al. 2007). Next, we examined whether A. pisum provided support for the hypothesis that genes with sparse methylation exhibit condition-specific gene expression (Elango et al. 2009;Foret et al. 2009). Finally, we synthesized our results with those from other recent investigations to advance a more comprehensive understanding of DNA methylation in insects. Overall, our results provide support for a remarkable level of conservation in gene methylation status and function over evolutionary time.
Normalized CpG Dinucleotide Content (CpG O/E )
We used CpG O/E as a measure of the level of DNA methylation of genes (Saxonov et al. 2006;Suzuki et al. 2007;Weber et al. 2007;Yi and Goodisman 2009). CpG O/E acts as a metric of levels of DNA methylation because methylation occurs predominantly on CpG dinucleotides in animals and methylated cytosines are hypermutable due to spontaneous deamination. This deamination causes a gradual depletion of CpG dinucleotides from methylated regions over time (Bird 1980). Consequently, genomic regions with relatively dense germline methylation have low CpG O/E and regions with little or no germline methylation maintain high levels of CpG O/E . It is important to note that CpG O/E could be influenced by either the number of methylated CpG sites or the proportion of cells incurring methylation at a given locus. In addition, somatic mutations are not transmitted to progeny and therefore cannot influence CpG O/E in and of themselves. However, CpG O/E has been linked to empirically determined levels of DNA methylation in somatic tissues in insects, suggesting that many genes are universally methylated in germlines and soma (Foret et al. 2009;Xiang et al. 2010).
CpG O/E was calculated as described previously (Elango et al. 2009), from the gene sets above. Only RefSeq model sequences were used for analyses involving CpG O/E in A. pisum (except in the case of gene expression analysis, described below) because RefSeq models were used for Ap. mellifera in our analysis. Sequences with CpG O/E values of 0 were removed from further analysis.
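As a rough illustration, the R sketch below computes CpG O/E for a sequence using one common normalization (observed CpG frequency divided by the product of C and G frequencies); the exact procedure of Elango et al. (2009) may differ in detail, and the toy sequence is not a real transcript.

```r
# Sketch of one common CpG O/E normalization (observed / expected CpG).
count_pat <- function(seq, pat) {
  hits <- gregexpr(pat, seq, fixed = TRUE)[[1]]
  if (hits[1] == -1) 0 else length(hits)
}

cpg_oe <- function(seq) {
  seq <- toupper(seq)
  n   <- nchar(seq)
  obs   <- count_pat(seq, "CG") / n                              # observed CpG frequency
  expct <- (count_pat(seq, "C") / n) * (count_pat(seq, "G") / n) # expected under independence
  obs / expct
}

cpg_oe("ATGCGTACGGCTAGCTAGCGCGTACGATCG")   # toy sequence, for illustration only
```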
Bimodal distributions of CpG O/E have previously been reported in both Ap. mellifera (Elango et al. 2009;Foret et al. 2009;Wang and Leung 2009) and A. pisum (Walsh et al. 2010). In this study, we used the NOCOM software package (Ott 1979) to estimate means, standard deviations, and proportions of two components of the mixture of normal distributions of CpG O/E for both A. pisum and Ap. mellifera. These distributions were plotted using R (R Development Core Team 2010), and their intersections were used as cutoffs to divide low CpG O/E and high CpG O/E gene classes.
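An equivalent two-component mixture fit can be obtained in R; the sketch below uses the mixtools package (an assumption, since the study used NOCOM) on simulated CpG O/E values and locates the intersection of the two fitted densities as the class cutoff, assuming reasonably well-separated components.

```r
# Sketch of a two-component normal mixture fit to simulated CpG O/E values
# (the study used the NOCOM package; mixtools is used here for illustration).
library(mixtools)
set.seed(7)
cpg <- c(rnorm(3000, mean = 0.55, sd = 0.15), rnorm(4000, mean = 1.05, sd = 0.20))

fit <- normalmixEM(cpg, k = 2)   # estimates lambda (proportions), mu, sigma

# Cutoff = intersection of the two weighted normal densities between the means
f <- function(x) {
  fit$lambda[1] * dnorm(x, fit$mu[1], fit$sigma[1]) -
  fit$lambda[2] * dnorm(x, fit$mu[2], fit$sigma[2])
}
cutoff <- uniroot(f, interval = sort(fit$mu))$root
cutoff   # genes below: low CpG O/E (dense methylation); above: high CpG O/E
```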
Orthology
Three-way orthologs between A. pisum, Ap. mellifera, and D. melanogaster were identified by first performing pairwise BlastP comparisons of complete protein sequence sets with a cutoff of 1 × 10−5, next identifying pairwise reciprocal best hits, and finally identifying orthologs with shared best hits among all pairwise comparisons (Altschul et al. 1997;Stajich et al. 2002). Orthologs determined in this manner were used for comparisons of CpG O/E and evolutionary distance between orthologs from A. pisum and Ap. mellifera.
Pairwise orthologs shared between A. pisum and D. melanogaster were identified by performing BlastP comparisons of complete protein sequence sets with a cutoff of 1 × 10−5 and identifying reciprocal best hits. Only orthologs with RefSeq model proteins in A. pisum were retained.
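A minimal sketch of reciprocal-best-hit identification from tabular BLASTP output (-outfmt 6) is shown below; the file names and column handling are assumptions for illustration, not the scripts used in the study.

```r
# Sketch of reciprocal-best-hit (RBH) identification from tabular BLASTP output.
read_best_hits <- function(file) {
  b <- read.table(file, sep = "\t", stringsAsFactors = FALSE,
                  col.names = c("query", "subject", "pident", "length", "mismatch",
                                "gapopen", "qstart", "qend", "sstart", "send",
                                "evalue", "bitscore"))
  b <- b[b$evalue <= 1e-5, ]
  b <- b[order(b$query, -b$bitscore), ]
  b[!duplicated(b$query), c("query", "subject")]   # best hit per query
}

ab <- read_best_hits("apisum_vs_amellifera.blastp.tsv")   # placeholder file name
ba <- read_best_hits("amellifera_vs_apisum.blastp.tsv")   # placeholder file name

# Keep pairs that are each other's best hit
rbh <- merge(ab, ba, by.x = c("query", "subject"), by.y = c("subject", "query"))
nrow(rbh)
```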
Sequence Divergence
In order to compare the evolutionary divergence of low CpG O/E and high CpG O/E orthologs between A. pisum and Ap. mellifera, a total of 2,222 orthologous protein sequences were first aligned using ClustalW (Thompson et al. 1994). Confidently aligned gap-free columns were then extracted using Gblocks with default settings (Castresana 2000), and only long alignments (≥100 amino acids) were kept for analysis. PAL2NAL was used to convert protein sequence alignments to corresponding codon alignments (Suyama et al. 2006). Finally, PAML was used to calculate rates of synonymous (dS) and nonsynonymous (dN) substitution with the ''codeml'' method (Yang 2007). Because synonymous substitution rates were predominantly saturated (dS > 2), measures of dN and DNA sequence percent identity were used to assess sequence divergence.
Gene Ontology
Gene ontology (GO) annotations for D. melanogaster orthologs of A. pisum proteins were used to analyze enrichment of biological process terms (Ashburner et al. 2000). GO biological process term enrichment was determined by comparing orthologs of low CpG O/E and high CpG O/E genes separately with a background composed of both low CpG O/E and high CpG O/E orthologs using the DAVID bioinformatics database functional annotation tool . A Benjamini multiple-testing correction of the EASE score (a modified Fisher exact P value; Hosack et al. 2003) was used to determine statistical significance of GO term enrichment.
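The underlying test can be illustrated as follows: a Fisher exact test on a 2 × 2 contingency table for one GO term, followed by Benjamini-Hochberg adjustment across terms. Note that DAVID applies a modified EASE score rather than the plain Fisher test, and the counts below are hypothetical.

```r
# Sketch of GO-term enrichment for low CpG O/E genes versus the combined
# background, using hypothetical counts.
in_term_low  <- 120   # low CpG O/E genes annotated with the term
in_term_rest <- 180   # remaining background genes with the term
low_total    <- 2000  # all low CpG O/E genes tested
rest_total   <- 2400  # rest of the background

tab <- matrix(c(in_term_low,  low_total  - in_term_low,
                in_term_rest, rest_total - in_term_rest),
              nrow = 2, byrow = TRUE)
p_raw <- fisher.test(tab, alternative = "greater")$p.value

# With many GO terms, adjust the vector of raw P values (BH correction):
p_adj <- p.adjust(c(p_raw, 0.002, 0.03, 0.2), method = "BH")
p_adj
```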
EST Mapping
Acyrthosiphon pisum expressed sequence tags (ESTs), previously used to characterize differential gene expression underlying developmental differences, sex differences, female wing polyphenism, and wing morph differences (Brisson et al. 2007), were mapped to the A. pisum official genes consensus set (OGS) to aid in assessing the relationship between the degree of differential gene expression among phenotypic classes and CpG O/E. EST sequences were compared with all OGS mRNA sequences by BlastN (Altschul et al. 1997). To be considered a match, EST query sequences were required to have >50% sequence alignment to an OGS hit, >95% identity of the aligned sequence, and reciprocal best hits resulting from BlastN analysis of the OGS query against an EST database. GLEAN as well as RefSeq gene models were accepted in this case to map a greater proportion of microarray data. Brisson et al. (2007) previously examined the gene expression differences underlying distinct phenotypes in A. pisum using cDNA microarrays (Wilson et al. 2006). Specifically, microarrays were utilized to determine the degree of differential gene expression in comparisons of 1) fourth instar juveniles versus adults (compared within unwinged males, within winged males, within unwinged asexual females, and within winged asexual females), 2) males versus asexual females (compared within winged fourth instars, within unwinged fourth instars, within winged adults, and within unwinged adults), 3) polyphenic winged versus unwinged females (compared within fourth instars and within adults), and finally, polymorphic winged versus unwinged males (compared within fourth instars and within adults).
Gene Expression
For the present study, we calculated the mean of the absolute value of log2-transformed ratios across multiple comparisons to measure the degree of differential gene expression. In this manner, we combined data from all pairwise comparisons of 1) development, 2) sex, 3) female wing polyphenism, and 4) male wing polymorphism. The mean of log2-transformed gene expression ratios across all 12 pairwise comparisons was also calculated. We further divided each of these measures into two bins at a mean |log2 expression ratio| value of 0.5, with genes below this threshold roughly corresponding to genes with similar expression between groups and genes above this value roughly corresponding to genes with differential expression between groups.
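A minimal sketch of this summary statistic and binning, applied to a hypothetical genes × comparisons matrix of log2 ratios, is shown below.

```r
# Sketch of the differential-expression summary: mean absolute log2 ratio per
# gene across pairwise comparisons, split into two bins at 0.5 (simulated data).
set.seed(3)
ratios <- matrix(rnorm(1347 * 12, sd = 0.6), nrow = 1347, ncol = 12)

mean_abs <- rowMeans(abs(ratios))
expr_bin <- cut(mean_abs, breaks = c(0, 0.5, Inf),
                labels = c("similar expression", "differential expression"))
table(expr_bin)
```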
We also revisited analysis previously described and published by Elango et al. (2009), which demonstrated that high CpG O/E genes were overrepresented among genes that were differentially expressed between queen and worker castes (Grozinger et al. 2007). For the present manuscript, we analyzed NCBI transcript sequences rather than introns and exons combined, to remain consistent with our analyses of aphid gene expression.
Finally, Foret et al. (2009) previously used an oligonucleotide microarray representing the honeybee OGS (Honeybee Genome Sequencing Consortium 2006) to assess the expression breadth of genes among the following tissues in Ap. mellifera: antenna, brain, whole-body larva, hypopharyngeal gland, ovary, and thorax. They further demonstrated that low CpG O/E genes were vastly overrepresented among genes with ubiquitous expression (Foret et al. 2009). We expanded upon their analysis by splitting genes into six classes based upon the number of tissues with observed expression. To do so, we utilized lists of genes expressed in each tissue, along with a fasta file of sequences used to design the array. To map sequences with generic microarray identifiers to honeybee model RefSeq transcripts, we compared the sequences using BlastN (Altschul et al. 1997). To be considered a match, array query sequences were required to have >50% sequence alignment to a model RefSeq transcript hit and >98% identity for the aligned sequence. We then generated a numeric count of the number of tissues in which each gene was expressed (integers from 1 to 6) and recorded the CpG O/E for each associated model RefSeq transcript. Data for expression breadth and CpG O/E were obtained in this manner for a total of 7,576 Ap. mellifera genes.
Additional Analysis
Statistical tests (rank sum tests and correlations) were performed using either R (R Development Core Team 2010) or the JMP statistical software package (SAS Institute Inc.). Proportional Venn diagrams were generated using the Venn Diagram Plotter available from Pacific Northwest National Laboratory (http://omics.pnl.gov).
Results
We divided genes into low and high CpG O/E classes based on the bimodal distributions of CpG O/E observed in A. pisum (CpG O/E cutoff = 0.82; fig. 1A) and Ap. mellifera (CpG O/E cutoff = 0.72; fig. 1B). These two classes of genes roughly correspond to genes incurring relatively dense versus relatively sparse methylation (Saxonov et al. 2006;Suzuki et al. 2007;Weber et al. 2007;Elango et al. 2009;Foret et al. 2009;Wang and Leung 2009;Yi and Goodisman 2009;Xiang et al. 2010).
To gain insight into the evolutionary maintenance of genes with different levels of methylation, we first investigated whether genes belonging to distinct CpG O/E classes showed differences in their conservation of CpG O/E status over evolutionary time. A total of 2,339 three-way orthologs were identified with nonzero CpG O/E values in A. pisum, Ap. mellifera, and D. melanogaster. By comparing the CpG O/E classification of orthologs in A. pisum and Ap. mellifera from this data, we found that genes with high CpG O/E exhibited considerably less conservation of CpG O/E status than genes with low CpG O/E (fig. 2, table 1; Pearson's Chi-squared test with Yates' continuity correction P = 0.0075). Thus, patterns of dense DNA methylation have been more conserved over evolutionary time than patterns of sparse DNA methylation in A. pisum and Ap. mellifera.
We next determined whether the differential conservation of low CpG O/E and high CpG O/E status was associated with differential conservation of nucleotide and amino acid sequence. We found that genes from the low CpG O/E class in A. pisum and Ap. mellifera both harbored significantly greater proportions of genes with detectable three-way orthologs than genes from the high CpG O/E class (table 2; Pearson's Chi-squared test with Yates' continuity correction P < 1 × 10−15). We also found that DNA sequence conservation was significantly higher between A. pisum and Ap. mellifera orthologs from the low CpG O/E class than orthologs from the high CpG O/E class (Kruskal-Wallis rank sum test P = 0.0003; fig. 3A, supplementary table S1, Supplementary Material online). Both of these results suggested that densely methylated genes, as a whole, were considerably more conserved at the sequence level than sparsely methylated genes. However, in contrast to the results obtained from analysis of ortholog loss and DNA sequence identity, amino acid substitution rates among genes with detectable three-way orthologs were slightly higher among low CpG O/E genes than high CpG O/E genes (Kruskal-Wallis rank sum test P = 0.0012; fig. 3B and supplementary fig. S1 and tables S1 and S2, Supplementary Material online). Furthermore, an alternate analysis, presented in our supplementary material, also found that densely methylated genes with detectable orthologs exhibited slightly higher rates of amino acid substitution than sparsely methylated genes.
To investigate whether genes with different levels of methylation were associated with specific functions, we next tested for enrichment of GO biological process terms in 4,404 A. pisum genes with D. melanogaster orthologs. We found that functions related to cellular metabolic processes were overrepresented among low CpG O/E genes (table 3). In contrast, functions associated with cellular signaling, behavior, and environmental stimulus were overrepresented among high CpG O/E genes (table 3).
We also found that six of the top ten enriched functional terms for A. pisum low CpG O/E genes were among the top ten enriched functional terms in Ap. mellifera low CpG O/E genes (table 3; Elango et al. 2009). In contrast, only two of the top ten high CpG O/E functional enrichment terms were in agreement between A. pisum and Ap. mellifera (table 3; Elango et al. 2009). Thus, the function of low CpG O/E genes appears to be relatively conserved over evolutionary history.
Finally, we investigated whether CpG O/E measures were associated with patterns of gene expression among distinct phenotypic groups in A. pisum using microarray data for 1,347 genes (Brisson et al. 2007). We analyzed the degree of differential gene expression between developmental stages (development; 4th instar vs. adult), between sexes (sex; male vs. asexual female), between environmentally sensitive asexual female wing phenotypes (female wing polyphenism; winged vs. unwinged), and between genetically determined male wing phenotypes (male wing polymorphism; winged vs. unwinged).
Our results suggested that genes with low levels of DNA methylation exhibited complex, condition-specific regulation of gene expression: differential gene expression, when combined for all pairwise comparisons of alternate phenotypes, displayed a significant positive correlation with CpG O/E in A. pisum (Pearson product-moment correlation P < 0.001; table 4, fig. 4A). This signal was primarily driven by development, sex, and female wing polyphenism, which each demonstrated that differential gene expression was significantly associated with high CpG O/E (table 4; fig. 4A). Differential gene expression between male wing morphs was not significantly associated with CpG O/E in A. pisum, although the trend was in the same direction as the other tests (table 4, fig. 4A).
We also reanalyzed data linking gene expression to methylation levels in Ap. mellifera to illustrate that differential gene expression between caste phenotypes (Elango et al. 2009) and gene expression breadth (Foret et al. 2009) were also each associated with CpG O/E ( fig. 4B and C). Specifically, genes with differential expression between Ap. mellifera queens and workers, and those expressed in few Ap. mellifera tissues, preferentially exhibited high CpG O/E . Overall, our results reveal that genes with condition-specific regulation are associated with higher CpG O/E and lower levels of DNA methylation than ubiquitously expressed genes in both A. pisum and Ap. mellifera.
Gene Evolution and DNA Methylation
We have reported distinct levels of conservation of DNA methylation status for orthologs with heavy methylation (low CpG O/E ) and sparse methylation (high CpG O/E ) in the pea aphid, A. pisum, and the honeybee, Ap. mellifera ( fig. 2, table 1). In particular, a greater proportion of orthologs maintain low CpG O/E status than high CpG O/E status over evolutionary time. Thus, genes that were presumably densely methylated in the ancestor of A. pisum and Ap. mellifera were more likely to remain methylated through evolutionary time, whereas genes with sparse methylation were less likely to maintain their low methylation status.
Furthermore, we found that heavily methylated genes had a greater number of detectable orthologs and exhibited greater DNA sequence conservation than genes with sparse methylation (table 2; fig. 3A). In line with these results, a prior study also found that genes with signatures of methylation were enriched among orthologs that could be identified between distantly related taxa (Suzuki et al. 2007). Thus, heavily methylated genes, overall, appear to be more conserved at the sequence level than sparsely methylated genes. This observation is particularly striking because DNA methylation increases the occurrence of mutations at CpG sites and might be expected to lead to rapid DNA sequence divergence (Elango et al. 2008). One possible explanation for the observed trend, however, is that orthologs with consistently low CpG O/E over evolutionary history have already lost most of their mutable CpG dinucleotides, leaving few targets for further methylation-induced mutation (Suzuki et al. 2009). Another possibility is that genes targeted by DNA methylation may be under greater functional constraint, as a class, than unmethylated genes.
Surprisingly, in contrast to our results from analysis of DNA sequence identity, we found that densely methylated genes with detectable orthologs may be under less constraint at the amino acid level than their sparsely methylated counterparts ( fig. 3B and supplementary fig. S1, Supplementary Material online). Apparently, A. pisum and Ap. mellifera high and low CpG O/E genes that do not retain detectable orthologs in D. melanogaster differ more from each other, in terms of evolutionary constraint at the protein level, than do high and low CpG O/E genes with detectable orthologs (table 2 and supplementary tables S1 and S2, Supplementary Material online; fig. 3 and supplementary fig. S1, Supplementary Material online). It remains unclear why this may be the case, but our results suggest that different classes of genes may behave differently with respect to the interaction between selective constraints or mutability and methylation status.
Gene Expression and DNA Methylation
In the present study, we add to the emerging view that genes with ubiquitous expression in insects are preferentially targeted by DNA methylation (Elango et al. 2009; Foret et al. 2009; Xiang et al. 2010). Specifically, genes with similar expression levels among phenotypic groups exhibit evolutionary signatures of significantly higher levels of DNA methylation than genes with differential expression between alternate phenotypes or tissues in A. pisum, Ap. mellifera (fig. 4C; Foret et al. 2009), and the silkworm, Bombyx mori, even though B. mori possesses only a partial complement of DNA methylation enzymes (Xiang et al. 2010). By comparison, genes with tissue-specific expression in Ap. mellifera (fig. 4C; Foret et al. 2009) and B. mori (Xiang et al. 2010), with caste-specific expression in Ap. mellifera (fig. 4B; Elango et al. 2009), and with differential expression between developmental stages, sexes, and polyphenic wing morphs in A. pisum, all exhibit lower levels of DNA methylation than their ubiquitously expressed counterparts (fig. 4A). Thus, sparse DNA methylation is associated with flexibility in gene expression, whether between polyphenic forms or between different tissues. Our results reveal that complex gene regulation is associated with low levels of DNA methylation in disparate insects. This finding may appear to contrast with the idea that DNA methylation plays an important role in the epigenetic regulation of phenotypic plasticity (Jaenisch and Bird 2003; Kucharski et al. 2008; Maleszka 2008). Indeed, our observations suggest that the primary targets of DNA methylation are those genes least likely to be implicated as leading to phenotypic variation. However, we cannot rule out the co-option of DNA methylation for complex regulatory roles operating on a smaller number of loci.
FIG. 4.—Ubiquitously expressed genes exhibit higher levels of DNA methylation than genes with condition-specific expression. (A) Genes with a high degree of differential expression between groups exhibit significantly higher CpG O/E than genes with ubiquitous expression in Acyrthosiphon pisum. This relationship also holds true for (B) differential expression between Apis mellifera queen and worker castes (adapted from Elango et al. 2009). (C) Similarly, genes with a high degree of tissue specificity exhibit significantly higher CpG O/E than genes with ubiquitous expression among tissues in Ap. mellifera (adapted from Foret et al. 2009). Significance values represent Wilcoxon signed-rank tests in panels A and B and a Kruskal-Wallis rank sum test in panel C. Means and 95% confidence intervals are plotted. Horizontal dashed lines represent the mean CpG O/E for all genes in a given panel. Vertical gray lines represent bin cutoffs for classification of genes according to mean |log2 expression ratio|.
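The contrasts plotted in fig. 4 compare CpG O/E between genes grouped by differential expression. A minimal two-group version of such a comparison is sketched below: it splits genes at an arbitrary |log2 ratio| cutoff and applies a rank-sum test. The cutoff and the Mann-Whitney test are stand-ins; the figure itself uses binned means with Wilcoxon signed-rank and Kruskal-Wallis tests.

import numpy as np
from scipy.stats import mannwhitneyu

def compare_cpg_between_expression_classes(cpg_oe, log2_ratio, cutoff=1.0):
    """Compare CpG O/E of differentially vs. ubiquitously expressed genes.

    Genes with |log2 ratio| >= cutoff are treated as differentially expressed.
    Tests whether that class has higher CpG O/E (i.e., sparser methylation).
    """
    cpg_oe = np.asarray(cpg_oe, dtype=float)
    magnitude = np.abs(np.asarray(log2_ratio, dtype=float))
    differential = cpg_oe[magnitude >= cutoff]
    ubiquitous = cpg_oe[magnitude < cutoff]
    stat, p = mannwhitneyu(differential, ubiquitous, alternative="greater")
    return differential.mean(), ubiquitous.mean(), p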
Steps toward a Unified View of Intragenic Methylation
Recently, a unified view of the functional role of intragenic (vs. intergenic or promoter) DNA methylation in vertebrates and invertebrates has begun to emerge. For example, methylation of gene bodies in many vertebrates and invertebrates is associated with moderate gene expression levels (Zemach et al. 2010). Our data, obtained from microarray analyses, do not directly address overall levels of gene expression but instead address expression breadth among tissues or alternate phenotypic classes. We find that genes with high CpG O/E measures possess an enriched aptitude for conditional expression associated with distinct tissues or alternate phenotypes. In contrast, genes with dense methylation exhibit a greater propensity for static levels of expression. A recent mammalian study revealed that intragenic methylation limits the generation of alternate gene transcripts by masking intragenic promoters (Maunakea et al. 2010). This mechanism may explain why broadly expressed genes are subject to the highest levels of methylation in invertebrates: broadly expressed genes may be preferentially targeted by DNA methylation due to enhanced negative effects associated with alternate promoters at such loci. Importantly, the proposed link between intragenic methylation and the regulation of alternate transcription (Maunakea et al. 2010) suggests that different levels of methylation in distinct tissues or developmental stages could have important phenotypic consequences.
Finally, we note that our results do not apply to insect taxa that have heavily diminished methylation systems (Urieli-Shoval et al. 1982;Field et al. 2004). Instead, we suggest that DNA methylation is one of many tools that can be co-opted for the purposes of gene regulation in organisms that have retained a complete enzymatic toolkit for mediating DNA methylation.
Quark mass dependence of the QCD temperature transition in magnetic fields
Vacuum energy of quarks participates in the pressure balance at the temperature transition T_c and defines the dependence of T_c on m_q. We first check this dependence in the absence of magnetic fields (eB = 0) against known lattice data, and then take into account the known strong dependence of the quark condensate on eB. The resulting function T_c(eB, m_q) is valid for all eB and m_q < √σ, and explains the corresponding lattice data.
Introduction
QCD matter is believed to undergo a temperature transition from the low-temperature confining state to a deconfined state of quarks and gluons. On the theoretical side, the most detailed information on this transition has been obtained in numerical lattice studies; see [1,2,3] for reviews. On the experimental side, only indirect data on this transition in heavy-ion collisions are available [4,5]; however, a full theory is much needed both for experiment and for astrophysical applications, e.g. in the study of neutron stars [6]. Analytic studies of the QCD temperature transition are still predominantly based on models, which can describe partial features of the transition [7,8,9], but a full analytic theory is still lacking.
In 1992 one of the authors suggested a simple but internally consistent mechanism of the QCD temperature transition [10], based on a general nonperturbative approach in QCD, the Field Correlator Method (FCM), which was formulated for the finite-temperature theory in [11,12,13].
In FCM both perturbative and nonperturbative (np) dynamics are given by field correlators, which can be computed within the method itself [14], or taken from lattice data [15]. Moreover, the total thermodynamic potential (free energy) of a state includes the vacuum energy, which can be different in the confining and nonconfining states.
It was exploited in [10] that the vacuum energy of the confined state contains the gluon vacuum energy ε^(g)_vac ≡ ε^(g)_vac(mag) + ε^(g)_vac(el), with colormagnetic ε^(g)_vac(mag) and colorelectric ε^(g)_vac(el) parts, whereas in the deconfined state the (confining) colorelectric vacuum fields ε^(g)_vac(el) are absent. In this way the transition temperature T_c was calculated from the equality of the confined-state pressure P_I and the deconfined one P_II, P_I = P_II at T = T_c. Taking for |ε^(g)_vac| the standard values of the gluon condensate [16,17], assuming that |ε^(g)_vac(mag)| is equal to one half of the total, as it is at zero temperature, and neglecting in the first approximation P_hadr(T), one obtains reasonable values of the transition temperature T_c for different numbers of flavors n_f, as shown in the upper part of Table 1, in comparison with available lattice data. Note that in this calculation the input parameters are |ε^(g)_vac| = 0.006 GeV^4, which is a basic parameter of QCD, and the nonperturbative interaction V_1(r, T), generating the Polyakov line average, which is calculated from the field correlator [11,12,13]. The actual parameter entering the Polyakov loops is the same as in [13]. These calculations, however, have been done neglecting the quark vacuum energy, which is possible for zero quark masses and can be a good approximation for n_f = 2, but not for the (2+1) case, where m_s(2 GeV) ≈ 0.1 GeV.
Table 1: Transition temperature T_c for massless quarks, n_f = 0, 2, 3 (the upper part), and for different nonzero m_q and n_f (the lower part), in comparison with lattice data.
Thus for a realistic case the quark vacuum energy (3) should be included in P_I and P_II, with the resulting difference ∆ε^(q)_vac. Here Z_q is the factor which takes into account the nonvanishing of |⟨q̄q⟩| in the deconfined region, as a result of a slower decrease of |⟨q̄q⟩| with temperature and a finite limiting value, both due to the finite mass m_q. Indeed, as was found in the lattice study of n_f = 2+1 QCD [20,21], the transition temperature obtained from the strange quark susceptibility is 10-15 MeV higher than that obtained from the light quarks.
In what follows we shall exploit the effective quark condensate Z_q⟨q̄q⟩ ≡ ⟨q̄q⟩_eff, to distinguish it from the standard quark condensate ⟨q̄q⟩_st = (0.27 GeV)^3. We shall omit below the subscript eff, keeping the notation ⟨q̄q⟩_st for the standard condensate of (0.27 GeV)^3.
In principle one can check whether this procedure of including ε^(q)_vac is correct by calculating the quark mass dependence T_c(m_q) at zero m.f. and comparing it with the lattice data [20,21,22]. As will be shown, in both cases T_c(m_q) indeed grows with m_q, and concrete agreement is obtained for reasonable values of the gluon and quark condensates. One important property of the quark vacuum energy is that the quark condensate ⟨q̄q⟩ grows with increasing magnetic field (m.f.) when a constant magnetic field B is imposed on the system; see [23] for lattice data and [24] for analytic studies. It was shown recently in accurate lattice studies [23,25,26] that T_c(B) decreases with B, and the same result was obtained in [27,28] and in our latest work [29], where ε^(q)_vac was not taken into account and T_c(B) was slowly tending to zero at large B.
In the present paper, after checking the T_c(m_q) dependence for zero m.f. but with ε^(q)_vac taken into account, we calculate T_c(B) for the (2+1) case with physical quark masses and find that T_c(B) is again decreasing with growing B, but tends to a constant limit at large B when a reasonable (observed in N_c = 3) dependence of |⟨q̄q⟩(B)| is accounted for. We also observe a quite different behavior of T_c(B) in the case when |⟨q̄q⟩(B)| grows faster than linearly, as happens in the SU(2), n_f = 4 lattice data [30].
The paper is organized as follows. In the next section we assemble all the formalism necessary to calculate the transition temperature for nonzero quark masses and m.f. B. In section 3 we calculate T_c(m_q) and compare it with available lattice data, fixing in this way the starting (B = 0) quark vacuum energy.
In section 4 the full derivation of T_c(m_q, B) is given for arbitrary m.f., and the results of calculations are compared with lattice data. Discussion of results and prospects is given in the concluding section.
General formalism
We base the contents of this section on the results of [10,11,12,13], adding to them the contribution of the quark vacuum energy ε^(q)_vac. Thus in the confined state the pressure P_I includes the vacuum energy, with ε^(q)_vac given in (3), while P_hadr in the absence of m.f. can be approximated by the free hadron gas expression (6), with degeneracy factors d_i and hadron energies ε_i = √(k^2 + m_i^2). The minus and plus signs refer to mesons and baryons, respectively.
Note that Eq. (6) does not take into account high-density and interaction corrections, nor decay widths and off-shell effects.
In the deconfined state (E^a_i)^2 ≡ 0, while the vector colorelectric interaction V_1 produces the fundamental Polyakov loop [31,32]; hence the pressure is as in (2), with the quark pressure expressed through Polyakov loops as in [11,12]. For gluons the pressure contains adjoint Polyakov loops L^n_adj, with L_adj related to the fundamental loop L. As a result the transition temperature can be obtained from the equality P_I = P_II, written in the form of Eq. (10), with ∆|ε_vac| = ∆|ε^(q)_vac| + |ε^(g)_vac(el)|.
We now come to the exact form of the q̄q interaction V_1(r), produced by the nonperturbative (np) nonconfining correlator D^E_1(x) [11,12,31], which gives a one-quark contribution V_1(∞, T). It was argued in [12,31] that V_1(∞, T) at T around T_c can be approximated by the formula (13), which agrees approximately with the known lattice data [36,37]. In what follows we shall follow the reasoning of our previous studies [12,31] and use this form. At this point one must take into account that the interaction V_1(r, T) is able to bind q̄q pairs into bound states; see [3] for a review and [31,32] for analytic studies. Moreover, as shown on the lattice in [33,34,35], the QQ̄ interaction in the unquenched case changes smoothly with temperature around T_c, where the confining interaction V_conf(R, T) is replaced by the nonconfining V_1(R, T). Therefore one can introduce the pressure of the bound pair and triple terms, P_bound, and assume that the difference ∆P_hadr ≡ P_hadr(I) − P_bound(II) is a small quantity near T_c, which can be neglected in the first approximation. Note also that in the quenched case this transition from V_conf to V_1 has a different structure, see e.g. [36]. As it is, we are not yet able to explain why this smooth transition of V_conf to V_1 happens in the unquenched case and how it leads to the resulting QCD temperature transition (work in this direction is going on).
As was discussed in [31,32] and is known from the lattice data (see [3] for a review), the binding properties of V_1(R, T) are concentrated in a narrow region of temperatures around T_c, while V_1(∞, T) gives a piece of self-energy for each quark and antiquark, which decreases at large T. In this way the confined pairs and triples of quarks and antiquarks near T ≈ T_c go over into pairs and triples connected by the interaction V_1(r, T) and into isolated quarks and antiquarks with energies augmented by the constant piece V_1(∞, T). In some sense this transition is similar to the ionization of a neutral gas with increasing temperature, which finally produces an ion-electron plasma in a smooth, continuous way.
In what follows we shall treat ∆P_hadr(T_c) as a small term compared with |∆ε_vac| and shall disregard it in the first approximation. We can now proceed with the calculation of T_c, starting with the case µ = 0, B = 0, when one can retain only the first terms with n = 1 in the sums in (7) and (9).
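Before specializing to particular cases, the logic of the pressure balance can be illustrated with a deliberately crude numerical sketch. It replaces the Polyakov-loop-modified quark and gluon pressures by a single effective term p_eff T^4 and neglects P_hadr, so the coefficient p_eff and the numbers below are placeholders rather than the FCM expressions used in the text; the point is only that T_c scales as the fourth root of the vacuum energy difference.

def tc_from_pressure_balance(delta_eps_vac, p_eff):
    """Toy transition temperature from |delta eps_vac| = p_eff * T_c**4.

    delta_eps_vac : vacuum energy difference between the phases, in GeV^4
    p_eff         : placeholder effective coefficient of the deconfined T^4 pressure
    Returns T_c in GeV.
    """
    return (delta_eps_vac / p_eff) ** 0.25

# Illustration only: with |eps_vac(el)| of order 0.003 GeV^4 and p_eff of order a few,
# T_c comes out in the 150-200 MeV range.
# print(tc_from_pressure_balance(0.003, 3.0))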
The quark mass dependence of the transition temperature without magnetic fields
To check the influence of ε^(q)_vac ≡ Σ_q m_q⟨q̄q⟩, we first take it into account in the case of zero m.f., comparing T_c(m_q, eB = 0) and T_c(0, eB = 0). Using the solution of (10) for T_c in the two cases, when |∆ε_vac| is (1/2)|ε^(g)_vac| and when the quark vacuum energy is added to ∆ε_vac, one obtains (using Eq. (14) from [12]) values of T_c(m_q) that can be compared with the lattice data [22,20,21].
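Within the same crude picture sketched above, the effect of the quark vacuum energy on T_c(m_q) can be made explicit by adding Σ_q m_q|⟨q̄q⟩| to the vacuum energy difference. The sketch below is again only illustrative: the quartic-root balance stands in for Eq. (14) of [12], p_eff remains a placeholder, and the default condensate value is the effective (0.13 GeV)^3 adopted in the comparison with [22] below.

def tc_with_quark_vacuum_energy(m_q, n_flavors=3, qq_abs=0.13**3,
                                eps_el=0.003, p_eff=3.0):
    """Toy T_c(m_q) including the quark vacuum energy n_flavors * m_q * |<qbar q>|.

    eps_el : stand-in for |eps^g_vac(el)| in GeV^4
    qq_abs : effective |<qbar q>| in GeV^3
    Uses the same crude balance |delta eps_vac| = p_eff * T_c**4 as above.
    """
    delta_eps = eps_el + n_flavors * m_q * qq_abs
    return (delta_eps / p_eff) ** 0.25

# The ratio T_c(m_q) / T_c(0) grows slowly with m_q, reproducing the qualitative
# trend of the lattice data discussed in the text:
# print(tc_with_quark_vacuum_energy(0.1) / tc_with_quark_vacuum_energy(0.0))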
To check quantitatively the value of the effective condensate |⟨q̄q⟩|, or equivalently of the factor Z_q = |⟨q̄q⟩|/(0.27 GeV)^3, one can use the lattice numerical data of [22] for T_c(m_q) in the n_f = 3 case, presented in Table 2, and compare them with our predictions for |⟨q̄q⟩| = (0.13 GeV)^3. One can see a good agreement, which enables us to choose this value of |⟨q̄q⟩| for our calculations at nonzero m.f. in the next section. Note also that in the region m_q ≥ √σ ∼ 0.4 GeV the quark condensate may have a nontrivial dependence on m_q, which influences the resulting values of T_c(m_q), as can be seen in Table 2.
Table 2: T_c(m_q) calculated from (14) with |⟨q̄q⟩| = (0.13 GeV)^3, in comparison with the lattice data from [22].
In a similar way one can include ε^(q)_vac in the transition equation for nonzero chemical potential µ. Neglecting the difference ∆P_hadr(T_c) as before, one has the equality (10), which can be rewritten with the quark pressure p_q(µ) taken from [12], where ν = m_q/T, and with the gluon pressure p_gl, which to lowest order is independent of µ. We are now in a position to turn on the external magnetic field.
Transition temperature with the quark vacuum energy in an external magnetic field
In principle the magnetic field influences both phases of matter: 1) the quark vacuum energy ε^(q)_vac via the quark condensate ⟨q̄q⟩(B); 2) the gluon condensate via internal quark pair creation; 3) the hadron gas pressure; 4) the quark gas pressure.
We shall disregard, as before, the hadron gas contribution ∆P_hadr and start with the quark condensate. In this section we exploit the same mechanism of the temperature transition with the full (quark plus gluon) vacuum energy as in the previous section, but now in an external constant magnetic field. To this end one can write the basic equilibrium equation, with P^(0)_g ≡ p_gl T^4, and use the previously found quark pressure P_q(B) from [29], valid for all values of B and m_q (here e_q ≡ |e_q|); for large e_q B the total quark pressure, summed over flavors, takes a simplified form. We take into account that |⟨s̄s⟩(B)| grows with eB in the same way as |⟨d̄d⟩(B)|, while for |⟨ūu⟩| the charge e_u is twice as big, so that one can use the condensate dependence on eB found in [24]. As a result the transition temperature can be found from the corresponding relation, which yields T_c(∞) ≈ 100 MeV. In this way the asymptotic behavior of T_c(B) becomes more moderate and its negative slope decreases at large eB, again in agreement with the lattice data of [25]. The resulting behavior of T_c(B) with the quark vacuum energy taken into account is shown in Fig. 1 for different values of |⟨q̄q⟩|. Another illustration of the quark mass influence is given in Fig. 2, where our results for the solution of Eq. (21) are compared with the corresponding lattice data from [23,25] for two cases: n_f = 2+1 with physical strange and light quark masses, and n_f = 3, where all quarks have the mass of the strange quark (lattice data from [26] are given by two points, at eB = 0 and 0.8 GeV^2). One can see a good (∼ 10%) agreement between the two sets of results. One can also see from Fig. 2 that the increasing role of ε^(q)_vac leads to a flattening of the resulting dependence T_c(eB). Note, however, that we have neglected the possible dependence of |ε_vac| on the m.f., which should appear in higher orders of α_s. One can expect |ε^(g)_vac(B)| to decrease with growing eB because α_s(eB) decreases, since the quark loop contribution in the denominator of α_s(eB) in a m.f. grows with eB, as shown in [38].
A decreasing behavior of ε^(g)_vac in a m.f. is obtained within the framework of chiral perturbation theory in [39]. Therefore one can expect that the net effect of the m.f. on |∆ε_vac(B)| would be a milder linear growth, which implies a partial cancellation of the effect of the quark condensate.
Discussion of results and prospects
We have taken into account in the present paper the effect of the quark vacuum energy on the transition temperature both with and without m.f. The case of no m.f. helps us to fix the starting value of the strange quark condensate. Its further growth with m.f. is predicted by the theory based on the chiral Lagrangian augmented by quark degrees of freedom, as well as by recent lattice data. This theory was developed earlier in [24], and the resulting behavior of the quark condensate was in good agreement with the recent accurate lattice data of [23]. Comparing our present calculations with the previous ones in [29], one can see an important role of the strange quark vacuum energy (s.q.v.e.) m_s|⟨s̄s⟩| at zero m.f., and an even larger role for growing m.f. One can see in Fig. 1 that not only the asymptotics of T_c(B) is changed by the s.q.v.e., but also the slope in the region eB ≤ 1 GeV^2 becomes flatter, in better agreement with the lattice data from [25].
One more consequence of the inclusion of the s.q.v.e. is that the temperature transition now depends strongly on the admixture of strangeness in the matter, which can be important for the physics of neutron stars. As a general outcome, one can conclude that the main features of the quark mass dependence of the transition temperature are accounted for by the quark vacuum energy ε^(q)_vac, and this holds both with and without m.f.