Acceptance and usability of a home-based monitoring tool of health indicators in children of people with dementia: a Proof of Principle (POP) study.
BACKGROUND
Large-scale cohort studies are needed to confirm the relation between dementia and its possible risk factors. The inclusion of people with dementia in research is a challenge; however, children of people with dementia are themselves at risk and are highly motivated to participate in dementia research. For technologies to support home-based data collection during large-scale studies, participants should be able and willing to use them for a longer period of time.
OBJECTIVE
This study investigated the acceptance and usability of iVitality, a research platform for home-based monitoring of dementia health indicators, in 151 children of people with dementia, and examined which frequency of measurements is acceptable to them.
METHODS
Participants were randomized to fortnightly or monthly measurements. At baseline and after 3 months, participants completed an online questionnaire regarding the acceptance (Technology Acceptance Model; 38 items) and usability (Post-Study System Usability Questionnaire; 24 items) of iVitality. Items were rated from 1 (I totally disagree) to 7 (I totally agree). Participants were also invited to take part in an online focus group (OFG) after 3 months of follow-up. Descriptive statistics and both independent-samples and paired-samples t-tests were used to analyze the online questionnaires, and a directed content analysis was used to analyze the OFGs.
RESULTS
Children of people with dementia accept iVitality after long-term use and evaluate it as a user-friendly, useful, and trusted technology, despite some suggestions for improvement. Overall, mean scores on acceptance and usability were higher than 5 (I somewhat agree), although the acceptance subscales "social influence" and "time" were rated somewhat lower. No significant differences in acceptance or usability were found between the two protocol groups. Over time, "affect" significantly increased among participants measuring blood pressure fortnightly.
CONCLUSION
iVitality has the potential to be used in large-scale studies for home-based monitoring of health indicators related to the development of dementia.
Plain language summary
To confirm the relation between dementia and possible risk factors, it is important to conduct studies among people with dementia with long follow-up periods. However, including people with dementia is difficult. Therefore, people at risk of developing dementia, such as children of patients with dementia, could be included, since they seem to be highly motivated to participate in research. Technologies can support the monitoring of possible risk factors of dementia. Before such technologies can be included in studies regarding the relation between dementia and its risk factors, it should be investigated whether participants are able and willing to use such technologies for a longer period of time.
Introduction
The number of people suffering from dementia is expected to increase rapidly in the coming years. 1 Despite increased understanding of the causes of dementia, no cure or effective preventive interventions are available yet. Previous research suggests that interventions that aim to influence risk factors, such as uncontrolled blood pressure (BP), low mental and physical activity (AC), and obesity, could play a role in preventing dementia. 2 Large-scale cohort studies with long-term follow-up are needed to confirm the relation between these risk factors and the onset of dementia. However, research on preventive strategies that target these risk factors needs to start before the onset of dementia and requires very large samples of older adults, which makes the inclusion of people with dementia a particular challenge. Including people with dementia in long-term follow-up studies is difficult because of their deteriorating prognosis or death. Furthermore, selecting a sample from the general population would require a large number of participants because of their relatively "low" risk of developing dementia. Therefore, we chose to recruit children of people with dementia, who have an increased risk of developing hypertension and dementia. 3,4 Moreover, they are highly motivated to contribute to research on the prevention of dementia because of their direct experience with its impact. 5,18 Technologies such as the internet, smartphones, computers, sensors, and home-based monitoring devices can be used to support data collection during large-scale clinical studies. Such technologies are also increasingly used by middle-aged and older adults, which provides opportunities to include them in research. 6 If these technologies can facilitate participant recruitment and data collection, they can contribute to the development and study of evidence-based preventive strategies and treatments for the ageing population, including people with dementia.
iVitality is a research platform consisting of a website, a smartphone-based application, and sensors that are connected to the smartphone. iVitality can be used for home-based long-term monitoring of several health indicators, i.e., BP, AC, cognition (C), and lifestyle factors, that are associated with dementia, as shown in previous research. 5,[7][8][9] iVitality is intended to be used in the PROBE (PReservation Of Brain function in the Elderly) study, a large-scale trial on these health indicators and their relationship with dementia. To support such large-scale clinical studies regarding the etiology of dementia and potentially relevant prevention strategies, participants should be able and willing to use the platform for a longer period of time. Factors such as the usability of the platform, the clarity of its interface, and its functional and technical adequacy might influence participants' willingness to use the platform. 10 Furthermore, the frequency of health indicator measurements might influence participants' intention to use the platform. 11 Therefore, the objectives of this Proof of Principle (POP) study are to gain insight into the long-term acceptance and usability of iVitality according to children of people with dementia, and to find out which frequency of measurements is acceptable to them.
Design and participants
The POP study had 6 months of follow-up. Potential participants were recruited via posters and flyers in memory clinics, targeting children of people with dementia who accompanied their parent to the clinic, and via advertisements in the magazine and on the website of the Dutch Alzheimer Association. Participants were eligible for inclusion if they: 1) were children of people with late-onset dementia diagnosed as Alzheimer's disease, vascular dementia, or mixed dementia; 2) were aged between 45 and 75 years; 3) had no prior diagnosis of hypertension; and 4) were in possession of a smartphone running iOS or Android (version 2.3.3 or higher). Children of people with dementia who wanted to take part in the POP study registered via the iVitality website, which provided information about the study; after reading this information, 195 people registered online to participate. This study was approved by the Medical Ethical Committee of LUMC, the Netherlands (P11.131).
Procedures and measurements
A baseline (T1) assessment with a nurse practitioner or medical doctor from one of the participating memory clinics was scheduled with the 151 participants who provided informed consent and were included in the study based on the inclusion criteria. During this appointment, participants' office BP was measured, and basic demographic characteristics, medication use, and medical history were recorded. Participants also received an explanation of how to download the iVitality application on their own smartphone and how to use iVitality during the POP study. If necessary, they practiced this with the nurse practitioner. A BP measurement instrument was provided to all participants for the duration of the study. After participants downloaded the iVitality application, they were randomly assigned to one of two measurement protocols. Randomization was stratified for gender and performed in a 1:1 manner. Table 1 shows the measurement sequences of BP monitoring, AC monitoring, C tests, and lifestyle questions (Q) for protocol 1 and protocol 2. For both protocols, AC, C, and lifestyle were measured on 4 consecutive days in the first and final week of the study. In addition, C and lifestyle were measured on 1 day each week in between the first and final week. The difference between the protocol groups concerned the frequency of BP measurement, which was monthly (on 2 consecutive days) for protocol 1 and fortnightly (on a single day) for protocol 2. A 1-day BP measurement consisted of two consecutive measurements in the morning and two in the evening. Via the iVitality application, participants received notifications regarding these measurements. Participants used iVitality for 6 months. All data on BP, AC, C, and Q were automatically uploaded to a password-protected central database. If hypertension was diagnosed (an average BP of 135/85 mmHg or higher, based on multiple measurements), the study doctor received an automated notification and advised the participant to visit his/her general practitioner (GP).
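For illustration, the alert rule just described can be captured in a few lines. This is a minimal sketch, not the study's actual implementation: the function name and data layout are invented, and reading "average BP of 135/85 mmHg" as an at-or-above threshold on either value is an assumption.

```python
# Minimal sketch of the automated hypertension alert, assuming the threshold is
# met when the average systolic or diastolic value reaches 135/85 mmHg.
def hypertension_alert(readings):
    """readings: list of (systolic, diastolic) home BP measurements in mmHg."""
    if not readings:
        return False
    mean_sys = sum(s for s, _ in readings) / len(readings)
    mean_dia = sum(d for _, d in readings) / len(readings)
    # the study doctor is notified and advises the participant to visit the GP
    return mean_sys >= 135 or mean_dia >= 85

# example: two morning and two evening readings from one measurement day
print(hypertension_alert([(138, 84), (141, 88), (133, 82), (139, 86)]))  # True
```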
All participants received an online questionnaire at T1 and after 3 months of follow-up (T2), which contained questions on the acceptance and usability of iVitality. In addition, participants who had taken part for 3 to 6 months were invited to an online focus group (OFG) interview (T3) to gain insight into their experiences with the iVitality research platform.
Online questionnaires
An online questionnaire was used to measure the acceptance and usability of iVitality at T1 and T2. Acceptance of iVitality was measured with the Technology Acceptance Model (TAM) and its extensions developed by Venkatesh et al. 12 The questionnaire consisted of 38 items divided over eight subscales: motivation (13 items), performance expectancy (five items), effort expectancy (four items), social influence (two items), affect (four items), trust (four items), self-efficacy (five items), and time (one item). According to the TAM, these concepts influence a person's intention to use a new technological innovation and, through that, its actual use in daily life. 12 The complete acceptance questionnaire was included in the online questionnaire at T1 and T2. An adapted version of the Post-Study System Usability Questionnaire (PSSUQ) was used to measure the usability of iVitality. 10 This questionnaire consisted of 24 items divided over three subscales: system usefulness (nine items), information quality (eight items), and interface quality (eight items). These 24 items were included in the online questionnaire at T1 and T2. All acceptance and usability items were rated on a scale from 1 (I totally disagree) to 7 (I totally agree), with higher scores indicating higher acceptance and usability. In addition, a "not applicable" answer category was added to all items.
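To make the scoring concrete, the following sketch computes subscale means from the 7-point items, treating "not applicable" answers as missing. The item-to-subscale mapping and column names are hypothetical; only the subscale sizes and the 1-7 scale come from the text.

```python
import pandas as pd

# Illustrative item-to-subscale mapping (names are hypothetical); the text only
# specifies the number of items per subscale, e.g., social influence has 2 items.
SUBSCALES = {
    "social_influence": ["si_1", "si_2"],
    "trust": ["tr_1", "tr_2", "tr_3", "tr_4"],
}

def subscale_means(responses: pd.DataFrame) -> pd.DataFrame:
    """responses: one row per participant; items rated 1-7, NaN = 'not applicable'."""
    # NaN values are skipped by mean(); imputation is handled separately
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in SUBSCALES.items()}
    )
```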
OFGs
OFGs are feasible tools for collecting qualitative data. 13,14 Two OFGs were conducted at T3 to collect user experiences with iVitality: one with participants randomized to measurement protocol 1 (OFG 1) and one with participants randomized to measurement protocol 2 (OFG 2). A web browser application was used, running on an MS Windows/web server platform. The OFGs took place in the second half of October 2014. All participants who had been using iVitality for at least 3 months at that moment were invited to take part in an OFG. Participants registered themselves and received a login and password from the moderator of the online platform (who was part of the research team), with which they could enter the OFG. Participants had access to the OFG platform for 2 weeks. During these 2 weeks, ten statements (one new statement every weekday) regarding
the use of and experiences with iVitality were posted by the moderator (Table 2). Participants were invited by the moderator to respond to these statements and engage in an online discussion with each other. Participants could respond to all statements during the 2-week period at a time and place that was convenient for them. Consequently, communication between the participants was asynchronous. Participants were instructed not to mention any names for the sake of anonymity.
Analyses
Online questionnaires
Descriptive statistics were used to describe the T1 characteristics of the participants assigned to measurement protocols 1 and 2. Participants who filled out none of the items of the acceptance or usability questionnaire at T1 or T2, or who answered all items with "not applicable", were excluded from the analyses. Participants who filled out at least one question of both questionnaires were included. In that case, missing items or "not applicable" answers were imputed with the mean score of that item across all participants in the relevant measurement protocol at that time point. Cronbach's α was calculated for the subscales of the acceptance and usability questionnaires. Cronbach's α was below 0.7 for the following subscales: motivation, social influence, affect, self-efficacy, and time. Deleting items from the motivation and affect subscales did not substantially improve the alphas, so no items were deleted. No items could be deleted for social influence and time, since these subscales consisted of only two items and one item, respectively. Deleting items from the self-efficacy subscale did improve the alphas; however, no items were deleted since the subscale's mean scores did not change significantly after item deletion. Mean scores (SD) were calculated for the subscales of the acceptance and usability questionnaires for each protocol group separately. Independent-samples t-tests were conducted to compare whether acceptance and usability were rated differently between the two protocol groups. Paired-samples t-tests were conducted to compare whether acceptance and usability were rated differently between measurement points (T1 and T2). All analyses were performed using SPSS version 23.
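A minimal sketch of this pipeline is shown below, assuming a flat table with one row per participant; the file and column names are hypothetical, and SciPy's t-tests stand in for the SPSS procedures used in the study.

```python
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("questionnaire_scores.csv")  # hypothetical file; items as columns
item_cols = [c for c in df.columns if c.startswith("item_")]

# Mean imputation: missing or "not applicable" items (NaN) are replaced by the
# item mean within the participant's protocol group at this time point.
df[item_cols] = df.groupby("protocol")[item_cols].transform(
    lambda s: s.fillna(s.mean())
)

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x items matrix for one subscale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Independent-samples t-test: protocol 1 vs protocol 2 on a (hypothetical)
# subscale score at T2.
g1 = df.loc[df["protocol"] == 1, "affect_T2"]
g2 = df.loc[df["protocol"] == 2, "affect_T2"]
t_ind, p_ind = stats.ttest_ind(g1, g2)

# Paired-samples t-test: T1 vs T2 within protocol group 2.
p2 = df[df["protocol"] == 2]
t_rel, p_rel = stats.ttest_rel(p2["affect_T1"], p2["affect_T2"])
```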
OFGs
The moderator analyzed the OFG data using a directed content analysis approach. Data were analyzed per statement for the two OFGs separately, in order to detect differences in the experiences of participants who followed different measurement protocols.
Online questionnaires
Participant characteristics
In total, 151 participants were included in the POP study and randomly assigned to the two measurement protocols: 66 participants were randomized to measurement protocol 1 and 85 to protocol 2. Sixteen participants were excluded (four from protocol 1 and twelve from protocol 2) because they completed none of the TAM or PSSUQ questions at either measurement point. The remaining 135 participants were included in the analyses of the online questionnaires at T1 and T2 (62 from protocol 1 and 73 from protocol 2). T1 characteristics of these participants are provided in Table 3.
Missing items on the questionnaires of the included participants were imputed. For the acceptance questionnaire, 2.9% of all participants' scores were imputed, and 1.9% for the usability questionnaire; overall, 2.5% of the scores on the online questionnaires were imputed. Table 4 shows the mean scores (SD) on the acceptance subscales (motivation, performance expectancy, effort expectancy, social influence, affect, trust, self-efficacy, and time) and usability subscales (system usefulness, information quality, interface quality) at T1 and T2 for the two protocol groups separately. Table 4 also shows the t-scores and P-values of the independent-samples t-tests conducted to compare the mean scores of the two protocol groups at T1 and T2. These t-tests revealed no significant differences in acceptance and usability between the protocol groups. Overall, the mean scores on the acceptance and usability subscales were higher than 5 (I somewhat agree). The mean scores on the social influence and time subscales were somewhat lower for both protocol groups. Mean scores of both protocol groups were also compared over time, i.e., between measurement points T1 and T2. Table 5 shows the t-scores and P-values of the paired-samples t-tests that were conducted. The paired t-tests revealed that scores on the effort expectancy and social influence subscales increased significantly from T1 to T2 for both protocol groups. Mean scores on the affect subscale increased significantly between T1 and T2 for protocol group 2, meaning that only participants measuring BP fortnightly showed more affect toward using iVitality at T2.
OFGs
Participant characteristics
In total, 32 participants registered for the OFGs and received a login and password. Eventually, 26 of them actively participated in an OFG: 11 in OFG 1 and 15 in OFG 2. Characteristics of these participants are provided in Table 6.
Participant activity
In OFG 1, participants posted 71 reactions in total during the 2-week period, and the number of reactions per participant varied between 2 and 11. In OFG 2, participants posted 118 reactions in total, and the number of reactions per participant varied between 3 and 11. The information in Table 7 shows that eight
Table 5 Comparison of mean acceptance and usability scores between baseline (T1) and after 3 months of follow-up (T2) per protocol group
Participant experiences with iVitality
Overall, participants in OFG 1 and OFG 2 agreed on most of the ten statements. Participants agreed that iVitality could be incorporated into their daily lives, although they preferred more flexible measurement moments, and measurements outside their homes were difficult: "iVitality only fits use at home. I did not want to carry the BP device outdoors, which has led to some missing values" (protocol 1). Participants in both focus groups indicated that they had sufficient skills to use iVitality without help, although having help available was perceived as pleasant. Using iVitality encouraged participants to think about their health and lifestyle: "iVitality made me more aware of the importance of a healthy lifestyle, including being and staying active" (protocol 1). For participants in OFG 1, gaining information on their own health was equally or more important than contributing to scientific research, whereas participants in OFG 2 rated the contribution to scientific research as more important than (or equal to) personal health information. All participants believed that their privacy was guaranteed by iVitality and that the results displayed were accurate. Furthermore, participants in both focus groups agreed that they would contact their GP if iVitality indicated a high BP, and some actually did: "Via the iVitality application, I received the notification that my BP was too high. So, I went to the doctor, but luckily no further action was required" (protocol 2). Participants in both focus groups had a somewhat negative view of the usability of iVitality. They indicated that the application sufficed for this study, but that improvements are needed in technical aspects (the connection between the BP device and the smartphone, smartphone battery use, restarting) and in the attractiveness of the interface. In OFG 1, half of the participants would have liked to continue using the iVitality application, while in OFG 2 none of the participants preferred this. Participants in OFG 2 would only continue using iVitality if feedback on the tests and games were provided: "An added value would be to receive feedback regarding completed measurements. I would like to know in what way my own results compare to the standard" (protocol 2).
Discussion
This POP study evaluated the long-term acceptance and usability of iVitality according to children of people with dementia. It may be concluded that children of people with dementia accepted iVitality after long-term use (6 months) and evaluated it as a user-friendly, useful, and trusted technology, despite some technical and other suggestions for improvement. At T1 and T2, the level of acceptance and usability of iVitality did not differ between participants measuring health indicators monthly and those measuring fortnightly. When comparing acceptance and usability over time, participants conducting fortnightly health measurements showed a higher level of affect toward using iVitality at T2 compared to T1. The level of affect toward iVitality among participants conducting monthly measurements did not change over time.
The results of this study are in line with the preliminary results of van Osch et al, 5 who explored the usability of iVitality in four children of people with dementia, and showed the potential acceptance and usage of iVitality in larger user groups such as in this POP study. This finding fits with the popularity of technology use and the increased uptake of innovative technologies by middle-aged adults. 15 Middle-aged adults are getting used to technology and adopt and accept such technologies in health care settings more easily. This supports the potential of monitoring health indicators at home to prevent health problems such as dementia. For example, some participants in this study contacted their GP when iVitality indicated a high BP and indicated that feedback on health data was very important. A suggestion for improvement was to receive feedback on the results displayed by iVitality and to be notified when further action is required. Such feedback is suggested to have the potential to influence patients' attitudes and health behavior as well. 16 Attitude and behavior changes are of utmost importance for improving one's health. In this light, the type of motivation for using self-management tools plays a role in the actual outcomes of using such tools. Intrinsic motivation has been associated with positive health outcomes. 17 People's own choice, insight into personal health data, and contributing to research were important reasons to use iVitality. Subjective norms of important others did not seem to play a role in the decision to participate. This suggests that intrinsic motivation to use iVitality was high among participants, while controlled motivation was low. Van Osch et al 5 reported that the motivation to contribute to research might be a result of the unknown relation between dementia and hypertension, and indicated that addressing this relationship might stimulate self-monitoring. In addition to these motivations for self-monitoring, Wijsman et al 18 monitored the adherence of POP study participants to the prescribed protocols. Overall, adherence to iVitality was acceptable (64%), although it was slightly better among participants measuring fortnightly (71.4%) than among participants performing monthly measurements (64.3%). This rate of adherence is in agreement with previous research suggesting that motivation is the key to adherence to self-monitoring protocols. [19][20][21] This study showed that participants measuring health indicators fortnightly showed a somewhat higher level of affect toward using iVitality over time. This is in line with the higher adherence rates among participants with fortnightly health indicator measurements reported by Wijsman et al. 18 This finding should be put into perspective, since none of the other acceptance and usability subscales differed between the two measurement protocols, which indicates that acceptance and usability were fairly equal among participants measuring health indicators monthly and fortnightly. However, the slight preference for fortnightly health indicator measurements might be a result of participants developing a habit. Participants performing measurements more often (i.e., fortnightly) may have become used to the procedure and may have experienced less burden due to the single-day measurements compared with the 2-day monthly measurements. This might especially apply to the elderly, who often have to cope with forgetfulness.
A strength of this study was that the monitoring and feedback system was tested in the daily environment of the participants, which makes the results more realistic and provides more accurate and detailed information about the experiences and problems that can occur. With regard to the methodology, credibility and confirmability were increased by data triangulation. As participants were selected based on their presence in memory clinics at a certain time point, this may have introduced some selection bias. In addition, we did not measure participants' information technology (IT) competences, which might have influenced the acceptance and usability of iVitality; however, only participants in possession of a smartphone were included in the study. The findings also show that participants were highly motivated to participate, which may have influenced the results. Furthermore, the response to the statements of the OFGs was disappointing, and the asynchronous aspect of the OFGs led to little communication between participants, which is considered one of the limitations of asynchronous OFGs compared to traditional focus groups (TFGs). Better instructions or fixed response periods might have increased participant activity and interaction. However, the asynchronous aspect and absence of time pressure are often valued for their convenience, since participants are unconstrained by time and place. 13,14 Moreover, OFGs provide benefits to researchers, since recruitment costs and travel expenses are lower and researchers save time due to automatic capture of data. Pitfalls of OFGs compared to TFGs are potential sampling bias due to computer illiteracy and misinterpretation of information due to the lack of non-verbal signals. 13,14 The findings of this study, in the light of previous research, suggest that iVitality has the potential to be used in large-scale clinical studies for home-based monitoring of health indicators related to the development of dementia, such as the PROBE study. The deployment of such a technology platform might contribute to the long-term monitoring of health indicators in children of people with dementia, to establishing the relation of these health indicators with dementia, and therefore to the prevention of dementia. Furthermore, iVitality might be used for home-based monitoring among other patient groups for whom large-scale studies with long follow-up periods are needed to show relations between health indicators and a disease. However, in order to realize the potential of iVitality in large-scale studies, a few issues should be addressed. Important suggestions for improvement were more flexible measurement moments and receiving feedback on the results displayed in the application. Furthermore, some technical shortcomings influenced the perceived usability of iVitality.
BMP4 preserves the developmental potential of mESCs through Ube2s- and Chmp4b-mediated chromosomal stability safeguarding
Chemically defined medium is widely used for culturing mouse embryonic stem cells (mESCs), in which N2B27 works as a substitute for serum, and GSK3β and MEK inhibitors (2i) help to promote ground-state pluripotency. However, recent studies suggested that MEKi might cause irreversible defects that compromise the developmental potential of mESCs. Here, we demonstrated that deficient bone morphogenetic protein (BMP) signaling in the chemically defined condition is one of the main causes of the impaired pluripotency. Mechanistically, activating the BMP signaling pathway with BMP4 could safeguard the chromosomal integrity and proliferation capacity of mESCs by regulating the downstream targets Ube2s and Chmp4b. More importantly, BMP4 promotes a distinct in vivo developmental potential and long-term pluripotency preservation. In addition, the pluripotency improvements driven by BMP4 are superior to those achieved by attenuating MEK suppression. Taken together, our study shows that appropriate activation of BMP signaling is essential for regulating functional pluripotency and indicates that BMP4 should be applied in the serum-free culture system. Supplementary Information The online version contains supplementary material available at 10.1007/s13238-021-00896-x.
Supplementary Figure legends
(I) Confirmation of the indicated all-ESC mice by simple sequence length polymorphism (SSLP) assay. Genomic DNA of C57 and 129 mice were used as positive controls and genomic DNA of DBA/2 mice was used as a negative control.
Data are represented as the mean ± SEM in (A), (G), and (H), and as the mean ± SD in (C).
(D) ChIP-qPCR analysis of the occupancy of the indicated histone modifications at the promoter regions of Ube2s and Chmp4b in N/2i- and N/2i+BMP4-mESCs. Two potential regulatory regions of Ube2s and one of Chmp4b were detected.
Relative enrichment was normalized to IgG ChIP signals at the same regions. A male mESC line, #S2, was tested in 2 independent experiments. Data are represented as the mean ± SEM. Statistical analysis was performed using a two-tailed unpaired Welch's t-test. *p < 0.05. (H) Induced knockdown of Ube2s or Chmp4b in mESC line #S2 for 5 passages caused an increased resorption rate in the chimera assay. The control-, ishUbe2s-, and ishChmp4b-mESCs were cultured under the N/2i+BMP4 condition with Dox for 5 passages before microinjection.
(I) Validation of overexpression of Ube2s (left) and Chmp4b (right) in N/2i-mESCs. mESC line #S2 was used in this test. Hprt was set as an endogenous control. n = 3 replicates.
(I) Karyotyping validation of S-, N/2i-, and N/2i-S-mESCs. N/2i-S means that N/2i-mESCs were switched back to the S condition for 15 days of culturing. Note that N/2i-S mESCs showed a high proportion of aneuploidy, similar to that of N/2i-mESCs.
The mESC line #S2 was used in this test. More than 40 mitotic phases were counted for each group. n = 3 biological replicates.
Data are represented as the mean ± SEM in (H) and (I). Statistical analysis was performed using a two-tailed unpaired Welch's t-test. *p < 0.05; ***p < 0.001; n.s., not significant.
A Rare Case of West Nile Virus-Associated Cardiomyopathy
A 68-year-old man presented in late summer 2021 with fever, myalgias, generalized weakness, dizziness, and headache. His past medical history included rheumatoid arthritis treated with infliximab, congestive heart failure with preserved ejection fraction, and recent travel to Alaska. He was febrile, tachycardic, and tachypneic on admission. Physical exam and admission labs were overall unremarkable. On day 4, he complained of shortness of breath and central chest discomfort. Troponin was mildly elevated, the electrocardiogram was unremarkable, and an echocardiogram showed new global wall motion abnormalities and an ejection fraction of 40%, down from 55% two months prior. Serum West Nile IgM antibodies resulted positive near the end of hospitalization. Testing for SARS-CoV-2, influenza, and multiple other viral, bacterial, and fungal organisms was negative. Overall, the patient recovered clinically with conservative management, including improvement in ejection fraction on echocardiogram. West Nile virus (WNV) is associated with a myriad of symptoms and complications, most notably neuroinvasive disease. However, cardiomyopathy secondary to WNV, as illustrated in this case, has been infrequently described. Clinicians should be aware of this potential rare complication in patients with WNV to improve rapid detection and treatment of myositis, associated cardiomyopathy, and related complications.
Introduction
West Nile virus (WNV) is a mosquito-borne member of the Flaviviridae family endemic to North America, Europe, Africa, the Middle East, South Asia, and Australia [1]. First isolated in Uganda in 1937, the virus was eventually introduced to the Western Hemisphere in 1999 following a series of outbreaks in the Middle East and Europe in the 1990s. WNV is sustained in nature in a cycle between birds and mosquitoes; humans and other affected mammals are dead-end hosts. While the main mode of transmission to humans is through a mosquito bite, other modes such as intrauterine transmission from infected mothers to fetuses, breastfeeding, blood transfusions, and organ transplants have also been documented [2].
While 80% of infected individuals are asymptomatic, patients who have symptoms primarily experience a mild flu-like illness, termed West Nile Fever (WNF), after an incubation period of 2-15 days. A very small subset, about 1%, of symptomatic individuals progress to neuroinvasive disease; at risk are those with underlying risk factors such as older age, hematological malignancies, and immunosuppression [1][2][3]. Myocarditis is a very infrequent complication that is not well-documented; subsequent cardiomyopathy is even rarer.
Case Presentation
A 68-year-old immunocompromised Caucasian man presented in late summer 2021 with fever, myalgias, generalized weakness, dizziness, and headache for 3 days. His past medical history included rheumatoid arthritis, for which he received infliximab infusions every 6 weeks, and congestive heart failure with preserved ejection fraction. He also reported recent travel to Alaska and possible tick exposure. On admission, he was febrile to 101.6°F, tachycardic, and tachypneic. Physical examination was otherwise unremarkable. Labs were notable for a white blood cell count of 2.4 K/uL (reference range 4.0-11.0 K/uL) and aspartate aminotransferase of 69 U/L (reference range 0-35 U/L). He was initially started on broad-spectrum antibiotics with vancomycin and cefepime due to concern for sepsis secondary to a bacterial source. COVID-19, influenza virus, ehrlichiosis, anaplasmosis, and Lyme disease were also part of the initial workup, and all returned negative (Table 1). Blood and urine cultures were also negative. He continued to have fevers, and infectious disease was consulted. Vancomycin was subsequently discontinued, and the patient was initiated on doxycycline for coverage of possible tick-borne illness. Additional infectious workup included Babesia, Bartonella, Coxiella, Brucella, Legionella, cytomegalovirus, and fungal serology, all of which returned negative (Table 1). On day 4 of hospitalization, the patient reported increased shortness of breath and central chest discomfort. Chest X-ray did not show any focal consolidation, pleural effusion, or pneumothorax. Troponin was mildly elevated, peaking at 0.048 ng/mL (reference range 0.000-0.033 ng/mL), and B-type natriuretic peptide was 333 pg/mL (reference range 0-100 pg/mL). The electrocardiogram did not show acute ischemic changes; however, a transthoracic echocardiogram (TTE) showed new global left ventricular hypokinesis compared to one performed 2 months prior (Video 1).
VIDEO 1: Transthoracic echocardiogram showing new global left ventricular hypokinesis and ejection fraction of 40%
View video here: https://vimeo.com/706258093 His ejection fraction also decreased from 55% to 40%. The patient had clinical signs of volume overload, including shortness of breath, elevated brain natriuretic peptide (BNP), basilar crackles, and bilateral lower extremity edema, which in combination with his echocardiogram findings was consistent with acute decompensated heart failure with reduced ejection fraction. The patient's shortness of breath resolved with diuretic therapy.
The patient's fevers resolved over the course of his hospital stay, and eventually, results for West Nile IgM antibodies returned positive. After marked clinical improvement, a repeat echocardiogram was performed a week later. It showed a return of the ejection fraction back to his baseline of 55%. No pericardial effusion was noted, and regional wall motion abnormalities had also improved; therefore, additional testing for myocarditis, including myocardial biopsy and cardiac magnetic resonance imaging (MRI), was not performed (Video 2). Of note, his neurological exam was normal throughout the entire hospitalization, and headache and dizziness symptoms resolved, making significant WNV neuroinvasive disease unlikely. The patient improved throughout his hospitalization and was discharged to home with home health.
Discussion
Based on the troponin elevation, the decreased ejection fraction with subsequent improvement, and the positive WNV IgM antibodies, the patient's presentation was suggestive of West Nile virus-associated cardiomyopathy. To date, very few cases of West Nile virus-associated myocarditis or cardiomyopathy have been reported. WNV infection is diagnosed via the detection of IgM antibodies in serum or CSF specimens using an enzyme-linked immunosorbent assay (ELISA) test with sensitivity and specificity both greater than 95% [4]. Diagnosing viral myocarditis is more difficult, as the only definitive test is post-mortem examination with findings of myocardial necrosis and lymphohistiocytic myocarditis. Given the improvement in his ejection fraction and wall motion abnormalities, additional testing such as cardiac MRI was not pursued in this case; therefore, we are unable to determine whether the patient had definite myocarditis. One prior report by Kushawaha et al outlined a case of neuroinvasive disease associated with myocarditis; cardiac pathology in that case was consistent with myocarditis, showing left ventricular scarring as well as lymphocytic and histiocytic inflammatory infiltrates [5]. Whether the primary pathology in WNV-associated myocarditis is an indirect immune response to the WNV infection or direct viral infiltration of cardiac myocytes with associated inflammation remains to be determined.
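The quoted ELISA performance invites a quick Bayes check: even with sensitivity and specificity both at 95%, the positive predictive value depends strongly on pre-test probability. The sketch below uses assumed prevalence values purely for illustration.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# assumed pre-test probabilities, e.g., peak season in an endemic area vs off-season
print(ppv(0.95, 0.95, 0.10))  # ~0.68
print(ppv(0.95, 0.95, 0.01))  # ~0.16: most positives are false at low prevalence
```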
Additional cases described have shown evidence of cardiomyopathy similar to our patient without definite findings of myocarditis. Pergam et al. described a 69-year-old man presenting with diffuse weakness, fever, headache, neck stiffness, diarrhea, and nausea. The patient had progressively worsening troponin and an eventual decrease in ejection fraction at 45-50%. WNV serology eventually resulted positive for IgM antibodies. Unfortunately, the patient deteriorated further, and support was withdrawn. A post-mortem examination was declined; however, the authors felt the atrial flutter/fibrillation, new global hypokinesis, reduced ejection fraction, and elevated troponin to be consistent with diffuse myocardial damage secondary to WNV-associated myocarditis [6]. Another case published by Khouzam was associated with more severe cardiomyopathy. The author described a 42-year-old woman with diffuse weakness, body aches, fatigue, low-grade fever, and dyspnea upon mild exertion for a few weeks. Serum WNV antibodies were positive, and her presentation, along with mild cardiomegaly incidentally noted on the computed tomography (CT), prompted further investigation. TTE showed global hypokinesis without regional wall motion abnormalities and an ejection fraction of 25-30% with negative cardiac enzymes and decreased BNP [7]. Compared to the cases outlined above, the current case was relatively mild, with a quick reversal of symptoms. However, clinicians should still be aware of the uncommon complications of WNV infection, including myocarditis or cardiomyopathy for prompt recognition and management.
Aside from the unusual manifestation of cardiomyopathy, the patient in our case generally had symptoms classically associated with WNV. The incidence of WNV infection in the United States is highest in the Midwest during the summer months of July to September, which was when and where our patient presented [3]. His recent travel to Alaska was likely a confounder in this case given WNV is uncommon in Alaska. While 80% of infected individuals are asymptomatic, patients who are symptomatic primarily experience a mild flu-like illness, termed WNF, after an incubation period of 2-14 days, similar to our patient in this case. Low-grade fever, headache, myalgias, nausea, vomiting, and fatigue are common symptoms, although a transient rash may also appear and usually lasts less than a day [2,3]. A very small subset, about 1%, of symptomatic individuals progress to a neuroinvasive disease, which may be meningitis, encephalitis, or acute flaccid paralysis from anterior myelitis [1][2][3]. Though our patient did have some findings suggestive of possible neurologic manifestations with headaches and dizziness, he did not have significant neurologic involvement. Those with underlying risk factors such as older age, hematological malignancies, and immunosuppression are at greater risk for neuroinvasive disease. Other WNV-associated complications include rhabdomyolysis, hepatitis, and pancreatitis, which were not present in this case [1].
To date, there are no human vaccines or effective antivirals to prevent or treat WNV infection. The only WNV vaccines licensed for use in the United States are for horses [1,2]. Pharmacological agents such as the antiviral ribavirin, interferon alpha-2b, and WNV-specific neutralizing monoclonal antibodies, among others, have been previously studied. Directed therapies would be particularly helpful for patients such as our current patient given his immunosuppressed state, but studies have been limited by an insufficient number of patients or have only shown efficacy in animal models but not in humans [2,3]. The current standard of care is the supportive treatment of fluid balance, antiemetics for nausea and vomiting, in addition to analgesics for myalgias and arthralgias. However, it is still important to promptly recognize the complications of WNV infection, such as arrhythmias or signs of volume overload that can point toward myocarditis or congestive heart failure, as this will lead to overall better management of these patients.
Importantly, preventative measures such as community mosquito control programs and personal protective gear are the primary methods of prevention against WNV infection [1,2]. Limiting outdoor activities during the high mosquito activity times of dawn and dusk, wearing long sleeves and pants, and using insect repellents are individual approaches to prevention. Community control programs include targeting breeding sites and spraying for adult mosquitoes [1]. As local and regional outbreaks are unpredictable, clinicians evaluating patients with complaints of fever, headache, myalgia, and fatigue should consider WNV infection, especially during the summer months and when the history suggests exposure to mosquitoes through outdoor activities.
Conclusions
The current case represents a rare example of WNV-associated cardiomyopathy. Testing for WNV is suggested in endemic areas when findings are classic for a viral illness. Additionally, it is important to be mindful of WNV-associated complications and to act quickly to discover potential cardiac pathology through testing such as echocardiography. Rapid treatment of complications, such as arrhythmias, will lead to better management and care of these patients.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Outbreak Investigations
The aim of outbreak epidemiology is to study an epidemic in order to gain control over it and to prevent further spread of the disease. In general usage, "outbreak" means a sudden occurrence, while in the epidemiological sense an outbreak is defined as a sudden increase in disease frequency, related to time, place, and observed population. Thousands of outbreaks among humans and animals have been reported and investigated during the last two centuries, the most numerous among them being outbreaks of cholera, plague, malaria, smallpox, influenza, SARS, measles, salmonella, chikungunya, and various foodborne diseases. Traditionally, outbreak investigations are an essential part of infectious disease epidemiology. During the 18th and 19th centuries, epidemics of different diseases were widespread in Europe. Epidemiologists like Edward Jenner (1749–1823), a country doctor who observed the devastating smallpox epidemics in England in the late 18th century and, based on his observations, introduced preventive vaccination against the disease, and John Snow (1813–1858), who identified contaminated water as the cause of the London cholera outbreak in the 1850s, undoubtedly created the fundamentals of modern outbreak investigations (Gordis 2009). Today there are new challenges for the study of infectious diseases. On the one hand, due to global and regional changes in the environment, industry, food processing, transportation of goods and food, and behavioral changes, new infectious diseases emerge. On the other hand, people are confronted with already forgotten diseases that are no longer considered a danger to public health (Dwyer and Groves 2001). Chapter 3 provides a comprehensive overview of emerging and reemerging infectious diseases. Furthermore, the increasing density of populations, growing megacities in the developing world, an increasing number of subpopulations at risk, and other socio-demographic factors influence the way communicable diseases spread (see Chapter 2). Considering the changing
nature of modern infectious diseases, outbreak investigations play a crucial role in understanding their nature and subsequent control.
This chapter provides information on the objectives and the use and planning of outbreak investigations as well as on methods of conducting and reporting an outbreak. In addition, we provide simple examples of how to apply different study designs to investigate an outbreak.
Defining an Outbreak
The term "outbreak" is most commonly associated with a number of cases significantly higher than the background expected number of cases in a particular area over a given period of time. Beyond a simple increase in the number of cases, there can be an indication of an outbreak when the same exposure (risk factor) causes a cluster of cases (two and more cases simultaneously) with the same disease; the number of disease cases in a cluster must not necessarily be higher than expected (Ungchusak 2004). For instance, a cluster of five cases with hemolytic uremic syndrome (HUS) was identified in one community in southwest France in 2005. The outbreak investigation showed that all patients had consumed one brand of frozen beef burgers in the week before the onset of symptom. Escherichia coli O157:H7 (E. coli 0157:H7) was identified as the cause of the disease (King et al. 2008). An outbreak investigation should also take place even if only one case of an unknown or an unusual disease occurs and if this disease is life threatening [e.g., avian flu and severe acute respiratory syndrome (SARS)] (Timmreck 1994).
Typical for any outbreak is that it occurs suddenly and requires immediate measures. A well-conducted outbreak investigation may serve several aims. First of all, it serves to detect and eliminate a potential epidemic's cause and to provide postexposure prophylaxis to affected individuals. Next, outbreak investigations often result in the discovery of new infections and diseases. The last quarter of the 20th century and the first years of the 21st were rich in discoveries of new etiologic agents and diseases, among them Legionella spp. and legionellosis, toxic shock syndrome associated with tampon use, E. coli O157:H7 (a potential cause of fatal hemolytic uremic syndrome), Ebola virus (which was sensationalized in the news media) as the cause of a viral hemorrhagic fever, and severe acute respiratory syndrome (SARS), to name just a few (see Chapter 3 and Weber et al. 2001; Dwyer and Groves 2001; Hawker et al. 2005; Towner et al. 2008; Oxford et al. 2003). The outbreak of influenza A (H1N1) that started in Mexico in April 2009 led the World Health Organization to raise the influenza pandemic alert to its highest level (phase 6).
Outbreak analysis may deliver information about the spread of a well-known pathogen to new geographical areas. Infectious agents may be introduced into new areas with immigrants, tourists, imported animals, and contaminated food and goods (Weber et al. 2001). Successful outbreak investigations contribute to the development of knowledge about infectious diseases by identifying new modes of transmission. For example, E. coli O157:H7 infection had previously been associated with eating undercooked hamburger meat; however, numerous outbreak investigations registered E. coli O157:H7 transmission via unpasteurized cheese and apple drinks, swimming pools, lakes, municipal water, and person-to-person contact (Centers for Disease Control and Prevention 1993; Cody et al. 1999; Honish et al. 2005; Bruneau et al. 2004; Belongia et al. 1993; Weber et al. 2001).
Finally, outbreak investigations serve as a basis for the development of public health regulations and prevention guidelines. Scientific knowledge makes it possible to draw general conclusions, detect new trends, and show ways to new prevention measures. The study of outbreaks is therefore an important component of public health practice.
The investigation of an outbreak makes simultaneous use of epidemiological, microbiological, toxicological, and clinical methods in order to develop and test hypotheses about the causes of the outbreak. In the following sections, the most important methodological aspects of planning and conducting an outbreak investigation are described and explained using examples.
Suspicions of an Outbreak and Risk Communication
Outbreak investigations differ from other types of epidemiological studies, particularly in the way that they often start without clear hypotheses and require the use of descriptive analysis in order to analyze the situation in terms of time, place, person, and scope of the problem (Brownson 2006).
An outbreak can be suspected if data from several cases display common characteristics (e.g., the occurrence of many cases of a disease in the same period of time, in the same area, and with similar manifestations). To assess whether an outbreak exists, the diagnosis of the suspected cases should be confirmed and the number of detected cases compared with the baseline rate for the disease and setting. Possible biases that can influence the evaluation of an outbreak must be taken into account: above all, changes in reporting practices, changes in population size, improved diagnostic procedures or screening campaigns (detection bias), and increased interest of the public and media in certain diseases (Gerstman 2003). It is often also helpful to interview several representative cases; this can clarify the clinical picture of the disease and provide additional information about the affected individuals. The collection of epidemiological data is important for the development of prevention and control measures. Based on the initial information, an epidemiological investigation can be planned and control measures can be implemented immediately to stop further transmission of the disease (Dwyer and Groves 2001).
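The comparison of observed cases against the baseline rate can be formalized in several ways. One common approach, offered here as an illustrative assumption rather than a method prescribed by the text, is to model the baseline count as a Poisson rate and ask how surprising the observed count is:

```python
from scipy import stats

expected = 2.0   # assumed baseline: cases per month in this area and population
observed = 9     # assumed: cases reported this month

# probability of observing at least this many cases if the baseline rate holds
p_value = stats.poisson.sf(observed - 1, expected)
print(f"P(X >= {observed} | mu = {expected}) = {p_value:.5f}")  # ~0.0002
```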
In case of a confirmed outbreak, the relevant public health authorities should be notified immediately and all important findings should be shared with the individuals and parties involved. It is important to record data carefully and to maintain both internal and external communication. Internal communication concerns the team of outbreak investigators, whereas external communication concerns the selection and presentation of information to the news media as well as contact with stakeholders. Investigators should avoid unnecessary speculation, identify key points to communicate, and provide relevant background information on the epidemic as well as on the methods of its evaluation and control (Weber et al. 2001).
General control and prevention measures can already be implemented at the initial stage of the outbreak investigation. For instance, suspect foods can be withdrawn from sale, sick individuals whose work involves manufacturing or processing food can be restricted from those activities, or the population can be informed about risk-bearing products.
Descriptive Analysis
The main components of an outbreak investigation are summarized in the flowchart in Fig. 9.1. These steps need not necessarily be performed in the described sequence. Moreover, several steps, as many authors emphasize, often occur simultaneously (Gerstman 2003;Weber et al. 2001). The sequence and completeness of these steps would most likely depend on the urgency of the situation, the availability of human and other resources, and the process of obtaining data (Dwyer and Groves 2001).
In outbreak investigations, descriptive epidemiology plays one of the key roles. It characterizes an outbreak using the three standard variables of time, place, and person, and makes it possible to set up specific hypotheses about the causes and sources of the outbreak.
Case Definition
It is essential to establish a simple and workable case definition for both the description of an outbreak and a possible analytical investigation. In the present context, the epidemiological case definition includes orienting variables related to time, place, and person, in addition to clinical and, where appropriate, laboratory criteria. The case definition must be applied equally to all cases under investigation from the beginning. Obviously, early or preliminary case definitions can be based only on information about the signs and symptoms of a disease or an infectious agent. For example, a preliminary definition for a foodborne outbreak can be formulated as follows: A case of illness is defined as any vomiting, diarrhea, abdominal pains, headache, or fever that developed after attending event X.
This definition does not imply any common risk factors for affected individuals, and thus emphasizes the sensitivity to detect disease cases. However, as the investigation goes on, the case definition should be reviewed and refined to increase specificity. The previous case definition of a foodborne outbreak may then be reformulated as: A case of illness is defined as vomiting or diarrhea with onset within 4 days (96 hours) of consuming food served at the event X.
Here the definition has higher specificity and aims to exclude cases of gastroenteritis or other illnesses (Dwyer and Groves 2001).
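To make the refinement concrete, the sketch below applies the refined case definition as a filter over a hypothetical line list. The record fields and the event date are illustrative assumptions, not part of the original example; a real investigation would adapt them to its own data collection forms.

```python
from datetime import datetime, timedelta

# Hypothetical date of event X; the field names below are likewise assumed.
EVENT_DATE = datetime(2009, 6, 1)

def is_case(record):
    """Refined definition: vomiting or diarrhea with onset within
    4 days (96 hours) of consuming food served at event X."""
    has_symptoms = record["vomiting"] or record["diarrhea"]
    in_window = EVENT_DATE <= record["onset"] <= EVENT_DATE + timedelta(hours=96)
    return record["ate_at_event"] and has_symptoms and in_window

line_list = [
    {"id": 1, "ate_at_event": True, "vomiting": True,
     "diarrhea": False, "onset": datetime(2009, 6, 2)},
    {"id": 2, "ate_at_event": True, "vomiting": False,
     "diarrhea": False, "onset": datetime(2009, 6, 3)},
]
cases = [r for r in line_list if is_case(r)]
print([r["id"] for r in cases])  # -> [1]
```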
Investigators can sometimes divide cases into "definite" (e.g., confirmed in a laboratory), "probable" (e.g., cases who have objective signs and symptoms contained in the case definition), and "possible" ("suspect") (e.g., cases who have subjective signs and symptoms contained in the case definition) (Weber et al. 2001).
The following definition was formulated for "possible" cases in the outbreak of influenza A (H1N1): "Defined as an individual with an acute febrile respiratory illness (fever > 38 °C) with onset of symptoms:
• within 7 days of travel to affected areas; or
• within 7 days of close contact with a confirmed or a probable case of influenza A (H1N1)."
One of the definitions for "probable" cases of influenza A (H1N1) ran as follows: "An individual with a clinically compatible illness or who died of an unexplained acute respiratory illness that is considered to be epidemiologically linked to a probable or a confirmed case."
"Definite" case of the influenza A (H1N1) would be "an individual with laboratory confirmed Influenza A (H1N1) virus infection by one or more of the following tests:
Finding Cases and Collecting Information
Usually investigators know of only a portion of the cases that occur during an outbreak. The main reasons for this are the following:
• Not all sick individuals visit a physician; many of them feel no need to do so.
• Physicians do not always send a sample to a laboratory for microbiological analysis.
• Laboratory investigations do not always succeed in identifying a causal pathogen.
• Not all positive findings are reported to the public health department.
• Some patients avoid being reported.
Thus, in addition to the cases already known, there are cases which might have been missed or overlooked, and investigators should search for them. Only then can the extent of an outbreak be objectively estimated and the outbreak population defined. Hence, an active search for cases might be carried out using certain case-finding techniques, for example:
• searching in surveillance data and laboratory data (e.g., summaries of illnesses, morbidity reports from local health departments);
• surveying physicians, personnel of clinical microbiological laboratories, and hospitals to check logs for diseases or diagnoses typical of the current outbreak;
• questioning known outbreak cases to find secondary cases (e.g., based on guest or participant lists of an event), and making public announcements in the local press, radio, and other mass media.
(More about surveillance systems in Chapter 8.)
After all the cases are identified, comprehensive information about them is collected. The individuals can either be interviewed personally (or by telephone) or given a standardized questionnaire to fill in. Regardless of the type of disease, the following basic information is necessary to describe its general pattern in the population at risk (Gerstman 2003; Dwyer and Groves 2001):
• case identification (name, address, etc.);
• demographic background;
• clinical information (disease onset, time of exposure to the infectious agent, signs, manifestation, laboratory test results); and
• potential risk factors (exposures or factors that might influence the probability of disease).
Following the collection of this information on cases, it is possible to structure the data in terms of time, place, and person. The goal of descriptive epidemiology here is to answer the following questions: What do the patients have in common? Does the frequency of cases increase in relation to sex, age group, or occupation, or in relation to demographic, geographical, or time-related variables? To simplify answering these questions, it is often helpful to present the collected data in diagrams, tables, and maps and to calculate the attack rate.
Time: Epidemic Curves of Outbreaks
For the purpose of graphically describing cases by time of onset of illness, an epidemic curve can be drawn in which the occurrence of cases is shown over an appropriate time interval. Graphically, such a curve is constructed by putting the number of cases on the y-axis and the date of onset of illness on the x-axis. An epidemic curve helps to keep track of the time course of events and gives clues about the ways of transmission, exposure, and incubation period of the investigated disease. Disease cases whose time course strongly deviates from that of the other cases ("outliers") can give important clues to the source of infection (Gordis 2009). An epidemic curve can also help in distinguishing between common and propagated source epidemics.
Four examples of typical epidemic curves are given in Fig. 9.2a-d, modified from Checko (1996). Examples A and B represent epidemic curves for propagated (continuing or progressive) source outbreaks. Propagated outbreaks depend on transmission from person to person or continuing exposure from a single source (Gerstman 2003). Curve A illustrates an outbreak (e.g., measles, influenza, or chickenpox) with a single exposure and index cases (index cases are those that first come to the attention of public health authorities) (Friis and Sellers 2004). Curve B shows the incidence of secondary and tertiary cases, typical, for example, of hepatitis A (secondary cases are those who acquire the disease from contact with primary cases, and tertiary cases are those who acquire it from contact with secondary cases). In such a propagated outbreak, as shown in part B, there is first an increase in cases after exposure, then a fall in incidence, followed by a second rise in cases infected through person-to-person transmission from the primary cases. Curves C and D of Fig. 9.2 are examples of common source outbreaks, in which most cases are exposed to one risk factor. Part C is a possible example of an outbreak in which the number of cases rises suddenly and then slowly falls again. This is characteristic of a common source outbreak with a point exposure, in which the population at risk is exposed simultaneously within a short period of time. In this instance the epidemic ends unless secondary cases occur, which is typical for foodborne outbreaks. Another example of a point source outbreak is Legionnaires' disease, which broke out among people who attended a convention of the American Legion in Philadelphia in 1976 (Arias 2000). In part D there is a continued (intermittent) exposure of individuals; cases of disease occur suddenly after the minimum incubation period but do not disappear completely, because more individuals continue to be exposed to the source of infection.
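As a minimal illustration of how such a curve is constructed, the sketch below plots hypothetical onset dates as a bar chart (cases on the y-axis, date of onset on the x-axis); the counts are invented and merely mimic the sudden-rise, slow-fall shape of a point source outbreak.

```python
import matplotlib.pyplot as plt
from collections import Counter

# Hypothetical dates of onset (day of the month), one entry per case.
onset_days = [2, 3, 3, 4, 4, 4, 4, 5, 5, 5, 6, 6, 7]

counts = Counter(onset_days)
days = sorted(counts)
plt.bar(days, [counts[d] for d in days], width=0.9)
plt.xlabel("Date of onset of illness (day of month)")
plt.ylabel("Number of cases")
plt.title("Epidemic curve (hypothetical point source outbreak)")
plt.show()
```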
Place: Spatial Distribution
The spatial description of an outbreak can provide useful evidence about the geographical distribution of the cases, the size of an outbreak, and, under special circumstances, the underlying source. For example, this might give information about specific locations within closed environments (e.g., a hospital), sites of routine activities (e.g., fast-food restaurants, public pools), or the places where affected individuals live (Weber et al. 2001). It is practical to present geographical information in the form of maps, for example, dot density maps and choropleth maps. Dot density maps serve to present the geographical extent of the problem graphically and to provide information on clustering. Probably the most famous dot density map was drawn by John Snow, showing the cholera deaths near Golden Square in London (where the outbreak occurred) in 1854 (McLeod 2001). From his map one could recognize the clustering of cholera cases around the Broad Street pump and, thus, the water-borne nature of the infectious agent (Gerstman 2003). However, the disadvantage of dot density maps is that they do not provide any information concerning the number of people at risk in a mapped area, which can be misleading when the populations of these areas are unequal in size. Another option is a choropleth map, which shows area-specific disease rates, for example, disease attack rates per 100 inhabitants, revealing the epicenters of an epidemic.
In any case, visual representations are beneficial to understand more about the spread of an outbreak of disease. In addition to the above mentioned, there are more complex methods [e.g., Geographic Information Systems (GIS)], which combine both geographical and other information. For advanced treatment of these methods, please see Chapter 10.
Person: Portraying the Outbreak Population
Person-based variables can be used for portraying the outbreak population. An increasing frequency of cases in a certain population group can point to groups at high risk (for example, an increased occurrence of cases among workers in a certain part of a factory or among visitors of a local restaurant). Person-based factors include demographic characteristics (age, sex, ethnicity), marital status, personal activities (occupation, habits, leisure activities, knowledge, attitudes, and behavior), genetic factors, physiological conditions (nutritional status, stress, pregnancy, etc.), current diseases, and immune status (Gerstman 2003). Furthermore, investigations of specific diseases, like STDs or HIV/AIDS, require the use of variables related to sexual behavior, sexual practices, and the number of sexual partners, and in specific cases also intravenous drug use.
Exhibit 9.1 Use of mathematical methods in outbreak investigation
The elementary analysis of data as sketched above is meant to detect a possible outbreak but does not lead to a definitive statement about its existence. We suspect an outbreak if the epidemic curve looks unusual, in particular, if we find incidences that are significantly higher than expected if there is no outbreak. Such a purely qualitative judgment may suffice in a relatively simple and clear-cut setting as in the following examples of food poisoning, especially if supported by an a posteriori epidemiological analysis of the kind made there. In many situations, however, given the consequences of actions to be taken depending on the result of the investigation, a more precise decision rule will be necessary. We have to state what we mean by "significantly higher than expected." If we base our conclusion exclusively on the epidemic curve, which amounts to disregarding the spatial component of the data on cases, the problem may be formulated as follows: how can we determine a "threshold value" t such that, in the absence of an outbreak, an incidence exceeding t for a given period has a "very small" probability. We will then declare that there is an outbreak if the epidemic curve passes to values above t. What we mean by a "very small" probability needs to be defined in advance, depending on the risk we are willing to face for overlooking an outbreak. Mathematically, this approach bears some similarity with the so-called theory of dams.
There we are interested in the probability that a dam built to contain water in a reservoir, e.g., for an electrical power station, will overflow during a given period of the future, given data from the past. Some research along these lines was indeed done within the framework of outbreak investigations but has not gained much importance because it became increasingly clear that the larger part of relevant information is usually contained in the spatial component of the data on cases. This led to the so-called cluster analysis, both for noninfectious and infectious diseases. The basic idea is similar to the one formulated before, namely to describe in a rigorous quantitative way what kind of clustering is, in the absence of an outbreak, still to be considered as "normal" and arising purely by random effects. We cannot enter into details here; some of the methods are presented in Chapter 11. There is an introductory text by Waller and Gotway (2004). For an advanced treatment, see the book by Lawson and Kleinman (2005), especially its chapter by Kulldorff on "Scan Statistics for Geographical Disease Surveillance: An Overview."
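As a rough sketch of the threshold idea described above, the snippet below assumes, purely for illustration, that weekly case counts in the absence of an outbreak follow a Poisson distribution whose mean is estimated from hypothetical historical baseline data; the threshold t is then the corresponding upper quantile. Real surveillance systems use considerably more refined models, including the spatial methods mentioned above (see Chapters 8 and 11).

```python
from scipy.stats import poisson

# Hypothetical weekly case counts from comparable past periods
# with no known outbreak (assumed Poisson for this sketch).
baseline = [3, 5, 4, 2, 6, 4, 3, 5, 4, 4]
mean_rate = sum(baseline) / len(baseline)

# The "very small" probability of a false alarm, fixed in advance.
alpha = 0.01
t = poisson.ppf(1 - alpha, mean_rate)  # exceeded with probability <= alpha

current_week_count = 11
if current_week_count > t:
    print(f"Count {current_week_count} exceeds t = {t:.0f}: suspect an outbreak.")
```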
Analytical Epidemiology
To remind the reader, the goal of an outbreak investigation is fundamentally not only to identify and describe the causative agent but also, more specifically, to find the pathogen source of the disease and the modes of transmission in order to develop control and prevention measures. In outbreak investigations, analytical studies are applied mainly in order to assess the center, source, and cause of infection independently of laboratory methods. The first, and probably the most difficult, step in analytical epidemiology is formulating and testing hypotheses. A formal testing of hypotheses can under certain circumstances be omitted, provided all the collected information clearly supports the generated hypotheses. In case some important issues remain unclear, further investigations are needed.
It is characteristic of analytical epidemiological studies to use a comparison group that allows quantifying a possible association between specific exposures and the disease under investigation. The two most frequently used study designs are case-control studies and cohort studies. Methodological aspects of these and other types of epidemiological studies are presented in Chapter 11.
Formulating a Hypothesis
Based on the findings of the descriptive analysis of the cases, the laboratory analysis, inspections carried out on site, and clinical investigations, the researchers are able to set up qualified hypotheses about the cause of infection, possible source of the pathogen, modes of transmission, and specific exposures. After developing the first hypotheses, a list of potential risk factors related to the infection can be developed. For instance, collected information may strongly suggest that members of a certain community supplied by a specific water system are at high risk to get ill or visitors of some event may report a disease with common manifestations (Gregg 2002).
Assessing Risks: Historical Cohort Studies
The choice of an appropriate study design may depend on various factors, like the timing of the investigation, available resources, the experience of the investigators, the size of the affected population, the exposure prevalence, and the disease incidence (Gerstman 2003). If an outbreak occurs in a limited, closed population group (for example, participants of a celebration, a party, or patients of a hospital), the historical cohort study can be preferred to other study designs. In such a study the total population is divided into persons who were exposed to the potential risk factor and persons who were not exposed. After that, the risk-specific attack rates are calculated and compared in both groups. The risk-specific attack rate is normally presented as a percentage:

Attack rate = (no. of cases in the population at risk / total no. of people at risk) × 100

The attack rate does not explicitly take a time variable into account, but as soon as the period from the exposure to the onset of most cases is known, the time is implicitly included in the calculation of the attack rate (Gordis 2009).
An example of a hypothesized foodborne outbreak is given below. A foodborne disease outbreak (FBDO) is defined as an incident in which two or more persons experience a similar illness resulting from the ingestion of a common food (Center for Disease Control and Prevention 2008). The example provides the calculation of attack rates and the identification of food or drink items which could possibly have caused the outbreak. In case of such an outbreak, investigators first list all food and drinks served at the dinner. Next, they divide the guests into those who consumed a certain food or drink and those who did not. After that, the attack rate in each of the groups is calculated using the formula given above. The next step is to compute the difference in attack rates between the two groups. The food or drink items which show the biggest differences in attack rates are the most likely to be responsible for the outbreak of disease (Friis and Sellers 2004). Exhibit 9.2 summarizes the steps in the reporting of a foodborne outbreak, as recommended by the US Centers for Disease Control and Prevention (Center for Disease Control and Prevention 2008).
Exhibit 9.2. Guidelines for reporting in investigations of a foodborne outbreak
Investigation of a foodborne outbreak: reported information and guidelines
1. Report type (final or preliminary report during an outbreak)
2. Number of cases (laboratory-confirmed and presumptive cases; if necessary, estimated number of cases)
3. Dates (dates when the first and the last known case patients got ill; dates of the first and the last known exposure; attached epidemic curve)
4. Location of exposure (use of country-specific city name abbreviations)
5. Approximate percentage of cases in each age group (identification of patterns of age distribution, age groups most affected)
6. Sex of cases
7. Investigation methods
8. Implicated food(s)
• The contaminated ingredient(s)
• Reasons for suspecting the food (e.g., laboratory analysis)
• Methods of preparation
9. Etiology (identification of bacterium, virus, parasite, or toxin, according to the standard taxonomy)
11. Contributing factors [evidence of contamination, proliferation (increase in numbers), and survival factors responsible for the outbreak]
12. Symptoms, signs, and outcomes (number of patients with outcomes)
13. Incubation period (the shortest, the median, and the longest incubation period measured in hours or days)
14. Duration of illness (the shortest, the longest, and the median duration of illness measured in hours or days)
15. Possible cohort investigation (report of attack rate with formula)
16. Location of food preparation
17. Location of exposure (where food was eaten)
18. Traceback (if any traceback investigation)
19. Recall (recall of any food product related to the outbreak)
20. Available reports (if any additional reports)
21. Agency reporting the outbreak (contact information)
For advanced reading and for downloading the reporting form for foodborne outbreaks, please refer to the Center for Disease Control and Prevention electronic materials, available at http://www.cdc.gov/foodborneoutbreaks/toolkit.htm
Example of a Cohort Study in a Hypothetical Foodborne Outbreak
After participating in a wedding dinner, many of the guests became ill with symptoms like nausea, vomiting, and diarrhea. All 150 persons who participated in the wedding meal were asked about the food and drink they had consumed and whether they got sick afterwards. The investigators suggested that some food or drink could have been contaminated with staphylococcal bacteria. Using the case definition, the attack rates for specific foods (for example, food X) were calculated and compared (Table 9.1).
Out of a total of 85 individuals who ate food X, 45 got sick (attack rate 45/85 = 53%). The attack rate of those who did not eat food X was 5/65 or 7.7%. Food X was assumed to be a possible risk factor for the disease for the following reasons:
• The food-specific attack rate among those who ate food X was high (53%).
• The food-specific attack rate among those who did not eat food X was low (7.7%), and therefore the difference ("risk difference") between the attack rates was high (45.3%).
• The majority of the cases ate food X (45/50 or 90%).
In addition, the relative risk (RR), i.e., the ratio of the attack rates, can be calculated:

RR = attack rate (ate food X) / attack rate (did not eat food X) = 53 / 7.7 ≈ 6.9

A relative risk of 6.9 indicates that individuals who ate food X had a 6.9 times higher probability of getting ill than those who had not eaten that food. Statistical significance tests can be used to assess whether this association could have arisen by chance alone (see also Chapter 12).
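The following sketch reproduces the calculations of this example; the counts are those of the hypothetical wedding-dinner cohort above (45 of 85 exposed and 5 of 65 unexposed guests fell ill).

```python
def attack_rate(cases, at_risk):
    """Attack rate as a percentage: (cases / people at risk) * 100."""
    return cases / at_risk * 100

ar_exposed = attack_rate(45, 85)    # ate food X:     ~52.9%
ar_unexposed = attack_rate(5, 65)   # did not eat it: ~7.7%

risk_difference = ar_exposed - ar_unexposed   # ~45.3 percentage points
relative_risk = ar_exposed / ar_unexposed     # ~6.9

print(f"Attack rate (exposed):   {ar_exposed:.1f}%")
print(f"Attack rate (unexposed): {ar_unexposed:.1f}%")
print(f"Risk difference:         {risk_difference:.1f} points")
print(f"Relative risk:           {relative_risk:.1f}")
```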
Secondary Attack Rate
When a disease spreads from the initial case to other persons, for example, to family members, the secondary attack rate can be calculated. It generally refers to the spread of disease in a family, household, dwelling unit, or another community or group. Here we would like to emphasize the definitions used for initial cases. If a few cases of a disease occur at about the same time after an exposure, then the first case which comes to the attention of the public health authorities is referred to as the index case, while the other ones are called coprimary cases (Friis and Sellers 2004). A coprimary case is by definition very close in time to the index case and is therefore considered to belong to the same generation of cases. The secondary attack rate is accordingly defined as follows (Friis and Sellers 2004):

Secondary attack rate = (number of new cases in the group − initial cases) / (number of susceptible persons in the group − initial cases) × 100

For instance, three cases of measles occurred in a group of 17 children in a summer camp, and it was assumed that exposure took place outside of the camp. Of these three cases, the first one registered by the camp health authorities was considered to be the index case and the two others the coprimary cases. Ten days after the first measles symptoms were noticed in the initial cases, a further 11 children in the group got ill.
Thus the secondary attack rate was 11 / (17 − 3) × 100 = 11/14 × 100 ≈ 79%.
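A small sketch of this calculation, using the numbers from the measles example (17 children, 3 initial cases, 11 subsequent cases) and assuming, as the example does, that all children in the group were susceptible:

```python
def secondary_attack_rate(new_cases, susceptible, initial_cases):
    """Secondary attack rate as a percentage; the initial (index and
    coprimary) cases are excluded from the susceptible pool."""
    return new_cases / (susceptible - initial_cases) * 100

sar = secondary_attack_rate(new_cases=11, susceptible=17, initial_cases=3)
print(f"Secondary attack rate: {sar:.1f}%")  # -> 78.6%
```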
Case-Control Study
A case-control study should be the preferred study design in an outbreak investigation under at least the following three circumstances (Dwyer et al. 1994). First, if the initial population is very large and only a part of the population at risk can be sampled. Second, if the initial population at risk is not defined well enough to determine a cohort to be followed. Finally, a nested case-control study can be applied within a studied cohort when additional hypotheses should be tested. In a case-control study the distribution of exposures in the group of cases is compared with that in a group of healthy individuals (controls). The aim of case-control studies is to find differences in the risk factors to which the two examined groups (cases and control persons) were exposed in the past. The questionnaire used to interview persons is identical for both groups.
Example of a Case-Control Study in a Hypothetical Foodborne Outbreak
We now look at the above example (Table 9.1) from the angle of a case-control study. This means, in particular, that the two groups of "cases" and "controls" involved had been sampled from larger populations of unknown size. Ninety percent of all cases ate food X, compared to only 40% of the control persons (Table 9.2). This suggests that consumption of food X is associated with the disease. We compare the odds of food consumption in the group of cases (45/5) to the odds of food consumption in the group of control persons (40/60). The odds ratio is therefore equal to

OR = (45/5) / (40/60) = (45 × 60) / (5 × 40) = 13.5

An odds ratio of 13.5 hints at a strong association between falling ill and having consumed food X. As in cohort studies, it is possible to assess the potential influence of chance with the help of statistical tests.
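The odds ratio computation can be sketched in the same way; the 2×2 counts (45/5 among cases, 40/60 among controls) are those of the hypothetical Table 9.2.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 case-control table:
    a = exposed cases,    b = exposed controls,
    c = unexposed cases,  d = unexposed controls.
    OR = (a/c) / (b/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

or_food_x = odds_ratio(a=45, b=40, c=5, d=60)
print(f"Odds ratio: {or_food_x:.1f}")  # -> 13.5
```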
Proving Evidence for Causal Associations
A statistical association asserted on the basis of an analytical epidemiological study does not mean a causal association.
The likelihood of a cause-and-effect relationship increases if the following statements are true:
• The exposure preceded the illness.
• The suspected causation is biologically plausible; in other words, it is consistent with modern biological knowledge.
• The results correspond to those from other investigations and to established and known facts about the disease.
• The value of risk or chance (measured by the relative risk or odds ratio) is high, which increases the probability of a causal association.
• There is evidence which reveals a dose-response association (the risk increases with the consumed quantity of the suspected infectious cause).
(For more about postulates for causation, see Gordis 2009 and Hill 1965.)
Control Measures and Reporting
As has already been mentioned in Section 9.1, the main goal of an outbreak investigation is to stop a current outbreak and to avoid future outbreaks or epidemics. In order to stop an outbreak, the infectious source must be removed or the transmission pathways must be blocked. To avoid further spread of the infection, it is necessary that the conditions that caused the outbreak are eliminated with the help of suitable long-term measures and structural changes. The investigation cannot be considered complete until the preventive measures have been taken and they have been proven to be effective. Specific measures that can be implemented to control the infectious source are, e.g., recall of contaminated products, closing of a manufacturing plant, cleaning or disinfection, removing persons from the infective source, and treating the infected persons. In order to block transmission, measures such as vaccination, improvement of hygiene, interruption of animate or inanimate environmental transmission, and information and educational campaigns can be taken.
Obviously, any outbreak investigation should be completed by the writing of a report and dissemination of results to the involved parties. Detailed guidelines for writing reports of outbreak investigations can be found elsewhere; however, the following remarks should be taken into account (Ungchusak 2004; Arias 2000; Gregg 2002).
First, the results of the investigations should be carefully documented and sent in the form of a detailed interim and final report to all authorities involved as well as to the administrative staff of the affected facility and the infection control center or committee (Weber et al. 2001; Arias 2000). Study findings should also be reported in the form of oral briefings or reports to all informants and interested local, state, and federal public health departments. In addition, the community of people where the outbreak occurred and study participants should be given feedback about the outcome of the investigation; the public can be informed through the news media. The scientific content of the investigation should be made accessible to specialists through publications in scientific journals and bulletins so that everyone can profit from the experience and insights gained.
During the pandemic of influenza A (H1N1), the World Health Organization provided a guidance document for preparedness and response, encouraging not only governments but also communities, families, and individuals to take an active part in mitigating the global and local effects of the pandemic. During an outbreak, civil society groups should play a mediating role between government and communities, taking part in health communication and raising awareness. Taking preventive measures at the level of families and individuals, such as regular hand washing, covering sneezes and coughs, and isolating ill individuals, is crucially important as well. It is furthermore necessary that each household takes care of its own safety in terms of access to accurate and up-to-date information, medicines, water, and food. Recovered individuals should make use of their illness experience and reach out to other affected people to provide them with information and support (WHO 2009).
Conclusions
In the light of the increasing frequency of epidemics and outbreaks, systematic and targeted action is needed in order to collect evidence and support decision-making processes. An outbreak investigation requires the application of methods of descriptive and, where appropriate, analytical epidemiology. Outbreak investigation and management includes several steps, the most important among them being the establishment of the case definition, case-finding techniques, the collection of data, and the description of cases in terms of time, place, and affected persons. Usually an analytical study is required as well. Although associations found in an outbreak investigation cannot automatically be considered causal, the simultaneous use of a well-planned epidemiological investigation and clinical and laboratory evidence will almost always provide valid information about the causes and modes of transmission of diseases, which will be helpful for decision making.
WADA Time to Choose a Side: Reforming the Anti-Doping Policies in U.S. Sports Leagues While Preserving Players’ Rights to Collectively Bargain
If you were to ask any sports fan whether performance enhancing drugs (“PEDs”) are prevalent in any of the major U.S. sports leagues, the answer would likely be a resounding “yes.” From Barry Bonds to Lance Armstrong, the specter of doping has hung over American sports for decades, and there has been consistent pressure to ramp up efforts to both deter and catch offenders. Yet, while the major U.S. sports leagues—such as Major League Baseball (MLB), the National Basketball Association (NBA), and the National Football League (NFL)—have updated their drug policies, they have not signed on to the World Anti-Doping Agency’s AntiDoping Code. To outside observers, the question arises: If American sports leagues are truly serious about catching athletes who use PEDs, then why not sign on to join the world’s largest anti-doping agency?
The International Olympic Committee established the World Anti-Doping Agency (WADA) in 1999, in response to the drug scandal that occurred at the 1998 Tour de France. The Agency's Anti-Doping Code (the "WADA Code" or the "Code") is designed to be extremely strict and punitive in order to properly deter athletes from doping and affecting the fairness of competitions. The Code has drawn the ire of many athletes and has implicated privacy concerns, but remains in place, governing the Olympics, international sporting competitions, and even the Ultimate Fighting Championship.
MLB, the NBA, and the NFL, despite past pressure from Congress and WADA officials, have continued to monitor their own athletes and collectively bargain with their players’ unions to develop drug testing policies that walk a fine line between ensuring effectiveness while minimizing invasiveness. Collective bargaining has been seen as a weakness among proponents of the WADA Code. Proponents argue that collective bargaining fails to address the players’ incentives to negotiate toothless drug policies and the leagues’ incentives to ensure that their star players are not implicated in any scandals. However, these concerns from WADA officials and the American public are overblown.
This Note argues that while U.S. sports leagues have some work to do in order to properly combat doping, the WADA Code is far too draconian and overly punitive to be implemented in American sports. As they stand, the U.S. sports leagues' policies are largely sufficient and should not become any more punitive than they currently are. However, the conflicts of interest involved when leagues and unions develop their own anti-doping policies should be addressed; specifically, the creation of these policies should be entrusted to an independent agency to ensure their unbiased development and implementation. Part I examines the WADA Code, as well as the current anti-doping policies of the NBA, the NFL, and MLB. Part II argues that the major U.S. sports leagues would be ill-advised to adopt the WADA Code to govern themselves because the WADA Code includes significant drawbacks that place unacceptable burdens on athletes' privacy and autonomy, the difference in effectiveness is not significant enough to justify the imposition of WADA's restrictions, and doping is not a significant enough problem overall to justify WADA's many drawbacks. Part III suggests striking a balance between current U.S. sports league policies and the WADA Code by establishing an independent agency that liaises with each sport's players' union and enlists sponsors in the fight against doping. This solution would serve to address some of the issues levied at MLB, the NBA, and the NFL while avoiding the overly punitive and invasive aspects of the WADA Code.
INTRODUCTION
If you were to ask any sports fan whether performance enhancing drugs ("PEDs") are prevalent in any of the major U.S. sports leagues, the answer would likely be a resounding "yes." 1 From Barry Bonds to Lance Armstrong, the specter of doping has hung over American sports for decades, and there has been consistent pressure to ramp up efforts to both deter and catch offenders. 2 Yet, while the major U.S. sports leagues—such as Major League Baseball (MLB), the National Basketball Association (NBA), and the National Football League (NFL)—have updated their drug policies, they have not signed on to the World Anti-Doping Agency's Anti-Doping Code. 3 To outside observers, the question arises: If American sports leagues are truly serious about catching athletes who use PEDs, then why not sign on to join the world's largest anti-doping agency? MLB, the NBA, and the NFL, despite past pressure from Congress and WADA officials, have continued to monitor their own athletes and collectively bargain with their players' unions to develop drug testing policies that walk a fine line between ensuring effectiveness while minimizing invasiveness. 11 Collective bargaining has been seen as a weakness among proponents of the WADA Code. Proponents argue that collective bargaining fails to address the players' incentives to negotiate toothless drug policies and the leagues' incentives to ensure that their star players are not implicated in any scandals. 12 However, these concerns from WADA officials and the American public are overblown. 13 This Note argues that while U.S. sports leagues have some work to do in order to properly combat doping, the WADA Code is far too draconian and overly punitive to be implemented in American sports. As they stand, the U.S. sports leagues' policies are largely sufficient and should not become any more punitive than they currently are. However, the conflicts of interest involved when leagues and unions develop their own anti-doping policies should be addressed; specifically, the creation of these policies should be entrusted to an independent agency to ensure their unbiased development and implementation. Part I examines the WADA Code, as well as the current anti-doping policies of the NBA, the NFL, and MLB. Part II argues that the major U.S. sports leagues would be ill-advised to adopt the WADA Code to govern themselves because the WADA Code includes significant drawbacks that place unacceptable burdens on athletes' privacy and autonomy, the difference in effectiveness is not significant enough to justify the imposition of WADA's restrictions, and doping is not a significant enough problem overall to justify WADA's many drawbacks. Part III suggests striking a balance between current U.S. sports league policies and the WADA Code by establishing an independent agency that liaises with each sport's players' union and enlists sponsors in the fight against doping. This solution would serve to address some of the issues levied at MLB, the NBA, and the NFL while avoiding the overly punitive and invasive aspects of the WADA Code.
I. COMPARING THE WADA CODE AND THE BIG THREE
MLB, the NBA, and the NFL (collectively, the "Big Three") are the three most viewed sports leagues in the United States. 14 While other U.S. sports leagues would also be affected if their league adopted the WADA Code, this Note will focus on the effects such adoption would have on the Big Three. Currently, the Big Three utilize a Collective Bargaining Agreement system, in which players' unions, on behalf of the players, negotiate the terms of the anti-doping policies with the leagues. 15 Because of the current collectively bargained system, adopting the WADA Code, which requires leagues to retain unilateral power to alter their anti-doping policies, would have a massive impact on American sports. As such, Part I begins by examining the anti-doping measures of the WADA Code and comparing them with the policies of the Big Three to determine whether these U.S. sports leagues' policies are actually significantly less effective at deterring doping than sports leagues that have adopted the WADA Code. 16 This Part also explores significant criticisms of the WADA Code's enforcement regime and considers whether these criticisms outweigh any advantage that would come with the Big Three's adoption of the WADA Code.
A. THE WADA CODE
When the World Anti-Doping Agency was founded in 1999, one of their "first major tasks was standardizing the anti-doping policies within the Olympic Movement." 17 The goal of standardization led to the creation of the WADA Code, which now governs the Olympic Games, FIFA, other international competitions, and many non-U.S. national sports leagues. 18 The Code, which first came out in 2003 (with revisions in 2009, 2015, and 2021), has continued to be incredibly "comprehensive, including instructions for how the code is to be implemented, how doping control is to be conducted, how testing and investigations should take place, and how results are to be analyzed and managed." 19 While WADA is the governing body responsible for the implementation of the Code, the Code's signatories "are responsible for the implementation of applicable Code provisions through policies, statutes, rules, regulations and programs according to their authority and jurisdiction." 20 One of the most important aspects of the Code is that the athlete is subject to "strict liability." 21 As defined in the Code, "strict liability" means that an anti-doping organization need not establish intent, fault, negligence, or knowing use of prohibited substances on the athlete's part in order for the athlete to be subject to discipline under the Code. 22 Instead, if a substance is detected in an athlete's body, the athlete has the burden of proof to rebut the presumption or "establish specified facts or circumstances." 23 A similarly strict liability standard is also present in U.S. sports leagues, meaning that one of the most important aspects of the Code is already present in U.S. sports leagues. 24 However, although the NFL and MLB require the league to prove that any adverse test was obtained through proper protocols, 25 the WADA Code places the burden of proof on an athlete alleging improper testing protocols. 26 One aspect of the WADA Code that differs sharply from U.S. sports league policy is the length of the punishments. Under the WADA Code, an athlete who fails a doping test and is found to have intended to cheat is subject to a period of ineligibility of four years. 27 An athlete acts "intentionally" when they "engage in conduct which they knew constituted an anti-doping rule violation or knew there was a significant risk that the conduct might constitute or result in an anti-doping rule violation and manifestly disregarded that risk." 28 The punishment does not take into account which illicit substance the violation involved. 29 If the violation is deemed to be unintentional, the period of ineligibility is two years, unless the athlete can show No Significant Fault or Negligence, in which case the period of ineligibility may be reduced.
Because the WADA Code has such lengthy suspension baselines, even mitigated punishments end up taking years out of an athlete's career, surpassing even the lengths of full punishments handed down by the Big Three U.S. sports leagues. 38 Because the length of each punishment applies equally to all athletes, the actual harm caused by these suspensions will vary by sport. For example, endurance runners and gymnasts have short athletic primes; for them, a two-year suspension for an accidental ingestion could effectively mean the end of their career.
This whole punishment regime is extremely athlete-unfriendly, but the drafters of the WADA Code consider these punishments necessary to properly deter doping; by completely disregarding any extenuating circumstances, the Code theoretically encourages all athletes to remain constantly vigilant. 39 But, as this Note argues, the actual deterrent effect is currently impossible to measure; therefore, leagues should err on the side of protecting athletes from such overly punitive actions.
WADA's random testing is also much more invasive than that of U.S. sports leagues. The WADA Code sets out that "[a]n athlete may be required to provide a Sample at any time and at any place by any Anti-Doping Organization with Testing authority over him or her." 40 Unlike in U.S. sports leagues, "[t]here is no limit to the number of times an athlete can be tested each year including in-competition, out of competition, random and target testing." 41 There is also no distinction between urine or blood samples; either may be collected at any time or place for analysis. 42 Additionally, the "highest-priority athletes," as established by international federations or national anti-doping agencies, are included in a registered testing pool, which means that for each quarter of the year, they must provide the agency detailed location information, known as "whereabouts information." 43 These controversial "whereabouts" rules include filing information on "regularly scheduled activities and a one-hour window each day where [they] must be available for testing. The activities and testing window must be kept updated." 44 If an athlete in a registered pool is not where they said they would be during the one-hour window, that is considered a missed test, and three missed tests within a 12-month period are deemed an anti-doping rule violation. 45 Commentators have noted that "surprise and no-notice testing serve as the cornerstone of [the WADA Code]." 46 Whereabouts testing has been controversial due to the fact that it subjects elite athletes to near constant monitoring and the fact that three missed tests could result in a two to four year suspension. 47 It has also led to legal challenges on the basis of privacy violations and led many high-profile athletes to "openly voice[] their disdain and contempt for WADA's 'whereabouts' rule through the media." 48 This sort of monitoring is not a part of U.S. sports leagues' anti-doping policies and is one of the starkest differences between those policies and the WADA Code. WADA also allows athletes to obtain Therapeutic Use Exemptions ("TUEs") to use prohibited substances for medical purposes, 49 and as with the NFL and MLB, they must be obtained in advance and cannot be obtained retroactively, absent exceptional circumstances. 50 There is criticism regarding the practice of allowing athletes to gain these exemptions for therapeutic use, with some arguing that athletes taking an otherwise prohibited substance under a TUE are "engaged in doping, but [it's] just that they have permission to do so." 51 As it stands, TUEs remain an important part of both the WADA Code and U.S. sports leagues' anti-doping policies.
B. THE NBA AGREEMENT
The NBA's anti-doping policies are outlined in the NBA-NBPA Collective Bargaining Agreement ("NBA Agreement"). The NBA can test any player, as long as the league has reasonable cause to believe the player is engaged in the use, possession, or distribution of a prohibited substance. 52 Additionally, the NBA has a random testing program, in which any randomly selected player will undergo testing for prohibited substances; but no individual player will be selected for testing more than four times in a given season and no more than two times in an offseason. 53 Players are also subject to random human growth hormone ("hGH") blood testing no more than two times during a regular season and no more than one time in an offseason. 54 Unlike the WADA Code, under which blood testing can occur at any time, 55 the NBA Agreement specifies that if a player is selected for random testing on a day he is scheduled to play in a game, any blood testing must occur after the game has concluded. 56 The NBA conducts a maximum of 1,525 total tests during the season and a maximum of 600 total tests in the offseason. 57 A test is considered "positive" for a Steroid and Performance Enhancing Drug ("SPED") if: (1) the test is confirmed by laboratory analysis; (2) a player refuses to submit to a random test or cooperate fully with the testing process, without a reasonable explanation; (3) the player refuses to submit to a scheduled test, without a reasonable explanation; (4) the player attempts to substitute, dilute, or adulterate a specimen sample or in any other manner alter a test result; or (5) the test is positive for a diuretic, confirmed by laboratory analysis. 58 There is a provision of the NBA Agreement that allows athletes to offer "a reasonable explanation" after the test, which could potentially include a medical explanation for using the prohibited substance, but the provision does not make this clear. 59 The NBA also allows players to offer "clear and convincing" evidence of no significant fault or negligence that led to the presence of the SPED in their system, in which case an arbitrator may reduce or rescind the penalty otherwise applicable. 60 No significant fault or negligence would be found in "the unusual circumstance in which the Player did not know or suspect, and could not reasonably have known or suspected, even with the exercise of considerable caution and diligence, that he was taking, ingesting, applying, or otherwise using the . . . SPED." 61 For a first positive test, a player is suspended for twenty-five games and entered into the SPED Program. 62 The SPED Program is run by the SPED Medical Director and includes a number of discretionary measures such as random testing for SPEDs and diuretics. 63 For a second offense, the player is suspended for fifty-five games, and if he is not still in the SPED Program, he is required to enter it. 64
For a third offense, the player is "immediately dismissed and disqualified from any association with the NBA and any of its Teams . . . for a period of not less than two (2) years." 65 These suspensions are much shorter than those in the WADA Code; the NBA's harshest punishment matches the WADA Code's minimum punishment for unintentional doping. Still, the first-offense suspension amounts to 31% of the NBA's 82-game season. To date, the NBA has not had to suspend an athlete for a second SPED offense. 66 In fact, the two most recent suspensions in the NBA involved allegedly contaminated supplements and unintentional ingestion of a diuretic, 67 which are quite distinct from deliberate SPED use. While the majority of the NBA's anti-doping policy is comparable to that of other U.S. sports leagues, it has often been criticized as being comparatively weak and insufficient because the NBA tests much less frequently than either MLB or the NFL. 68 However, the relatively low number of tests is likely due to the fact that the NBA has been implementing anti-drug policies since 1983, when the main target was rampant cocaine use; and it is likely that the longevity of the program has helped to establish an effective anti-drug culture among current players. 69 The focus of the NBA program is directed "most centrally at drugs of abuse," so, "[u]nlike the WADA Code, which is entirely punitive, the relevant NBA program ha[s] significant treatment and counseling elements." 70 Additionally, "[i]n the past, including in testimony before Congress in 2005, NBA officials have made the case that performance-enhancing drugs are unlikely to be effective in basketball." 71 This backdrop may explain why the NBA's anti-doping policies seem more lax than those of other U.S. sports leagues. However, this is not to say that the NBA's program is actually insufficient or lagging behind the other U.S. sports leagues. WADA has in fact commended the NBA for instituting blood testing for hGH, and the testing program has yielded very few positive results. 72 Therefore, while the NBA's anti-doping policies may not seem as stringent as the WADA Code or even other U.S. sports leagues, it is not clear that dramatic changes are needed to effectively address doping within the NBA.
C. THE NFL POLICY
The NFL began steroid testing "for informational purposes" in 1987, and started suspending players for steroid use in 1989. 73 The NFL was an early adopter of such testing; for example, MLB only began testing in 2002, 74 and the NBA only began testing in 1998. 75 The NFL has long prided itself on having the "most stringent testing policy among the major American sports leagues" and has continued to adapt its policies throughout the years to address performance enhancing drug scandals as they emerge. 76 The NFL anti-doping policy is contained within the NFL Policy on Performance Enhancing Substances ("NFL Policy"). The NFL testing policy follows a similar testing regime to that of the NBA, but the tests occur with more frequency. 77 Every player in the NFL will be tested at least once per league year. 78 Each week during the preseason and regular season, ten players from every team in the NFL will be randomly selected for urine testing. 79 During the postseason, ten players from every team in the playoffs will be tested each week for as long as that team remains in the postseason. 80 Much like in the NBA, urine samples may be collected at any time, but the collection of blood specimens is "prohibited on game days unless the player's day off is scheduled for the day following a game day, in which case blood collections may occur following the conclusion of the game." 81 Unless a player is in "reasonable cause testing," 82 they will undergo no more than six blood tests in any calendar year. 83 Each week in the regular season, eight teams are randomly selected, and within each of those teams five of the ten selected players are randomly selected for serum
The first time a player violates the NFL Policy, they are suspended without pay pursuant to the following schedule: (1) two regular or postseason games for a positive result for stimulants or diuretics; (2) six regular or postseason games for a positive result for an anabolic agent; or (3) eight regular or postseason games for "the manipulation and/or substitution of a test and the use of a prohibited substance." 89 A second violation for stimulants or diuretics results in a five-game suspension, and a second violation for anabolic steroids results in a sixteen-game suspension. 90 The third time a player violates the NFL Policy, they will "be banished from the NFL for a period of at least two seasons." 91 Considering the sixteen-game length of an NFL regular season, these punishments can result in a large impact on both the team's success and the player's salary.
Like the WADA Code, the NFL grants TUEs for specific medications that are appropriate for treatment of a corresponding medical condition. 92 These exceptions must be granted on the front-end, and will be granted retroactively only "if emergency use of the prohibited substance is necessary to avoid morbidity or mortality of disease or disorder." 93 The NFL also allows a player to present evidence establishing that the presence of the prohibited substance was not due to his fault or negligence; but otherwise, it does levy punishments on all players found to be in violation of the policy, regardless of the circumstances. 94 As detailed above, the NFL's policies are stringent and punitive. The NFL has made consistent efforts to close loopholes and address issues to ensure that its anti-doping policy remains the most comprehensive in American sports. 95 The addition of hGH blood testing, the continued dedication to randomized testing throughout the season, and the increases in punishment length demonstrate that the NFL continues to take its anti-doping policies seriously. 96
D. THE MLB PROGRAM
Though Major League Baseball has a long history with doping, its current anti-doping policy can be traced to the BALCO scandal of the early 2000s. In 2002, federal agents began investigating the Bay Area Laboratory Co-Operative (BALCO), a California lab suspected of supplying performance enhancing drugs to MLB players and other athletes. 97 As the BALCO scandal continued to implicate athletes such as baseball stars Barry Bonds and Jason Giambi, it "forced Major League Baseball to strengthen its steroid-enforcement policy." 98 Facing pressure from Congress to implement stricter steroid policies, MLB began "survey testing" in 2002 to gauge how many players were doping. 99 Five to seven percent of the players tested positive, and mandatory testing was implemented in 2004. 100 In 2006, MLB hired former Senate Majority Leader George Mitchell to conduct an investigation into PEDs in the league, which culminated in the Mitchell Report, 101 a "409-page report that identified more than 85 current and former baseball players." 102 The Mitchell Report also recommended three ways MLB could improve its policies: "(1) by vigorously investigating the use of performance-enhancing drugs through non-analytical evidence, enhancing cooperation with law enforcement authorities, and establishing a department of investigations; (2) by improving the player education program"; and (3) by further improving the testing program itself. 103 The MLB Commissioner subsequently adopted these recommendations and MLB has continued to vigilantly police doping. 104 The current MLB anti-doping policy can be found in Major League Baseball's Joint Drug Prevention and Treatment Program (the "MLB Program"). Currently, MLB tests every player at least three times a year: Each player is subject to an unannounced urine specimen collection upon reporting to Spring Training, and will be randomly selected for a urine specimen collection once during the season and once during the offseason. 105 In addition, there are 4,800 urine specimen collections from randomly selected players during each season (at least 200 of which are conducted during Spring Training) and 350 urine specimen collections during each offseason. 106 Each player will also be randomly selected once for a blood specimen collection during the season to test for hGH (collected post-game, as with the NFL and NBA); in addition to those tests, 500 blood specimen collections of randomly selected players will occur during the season and 400 during the offseason. 107 A test is considered "positive" for PEDs if: (1) laboratory analysis confirms a substance; (2) a player refuses or, without good cause, fails to take a test; or (3) a player attempts to substitute, dilute, mask, or adulterate a specimen or in any other manner alter a test. 108 MLB's punishment for a positive test for a PED is as follows: (1) for a first offense, an 80-game suspension; (2) for a second offense, a 162-game/183-days-of-pay suspension; and (3) for a third offense, permanent suspension from Major League and Minor League Baseball, with a chance to apply for discretionary reinstatement after a minimum period of two years. 109 These punishments constitute at least a large portion of a 162-game MLB season, or potentially more, and are doled out frequently, with four suspensions for PEDs levied in just the first three months of 2020. 110 Like the NBA and NFL, MLB also allows players to obtain TUEs prior to taking any substance. 111
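To put the random-pool figures above in perspective, a back-of-the-envelope estimate of the average testing load per player is sketched below. The 40-man roster (30 teams, 1,200 players league-wide) is an assumption for illustration only; it is not drawn from the MLB Program.

```python
# A rough estimate of collections per MLB player per year, using the pool
# sizes quoted above. The 1,200-player denominator (30 teams x an assumed
# 40-man roster) is illustrative only.
players = 30 * 40

random_urine = (4_800 + 350) / players  # in-season + offseason random urine pools
random_blood = (500 + 400) / players    # in-season + offseason random hGH pools
scheduled = 3 + 1                       # three scheduled urine tests + one hGH draw

print(f"Random urine collections per player: {random_urine:.1f}")
print(f"Random blood collections per player: {random_blood:.1f}")
print(f"Approximate total collections per player: "
      f"{scheduled + random_urine + random_blood:.1f}")
# Roughly nine collections per player per year under these assumptions.
```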
It also gives players the opportunity to provide evidence of no significant fault or negligence, but like WADA, and unlike the NFL, it provides explicit limits on how much a punishment can be mitigated. 112 In the case of no significant fault or negligence, the arbitration panel that hears the evidence may reduce the suspension, subject to the following limitations: (1) The panel may not reduce the penalty for a first-time offense to fewer than thirty games; (2) the panel may not reduce the penalty for a second-time offense to fewer than sixty games; and (3) the panel may not reduce the penalty for a third-time offense. 113 Proponents of the WADA Code would argue that the Code is far more effective than U.S. sports leagues at detecting and deterring doping. However, while the NBA, NFL, and MLB policies are not as punitive and do not require testing as frequently as the WADA Code, it is not clear that these policies are actually less effective than the WADA Code at deterring doping. Admittedly, there is some room for improvement to the anti-doping policies of the Big Three, particularly with the issues of conflict of interest and transparency. However, as explored further in Part II, WADA is not the correct solution for these problems.
II. THE REASONS TO ADOPT THE WADA CODE ARE NOT PERSUASIVE
When considering whether or not U.S. sports leagues should adopt the WADA Code, it is important to consider all of the drawbacks that would come with its adoption. The WADA Code has harsher requirements and harsher punishments than the U.S. sports leagues' policies, but proponents of the Code argue that its harshness is justified in order to prevent doping. 114 However, these proponents underappreciate the effects that the Code's harshness has on athletes. The WADA Code implicates serious privacy concerns, and its lengthy minimum punishments are overly punitive, particularly as they also apply to athletes who unintentionally ingest a prohibited substance. As explained in this Part, the arguable benefits of the WADA Code are not worth the harms that would likely accompany its implementation.
Additionally, the question of whether the WADA Code is actually more effective than the NBA, NFL, and MLB policies at catching doping violations is difficult to answer because the testing procedures are always playing catch-up with the actual drugs and methods used. 115 This is best illustrated by looking at famous cases of long-term evasion of punishment, such as Lance Armstrong, 116 the BALCO scandal, 117 and the Russian state-run doping scheme. 118 As has been observed, "[w]here doping is concerned, the arms race has outrun the cold war." 119 So, in order to properly justify U.S. sports leagues adopting the harsher policy embodied in the WADA Code, it should, at the very least, be proven to be a dramatically more effective regime.

113. Id.

114. Halt, supra note 8, at 278 ("While acknowledging that the requirements place a heavy burden upon athletes, the IAAF believes that the new 'whereabouts' rule strikes a proper balance between the need to locate cheaters and the rights of clean athletes."); Stewart, supra note 10, at 225 ("They see the shift [to the WADA Code] as a necessary evil in maintaining drug-free competition."); Gandert, supra note 7, at 299 ("While some athlete representatives find [the penalties] unfair, the strong penalties are an asset of the system, helping to keep cheaters out and providing strong disincentives to cheat."); Haagen, supra note 4, at 846 ("The WADA Code explicitly makes a series of trade-offs in determining how to combat performance-enhancingdrugs [sic], and those trade-offs place heavy burdens on participating athletes.").
A. WHEREABOUTS TESTING AND LONG MINIMUM PUNISHMENTS ARE OVERLY INVASIVE AND PUNITIVE
As noted above, the WADA Code requires whereabouts testing and strict minimum punishments, and the invasiveness and punitiveness of these policies are strong reasons for the Big Three to decline to follow that regime. The WADA Code mandates that every athlete subject to whereabouts testing provide "one specific 60-minute time slot where he/she will be available at a specific location" every day, seven days a week, 365 days a year. 120 While athletes subject to the WADA Code have largely come to tolerate whereabouts testing out of necessity, the testing remains unnecessarily restrictive and has major privacy implications for athletes. It has been criticized as "effectively turn[ing] athletes into prisoners" 121 and as being unnecessarily opaque in its procedures. 122 Considering the collective bargaining that currently underlies the Big Three's policies, it is extremely unlikely that any of the players' unions would agree to such invasive practices. 123

117. Sydney Lupkin, Why Drug Tests Can't Catch Doping Athletes, ABC NEWS (Aug. 6, 2013), https://perma.cc/FA6A-NRFW ("Trevor Graham, who coached track and field athletes for the Olympics, anonymously turned in a dirty syringe to the USADA, revealing that athletes were getting a then-unknown drug from the Bay Area Laboratory Co-operative.").
118. Cuffey, supra note 115, at 666 ("WADA identified Grigory Rodchenkov, the director of Russia's anti-doping laboratory, as a major actor in the doping cover-up. Rodchenkov would later admit to developing a three-drug mixture of banned substances that he provided to dozens of Russian athletes, replacing thousands of PED contaminated urine samples and passing these samples through a circular hole cut through the wall, concealed by a cabinet during the day."); Gandert, supra note 7, at 309 ("According to these allegations, over one thousand Olympic and Paralympic athletes participating in thirty sports benefitted from the Russian cheating conspiracy, which dated back to 2011." The allegations were first publicized in December 2014.).

Additionally, the long minimum sanctions under the WADA Code are overly punitive and fail to take into account other deterrent factors. The two-year minimum suspension, along with WADA's strict liability policy, puts immense pressure on athletes to closely scrutinize everything they ingest on the off chance that they may accidentally ingest a prohibited substance. Considering the fact that many of the prohibited substances on the list may not actually have any performance-enhancing effects, this is much more daunting than it may seem. 124 Additionally, these lengthy punishments do not take into consideration other factors that may deter doping, such as loss of sponsorships, reputational damage, and increased fines, which are discussed further in Part III. WADA takes an approach that seems to be based on the idea that the length of a suspension is the only factor that can properly deter athletes from doping, whereas, in reality, there are many other factors to consider. Whatever concerns one might have with the Big Three's anti-doping policies, the Code's whereabouts testing and minimum sanctions would require dramatic changes to the leagues' current regimes and are unlikely to be tolerated by any players' union.
B. THE NUMBER OF DOCUMENTED VIOLATIONS UNDER EACH POLICY IS NOT DRAMATICALLY DIFFERENT
Indeed, even if the drawbacks to the Code were justified by the promise of more effective doping prevention, the Code should at least be demonstrably more effective than the Big Three's policies at catching athletes who dope. To know how effective each drug testing policy is, one would have to know the number of athletes that are actually doping and how many of those are actually caught. This would allow for a true empirical comparison of the effectiveness of each policy. However, this data does not exist. In the absence of empirical data, it is instead worth looking at the number of doping violations from WADA and from the major U.S. sports leagues to see whether there is a dramatic difference in violation rates that would suggest increased effectiveness. This comparison is an imperfect one and does not tell the entire story, but that is precisely the point. These numbers cannot give us any true indication of effectiveness: a fact that renders any assertion that the Code is more effective somewhat toothless. Nevertheless, it is worth looking at some of the available numbers to note the lack of any dramatic disparity between athletes caught doping under the Code and the Big Three's regimes.
The most recent data available from WADA is from 2017, when 245,232 samples were collected by anti-doping organizations and analyzed by WADA-accredited laboratories. 125 Of those samples, 1,459 were confirmed to be anti-doping rule violations ("ADRVs"). 126 This does not include the samples that were found to have adverse analytical findings, 127 but only those samples that ultimately resulted in a violation. Turning to the U.S. leagues, the NBA's recent spate of positive tests may simply mean that a few more players than normal decided to try doping this year and were subsequently caught. 139 On the other hand, the NBA tests much less frequently than the other two major sports leagues, and this may be the reason for the low number of PED violations it has uncovered. Though hard conclusions are difficult to draw, what we can know from the evidence is that, of all the tests conducted in the 2019-20 season, only four violations were reported. Again, based on these numbers, it is difficult to ascertain whether one policy is any more effective than the other. 140 WADA has a higher percentage of anti-doping rule violations discovered, but WADA also covers a much larger number of athletes. To try to make the comparison more analogous to the U.S. leagues, one option is to compare the WADA violations for each sport to the U.S. sports leagues' violations. For international games of American football, 141 WADA took 622 samples and had 13 ADRVs (2.1%); for baseball, WADA took 1,039 samples and had 8 ADRVs (0.8%); for basketball, WADA took 5,697 samples and had 24 ADRVs (0.4%). 142 These numbers indicate that there are certainly more violations caught by WADA, but they do not present a violation rate appreciably different from that of the U.S. leagues, and certainly nothing in the data affirmatively indicates widespread doping in U.S. leagues. In fact, some of the basketball samples taken are likely from U.S. athletes participating in the Olympics or international competitions, and there were no violations by U.S. athletes, which may indicate that the NBA's current policies are working to deter PED use among players.
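As a quick arithmetic check on the comparison above, the sketch below recomputes the violation rates from the 2017 figures quoted in the text; only the rounding is mine.

```python
# Recomputing the violation rates quoted above from WADA's 2017 figures.
figures = {
    "all sports":               (245_232, 1_459),
    "American football (IFAF)": (622, 13),
    "baseball":                 (1_039, 8),
    "basketball":               (5_697, 24),
}

for sport, (samples, adrvs) in figures.items():
    print(f"{sport}: {adrvs}/{samples} = {100 * adrvs / samples:.1f}% ADRVs")
# Rates of roughly 0.6%, 2.1%, 0.8%, and 0.4%: differences of fractions of a
# percentage point, not the dramatic gap one would expect if the WADA Code
# were far more effective at catching dopers.
```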
It is quite difficult to draw any clean conclusions about effectiveness from the data that is available, but assuming that these various data sets are analogous enough to allow for comparison, the data we have conflicts with the idea that WADA is objectively more effective at discovering doping violations than U.S. sports leagues. 143 If the percentage of violations caught under the WADA Code were drastically different, perhaps that would indicate the increased effectiveness of the WADA Code, but the differences are simply not that appreciable.

139. It should be noted that "[t]hree players getting popped in three months is, in the words of NBA executive turned ESPN analyst Bobby Marks, 'unprecedented.'" Devine, supra note 67. It is unclear whether this will be an ongoing trend, as "[m]aybe the NBA's testing has improved [or] maybe a few more players than normal have been trying to gain an edge, and are paying the price for it." Id. Only time will tell. However, before the NBA season was suspended due to COVID-19, no other players had tested positive for SPEDs in the 2019-20 season. NBA Fines & Suspensions 2019-2020 Season, supra note 140.
140. Complaints regarding the U.S. sports leagues' anti-doping policies largely boil down to not enough monitoring and needing to collectively bargain to add prohibited substances. However, these claims of ineffectiveness are impossible to prove objectively because one cannot know the percentage of total violations caught. It should be noted that most of the major doping scandals were not discovered by testing, but instead by whistleblowers or criminal investigations. Haagen, supra note 4, at 846 ("WADA is not definitively more effective in ridding sport of performance-enhancingdrugs [sic], which warrants a serious debate about whether the costs associated with this program are worthwhile in the context of American professional sports.").
141. "The International Federation of American Football (IFAF) is the international governing body for the sport of American football and is responsible for all regulatory, competition, performance and development aspects of the game on a global level." The IFAF is a WADA Code signatory. About, INT'L FED'N OF AM. FOOTBALL, https://perma.cc/3NS4-WWPN (last visited Oct. 28, 2020).
Additionally, there is the argument that an effective policy would find that there are almost no violations. As former Congressman Henry Waxman noted in 2005, "the percentage of NFL players who test positive for steroids is very low. . . . Is this because the policy is working, or is this because players have figured out how to avoid detection?" 144 This gets to the core of why it is so difficult to assess the effectiveness of the various policies, because it is unclear whether a perfect policy would catch more offenders or catch none.
Essentially, without knowing exactly how many athletes are actually doping, it is impossible to determine the effectiveness of the differing policies, because there is no way to compare the percentage of doping athletes being caught under each regime. What we are able to see from the data is that the violation percentages are not dramatically different, and while this may indicate the effectiveness of the current U.S. league policies, it could just as easily indicate that athletes under these policies are able to escape detection at higher rates. While it is always helpful to consider the data that is available, in the end, the high uncertainty surrounding the reported violation rates under each policy makes this type of data comparison a poor way to assess the overall effectiveness of anti-doping policies in the U.S. sports leagues.
The current controversy over TUEs illustrates this point well. Some critics now contend that TUEs provide a Code-approved method of engaging in doping. Perhaps this is an indictment of the WADA Code, a loophole in what is considered the gold standard, but at a minimum, it goes to show that regardless of how unimpeachable an anti-doping regime may seem, there will always be criticisms that athletes are skirting the rules. Ultimately, if there is any justification for the adoption of the WADA Code in U.S. sports leagues, as WADA proponents argue there is, it cannot be made based on claims of "effectiveness" that cannot be proven empirically. Thus, if such a justification exists, the argument for that justification must be based on factors other than the reported data on PED policy violations.
C. THE JUSTIFICATIONS FOR STRONGER DOPING REGULATIONS AND THEIR FLAWS
Perhaps the question we should be asking is: Do we find the specter of sports doping so unacceptable that Americans would accept athletes being constantly monitored and harshly punished in exchange for a "level" playing field? 145 The common justifications put forth for stronger policies like the WADA Code are that weak doping regulations affect fan interest, that doping creates an unfair playing field, and that performance enhancing drugs harm athletes' health. 146 However, as discussed in this Section, although these justifications do help illuminate ways in which U.S. sports leagues could address concerns about their anti-doping policies, these justifications do not warrant the adoption of the WADA Code in particular.

146. Hard, supra note 7, at 545.
Fan Interest
As Professors Preston and Szymanski have said, one of the primary reasons given for doping regulation is that doping undermines interest in the sport and damages a sport's reputation. 147 However, there is research that suggests that violations of doping regulations actually do not have a significant impact on fan interest. 148 In fact, in the case of MLB, one study found that the announcement of a PED violation does have an initial home game attendance reduction of 8%, but this reduction "fades quickly to the point of being statistically insignificant 12 days after [the] announcement"; and it has only a small negative impact on the game attendance of other MLB teams. 149 This finding conflicts with the popular assumption that MLB's well-publicized doping scandals negatively affected viewership, as well as the contention by certain baseball purists that steroids ruined the game or completely negate certain players' stats. 150 Despite Barry Bonds's steroid use, which was suspected long before he was caught, fans still regularly reminisce about his dominance of the game and how exciting he was to watch. 151 Mark Johnson, author of a book about the history of sports doping, noted that "Major League Baseball fans' attitudes to doping in sport might be equivalent to how a Rolling Stones fan does not judge Keith Richards for his fondness for drugs . . . . He delivers an astonishingly great entertainment product that moves people, so fans don't hold him to account for the substances he takes." 152 This doping ambivalence isn't limited to just MLB; on the contrary, "[t]he more interested people [are] in sports, the more liberal . . . their attitudes towards doping." 153 These studies and articles from commentators illustrate that despite popular belief, fans will not withdraw their support if doping is not eliminated in sports. Because it doesn't seem as though doping has a significant negative effect on fan interest in the first place, 154 it seems unlikely that more strictly regulating doping in sports by adopting the WADA Code would help maintain viewership, attendance, or fan interest. If few people are tuning out because of the prevalence of doping, then the argument that strictly regulating doping would bring back fans who stopped watching because of doping is irrational. Thus, the increase in regulation and stricter punishments for a violation seem unlikely to improve the viewing experience or offer significant benefit to the general viewing public, or even the most invested sports fans. 155 Considering the decidedly negative feedback the WADA Code's whereabouts testing has received from elite athletes, 156 the increased negative publicity the NBA, the NFL, and MLB would receive from athletes opposing these policies would be overwhelming. Players are used to having some degree of control over doping regulations through their players' unions, and "[t]here are substantial differences in how drug testing policies can be implemented in sports that are subject to collective bargaining and those that are not." 157 In fact, the regulations would actually be more likely to have a negative effect on fan interest, as athletes would be prone to air their displeasure; and if a star athlete were suspended, a minimum suspension of two years would almost certainly be enough to make at least some fans lose interest. As a New York Times article aptly put it: "The pull of the home team is much stronger than indignation over a scourge that we don't truly comprehend.
Fan sensibilities have not been offended as much as they've been anesthetized." 158 It is worth noting that cycling saw a dip in its popularity (measured by Tour de France viewership) in 2018, 159 which, at first glance, could be argued was due to the pervasive doping during Lance Armstrong's record-setting Tour de France run. 160 However, it also could be due to the simple fact that TV audiences may be shrinking across all sports. 161 Additionally, Lance Armstrong was a star during that era, and whether he was beloved or hated, it's clear that his notoriety drew in viewership, whereas the Tour's current lack of a star as famous as Armstrong could be hindering its viewership. 162 Either way, even if the reputation of cycling has been permanently tarnished, "[t]he Tour . . . remains a cash cow and is a major driver of the estimated 45 million euros per year in profits for ownership company ASO." 163 Ratings for the Tour bounced back in 2019 as well. 164 So, despite the fact that there are reports that doping in cycling remains a pervasive issue, the Tour continues to generate profits, and its TV viewership is not definitively linked to the doping scandals. 165 As noted by sports ethicist Jan Boxill, "human beings hold contradictory views. They're outraged at Barry Bonds, yet they want to see home runs." 166 Ultimately, this contradiction makes it fair to conclude that fan interest does not justify increased doping regulation and the implementation of the WADA Code.

154. Cisyk & Courty, supra note 148.

155. One study found that the more interested people were in sports, the less likely they were to agree with the idea that TV stations should stop broadcasting events that had repeated doping exposure. This is perhaps due to the fact that "the desire to watch sport was stronger than the reluctance toward . . . ."
Establishing a Level Playing Field and "Spirit of Sport"
There is also the issue of fairness in sports, which on its face is a defensible justification for increasing doping regulation in general and adopting the WADA Code as a means to effectuate that goal. However, proponents of this argument overlook the fact that unfairness is baked into every aspect of professional sport. These proponents fail to explain why there is no real push to ensure a level playing field in any aspect of the game outside of doping. It is unclear why doping is treated differently than other enhancements. 167 In baseball, there are a number of teams that regularly outspend their competitors, but that is seen as just one of the benefits of being a large market team. 168 Similarly, in football, at the collegiate level there are clear advantages that come with being a well-known Division I football team. These advantages end up affecting their athletes' level of play, but this is not seen as much of a concern. 169 Fans tolerate inequalities of nature, such as Michael Phelps' body being uniquely adapted to swimming, 170 and inequalities of opportunity, such as being from a rich country with the resources to obtain legal performance enhancing methods, yet some fans claim to draw the line at certain performance enhancing drugs and methods. 171 Yes, sports should ideally have a completely level playing field, but as shown above, it is essentially impossible to create such an environment. So, why is doping treated differently? 172 One argument is that this difference can trace its roots back to the fan interest justification and the idea that organizations want to secure fair competition because fans are not interested in seeing unfair competition caused by doping. However, because it seems as though doping violations have a difficult-to-quantify (but likely insignificant) effect on fan interest, this justification for strict doping regulations boils down to maintaining the "spirit of sport." 173 WADA defines the spirit of sport as: "Ethics, fair play and honesty, health, excellence in performance, character and education, fun and joy, teamwork, dedication and commitment, respect for rules and laws, respect for self and other participants, courage, and community and solidarity." 174 This definition has been characterized as "a collection of undefined terms associated with moral norms" and criticized as problematic for "amount[ing] only to the forced introduction of private moral values into an important area of public activity." 175 Furthermore, the invocation of the "spirit of sport" is selective; WADA's proponents are not equally as concerned with mitigating unfairness in its many other forms, including team-to-team spending disparities, athletes' economic disadvantages, and inequitable access to technology. It seems fair to say that the ambiguity associated with the "spirit of sport" renders it a somewhat weak justification for the constant monitoring of elite athletes or the extremely strict sanctions that come with a violation.
Protecting Player Health
Finally, perhaps the best justification for the adoption of the WADA Code as a way to more effectively regulate doping in U.S. sports leagues is maintaining player health. It is important to note first that there are a number of competitive sports actively detrimental to athlete health that remain incredibly popular. Research has linked football with an increased risk of long-term neurological conditions, 176 yet football remains the most popular sport in America. 177 Rugby leads to a number of specific physical injuries as well, 178 but it remains a popular sport worldwide. 179 This is not to say that fans don't care about player health at all, but it is clear that this concern is selectively invoked. The health of athletes can be raised as a concern against doping, but "fans can be outraged that football has left some former players with severe brain trauma yet still slip on their favorite team's jersey and watch 'Monday Night Football.' With entertainment, fans don't often let morality ruin their fun." 180 Nevertheless, the "reality of modern high performance is that athletes make choices about, and give consent to, high risk decisions oriented towards driving their bodies to the limits of physical capacity"; athletes may choose to play football, but may not want to be forced to add risk by using PEDs. 181 Thus, if the rationale for regulating doping is truly to support player health and mitigate any additional health risks, then that should be the primary goal of the regulations. Without a doubt, there are PEDs that are detrimental to player health, but there are also a number of banned substances that do not have harmful effects and may barely even affect performance. 182 One suggestion for a better approach is to consider the adoption of "a simplified matrix [that] may consider prohibiting drugs/methods likely to increase social and individual harms (e.g. strongly performance-enhancing/performance-enhancing and potentially health-damaging drugs) in a specific sport." 183 This method would make it so that a substance would never be included on the prohibited list solely because it is misused, but only if there is scientific evidence available that it "potentially significantly enhances performance and at the same time may possess a health risk for athletes." 184 This would mean that the WADA Code would no longer solely consider whether "taking [a] drug goes against the spirit of sport," but would adopt a scheme that better prioritized scientific testing to see if the drug really does pose significant health risks to athletes and also enhance performance. 185 If implemented, this would ensure that the doping regulations really are designed to protect athletes, rather than just a list of substances that may or may not affect performance or player health. There is the claim that this would disadvantage athletes who were unwilling to take PEDs, even if there is no scientific evidence of harm. But the current regime permits variations in training schedules, and certain athletes choose to engage in legal performance enhancing practices that other athletes do not. 186 There is a forceful argument that the regulations should not get into policing what every athlete is willing to do, but should only address the health of athletes. This would prove that the driving motivation behind the regulations truly is for the benefit of the players, as opposed to vague notions of the "spirit of sport."
Yet, concerns over player health still do not justify the WADA Code's whereabouts testing or the mandatory minimum of a two-year suspension. While the goal of improving player health is important, a sport's anti-doping policies should test to make sure that athletes are not taking substances that could harm their health. Testing should not rise to the level of constant policing and invasive monitoring, as the WADA Code requires. Currently, athletes are willing to take immense risks in order to better their performance and increase their chances at excelling in competition. In a 1980s study, researcher Bob Goldman asked top performing athletes "whether they would take a drug that guaranteed them a gold medal but would also kill them within five years. More than half of the athletes said yes. When he repeated the survey biannually for the next decade, the results were always the same. About half of the athletes were quite ready to take the bargain." 187 If athletes are this dedicated to finding a substance that can improve their chances at winning, even at the expense of their health, it seems likely that only a system that monitors athletes 24/7 would deter them, and even WADA's "whereabouts" testing does not rise to this level of invasiveness. Such a policy is undesirable for the reasons explored earlier in this Note, and over-policing to catch doping would perhaps be effective but would also necessitate even further intrusions into athlete privacy. In addition to a system that monitors for and bans only health-adverse drugs, an education program that alerts athletes to exactly why certain drugs are banned and the health risks that come with them (rather than just generally opining on the dangers of PEDs or painting in broad strokes, as the current WADA education program does) may be more effective. 188 There may be some athletes who still choose to take the drugs that carry immense health risks, but even constant policing and the strictest sanctions are unlikely to properly prevent this. Athlete health is incredibly important, but so is respecting athletes' right to privacy and autonomy. Constant policing to ensure athlete health unnecessarily infantilizes athletes and robs them of precious freedoms, and the potential benefits of such a regime are not worth the costs. Striking a balance between respecting athletes' reasonable expectations of privacy, properly communicating the real risks that come with prohibited substances, and monitoring for athlete health is still a better way to address this concern than the WADA Code's current enforcement regime.
III. ALTERNATE METHODS AND DETERRENTS TO STRENGTHEN U.S. ANTI-DOPING POLICIES
As shown above in Part II, the drawbacks to the adoption of the WADA Code outweigh any benefits it would bring to increased doping deterrence in the U.S. sports leagues. Yet, there are still ways that U.S. leagues can strengthen their anti-doping policies and address concerns of conflicts of interest. A common criticism levelled at U.S. sports leagues is that "a truly effective anti-doping policy is not in [their] interest and thus, they are motivated to install a lenient and porous policy with many loopholes." 189 The crux of this argument is that, because both owners and players theoretically benefit from doping, there is no real incentive for U.S sports leagues to vigorously enforce their anti-doping policies. Additionally, U.S. sports league athletes are unionized, and the National Labor Relations Act (NLRA) requires the employers and unionized employees to bargain "in good faith with respect to wages, hours, and other terms and conditions of employment." 190 The National Labor Relations Board (NLRB) has found that drug policies that require testing of employees is a subject that must be bargained over between the employer and a representative of the unionized employees. 191 This means that any changes to the anti-doping policies of the U.S. sports leagues must be bargained over with the players' unions. This system stands in contrast to WADA, which "is an independent agency with the ability to unilaterally regulate doping without having to bargain with its members." 192 The ability for players to help shape the same anti-doping policy to which they are subject is consistently seen as a weakness of U.S. sports leagues' policies; critics insist that because the policy requires mutual agreement with the players, it must not be as strong as a unilateral policy.
However, while conflicts of interest can naturally arise with regard to doping policy, collective bargaining should not be seen as a weakness because there is evidence that most U.S. athletes have a vested interest in a clean playing field. 193 Still, the idea that a few athletes or the league may seek to subvert proper anti-doping policies, whether to avoid detection or to avoid punishment for a superstar player, is one that should be addressed. In fact, there is a way to address the conflict of interest in properly enforcing anti-doping policies while also continuing to adhere to the NLRA's requirements and taking athlete concerns into consideration. That solution is the creation of an independent American anti-doping agency to monitor the Big Three U.S. sports leagues.

189. Gandert, supra note 7.
A. AN INDEPENDENT AMERICAN AGENCY
There are several reasons why the adoption of an independent American anti-doping agency makes sense. Having an independent organization "make anti-doping rules is the logical way to remedy the conflict of interest that afflicts the American anti-doping system." 194 This claim is generally advanced to support the idea of U.S. sports leagues joining WADA and adopting the WADA Code to combat doping. 195 However, because of the NLRA and the requirement that employers collectively bargain with the unions, adopting the WADA Code is logistically difficult and not the ideal way to combat doping. Instead, the U.S. sports leagues could silence the criticisms of WADA Code proponents while showing they take anti-doping policy seriously by taking it upon themselves to establish an American independent agency that would monitor the leagues, without any input or oversight by league officials. This would continue to respect athlete privacy and remain appropriately, rather than overly, punitive, unlike the WADA Code, as it would maintain the collective bargaining system currently in place and avoid any dramatic alterations to current anti-doping policies. The agency would handle testing and conduct a sport-specific risk assessment to determine what the appropriate testing frequency should be. In addition, the agency would interface with the players' unions, just as the leagues currently do, to set the prohibited substances list, the length of punishments, and what testing would be used. As mentioned previously, there is the concern that allowing players' unions to voice their concerns will lead to weak doping sanctions and numerous loopholes. 196 But these concerns are misplaced; in fact, there are studies that show that "the majority of MLB players do not support the use of performance enhancing substances and feel cheated by those who use them." 197 It is likely that this sentiment carries over to the other U.S. sports leagues and that even if individual players have incentives to dope, these incentives do not necessarily carry over to the players' unions as a whole. Thus, it is still in players' unions' best interest to ensure that the anti-doping regulations are robust without being overly punitive or implicating privacy concerns.
There is also the criticism that collective bargaining ensures that anti-doping policies will always be one step behind, consistently unable to catch the newest forms of doping. 198 One of the commonly cited examples of this is the slow implementation of hGH testing by the U.S. sports leagues. 199 The U.S. sports leagues were regularly criticized for not testing athletes' blood for hGH, 200 but they cited the invasive nature of the test and concerns about the utility of the test as reasons for their slow implementation. 201 Animating players' concerns with hGH testing was the fact that blood testing is much more invasive than urine testing. Because the U.S. leagues give players a voice at the table, players were able to properly weigh in on this and find a solution that addressed their concerns. Blood testing was eventually integrated into the NBA, the NFL, and MLB. As noted above, the nature of combatting doping means that the regulations are always a step behind. In light of this, it is preferable to allow athletes to have a say in these policies instead of allowing an agency to unilaterally impose restrictions and punishment in order to attempt the Herculean task of keeping up with doping technology. By continuing to allow athletes to have a say in anti-doping policies, we treat them as individuals deserving of privacy and bodily autonomy, instead of subjecting them to unilateral doping policies like the WADA Code, which athletes have decried as "[i]ntolerable harassment" and "very invasive." 202 This independent agency could also interface with WADA to ensure that it benefits from WADA's institutional knowledge and keeps up with new developments in doping. Although the drawbacks of the WADA Code outweigh the benefits for the U.S. leagues, WADA still has a wealth of information that an independent agency could benefit from, and U.S. sports leagues should look to WADA for guidance regarding their prohibited substance list and new technologies. For example, the agency can look at WADA's sport-specific analysis to help inform its decisions regarding testing frequency and the substances for which it should be on the lookout. 203 Additionally, the agency should incorporate other deterrent factors when crafting the suspension length that should come with a doping violation. Unlike WADA, which generally subjects athletes to an extremely harsh punishment of two to four years, the agency should consider an appropriate punishment for each sport. For example, the NFL, which has the lowest guaranteed money for its players, would likely have a different suspension length than MLB or the NBA, which also have more games per season. 204 This would help to ensure that doping sanctions are not overly punitive and also theoretically maintain the deterrent effect that sanctions are generally believed to have. Additionally, it is important to think about the context in doling out these punishments, including the extent to which an athlete's decision to dope was coerced, as evidenced in the Russia cheating scandal, where "[a]thletes not initially willing to use pharmaceuticals were coerced to do so." 205 Ultimately, in order to develop a comprehensive anti-doping policy, the proposed independent American agency should recognize that sanctions by themselves are likely not deterrent enough to prevent athletes from doping in all circumstances. As discussed below, the independent agency should also identify potentially more effective avenues to help supplement the overall goal of deterring doping.
B. OTHER DETERRENTS TO CONSIDER
Another criticism of WADA is that the strict liability system is simultaneously too harsh and not actually deterrent enough. Indeed, "[a]lthough bans have severe consequences that may lead to the sudden end of a career, studies show that these possibilities are not assessed as highly probable by athletes." 206 This means that the strict liability system really ends up hurting athletes who unintentionally ingest contaminated substances but who do not fall under the category of No Significant Fault or Negligence, which describes almost all such athletes. 207 The U.S. sports leagues similarly apply a system of strict liability, but WADA's lengthy suspensions combined with strict liability are what make the Code particularly unfair to athletes. Whether athletes who are not as diligent as possible in monitoring their supplements should be punished is certainly a debatable topic; but it seems clear that even the threat of harsh sanctions under the WADA Code is not as much of a deterrent as WADA proponents would hope. Certainly, some proponents of more stringent anti-doping policies believe that "even the two-year mandatory penalties for a first doping offence [is] not a large enough of a deterrent." 208 However, the answer should not be to enact harsher penalties for the sake of enacting harsher penalties, but rather to look at the potential deterrents and whether those suffice. One such non-suspension-related deterrent is the monetary fine, an economic deterrent. There are currently strong economic deterrents to doping, but they are inconsistently applied. As such, in order for the independent American agency to establish an optimal economic deterrent, it must craft a policy that is consistently applied and severe enough to deter doping.
One postulate of game theory is that athletes will cheat when the payoff from cheating exceeds the penalty if caught, multiplied by the probability of being caught. 209 Based on the studies discussed above, the payoff will often exceed the expected penalty, because most athletes do not assign a high probability to being caught. This may be true if we consider just the WADA Code and its punishments; but if we widen the scope to include economic and reputational harm, the calculus may change. The independent agency would be tasked with interfacing with the players' unions to carefully tailor sanctions that account for these harms, avoiding the unnecessarily lengthy suspensions of the WADA Code.
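The deterrence condition described above can be written compactly. The notation below is introduced here purely to make the trade-off explicit; it does not appear in the cited sources.

$$\text{dope} \iff B \;>\; p\,(S + F + R),$$

where $B$ is the expected benefit of doping, $p$ the athlete's perceived probability of being caught, $S$ the salary forfeited to suspension, $F$ any additional fine, and $R$ reputational and sponsorship losses. Because athletes assign low values to $p$, lengthening suspensions (raising $S$) has limited marginal effect; raising $F$ and $R$ tightens the same inequality without extending the time an athlete is kept away from the sport.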
Increased Fines
The independent agency should also take into account the potential deterrent effect of levying additional fines on top of a suspension. Studies have shown that higher fines can have a stronger deterrent effect than suspensions, so it may be more valuable to increase fines than to lengthen suspensions. 210 Missing games while also paying a higher fine may have a stronger deterrent effect than a suspension alone. This is borne out anecdotally by baseball superstar Alex Rodriguez, who noted that the most frustrating thing about his PED suspension was that it "cost [him] over $40 million." 211 More research is certainly needed to determine what amount of fine would effectively deter (as this would depend heavily on the sport and the salaries of the athletes), but the approach is worth considering. The fines should take into account the amount of salary lost based on the suspension and assess what sort of additional fine should be levied in order to achieve maximum deterrent effect. For example, in the NFL, a first positive test result for an anabolic agent results in a suspension of six regular and/or postseason games, which accounts for about 38% of the regular season. 212 As the NFL provides athletes the lowest guaranteed salary of the three major U.S. sports leagues, this economic penalty is already quite heavy, so whether a further increase is needed to deter would require more study. 213 However, adding a fine on top of the games suspended could likely deter athletes from offending without threatening to take away years of their playing career. Accounting for these economic deterrents should be required under the independent American agency's anti-doping policy, and to increase deterrence, a subsequent increase in fines may be necessary. Sponsors could also be enlisted in the fight against doping, and an independent agency would clearly have more leeway to push for such an agreement because it would not be linked with any sports leagues.
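A short calculation makes the economic stakes concrete. The suspension lengths come from the text above, while the $10 million annual salary is a hypothetical round figure used purely for illustration.

```python
# Salary lost to a first-offense PED suspension, using the suspension lengths
# quoted in the text. The $10M annual salary is a hypothetical figure.
SALARY = 10_000_000

first_offense = {
    "NFL (anabolic agent)": (6, 16),   # games suspended, regular-season games
    "MLB":                  (80, 162),
}

for league, (suspended, season) in first_offense.items():
    share = suspended / season
    print(f"{league}: {share:.0%} of the season, "
          f"about ${SALARY * share:,.0f} in forfeited pay")
# NFL: ~38% of the season (matching the figure above); MLB: ~49%. A fine
# stacked on top of the forfeited salary raises the expected cost of doping
# without lengthening the suspension itself.
```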
Establishing an American independent agency to monitor the NBA, MLB, and NFL, and enlisting sponsors to join the fight against doping, could address many of the concerns that are levied against U.S. anti-doping policies. Considering the steep penalties and the privacy concerns that accompany joining the WADA Code, an independent American anti-doping agency is the best way to strengthen anti-doping policies without also taking on WADA's many drawbacks.
IV. CONCLUSION
Sports doping is incredibly difficult to catch and even more difficult to properly deter because doping methods are always one step ahead of the testing procedures. Athletes have been "quoted boasting that 'when they get a test for that [new doping substance] we'll find something else. It's like cops and robbers.'" 219 Thus, the outstanding question is how far sports leagues should go to catch doping athletes. Doping is not a severe enough problem to justify constant monitoring and the strict liability sanctions that arise under the WADA Code. Additionally, it is unclear how one would even determine whether the Code is sufficiently more effective such that adoption by the NBA, the NFL, or MLB is warranted. The substantial weight of these outstanding questions and concerns demonstrates that the NBA, the NFL, and MLB should not adopt the WADA Code.
However, this is not to say that doping should not be combatted or deterred, or that the U.S. sports leagues' anti-doping policies require no reform. To address the conflicts of interest inherent in the current system, an independent agency should be created to step into the shoes of the Big Three and liaise with players' unions to set anti-doping policies, without needing to accept the drawbacks that come with the WADA Code. Doping regulations should focus on protecting the health of athletes and on deterrence methods beyond lengthy suspensions. These methods could take the form of increased fines or the enlistment of sponsors to assist in anti-doping efforts. The implementation of these strategies would strike a workable balance between athlete privacy and effective deterrence.
Algebra of Higher Antibrackets
We present a simplified description of higher antibrackets, generalizations of the conventional antibracket of the Batalin-Vilkovisky formalism. We show that these higher antibrackets satisfy relations that are identical to those of higher string products in non-polynomial closed string field theory. Generalization to the case of Sp(2)-symmetry is also formulated.
Introduction
Lagrangian BRST quantization gets its most succinct formulation in the antibracket formalism of Batalin-Vilkovisky [1]. The basic objects of that approach, the antibracket itself and a so-called ∆-operator (to be reviewed below), turn out to belong to a general algebraic structure that has attracted considerable attention recently, in particular in connection with a geometric interpretation and covariant generalizations [2].
The conventional antibracket of the Batalin-Vilkovisky formalism can be viewed as being based on a 2nd-order odd differential operator $\Delta$ satisfying $\Delta^2 = 0$. In (super) Darboux coordinates it takes the simple form [1]
$$\Delta \;=\; (-1)^{\epsilon_A}\,\frac{\partial_r}{\partial\phi^A}\,\frac{\partial_r}{\partial\phi^*_A}\;, \qquad (1.1)$$
where to each field $\phi^A$ one has a matching "antifield" $\phi^*_A$ of Grassmann parity $\epsilon(\phi^*_A) = \epsilon(\phi^A) + 1$. The antifields are conventional antighosts of the Abelian shift symmetry that for flat functional measures leads to the most general Schwinger-Dyson equations [3].
Given $\Delta$ as above, one can define an odd (statistics-changing) antibracket $(F,G)$ from the failure of $\Delta$ to act like a derivation:
$$(F,G) \;\equiv\; (-1)^{\epsilon_F}\,\Delta(FG) \;-\; (-1)^{\epsilon_F}\,(\Delta F)\,G \;-\; F\,(\Delta G)\;. \qquad (1.2)$$
The antibracket so defined automatically satisfies the following relations. First, it has an exchange symmetry of the kind
$$(F,G) \;=\; (-1)^{\epsilon_F\epsilon_G+\epsilon_F+\epsilon_G}\,(G,F)\;. \qquad (1.3)$$
Second, it satisfies a Leibniz rule,
$$(F,GH) \;=\; (F,G)\,H \;+\; (-1)^{(\epsilon_F+1)\epsilon_G}\,G\,(F,H)\;, \qquad (1.4)$$
and it satisfies a Jacobi identity,
$$\sum_{\text{cycl.}\,F,G,H} (-1)^{(\epsilon_F+1)(\epsilon_H+1)}\,(F,(G,H)) \;=\; 0\;. \qquad (1.5)$$
In addition, there is a useful relation between the $\Delta$-operator and its associated antibracket:
$$\Delta(F,G) \;=\; (F,\Delta G) \;-\; (-1)^{\epsilon_G}\,(\Delta F,G)\;. \qquad (1.6)$$
It has been shown how the Batalin-Vilkovisky $\Delta$-operator (1.1) can be viewed as an Abelian operator corresponding to the Abelian shift transformation $\phi^A \to \phi^A - a^A$. The analogous non-Abelian $\Delta$-operator for general transformations $\phi^A \to g^A(\phi'^A, a)$ was derived in ref. [5] (eq. (1.7)),
(1. 5) In addition, there is a useful relation between the ∆-operator and its associated antibracket: shown how the Batalin-Vilkovisky ∆-operator (1.1) can be viewed as an Abelian operator corresponding to the Abelian shift transformation φ A → φ A − a A . The analogous non-Abelian ∆-operator for general transformations φ A → g A (φ ′A , a) was derived in ref. [5]: where the U k ij are the structure coefficients for the supergroup of transformations. 1 They are related to the field transformations g A (φ ′ , a) by the relation where (1.9) The ∆-operator of eq. (1.7) can be shown to be nilpotent [5], and it gives rise to a new non-Abelian antibracket by use of the relation (1.2). Explicitly, this antibracket takes the form [5] (F, (1. 10) In ref. [5] this non-Abelian antibracket was derived directly in the path integral (by integrating out the ghosts c A ), but it can readily be checked that it is related to the associated ∆-operator (1.7) in the manner expected from (1.2). Because this particular non-Abelian ∆-operator is of 2nd order, the corresponding antibracket automatically satisfies all the properties (3)(4)(5)(6).
As shown in ref. [4], even this non-Abelian antibracket is open to generalizations. One first notices that the non-Abelian ∆ is nothing but the Hamiltonian BRST operator Ω of a certain constraint algebra in an unusual representation, that of Hamiltonian ghost momentum. Taking the most general non-Abelian BRST operator Ω of an arbitrary non-Abelian open algebra, one can then construct the corresponding general ∆-operator by going to the ghost momentum representation [4]. This leads naturally to the concept of higher (non-Abelian) antibrackets. Interestingly, much of the appropriate mathematical machinery for such a formalism already exists in the mathematics literature [7,8]. There is also a surprising connection between the algebra of these higher antibrackets and that of so-called strongly homotopy Lie algebras (for a very readable account, written for physicists, see ref. [9]), which appear in string field theory [10].
Interest in general Batalin-Vilkovisky algebras has recently arisen also in the context of two-dimensional topological field theory and string theory [11]. One should expect the higher antibrackets to play a rôle there as well [8].
From the point of view of quantization of field theories, perhaps the most important reason for studying the algebraic structure behind higher antibrackets comes from the expectation that even the conventional Batalin-Vilkovisky ∆-operator will be modified by higher-order quantum corrections originating from operator-ordering ambiguities in the Hamiltonian framework. 2 This obviously makes it important to study the Master Equation for arbitrary higher-order ∆-operators, and to understand their associated BRST structure.
The purpose of the present paper is partly to present a simplified construction of the higher antibrackets introduced in ref. [4], partly to show how they can be generalized in a natural manner to a situation in which one has simultaneous BRST and anti-BRST symmetry. In fact, these two symmetries can, not surprisingly, be combined into an Sp(2)-symmetry. The mathematical analogue of this is an Sp(2)-covariant strongly homotopy Lie algebra. While this algebra may be of interest in its own right, it also points towards the existence of an Sp(2) BRST-anti-BRST symmetric version of closed string field theory, as we shall show towards the end of our paper. This will then provide a comprehensive setting for the possible generalizations of the usual Batalin-Vilkovisky quantization formalism, and its Sp(2) extensions.
We start in section 2 with a brief review of how higher antibrackets naturally arise if one generalizes the Batalin-Vilkovisky formalism from shift symmetries (which generate the usual Batalin-Vilkovisky ∆-operator) to more general transformations. This is only to set the stage for what follows, because we are in this paper interested in the study of the higher antibrackets independently of such considerations.
We then proceed to a discussion of the Koszul construction of higher brackets and antibrackets based on general differential operators ∆ (section 2.1). Some useful mathematical background is introduced in section 2.2, and we show how to reformulate this construction in a simple fashion. In section 2.3 we discuss the precise connection to strongly homotopy Lie algebras, and prove a useful lemma related to the algebra of two sets of higher brackets. As an explicit realization in terms of chosen coordinates, we describe the algebra by means of a suitable vector field in section 2.4. The analogue of the strongly homotopy Lie algebra structure associated with our generalized higher brackets is discussed in section 2.6. Section 2.5 is our first return to physics applications: we discuss the definition of a generalized Master Equation, first introduced in ref. [4]. This leads us to the subject of BRST symmetry in this higher-antibracket framework. When formulated as the possibility of deforming a given solution of the Master Equation by the addition of BRST-exact terms, it is of interest to find the associated symmetry algebra. While the most simple choice of symmetry transformations corresponds to an algebra that is open, we show how in a simple manner one can add "equation of motion terms" to the transformations in order to make the algebra close. We also discuss finite symmetry transformations. In section 3 we turn our attention to some intriguing parallels between higher antibrackets and the so-called "string products" in closed string field theory [18,10,19], when as ∆-operator one takes the BRST charge Q. Section 4 is devoted to the construction of an Sp(2)-symmetric analogue of the higher-antibracket BRST symmetry. Section 5 contains our conclusions. Finally, in two appendices we propose some generalizations which lie slightly outside the main line of the paper. In the first (Appendix A), we show how one can introduce yet higher levels of generalizations of the higher antibrackets discussed in the main text. While their rôle in physics applications is totally obscure, we nevertheless find it interesting that such a further generalization is possible. In Appendix B we discuss generalizations of the so-called "main identities", valid already at the level of the normal higher antibrackets. These new identities contain new information in cases where, for example, ∆ is no longer nilpotent, or, as discussed in section 4, when one imposes an Sp(2) symmetry as well.
Higher Antibrackets
As explained in ref. [4], one can introduce obvious generalizations of the Batalin-Vilkovisky ∆-operator by considering the most general Hamiltonian BRST operator Ω in the ghost momentum representation. Start with a representation of first class constraints which involves a right-derivative acting to the left. Because the constraints in this representation act to the left, one must choose a representation of the Hamiltonian ghost (super) Heisenberg algebra which also involves operators acting to the left; the ghost momentum representation provides precisely such a choice. One of the observations in ref. [4] is that to pass to the Lagrangian ∆-operator, one identifies the Hamiltonian ghost P_j with the Lagrangian antighost ("antifield") φ*_j. The most general Hamiltonian BRST operator Ω [12] in this representation takes the form of an expansion in ghost momenta (eqs. (2.5)-(2.6)). The functions U^{i_1···i_n}_{j_1···j_{n+1}} appearing there are generalized structure "constants" of the possibly open algebra. The infinite sum in eq. (2.5) may terminate at finite order. For example, for ordinary super Lie algebras, where the structure coefficients U^k_{ij} are just constant supernumbers, the series terminates at the first term.
The ∆-operator is now defined through eq. (2.7). One immediate consequence of the fact that the quantized Hamiltonian BRST operator satisfies [Ω, Ω] = 2Ω^2 = 0 is that ∆ is nilpotent as well. One sees that in the case of an ordinary non-Abelian Lie algebra the general definitions (2.5) and (2.7) reproduce the ∆-operator of eq. (1.7). The ordinary Batalin-Vilkovisky formalism corresponds to Abelian shift transformations, for which the general definitions lead to the usual Batalin-Vilkovisky ∆-operator (1.1).
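For orientation, since the introductory equations are referred to repeatedly below: in one common convention (signs and the choice of left/right derivatives vary between references), the usual Batalin-Vilkovisky ∆-operator of eq. (1.1) is the second-order operator

$$ \Delta \;=\; (-1)^{\epsilon_A}\,\frac{\partial^l}{\partial\phi^A}\,\frac{\partial^l}{\partial\phi^*_A}\,, \qquad \Delta^2 = 0\,, $$

acting on functions of the fields φ^A and the antifields φ*_A.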
These preliminary remarks only serve to motivate the study of higher-order ∆-operators, and their associated antibrackets. They show that such higher-order ∆-operators exist in the field theory context, and can be defined by a natural generalization of the Batalin-Vilkovisky ∆-operator. But in what follows we shall neither make explicit use of the form (2.5), nor of the precise manner in which it gives rise to new higher-order ∆-operators.
The Koszul Construction
In this subsection, let ∆ denote a Grassmann-odd, nilpotent differential operator, ∆^2 = 0. Motivated by the previous examples, we assume that ∆ differentiates from the right. In physics one will normally not need the case where ∆(1) ≠ 0, but exceptions exist, and these cases can be treated with equal ease (see below). One can also relax the condition of nilpotency without encountering difficulties.
Following Koszul [7], one can define a unique antibracket (F, G), even when ∆ is not of 2nd order. This is the content of eq. (1.2), which holds in all generality. The antibracket so defined is a measure of the failure of ∆ to act like a graded derivation. This antibracket will automatically satisfy the exchange relation (1.3). The relation (1.6) also holds in all generality. But in general both the Leibniz rule (1.4) and the Jacobi identity (1.5) will be violated.
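For a Grassmann-odd operator ∆ with ∆(1) = 0 that acts with left-derivatives, Koszul's definition takes the standard form (the right-derivative conventions of the main text differ only by sign factors):

$$ (F,G) \;\equiv\; \Delta(FG) \;-\; \Delta(F)\,G \;-\; (-1)^{\epsilon_F}\,F\,\Delta(G)\,, $$

i.e. the antibracket is precisely the failure of ∆ to act as a graded derivation of the product.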
Koszul suggests that the antibracket derived from eq. (1.2) be used to define a "three-bracket", which measures the failure of the antibracket (F, G) to act like a derivation. This construction can proceed in an iterative way to define higher and higher antibrackets. We use the notation of ref. [7], and introduce objects Φ^n_∆ which are directly related to the higher antibrackets. The lowest antibracket, the "one-bracket", is essentially identified with the ∆-operator itself, while the higher antibrackets can be derived from it; the precise definitions are given in eq. (2.10). All higher antibrackets are Grassmann-odd, and they satisfy a simple exchange relation under permutation of their entries. This latter relation suggests that it is more natural to view the comma in Φ^n_∆ as a graded (supercommutative) and associative product. We use this product notation in the next sections.
The usual antibracket of the Batalin-Vilkovisky formalism, the "two-bracket", is defined by Φ^2_∆. Note that when the usual antibracket acts like a graded derivation, the "three-bracket" defined through Φ^3_∆ vanishes identically.
Akman [8] has organized the above definition of higher antibrackets into a very convenient iterative sequence, eq. (2.14): if Φ^k_∆ acts like a derivation, Φ^{k+1}_∆ vanishes identically, and the iteration terminates.
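A sketch of this iteration in the left-derivative convention (the sign shown is the usual Koszul sign for moving A_k past the Grassmann-odd operator Φ^k_∆(A_1, . . . , A_{k−1}, ·)): the (k+1)-bracket is the failure of the k-bracket to be a graded derivation in its last entry,

$$ \Phi^{k+1}_\Delta(A_1,\dots,A_k,B) \;=\; \Phi^k_\Delta(A_1,\dots,A_{k-1},A_kB) \;-\; \Phi^k_\Delta(A_1,\dots,A_k)\,B \;-\; (-1)^{\epsilon_{A_k}(1+\epsilon_{A_1}+\cdots+\epsilon_{A_{k-1}})}\,A_k\,\Phi^k_\Delta(A_1,\dots,A_{k-1},B)\,, $$

starting from Φ^1_∆ = ∆.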
When Φ^2_∆ fails to act like a derivation of the kind (1.4), it also fails to fulfill the Jacobi identity (1.5). Instead, one finds that the cyclic sum of nested two-brackets no longer vanishes, but is measured by the three-bracket.
So Φ^3_∆ equivalently measures the failure of the Jacobi identity for the usual antibracket. In terms of the Φ^n_∆'s themselves, the (broken) Jacobi identity takes an analogous cyclic form.
The above construction shows explicitly that Φ n ∆ can be defined directly in terms of the lowest bracket Φ 1 ∆ . However, the defining equations are highly cumbersome when n is large, and it is therefore useful to have a more compact formulation. In order to be more precise, we will introduce some mathematical notation that turns out to be very convenient. Because we wish to compare directly with Koszul [7], we will give up the condition that the ∆-operator is based on right-derivatives (as is natural from the BRST-charge definition, and the Batalin-Vilkovisky formalism) and allow it to act as a higher-order left-derivative (as is more natural from the mathematical point of view). The translation between the two conventions is of course trivial. To avoid confusion, the analogous ∆-operators will in the following be denoted by capital roman letters S, T , etc.
An Algebraic Definition
Let A be a supercommutative algebra with unit 1 over the complex field C. Furthermore, let TA denote the tensor algebra of A, TA = ⊕_{n≥0} A^{⊗n}. We distinguish between the unit element in the algebra 1 ∈ A and the unit element in the field 1 ∈ C by using boldface type for the algebra unit. Note in particular that 1 ⊗ A = 1 · A = A ∈ A for the field unit, but 1 ⊗ A ∈ A ⊗ A for the algebra unit and an element A ∈ A.
The quotient algebra SA = TA/I is the (super)symmetrized tensor algebra of A, where I denotes the two-sided ideal generated by the (super)commutator, i.e. by elements of the form A ⊗ B − (−1)^{ε_A ε_B} B ⊗ A. We will mainly work in the (super)symmetrized tensor algebra SA, which by construction is an associative and supercommutative algebra with respect to the tensor product ⊗. It would actually be interesting to do the construction for an associative but non-commutative algebra A, and without super-symmetrizing with respect to the tensor product. But for the sake of clarity we will for the moment assume graded commutativity, and we will also (super)symmetrize the tensor product. Besides, without guidance from physics it is not obvious which of the many ways of generalizing to the non-commutative case we should choose. Akman [8] has provided a most natural definition, which turns out to coincide with a certain expression in terms of supercommutators which we will provide below.
Define a multiplication map ∼ : SA → A, which takes the tensor product ⊗ into the product "·" of the algebra A (eq. (2.20)). For each linear operator T : A → A the composed map T ∘ ∼ : SA → A is also, in a slight abuse of notation, denoted by T. In particular, we point out that with this definition T takes the same value on the field unit and on the algebra unit.
At this stage define a co-multiplication (cf. [7]) λ : SA → SA × SA (eq. (2.21)). Here SA × SA is equipped with a graded product ⊗; the curious sign-factor appearing in it can be understood as originating from permuting B and C. There is a canonical map of SA × SA ≅ SA ⊗ SA onto SA, in which the cross product × is substituted with the tensor product ⊗.
We now define a map Φ_T : SA → SA for a linear operator T (eq. (2.23)). The operator T operates only on the first copy of SA in SA × SA, leaving the second copy untouched. We can invoke this action in practical calculations with the help of an omit operator ∧T : SA → SA (2.24). Whenever an argument of T is decorated with the omit-operator, that argument should be removed from the argument-list of T and appear outside, to the right (or left), instead. We emphasize that the omit-operation in general involves a sign factor. With this definition we can write Φ_T in closed form. We have here employed the obvious conventions A^0 ≡ 1 and A^1 ≡ A. A useful way of writing this uses the operator →T, which operates on every argument to the right.
At the present stage the connection between the map Φ_T and the corresponding higher antibrackets Φ^n_T may not yet be obvious. Roughly, the commas used to separate the entries in the higher antibrackets in the previous subsection have been replaced by the tensor products here. This is of course only a matter of notation, and clearly immaterial. (And we shall freely alternate between the two ways of writing it.) To see that we are really very close to having defined the higher antibrackets Φ^n_T, let us evaluate the lowest cases of Φ_T (eq. (2.28)). The higher antibracket Φ^n_T : S^n A → A of order n is now finally defined in terms of Φ_T (eqs. (2.29)-(2.30)). We emphasize a particularly useful representation of Φ^n_T (eq. (2.31)), which immediately leads to a recursion relation (eq. (2.32)) that agrees with that of eq. (2.14).
Finally, let us evaluate some of the lowest cases (eq. (2.33)). Specializing to the case of T(1) = T(1) = 0, this definition is seen to agree with the one of eq. (2.10), once translated into an operator T differentiating from the left. The more general definition with T(1) not necessarily vanishing can of course (since the above considerations are based on Koszul's construction) be found in ref. [7] as well.
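The commutator representation that makes these evaluations almost automatic deserves to be displayed: with L_A denoting the operator of left multiplication by A, and [·,·] the graded commutator, one has

$$ \Phi^n_T(A_1\otimes\cdots\otimes A_n) \;=\; \big[\cdots\big[[T,\,L_{A_1}],\,L_{A_2}\big],\cdots,\,L_{A_n}\big]\,\mathbf{1}\,, $$

so that each higher bracket measures the successive failure of T to act as a derivation; in particular Φ^0_T = T(1).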
Normally, T is a differential operator. Note that if T is a (left) multiplication operator, then all brackets vanish identically, except for the zero bracket.
It may also be of interest to note that it is possible to invert the relation between the operator T and Φ_T. One way is to project Φ_T into the algebra A itself: π_A ∘ Φ_T = T. Analogous relations hold in the tensor algebra as well.
The Strongly Homotopy Lie Algebra
There is an intriguing connection between the algebra of higher antibrackets based on Grassmann-odd and nilpotent operators, and strongly homotopy Lie algebras [9,14].
LEMMA: Let S, T ∈ Hom_C(A, A) and assume A is an algebra (and hence equipped with a product). Then the composition Φ_S ∘ Φ_T can be expressed in terms of brackets (eq. (2.35)), and a tilded version follows by operating with ∼ on both sides. The Lemma also contains the first example of a bracket-bracket |Φ_S, Φ_T|; this is the simplest of an infinite tower of bracket-brackets, and one can associate a tilded pendant to each of them. We refer to appendices A and B for a thorough presentation of co-derivations and bracket-brackets. Here we will merely note that the second term in (2.35) can, with these generalizations, take several equivalent disguises. Let us insert arguments A_1, . . . , A_n. The lemma can then be stated with an explicit sum over permutations, where ε_π is the Grassmann parity originating from permuting Grassmann-graded quantities. Proof of lemma: It is clearly enough to prove the lemma for bosonic arguments A_1, . . . , A_n. Expanding the first term on the right-hand side, it is straightforward to see that the (k_0 = 1)-terms reproduce the left-hand side of the lemma, due to a cancellation between terms in which S is not operating directly upon T. Note that in the case k_0 = 0 the S- and T-expressions are always multiplied; the (k_0 = 0)-terms are minus the second term on the right-hand side of the lemma. An anti-supersymmetrization in S and T of the tilded version of the lemma causes the second terms to drop out, or equivalently, with arguments A_1, . . . , A_n inserted, one obtains a relation among the brackets alone. This contains the main identities for strongly homotopy Lie algebras. (We borrow the terminology "main identity" from closed string field theory [10], where analogous expressions play an important rôle; see section 3.) Let us write out the first few identities.
The lowest case is eq. (2.49); for n = 2 one obtains the Leibniz rule for a (not necessarily odd) Laplacian and its associated (anti)bracket. It is quite amazing that the main identities for strongly homotopy Lie algebras, which in closed string field theory rely on non-trivial geometric properties of moduli space [10], here can be derived as a purely algebraic result, due to an assumed existence of a product (so that A is an algebra, and not just a vector space). If one does not assume the existence of this product, one can reformulate the right-hand side of the main identity (2.46) in terms of nilpotency of co-derivations b_{Φ_T}. This follows quite easily from (A.16) and (A.17).
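For comparison with the general mathematical literature, the main identity of a strongly homotopy Lie algebra generated by a Grassmann-odd nilpotent T is usually displayed as follows (a schematic transcription; the precise conventions for the sign ε(σ) vary):

$$ \sum_{k=1}^{n}\ \sum_{\sigma\in\mathrm{Sh}(k,n-k)} \epsilon(\sigma)\ \Phi^{\,n-k+1}_T\Big(\Phi^{k}_T\big(A_{\sigma(1)},\dots,A_{\sigma(k)}\big),\,A_{\sigma(k+1)},\dots,A_{\sigma(n)}\Big) \;=\; 0\,, \qquad n \ge 1\,, $$

where Sh(k, n−k) denotes the (k, n−k) unshuffles of the arguments.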
Coordinate Representation
We will now translate the above construction into a description with explicitly chosen coordinates. Let { e_a | a ∈ I } denote a vector basis for A, and { η^a | a ∈ I } the dual basis in A*, so that η^a(e_b) = δ^a_b. Without loss of generality we can take the coordinates A^a of a general element A = Σ_a A^a e_a to be bosonic, i.e. the basis vectors are supposed to carry the Grassmann grading. Pursuing further the vector space structure of A, one can identify the space Hom_C(S^n A, S^m A) of linear operators S^n A → S^m A with S^m A ⊗ S^n(A*), the set of S^m A-valued homogeneous polynomials in A of degree n. Here ε_π is the Grassmann parity originating from permuting the Grassmann-graded quantities. To avoid the sign-factor ε_b appearing in (2.55), it is convenient to define a contraction symbol which first organizes all basis vectors e_a to the right and all dual vectors η^a to the left, and then contracts. If all vectors are odd, the norm is therefore either 0 or 1.
Next define a "symmetrizer projection operator" by (2.61) Define the (super)symmetrized coefficients of an operator T by . . e bn )) . (2.62) In case of symmetric coefficients this yields an inversion of eq. (2.54): The composition of two operators S, T ∈ Hom C (SA, SA) is then or, in terms of coefficients, Let us now define a normal ordering in which all basis vectors e a are moved to the left and all dual vectors η a are moved to the right, while respecting the Grassmann grading: : We can then write and co-derivation (cf. eq. (A.14-A.15)) Note that the particular bracket |T 1 , . . . , T k | defined in eq. (A.13) is just the normal-ordered product: We can represent the dual basis vectors η a by a left derivative acting to the right: or analogously represent the basis vectors e a by a right derivative acting to the left: Then the contraction (2.55) can be written (2.72) The conditions e = 0 resp. η = 0 simply ensure that the contraction is non-zero only when n = m. Let us at this point mention a handy representation of the symmetrizer projection operator: An operator T ∈ Hom C (SA, A) with precisely one outgoing slot/entry can be represented by a vector field operating to the left: Note also that the action of •b T can be described by the vector field without letting η = 0: Or, in terms of coordinates, This has as one important implication that (the generalized version of) the main identity (2.46) for a strongly homotopy Lie algebra can be formulated as a contraction between vector fields: In the last expression the larger outer square brackets denote a contraction i.e. action of the last vector field on the former, and the smaller inner square brackets means anti(super)symmetrization in S and T .
The vector field contains the brackets as its coefficients, the Φ^2_T-coefficients c^c_{ab} being the usual Lie algebra structure constants. In particular, when S = T and T^2 = 0, the whole main identity of strongly homotopy Lie algebras can then be expressed as the nilpotency condition of this new vector field. A description of strongly homotopy Lie algebras in similar terms has been discussed in ref. [15]. Stasheff [9] expresses the main identity of strongly homotopy Lie algebras in an analogous way, but without going to particular coordinates.
Notice that the main identity takes a compact form in terms of symmetrized components. When written in this form, one also sees that the notion of strongly homotopy Lie algebras is open to a very natural generalization.
A Master Equation and the BRST Symmetry
So far all properties of the higher brackets have been derived in a general frame without any particular applications in mind. Clearly, for the usual Batalin-Vilkovisky Lagrangian quantization program, only one-brackets and two-brackets are required. This is because the BRST Ward Identities one wishes to impose on the Lagrangian path integral are Schwinger-Dyson equations. The BRST operator of Schwinger-Dyson equations can, for flat functional measures, be chosen to be Abelian [6], and the associated ∆-operator is then, as explained in section 2, of 2nd order in the appropriate representation of fields and antifields. But even in the conventional Lagrangian path integral one may wish to impose other BRST Ward Identities (subsets of the full set of Schwinger-Dyson equations), and the associated ∆-operator may then be of higher order [4,5]. Interestingly, this establishes the formalism of higher antibrackets as the natural generalization of the Batalin-Vilkovisky scheme. Both the (quantum) Master Equation and the (quantum) BRST operator of the Batalin-Vilkovisky antifield quantization are then seen as very special cases in a much more general framework. We begin the discussion of this with a few useful relations.
We have already seen how the higher brackets can be given a nice formulation in terms of commutators (see eq. (2.33)). Let us for later convenience define a modified operator X_{T;B_1,...,B_k} associated with the operator T. Notice that this last relation tells us how we can generate higher and higher brackets by composition! Consider the formal exponential function (2.83). Using this notation, we can write down a very useful formula (2.86), in which A = (i/ħ)S is identified with the action, and T with the nilpotent Grassmann-odd Laplacian. For given B_1, . . . , B_k ∈ A, the bracket Φ^n_T (with n ≥ k) automatically generates an (n − k)-bracket. In particular, a conventional "two-antibracket" (A, B) can always be generated from the higher antibrackets. Also, the Master Equation (2.85) can in this terminology be seen as the sum of "zero-antibrackets" generated by the action S itself. Suppose the Master Equation terminates after a finite number of terms, as happens when ∆ is of finite order. From the physics perspective it is more natural to view this as an expansion in ħ. This also suggests a solution S expressed as an ħ-expansion, beginning with the "classical action" S_0. To leading order in the expansion, this leads to the N-th order "classical Master Equation", while at the next order in ħ we get the first quantum correction, and so on.
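The key generating identity underlying this discussion is worth displaying explicitly: for any operator T and bosonic A one has, in the left-derivative convention,

$$ e^{-A}\,T\big(e^{A}\big) \;=\; \sum_{n=0}^{\infty}\frac{1}{n!}\,\Phi^n_T(A,\dots,A)\,, $$

so that with A = (i/ħ)S the quantum Master Equation ∆ exp((i/ħ)S) = 0 is equivalent to the vanishing of the sum

$$ \sum_{n=0}^{\infty}\frac{1}{n!}\Big(\frac{i}{\hbar}\Big)^{\!n}\,\Phi^n_\Delta(S,\dots,S) \;=\; 0\,. $$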
It is curious to note that when the ∆-operator is of infinite order, and the full Master Equation therefore does not truncate, this solution in terms of anh-expansion loses its meaning. The "classical" antibracket is then pushed to infinity, and the analysis must start with the lowest antibracket ∆ instead.
In conventional Batalin-Vilkovisky quantization, the BRST operator is composed of two pieces, a classical part and a "quantum correction" (see, e.g., ref. [16]). We have given σ the superscript "r" to indicate that it acts with right-derivatives in our conventions (due to ∆). The most obvious generalisation to the case where the three-brackets (and perhaps higher brackets as well) do not vanish is obtained by rescaling with a factor (i/ħ) and converting to left derivatives; σ can then be given a meaning purely in terms of higher antibrackets. The nilpotency of σ depends on being allowed to use the Master Equation M(S) = 0 before all differentiations are carried out (recall that the brackets in general contain differential operators); the last equality is a consequence of the main identity (2.46). Variations σ(ε) of the form (2.101) therefore only preserve (2.88) "on-shell". However, the alternative σ̃ does have the nice property that nilpotency, σ̃^2 = 0, is a direct consequence of T being nilpotent.
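For reference, the conventional quantum BRST operator mentioned at the beginning of this subsection is, in the usual conventions of e.g. ref. [16],

$$ \sigma X \;\equiv\; (X,S) \;-\; i\hbar\,\Delta X\,, $$

with the classical part given by the antibracket with S, and the quantum correction by the ∆-operator.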
The reason why the meaning of "on-shell" and "off-shell" here becomes somewhat obscured can be traced back to the fact that neither σ̃ nor σ are derivations, i.e. they do not fulfill the Leibniz rule.
Finally, let us mention that in the case of σ the invariance of the Master Equation (2.88) can be directly related to the nilpotency of σ. Both σ and σ̃ can obviously be viewed as BRST symmetry operators, and, since S in the BRST context is taken to satisfy the quantum Master Equation, they in fact coincide. From the BRST viewpoint the fact that deformations S → S + δS of a solution S to the Master Equation still satisfy this Master Equation is seen as the possibility of adding BRST-exact terms σ(ε) (or σ̃(ε)) to the action.
The Transformation Algebra
When the BRST transformations are alternatively viewed as transformations of the action S, one would like to find the possible algebra of such transformations. This has already been done in the framework of the conventional Batalin-Vilkovisky formalism by Hata and Zwiebach [2] (note that their odd Laplacian consists of left derivatives, so we denote it by T, to be consistent). Letting δ denote such a transformation, one finds (eq. (2.105)) that the algebra of transformations on S is just the algebra of the conventional antibracket; here F denotes a general expression in S. Let us consider the analogous transformation algebra in the general case.
The algebra corresponding to σ does not close in general, but yields instead an algebra it is natural to call "open" (again a terminology motivated by closed string field theory; see ref. [10], eqs. (4.60)-(4.61)). The additional terms on the right-hand side of (2.106) are here to be understood as "equation of motion" terms, and the gauge algebra is then of the usual open kind. In the conventional case of a vanishing three-bracket, the "equation of motion" term in (2.106) drops out, and (2.107) boils down to ε_3 = Φ^2_T(ε_1 ⊗ ε_2), thereby reproducing (2.105).
The easiest way to derive eq. (2.106) is by using the main identity (2.46) together with an auxiliary relation. Interestingly, the algebra can be made to close by choosing the transformations σ̃ instead. As we have emphasized before, the two transformations σ̃ and σ are equal "on-shell". The closed algebra corresponding to σ̃ holds even without assuming nilpotency of T.
Having found new nilpotent operators σ and σ̃ generated by nilpotent T-operators, it is natural to consider the higher antibrackets generated by σ or σ̃.
This is the natural generalisation of the BRST operator to more entries.
Apart from the zero-bracket, the two sets of higher brackets Φ^n_σ, Φ^n_σ̃ are equal, because the difference σ − σ̃ is a (left) multiplication operator (cf. (2.110)). The careful reader will have noticed that each time σ was treated in the past two sections we chose, whenever possible, arguments that did not involve the assumption of a product for the algebra A. For instance, (2.108) could be derived more easily with the help of (2.113) and (2.50).
To summarize, the benefits of σ are chiefly that it can be written purely in terms of higher brackets, i.e. without the use of a product, while σ̃ has the nicest properties with respect to nilpotency, invariance of the Master Equation, and closure of the transformation algebra.
Finite Transformations
If we keep the perspective that σ and σ̃ can be seen as valid deformations δS of a solution S to the Master Equation M(S) = 0, it is natural to ask for the analogous finite deformations of S. In the case of the conventional Batalin-Vilkovisky formalism, this has also been considered by Hata and Zwiebach in ref. [2]. We shall here consider the general case. If we focus on σ, it is actually possible to derive, without too much effort, an integrated version. By a curious twist of events, precisely this case has been considered earlier in the context of closed string field theory as well [17]. We shall here present a more direct construction, making use of the machinery we derived in the previous subsections. Consider an infinitesimal transformation of the form (2.116), where ε and T are supposed to have the same Grassmann parity and A is bosonic. The above transformations correspond, as mentioned previously, to gauge transformations in closed string field theory [10] (there with A = κΨ being a string field). The transformation parameter ε ≡ ε_0 dt can be split into a finite constant ε_0 of the same Grassmann parity as ε, and a bosonic infinitesimal parameter dt. We want to integrate up this expression to finite transformations. It follows that we have a first-order initial value problem, which can be rewritten as an integral equation. Let us define a(t) ≡ e^{⊗A(t)}. Exponentiating the integral equation and iterating this "fixed-point integral equation" infinitely many times gives a closed expression (eq. (2.121)); projecting A(t_1) = π_A a(t_1) to the original algebra A then yields the finite transformation. Note that one only has to apply the fixed-point integral equation n times to get the n-th order contribution with respect to the transformation parameter ε_0. Although eq. (2.121) gives the finite transformation in closed form in the limit n → ∞, it is clearly not very useful beyond the expansion in ε_0 (illustrated to O(ε_0^5) above). It is therefore of more interest to consider the order-by-order expansion. Let us first comment on the type of terms that can arise. Besides the zeroth-order term A_0, all terms begin (and end) with a bracket Φ_T, i.e. two brackets are never multiplied at the lowest level of nesting. Note that the symmetry factor 1/8 in the above expression breaks the otherwise apparent factorial pattern of the first orders, so the rule for the coefficients is clearly not that simple. In general, the symmetry factor for a term can be deduced according to two simple rules, found empirically in ref. [17], which can easily be read off from formula (2.121). The rule is the following. For each bracket Φ_T appearing in the considered term, do the following: • If k entries are equal, divide by k!. • Divide by the total number N of ε_0's appearing somewhere inside the bracket (i.e. also counting the ε_0's in further nested brackets).
These two simple rules suffice to determine the whole expansion. Of course, one can just as easily expand eq. (2.121) directly.
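In the same schematic spirit (this is a sketch in our notation, not a verbatim reproduction of eq. (2.116)), the flow being integrated here is the first-order initial value problem

$$ \frac{dA(t)}{dt} \;=\; \sum_{n=0}^{\infty}\frac{1}{n!}\,\Phi^{n+1}_T\big(\underbrace{A(t),\dots,A(t)}_{n},\,\epsilon_0\big)\,, \qquad A(0) = A_0\,, $$

whose order-by-order solution in ε_0 reproduces the symmetry factors described by the two rules above.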
Connection to Closed String Field Theory
Non-polynomial closed string field theory [18,10] is based on a so-called "string product" which shares a number of properties with higher antibrackets. This is particularly obvious in the conventions of Zwiebach [10], which we will follow here. For an arbitrary genus g, the n-th string product is denoted by [A_1, . . . , A_n]_g. It has n entries of states (string fields) A_i, and it maps these states into a new state. (In closed string field theory these states are assumed to be annihilated by certain operators b_0^- and L_0^-, a property the string product inherits, but this assumption is not required in the following considerations, when restricted to properties of the string products alone.) The string product is supercommutative and Grassmann-odd. It also carries ghost number (the same for any genus g), but this notion is not of importance for what follows. In addition to the string product, an important rôle is played by the BRST operator Q.
In classical closed string field theory, corresponding to genus zero, the string product satisfies a so-called "main identity" of the form given in refs. [19,10], where the last sum is restricted to l ≥ 1, k ≥ 2, and l + k = n. The sign factor σ(i_l, j_k) is what is picked up by the prescribed reordering of terms, using the fact that the string product is supercommutative.
The BRST operator Q is defined on a given conformal background, and the whole string field theory is then also defined on such a background. At genus zero, this means that the "zero-product" corresponding to n = 0 must be taken to vanish (eq. (3.4)). (The corresponding definition away from a conformal background will be discussed later.) The first non-trivial string product is thus the "one-product", a linear map that takes one string state into another; it is given by the BRST operator Q. The classical non-polynomial closed string field theory action can then be written as in refs. [18,10] (eq. (3.6)), with the classical equations of motion taking the form (3.10), where the last bracket has n entries. Finally, the closed string field theory action (3.6) is left invariant by the gauge transformations (3.11). If we compare these string field theory expressions with the identities among higher antibrackets we derived in the previous sections, it is tempting to identify the n-th string product at genus zero with the n-th antibracket generated by an odd operator T. The obvious obstruction to such an identification is the lack of a simple product "·" of, using the notation of section 2, the algebra A. Still, let us consider the similarities. We have already listed the pertinent properties of the string products. All of these properties are shared with the higher antibrackets: they are both Grassmann-odd, graded commutative under exchange of entries, and the crucial "main identity" of the string products is recognized as being identical to the identity (2.47) of higher antibrackets.
Consider now the equations of motion (3.10), which in the previous section played the rôle of the full Master Equation (the action S replacing the string field Ψ). We can view this equation, with its infinite sum of higher brackets (or string products), from two points of view: either as a clever way of representing the particular combination of exponential functions without reference to the algebra by means of which these exponential functions could be defined, or as a very complicated way of writing the simple formula

e^{-κΨ} →T e^{κΨ} = 0 (3.13)

through its power series expansion. Of course, to give meaning to exp(κΨ), we would have to assume that it is possible to redefine ghost number assignments so that κΨ becomes of ghost number zero. Closed string field theory is tied to the formulation in terms of a power series expansion.
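Concretely, formula (3.13) unpacks via the generating identity displayed in section 2.5: assuming the conjectured product exists, one would have

$$ e^{-\kappa\Psi}\,\overrightarrow{T}\,e^{\kappa\Psi} \;=\; \sum_{n=0}^{\infty}\frac{\kappa^{n}}{n!}\,\Phi^{n}_T(\Psi,\dots,\Psi)\,, $$

which, upon identifying Φ^n_T with the genus-zero string products, is precisely the infinite sum appearing in the equations of motion (3.10).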
Similarly, the gauge symmetry of string field theory (3.11) can be understood as the infinite-series expansion of the simple expression

δ_ε Ψ = e^{-κΨ} [→T, ε] e^{κΨ}. (3.14)

We have also already seen the usefulness of the higher-antibracket formalism when deriving what in closed string field theory is viewed as the analogous finite gauge transformations (in section 2.9). In all of these cases, we can use the algebra A to derive results with far greater ease, and whenever these results are expressible in terms of higher antibrackets alone (without using the new product) we find that the expressions coincide with those of closed string field theory.
It is also of interest to see the results of subsection 2.8 from the point of view of closed string field theory. It was noted by Ghoshal and Sen [20] that there is an apparent clash between the gauge symmetry of closed string field theory being open off-shell, while the gauge transformations of the low-energy effective field theory derived from this theory form an algebra which closes off-shell. By analyzing special cases, they found that the usual gauge transformations of closed string field theory combine with "trivial" gauge transformations (proportional to the equations of motion) to give the proper transformations (which close) of the low-energy theory. Ghoshal and Sen in fact conjecture that all gauge transformations of closed string field theory can be organized in such a manner (by adding suitable "equation of motion terms") that the algebra eventually closes off-shell. Our symmetry operator σ̃ is precisely of this kind, but it cannot as it stands be given an interpretation in closed string field theory, since it, in contrast to σ, involves the product of string fields discussed above.
Beyond Conformal Backgrounds
An interesting place for considering the analogy between string products and higher antibrackets is that of closed string field theory in a background that is not conformal. Zwiebach [10] has analyzed the fate of the string product algebra in this situation.
So far the analogy has been based on the assumption that the "zero-product" (3.4) is vanishing. Away from a conformal background this zero-product will no longer vanish. Zwiebach calls it F, and distinguishes the new string products by a prime [10]. Denoting, accordingly, also the new BRST-like operator by Q′, the first few identities that generalize the "main identity" of eq. (3.3) can be written down [10]; the last of these, eq. (3.17), gives the violation of Q′-nilpotency away from a conformal background. The analogue of Q differentiating the two-product (1.6) becomes eq. (3.18), and the higher-order identities can also be worked out.
The appearance of a non-trivial zero-product F has a completely natural explanation in terms of higher antibrackets: it corresponds to the inclusion of a non-trivial zero-bracket Φ 0 T .
In closed string field theory, one studies the behavior away from a conformal background by shifting the string field by a fixed Ψ_0, where Ψ_0 does not solve the classical equation of motion. The precise connection between such a shift and the emergence of a new algebra of string brackets that now involves the zero-product is easily understood if one accepts the formulation in terms of higher antibrackets and the new, assumed, string product. Consider the equation of motion for the unshifted field. We can rewrite it as the equation of motion for the shifted string field Ψ − Ψ_0 with respect to a new nilpotent BRST operator →T′, a conjugate version of →T. Recall that →T here acts on everything to its right.
Because Ψ_0 is assumed not to solve the classical equation of motion, it follows immediately that the new string product algebra will have a non-trivial zero-product: precisely (κ times) the left-hand side of the equation of motion for Ψ_0, which by assumption is non-vanishing. We identify it as F = [·]′_0 above. Note, incidentally, that a closely related identity holds, but it is not the same as eq. (3.17). The "BRST-like" operator Q′ is the one-bracket Φ^1 of the new algebra. The whole sequence of main identities can of course now be rewritten in terms of →T′ rather than →T. The only new feature compared with the usual main identities of closed string field theory is that the 0-bracket Φ^0 is non-vanishing. In particular, one sees immediately that the first identities (3.16) and (3.17) are trivially included in eq. (2.47), and similarly for the higher identities. They are all contained in eq. (2.47).
Note that →T′ is nilpotent simply as a consequence of →T being nilpotent. It can be viewed as a genuine BRST operator corresponding to the shifted background. The "BRST-like" operator Q′ of closed string field theory [10] is in the present context rather seen as the one-bracket; it is not nilpotent when F ≠ 0.
We have thus shown that when shifting the string field Ψ by Ψ 0 , almost all of the formalism remains intact, and in particular almost everything can eventually be expressed in terms of string products. It is therefore not surprising that these results can also be derived directly on the basis of string products alone [10]. They just appear with far more ease in the present picture. There are also interesting exceptions, such as the new nilpotent BRST operator → T ′ . This is the appropriate BRST operator for shifted backgrounds, but it cannot be expressed solely in terms of antibrackets (or string products), and therefore has no obvious analogue in closed string field theory.
An Sp(2)-Symmetric Formulation
As discussed in section 2, the higher antibrackets give rise to a BRST symmetry which is a generalization of the BRST symmetry of Batalin and Vilkovisky. An obvious question to ask is whether one can analogously find a formulation that includes both BRST symmetry and anti-BRST symmetry. There have been various suggestions for Lagrangian BRST formulations à la Batalin and Vilkovisky which include the extended BRST-anti-BRST symmetry. All of these have from the outset combined the BRST-anti-BRST symmetries into an Sp(2) symmetry. The original approach is due to Batalin, Lavrov and Tyutin [21], and it has recently been suggested that this formulation be rephrased in terms of what has been called "triplectic quantization" [22]. The main new ingredient of an Sp(2)-symmetric formulation of conventional Lagrangian quantization is a Grassmann-odd vector field V, which satisfies V^2 = 0, and which must be added to the ∆-operator. We take V to be a differential operator based on a right-derivative. In the original formulation of ref. [21], the relations below are assumed. Here a, b, . . . denote indices in Sp(2), the invariant tensor of which is ε^{ab}. Symmetrization in Sp(2) indices is denoted by curly brackets, and these indices are raised and lowered by the ε-tensor.
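A sketch of these relations in standard form (our transcription; conventions may differ slightly from ref. [21]):

$$ \Delta^{\{a}\Delta^{b\}} \;=\; 0\,, \qquad \Delta^{\{a}V^{b\}} + V^{\{a}\Delta^{b\}} \;=\; 0\,, \qquad V^{\{a}V^{b\}} \;=\; 0\,. $$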
In refs. [22,24,25] the ∆^a-operators are assumed to be of purely 2nd order, while the V^a-operators are assumed to be of purely 1st order. However, in actual applications it is usually the combinations ∆^a_± ≡ ∆^a ± (i/ħ)V^a which appear. This suggests that we should simply view ∆^a_± as more general 2nd-order odd differential operators (still excluding a constant term). It follows from (4.1), (4.2) and (4.3) that the ∆^a_± satisfy the same symmetrized nilpotency conditions. Since by definition V^a is of first order, the antibrackets defined by use of either ∆^a or ∆^a_± will coincide. These antibrackets are born with an Sp(2)-index (eq. (4.7)). The above antibrackets satisfy the usual exchange relation (1.3), and the same graded Leibniz rules (1.4). The analogue of the graded Jacobi identity (1.5) is the cyclic relation (4.8), and the Sp(2)-covariant version of the relation (1.6) is eq. (4.9). Furthermore, it follows from the above definitions that the vector fields V^a differentiate the antibrackets according to eq. (4.10). This implies that the relation (4.9) also remains valid if we replace ∆^a by ∆^a_±.
Our task is now to generalize the above construction to the case of higher antibrackets. The obvious starting point is to introduce two higher-order ∆^a-operators, and proceed as in section 2, using the Sp(2)-algebra (4.2). The analogous V^a-operator, taken by definition to be always of 1st order, can be introduced trivially by letting ∆^a → ∆^a_± ≡ ∆^a ± (i/ħ)V^a, where V^a simply equals the 1st-order part of ∆^a. The main ingredient is therefore the existence of two odd differential operators of arbitrary order, with the algebra of the ∆^a as in (4.2). As in the previous section, we can include the case of possibly non-vanishing constant pieces in these differential operators as well, corresponding to ∆^a(1) not necessarily being zero.
Sp(2)-Covariant Higher Antibrackets
All necessary ingredients for the extension of the above Sp(2)-symmetric formulation to the higher-antibracket BRST symmetry have been given in section 2. In particular, we refer to section 2.4, where we gave the algebra of higher antibrackets generated by two nilpotent operators S, T. In accordance with the above formalism, we shall here denote these operators by ∆^a and ∆^b. All the subsequent manipulations remain valid if we replace these by ∆^a_± through the definition (4.5). Because of the proliferation of indices, we drop the subscript ∆ on the higher antibrackets, and just indicate the relevant ∆-operator by its Sp(2)-index a. For simplicity, we take ∆^a(1) = 0 (eq. (4.11)), so that there are no zero-brackets (they can of course trivially be included).
With the operators ∆^a being Grassmann-odd, the algebra of Sp(2)-symmetric higher antibrackets can then be written down. This algebra contains all the usual identities of Sp(2)-symmetric quantization as outlined above, and the appropriate generalization when higher antibrackets are included. In detail, the first identity is nothing but Sp(2)-nilpotency of the operators ∆^a (cf. eq. (4.6)), while the 2nd identity gives the Sp(2)-covariant rule (4.14) for how ∆^a differentiates the "two-antibracket" (as in eq. (4.9)). (Note that this identity is not altered by the presence of higher-order operators in the ∆^a's.) The next identity is the Sp(2)-covariant analogue of the Jacobi identity (4.8), including its possible breaking when the ∆^a's are of order 3 or higher. The subsequent identities are of course completely new, involving higher and higher orders of antibrackets. They can be read off directly from eq. (2.47). Also the higher main identity (B.8) generates a series of new identities. We quote the first two: the first is rather trivial, but it turns out to be very convenient for proving that no other independent Sp(2) main identities exist; the second is less obvious. The interesting point about these new identities (the first few of which are of course valid also in conventional Sp(2) BRST quantization) is that they do not involve symmetrizations in the Sp(2) indices. The higher identities can be read off from eq. (B.8) in Appendix B.
These higher main identities are qualified guesses for what will arise in an Sp(2)-symmetric formulation of genus-zero closed string field theory.
We next turn to the question of the corresponding BRST operators. In the conventional Sp(2)-covariant scheme of ref. [21], one can show [23], as expected, that the two symmetries are generated by the two antibrackets and the solution to the Master Equations. More interestingly, also in this context one derives a "quantum BRST operator" (see the 2nd reference of [23]), in which the first-order contribution V^a to ∆^a explicitly separates out.
Consider now the corresponding BRST operators in the generalized situation in which one has higher antibrackets. Repeating the exercise of the analogous situation without Sp(2) symmetry, one finds immediately that the appropriate generalization is eq. (4.21) (after rescaling by a factor of (i/ħ), converting to left-derivatives, and lowering the Sp(2) index), and similarly for the associated BRST operator σ̃_a, which one can define completely analogously to the case without Sp(2) symmetry. When expanded as a possibly infinite sum, the first three terms of eq. (4.21) agree with the corresponding Sp(2) quantum BRST operator of ref. [23]. The new terms involve higher and higher antibrackets, precisely as anticipated. By construction, the corresponding Sp(2) nilpotency conditions hold to all orders.
Conclusions
Higher antibrackets provide us with a rich mathematical background for studying various quantization problems in physics. They give the obvious generalization of the Batalin-Vilkovisky formalism to situations in which the ∆-operator is of order 3 or higher. When viewed from this more general perspective, even the original Batalin-Vilkovisky formalism is seen in a completely new light. Many of the ingredients of the Lagrangian BRST formalism suddenly become very natural. For example, in the conventional Batalin-Vilkovisky formalism the quantum Master Equation involves both the conventional antibracket (the two-antibracket from the present perspective) and the ∆-operator. Usually, the need for the quantum correction in the form of this ∆-operator is viewed as a kind of coincidence, the result of a particular correction from the path integral measure to the classical BRST transformation of the action. Similarly, the "quantum correction" to the classical BRST transformation due to this ∆-operator is seen as a (slightly annoying) modification of the otherwise fully "anticanonical" formalism that only involves the use of a two-antibracket: a Grassmann-odd analogue of the Poisson bracket. What we have seen here is that the ∆-operator is in no way mysteriously present in the formalism. It plays two rôles: first, it is the operator by which higher antibrackets are formed, and second, it really is to be viewed as a "one-antibracket", completely on a par with the conventional antibracket. If ∆(1) did not vanish, this identification would no longer hold. The quantum Master Equation is based on ∆, and it holds in all generality that this equation can be expressed solely in terms of the higher antibrackets generated by ∆.
The fact that an almost-canonical formulation of the Lagrangian quantization program exists is thus in many respects coincidental, and not fundamental. It is due to the fact that in the conventional representation of fields and antifields the BRST operator of Schwinger-Dyson BRST symmetry (and hence ∆) is of 2nd order. In general, a Master Equation of the form

∆ exp((i/ħ)S) = 0 (5.1)

will contain an infinite series of arbitrarily high antibrackets. The canonical considerations are of course limited to the two-antibracket.
From the Lagrangian BRST quantization point of view it is interesting that the appearance of the ∆-operator can be traced to a totally different origin: that of integrating out ghosts while keeping the antighosts [3]. Also from this point of view the ∆-operator immediately appears on an equal footing with the two-antibracket: the same ghost integration that introduces the conventional antibracket in the BRST operator also simultaneously introduces the ∆-operator. It is nevertheless astonishing that the whole mathematical framework of higher antibrackets can be derived by simple ghost-field integrations in the Lagrangian path integral [5,4]. The fact that there is an analogous construction from the ghost momentum representation of Hamiltonian BRST quantization [4] hints at new and unexpected relations between the Hamiltonian and Lagrangian BRST schemes.
In this paper we have focused on some of the more mathematical aspects of the theory of higher brackets. The formulation has been greatly simplified, thereby providing a much cleaner setting for the field theory aspects. One of the most interesting results is of course the close correspondence between higher antibrackets and the so-called string products of closed string field theory [10]. We have argued that there are many hints at the existence of a new product of string fields by means of which non-polynomial closed string field theory could have at its origin a formulation based on exponentials (defined within this product). This remains speculation at the present stage, but even if it should turn out not to be possible to realize such a product in closed string field theory, our formalism may still be of use in this context. Namely, one may conjecture that at least all those results which can be expressed solely with the help of brackets (or, here, string products) may still be valid in closed string field theory. The product may then be used only in intermediate steps, to simplify the calculations.
The BRST symmetry associated with higher antibrackets is part of a more general BRST-anti-BRST symmetry, and we have shown how both can be included in a manifestly Sp(2)-covariant formulation. As an amusing by-product of this, we can also write down Sp(2)-covariant analogues of the closed string field theory equations of motion, and the corresponding Sp(2)-extended gauge symmetries. For the path integral of conventional quantum field theory, the associated Sp(2)-covariant BRST symmetry is required when one imposes certain identities as Sp(2)-BRST Ward Identities in the path integral, as discussed for the analogous case without Sp(2) symmetry in ref. [5]. It is interesting that this Sp(2)-covariant formulation arises in a most natural manner from the mathematical structure of strongly homotopy Lie algebras.
The first few brackets can be rewritten in explicit form. Here ℓ_± = ½(|ℓ| ± ℓ) is just the positive (resp. negative) part of the real number ℓ. This is clearly not a systematic way of describing the generalized brackets for more than three operator entries. In order to proceed to higher numbers of operator entries, we use characteristic functions. Let χ_s be the characteristic function associated with the statement s: χ_s = 1 if s is true, and χ_s = 0 if s is false. The relevant sign factors can then be written compactly in terms of such characteristic functions. The generalized higher antibrackets are all graded symmetric, where (−1)^{ε_τ} is the sign factor originating from permuting Grassmann-graded quantities, (T_1, . . . , T_k) → (T_{τ(1)}, . . . , T_{τ(k)}) (A.11). They are linear in a restricted sense and enjoy simple composition properties. When all the coefficients t_1, . . . , t_k are equal to 1, one can say a lot more. First of all, let us simplify the notation in this special case. Following Zwiebach ([10], eq. (4.100)), we define a co-derivation b_T for an operator T ∈ Hom_C(SA, SA).
We also note some simple relations, the last two of which only hold for an operator T which is not tensor-valued, i.e. T ∈ Hom_C(SA, A). Less obvious are the identities (A.16)-(A.17), valid for T ∈ Hom_C(SA, A); the first identity in (A.17) is an important special case of the second. Many of the above (and coming) constructions can actually be carried out in a vector-space setting just as well, i.e. without assuming a product "·" for the algebra A. For instance, the Φ_T and b_T constructions work without a product if T ∈ Hom_C(SA, SA). The most notable exceptions are the tilde operation, the higher brackets Φ^n_T, and in particular the recursion relation (2.32). However, one can impose the existence of the higher brackets Φ^n_T (and their so-called "main identity"; see below) as a principle. For instance, in closed string field theory the higher brackets can be built up from a geometric consideration on moduli space [10].
B Higher Main Identities
The purpose of this appendix is to show that by applying the lemma (2.35) and (A.17) several times, one can derive higher-order versions of the same lemma. Unfortunately, there is no closed expression for Φ_{T_1 T_2 ... T_k} in terms of the higher brackets Φ_{T_1}, Φ_{T_2}, . . . , Φ_{T_k} alone, but there is a fairly simple graphical representation, which we now sketch.
We will argue that Φ_{T_1 T_2 ... T_k} can be understood as a restricted sum over oriented and connected tree diagrams with k 1-, 2- and 3-vertices.
First of all, we take every line in the tree to run between vertices. In particular, every external leg is assumed to be decorated with an external point, a "1-vertex". All vertices other than a root-vertex are supposed to have at least one in-going line. Because there are at most three lines connected to each vertex, one can draw all oriented lines in the tree vertically downwards and horizontally to the right.
• Each vertex corresponds to a higher bracket Φ_{T_i}.
• A horizontally connected collection of r − 1 oriented lines, r = 1, 2, 3, . . ., corresponds to an r-bracket-bracket |Φ_{T_{i_1}}, . . . , Φ_{T_{i_r}}| with i_1 < i_2 < . . . < i_r (cf. definition (A.13)). Of course one can drop the horizontal ordering i_1 < i_2 < . . . < i_r inside a bracket-bracket, at the cost of introducing a symmetry factor 1/r! for each bracket-bracket. • A downward line corresponds to the action of the co-derivation (·) ∘ b_{(··)} with i < i_1, . . . , i < i_r. (A conventional higher bracket Φ_{T_i} is also considered to be a 1-bracket-bracket.) An incoming downward line's actual attachment position on a bracket-bracket is immaterial; tree diagrams that differ only in the attachment position are considered equal and should be counted only once.
• Each tree is given a sign, because of the permutation of Grassmann-graded brackets within it. The easiest way to specify this sign is to enumerate the vertices, which is basically the same as specifying a permutation τ ∈ S_k that takes the enumeration of the operators T_1, T_2, . . . , T_k into this enumeration of the vertices. The sign is then computed as the sign originating from simply permuting Grassmann-graded quantities: (T_1, . . . , T_k) → (T_{τ(1)}, . . . , T_{τ(k)}). (B.2)
The vertex enumeration goes as follows: start at the left-uppermost vertex, proceed downwards if possible, else to the right. When entering a bracket-bracket, start with the left entry. When hitting an end-bracket-bracket, go back to the last furcation point (that is, the next-to-last bracket-bracket), then go to the right, etc.
Proof (sketched here only for the bosonic case): We use induction in the total number k of 1-, 2- and 3-vertices, starting from the lemma. Each tree with k + 1 vertices of the above type can be grown from a tree with k vertices by attaching either an extra Φ_{T_{k+1}}-entry to the right in a bracket-bracket (which gives a horizontal growth), or a downward growth ∘b_{Φ_{T_{k+1}}} from a vertex, if there is not already an outgoing downward line there. It is easy to see from (A.17) that the action of ∘b_{Φ_{T_{k+1}}} on all the trees with k vertices in Φ_{T_1 T_2 ... T_k} yields all the trees with k + 1 vertices exactly once, except the diagram where the root-bracket-bracket is enlarged by an entry to the right. This tree is then built via the second term on the right-hand side of (B.3).
To make this construction more tangible, let us evaluate some of the lowest cases. It is clear that these generalized (higher) main identities quickly become totally unwieldy when written out in full. In the special case of just one Grassmann-odd operator T, the higher main identities actually give no genuinely new information when T^2 = 0. This nilpotent case can be seen using the graphical representation, where the main identity (2.35) states that one vertical line (with a bracket at each end) is equal to zero. This means that an end-bracket-bracket that contains precisely one bracket causes the tree to vanish. Any end-bracket-bracket containing more brackets also causes the tree to vanish, because the brackets are Grassmann-odd. However, already in the simple case of just one odd operator T which is not nilpotent, the above generalized main identities relate brackets based on T^n (for as many powers as possible while still having T^n ≠ 0) to those based on T. Koszul [7] has given one particular example of these identities, but no general prescription for finding them.
REDUCING ENERGY INTENSITY AND INSTITUTIONAL ENVIRONMENT: A CROSS COUNTRY ANALYSIS
The article analyses how the quality of institutions affects the performance of energy saving policies. Based on the analysis of dynamic panel data for 69 countries over the period from 2002 to 2012, using the Arellano-Bond approach, we have shown that the price elasticity of energy consumption depends on institutional factors. We also demonstrated that the absolute values of these elasticities are higher in OECD countries than in CIS countries over the whole data sample, which is explained by a higher quality of institutions. Similar estimates have been produced for the industrial sector as well as for the production sphere as a whole. Energy consumption in the industrial sector has proved to be more sensitive to the quality of institutions than in the production sphere as a whole. We have also performed a general analysis of trends in the energy intensity of GDP for a number of countries, pointing out that growth in energy prices enhances energy saving processes.
INTRODUCTION
Energy is known to play a key role in sustainable economic development, as it influences both production and social welfare. Limited energy resources cause energy price growth which, in turn, could result in increased producers' costs, growing inflation and hampered economic growth and social welfare if such changes are not accompanied by energy saving. Energy efficiency and climate change regulation are ultimately considered key factors of energy security, for instance through lower dependence on the world energy markets, and shape development strategies in many economies around the world (IPCC, 2014).
Sharp rises in world energy prices in the 1970s and 2000s made many economies create and develop energy-efficiency measures, try to reduce their dependence on energy imports, and lower emissions of harmful substances resulting from fuel combustion (Energy Efficiency Market Report 2016; International Energy Outlook 2016; Bashmakov, 2013). Such measures depend mostly on the government, which is supposed to provide a background for technological and market opportunities stimulating energy saving behavior. According to the Energy Efficiency Market Report 2016 (Energy Efficiency Market Report, 2016), the expansion of regulatory instruments and their influence on energy saving behavior has made public energy policy the key driver of efficiency improvements in recent years. Special institutions were created to maintain energy efficiency, and together with the use of fiscal tools and the broadening of minimum energy performance standards (MEPS) they helped to reduce energy intensity, given the substantial fall in primary energy prices. As a result, investments into energy saving were growing. Energy prices nevertheless remain important for several reasons. First, they directly affect the costs faced by the consumers. Secondly, prices for traditional energy resources can grow due to the increasing returns to scale in fuel extraction. Thirdly, and most importantly, energy prices are still one of the crucial factors in regulating economies: tax regulation and tough requirements for environmental protection can influence energy prices, and policy regulation becomes effective when the economy is sensitive to price signals.
The underlying hypothesis of our research is that, when measuring the price elasticity of energy intensity, we should take the quality of institutions into account. We presume that the price elasticity of energy efficiency depends on market economy institutions: with a high quality of government regulation, the impact of the price factor is greater, and vice versa. Thus, we estimate the price elasticity of energy intensity for different economies while allowing for institutional factors.
Our analysis is based on statistical data covering 27 former socialist economies, the OECD countries, and some countries of Asia, Africa, and America during the period 2002-2010. The scope of our analysis includes not only the production sphere (excluding household energy consumption) but also the industrial sector as a separate component. Considering the industrial sector separately is feasible because the statistical data required for model parameter estimation are available for it. Verifying the hypothesis both for the production sphere and for the industrial sector allows us to test the robustness of the results. In our regression models, we consider energy consumption in production only, not household energy consumption. We apply both static panel data analysis and dynamic panel data analysis using lagged instrumental variables.
We find that energy saving policy that works through energy prices is more effective in the OECD countries owing to their developed institutions. The elasticities we obtain for these countries are the highest in absolute value, meaning that their energy intensity is the most sensitive to price changes, which increases the effectiveness of regulation, such as taxes and subsidies, that influences the general level of energy prices on the market. During 2002-2010, the average elasticity for the CIS countries was lower in absolute value by 35% than that for the OECD countries, with the Baltic countries and the countries of Eastern Europe behind the developed countries by about 20%. This can be explained by weaker incentives for economic agents to reduce energy consumption in these transition countries compared to the developed countries during the period considered. At the same time, regulation intended to encourage the use of energy saving technologies was not effective enough, because energy consumption responded weakly to changing energy prices.
In addition, our regression includes a climate severity index, as we assume that the more severe the climate, the higher the economy's energy intensity. We found this variable statistically significant, but only at the 10% level, whereas previous cross-country research (Suslov and Ageeva, 2005; Suslov, 2013) found it significant at the 1% level; in this study it is therefore used as a control variable. Estimating the model parameters separately for the production sphere and for the industrial sector allowed us to compare the results. We determined that the institutional environment affects energy consumption in both cases, but the impact is greater in the industrial sector. The price elasticity of energy consumption in both the production sphere and the industrial sector depends on the institutional environment, which amplifies the price factor, thus making the improvement of institutions another requisite element of energy saving policy.
The rest of this paper is organized as follows. Section 2 provides the literature background, followed by a review of trends in energy intensity from the late 20th to the early 21st century in Section 3. Section 4 is devoted to the initial data analysis, Section 5 describes the methodology, Section 6 discusses the results obtained, and Section 7 concludes.
LITERATURE REVIEW
Recently, a growing number of studies have been devoted to assessing the price and income elasticity of energy consumption. A well-known approach to analysing the relationship between output, energy consumption, and other production factors is based on a translog cost function (Hudson and Jorgenson, 1974; Berndt and Wood, 1975). It offers the advantage of estimating long-term price elasticities of energy demand. However, it is hardly suitable for capturing particular features of the objects analysed: the translog approach does not allow us to test the significance of the separate factors responsible for individual countries' differences and can at best show their aggregate impact on the energy intensity of production.
Another well-known method of measuring energy demand elasticity specifies energy demand functions derived from the Koyck distributed lag scheme (Common, 1981; Kouris, 1983; Haas and Schipper, 1998). This approach has been widely applied across world economies, resulting in a wide range of empirical estimates (Welsch, 1989; Beenstock and Dalziel, 1986; Hunt et al., 2003). The use of lagged energy demand variables allows both short- and long-run coefficients of income and price elasticity to be estimated. Espey and Espey (2004) used various methods to assess households' short- and long-term price and income elasticity of electricity demand and concluded that dynamic models, which include a temporal component of elasticity, give lower values than other models. Some scholars considered only household energy demand (Espey and Espey, 2004; Schulte and Heidl, 2017), while others considered economy-wide energy demand (Jamil and Ahmad, 2011). Schulte and Heidl (2017) used a wide range of tools to analyse the price elasticity of demand in different countries and concluded that it is higher in developing economies. Their paper also discussed the importance of the GDP growth rate and capital market growth for a country's energy demand.
Growing concerns about climate change, the environment, and security of energy supply, which can be partly addressed by smoothing the consequences of energy price volatility on international markets, lead policy makers to search for energy efficiency policy instruments that stimulate energy saving behavior. Recent years have demonstrated that price signals alone cannot shape energy saving behavior. Oikonomou et al. (2009) discuss the dependence of energy saving behavior on factors such as income, climate, and effort. Eyre (2013) considers it misleading to use price mechanisms as the only regulating instrument; in his opinion, instruments should include taxes and cap-and-trade systems, which can influence both the price and the carbon content of energy. Gillingham et al. (2009) also show that the price is not the only factor that reduces energy intensity. They emphasize that government regulation should take market failures into account and list examples of government instruments such as information programs, loan programs, real-time pricing, and market pricing.
The limited effectiveness of price signals alone also stems from the fact that price elasticity is not always sufficient for prices to reduce energy intensity, as discussed in Hunt et al. (2003), who consider that additional non-price measures can be more efficient. A similar conclusion is reached by Hepburn (2006), who discusses combining price-based mechanisms with political and institutional instruments to reduce the economy's energy intensity.
The list of possible non-price signals is growing, which creates new opportunities to stimulate energy saving behavior with other instruments. Li et al. (2013) analysed energy intensity in China and singled out three types of factors: economic structure, energy consumption structure, and technological progress. Goldemberg and Prado (2013) focused on the second group of factors, showing that energy intensity can decrease as a result of an unprecedented reduction in the energy intensity of services. Huang et al. (2017) considered technological factors using 30 Chinese provinces over the period 2000-2013; using panel data, they showed that, of the four factors considered, the most significant was R&D.
Our research focuses on the institutional component of economic structure. Recently, the influence of institutional strength on economic outcomes has attracted special attention (Tanzi and Davoodi, 1997; Wei, 1997; Kaufmann et al., 1999; Chong and Calderon, 2000; Kaufmann et al., 2008; McArthur and Sachs, 2001). These studies demonstrate a strong correlation between the quality of institutions and policies and between the quality of institutions and per capita income. Some of the variation across transitional economies during the transformation period is determined by the countries' ability to maintain effective government institutions and develop market institutional frameworks (Popov, 1998; McArthur and Sachs, 2001; Transition Report, 2006). In addition, the depth of the transformational decline is associated with distortions in the fixed capital, production, and trade patterns accumulated before the reforms (De Melo et al., 1997). With institutional transformation being a way out of economic recession and a basis for further development, transitional countries urgently needed an effective strategy and methods for market transformation, including a theoretical model of how corruption affects energy efficiency (Polterovich, 1999, 2004). Fredriksson et al. (2004) found a strong correlation between a corruption variable and the energy intensity of production sectors in the OECD economies over the period 1982-1996. The correlation between institutional and biogeographical conditions analysed by Olsson shows that the latter play a very important role (Olsson, 2003). Therefore, some medical and biogeographical variables may be used as instruments for institutional strength indices; an example is the country's distance from the equator, as suggested by Hall and Jones (1999).
In our analysis we also examine how climatic conditions can influence energy saving behavior, referring to recent work by Bloom and Sachs (1998), who investigated the impact of mean temperature and other biogeographical factors on agricultural production in developing economies.
We focus our analysis on the price elasticity of energy demand and assume that the higher its absolute value in a given economy, the better market price mechanisms operate, owing to agents' stronger reactions to price signals. At the same time, a question arises as to what extent these values can be affected by government policy measures undertaken within special energy saving programs. Given a weak reaction of businesses to price signals, can any government strengthen energy saving activities?
We believe that government regulation is more effective when market mechanisms operate well, because its influence is realized mostly through strengthening energy saving incentives. On the other hand, there are many arguments supporting the idea that the total volume of energy saved when costs rise is due to market price mechanisms rather than government policy. For example, having summarized economies' reactions to the price shocks of the 1970s-80s, Sweeney (1984) formulated it as follows: "The extent to which government-sponsored energy conservation programs or other nonmarket forces have reduced the demand for energy is unknown. However, at least 80% and probably much more of the demand reductions can be attributed to price and economic activity changes."
ENERGY INTENSITY PUZZLE
Decreasing energy intensity became the dominant trend in the world after the energy crisis. By 1983, the average level of GDP energy intensity in the OECD 1 economies had decreased by 14%, with a further 11% decrease by 2000, a total of about one third. At the same time, leading energy saving countries such as Ireland and Denmark showed a 45-50% decrease, Germany, the UK and the USA more than 40%, and the Netherlands about 40% (Figure 1).
Such impressive results came not only from market forces driven by rising energy prices, but also from special government policy measures aimed at better energy saving.
1 OECD economies without former socialist states and the new members after 1996.
According to Sweeney (1984), about 80% of overall energy saving in the USA can be attributed to the price rise. We believe that the policy measures were also triggered by the price rise, but we assume they were more effective where market mechanisms worked better. The success of such measures and the quality of their design and implementation largely depend on the quality of bureaucracy.
The data available for the countries with socialist economies (National Economy of the USSR, 1970-1990) show that they also decreased energy intensity in the 1970s-1980s. However, official statistics in socialist economies are known to overestimate output growth, resulting in low reliability of the data on energy intensity dynamics at the macroeconomic level (Suslov, 2013). The decrease in energy intensity in the former socialist economies was evidently not as large as in the developed economies. As a result, they were far behind market economies, especially the OECD countries, in terms of energy intensity. In the early 1990s, when the economic reforms were launched, GDP energy intensity in transitional economies significantly exceeded the levels of market economies (Figure 2). For the CIS economies, the average level of GDP energy intensity in 1990 was 2.85 times the world average and 3.14 times the OECD average. Despite low energy prices in the 1990s, the previously created imperative for energy efficiency growth drove energy intensity down by 40%, especially in the Eastern European and Baltic countries. This can be attributed to relatively successful economic reforms and the growth of domestic energy prices towards world market levels following the liberalization of external trade. The CIS economies' average decrease in energy intensity exceeded the international trend but was lower than in other transitional economies. We attribute this to inconsistent reforms in some of these countries, where output contraction did not lead to the shutdown of outdated production capacity; this significantly increased semi-fixed energy costs and resulted in growth of GDP energy intensity instead of the expected decrease.
During the next decade, 2001-2010, energy intensity decreased most in the CIS economies, on average by more than 40%, while the corresponding world value was 11%, the OECD value 13%, and the Eastern European and Baltic value 23%. We suppose that the former socialist economies achieved such results thanks to, apart from growing energy prices, developing institutions and regulatory measures aimed at energy efficiency and energy saving; the advantages of catching-up development, as they could use the experience and technologies of leading countries; and the relatively cheap energy saving opportunities afforded by their higher initial energy intensity. Another favorable factor was a scale effect due to fast economic growth and increased capacity utilization.
As a result of this impressive decrease in specific energy costs per unit of GDP, the gap with the leading economies narrowed from 2.7 times in 2000 to 1.8 times in 2010, which is still quite high.
Higher energy inputs in former socialist economies may be partially attributed to harsh climatic conditions: in this part of Eastern Europe and in the Asian part of the former Soviet Union, average annual temperatures are significantly lower and the amplitude of seasonal variation is much higher than in Eastern Europe itself. However, as our earlier analysis showed (Suslov and Ageeva, 2005), this factor fails to account for the entire difference in energy intensity levels. We assume that a significant factor affecting specific energy consumption is the quality of economic institutions, which determines key aspects of the economic system's performance. Table 1 presents key data on GDP energy intensity for several groups of countries in 2002-2010. The mean and standard deviation over the period differ strongly across groups, with the greatest standard deviation observed in the CIS countries and former socialist economies. Despite considerable divergence in values and some fluctuation of the indicator, we assume that the key factors determining the price elasticity of energy intensity are similar, and that climatic conditions and institutional factors influence energy intensity regardless of its level and fluctuations. This is why the 69 countries can be treated as one group.
Data
Our sample was chosen to comply with the requirements for data homogeneity. The availability of energy price statistics narrows the set of countries and periods that could be included in our research. As we are interested in long-run differences between the economies over time, we applied panel data and dynamic panel data analysis. To make the indicators comparable, we use PPP income variables. As we consider the production factor only, we removed residential energy consumption from consideration. The sample comprises 69 economies, including the OECD and CIS countries and economies from Asia, Africa, and America, over the period 2002-2010.
Data collection was based on the following variables: • E1 is energy consumption by production sectors, calculated as total energy supply less household consumption and non-energy use over 2002-2010 (International Energy Agency Database); • E2 is energy industry own use plus industry consumption (excluding energy use for transport) over 2002-2010 (International Energy Agency Database); • e1 is production energy intensity, calculated as the ratio of E1 to GDP at PPP, with the latter taken from the World Bank Database for 2002-2010; • e2 is industry energy intensity, calculated as the ratio of E2 to industry value added.
Industry value added was determined from World Bank data (in constant 2010 USD) and covers ISIC divisions 10-45. E2, calculated from IEA data, spans the same divisions to keep the calculations comparable. Thus, according to the ISIC Rev. 3.1 classification, the calculation of E2 and industry value added covers mining and quarrying, manufacturing, electricity, gas and water supply, and construction.
DISTE is the seasonal temperature fluctuation, calculated as the difference between the mean temperatures in January and July over 2002-2010 and measured in tenths of a degree centigrade; the data were obtained from the National Centers for Environmental Information, National Oceanic and Atmospheric Administration.
INST is the common designation of the institutional strength index obtained from the project "Governance Matters V, Governance Indicators for 1996-2010", available in the World Bank dataset at http://www.worldbank.org. The variables in this database (Kaufmann et al., 1999; Kaufmann et al., 2008) were tried directly in our regressions for 2002-2010. In our analysis, we used both individual variables and their combinations, but present here the most satisfactory version, the sum of two institutional indices that both measure the quality of governance and of government-business interaction:

INST = GE + CC,

where GE (Government Effectiveness) measures the quality of bureaucracy and the credibility of the government's commitment, and CC (Control of Corruption) measures the perception of corruption.
The first index evaluates the quality of public services and the government's ability to pursue selected targets; the second, the perceived degree to which government power is used in the interest of private structures and how much this power is controlled by elites. Although the two indices are closely correlated, their combination turns out to be more robust than either of them alone. They are clearly complementary, which matters from the point of view of the economy's sensitivity to price signals. The higher the level of corruption, the greater the implicit part of transaction costs for firms carrying out investment projects. The lower the quality of government-provided services and the less consistent its policies, the higher the explicit transaction costs and the less effective the institutions designed to promote energy saving.
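As a minimal illustration of how these variables can be assembled, the following sketch builds e1, e2 and INST from hypothetical input tables; the file and column names are assumptions for illustration, not the authors' actual data pipeline.

```python
import pandas as pd

# Hypothetical input files; column names are assumptions for illustration.
energy = pd.read_csv("iea_energy.csv")    # country, year, E1, E2 (ktoe)
wb = pd.read_csv("worldbank.csv")         # country, year, gdp_ppp, ind_va_2010usd
wgi = pd.read_csv("wgi.csv")              # country, year, GE, CC

df = energy.merge(wb, on=["country", "year"]).merge(wgi, on=["country", "year"])

# Energy intensity of the production sphere and of industry
df["e1"] = df["E1"] / df["gdp_ppp"]          # production energy intensity
df["e2"] = df["E2"] / df["ind_va_2010usd"]   # industry energy intensity

# Institutional strength index: sum of Government Effectiveness
# and Control of Corruption, as defined in the paper
df["INST"] = df["GE"] + df["CC"]

# Country-level descriptives comparable to Table 1
print(df.groupby("country")["e1"].agg(["mean", "std"]).head())
```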
P is the average output price, calculated as the ratio of nominal GDP in USD to PPP GDP, obtained from the World Bank Database. Since this indicator is used in panel regressions, it is corrected for US inflation: starting with the second year of the evaluated period, each price index is multiplied by the US inflation index of the previous year.
pE is the end-use average energy price for industry, calculated from statistical data available from two sources: (1) the IEA Database (end-use prices for industry for different energy products) and (2) the Transition Report, EBRD, 2010 (electricity tariffs in transitional economies). For each energy carrier j available for country i, the price is expressed relative to its value in the base year 2002.
We then calculate the common average price of energy for every economy i as the geometric mean of the relative prices of all energy carriers for which we have information. Denoting the set of indices of such carriers for the base year and country i by J_i_2002, and the number of its elements by k_i_2002, the average price of energy for country i is

pE_i = ( Π_{j ∈ J_i_2002} pE_ij )^(1 / k_i_2002).
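A compact way to compute such a geometric mean, assuming a long-format table of carrier prices already normalized to their 2002 values (all names hypothetical), is:

```python
import numpy as np
import pandas as pd

# prices: columns country, year, carrier, rel_price (price / price in 2002)
prices = pd.read_csv("relative_energy_prices.csv")

# Geometric mean over the carriers available for each country-year,
# computed as exp of the mean log relative price
p_E = (prices
       .groupby(["country", "year"])["rel_price"]
       .apply(lambda s: np.exp(np.log(s).mean()))
       .rename("p_E"))
print(p_E.head())
```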
Methodology
The institutional conditions shaping businesses' decisions on investment projects vary enormously among the countries and country groups analysed. Our approach and model specification are based on the assumption that these differences can influence the efficiency of price signals for energy saving behavior. The rule of law, corruption control, and the quality of economic policy and of government are important characteristics of the investment climate. We believe they provide additional stimuli to reduce energy consumption beyond price-regulating instruments (e.g., taxes, subsidies, green payments). With weak property rights, poor regulatory policy, or high corruption, investors face additional risks. If the quality of general economic institutions is very low, the implementation of investment projects, including energy saving projects, may involve high transaction costs caused by bureaucracy, such as additional approvals, permissions, regulation, corruption rents, and difficulties in financing. None of this encourages energy saving behavior. Owing to poor control and principal-agent problems, instruments that could stimulate energy saving behavior, such as emission taxes, mandatory MEPS, motivation and information campaigns, advice, energy audits, benchmarking, and financial and tax incentives, fail to work. Government policy aimed at energy saving may also be inefficient because of high transaction costs not covered by the government. In addition, not all transaction costs are monetary or explicit, and such costs are not reflected in business plans.
Our working hypothesis is that the efficiency of energy saving depends directly on the quality of institutions. We analyse the reaction of businesses to changing energy prices. If the price grows, investment in a new energy saving technology becomes profitable when the cost reduction from saving energy covers all project expenses, including transaction costs. A bad institutional environment results in higher transaction costs (often with a significant implicit component), which hampers energy saving, decreases the efficiency of policy measures, and can even freeze investment in energy saving.
Authors usually distinguish between the concepts of energy efficiency and energy saving. For instance, Oikonomou et al. (2009) state it as follows: "Energy efficiency concerns the technical ratio between the quantity of primary or final energy consumed and the maximum quantity of energy services obtainable (heating, lighting, cooling, mobility, and others), whilst end-use energy saving addresses the reduction of final energy consumption, through energy efficiency improvement or behavioural change". We believe that growing energy prices change consumer behavior along both dimensions. First, energy consumption falls through energy saving realized by behavioral change; such changes take minimal effort, if any, as they involve changing habits rather than making investments. Then, over a longer period that can last several years (Sweeney, 1984), technologies start to change as the technical ratio between energy consumed and energy services obtainable adjusts to the new price structure.
We note that rises in energy prices played a certain role in shaping the modern system of energy saving and energy efficiency support, including mandatory MEPS, the energy efficiency market, etc., which were meant to amplify the reaction of energy consumers to energy price rises. On the other hand, policy measures may dominate energy price dynamics, as happened in 2013-2015, when such measures, together with developing institutions supporting energy saving, prevented a decrease in energy efficiency in transport that might otherwise have been caused by the 60% oil price crash (Energy Efficiency Market Report, 2016). At the same time, we assume that energy price rises have not lost their significance as a driver of energy efficiency and energy saving. In any case, further growth in energy efficiency and positive changes in energy saving policy cannot happen without effective markets resting on efficient basic institutions.
Our theoretical assumptions are based on the concept of the transaction costs that energy-consuming firms bear when implementing energy saving projects. Poorly functioning markets and weak regulation lead to higher costs compared to a smoothly functioning market mechanism. Additional costs may take the form of explicit expenses caused by the time and money spent looking for partners, financing, and infrastructure connection, as well as implicit costs arising from bureaucratic bargaining and corruption. Suslov (2013) presents a model of a competitive economic sector with a Cournot market structure. It demonstrates that, in response to an energy price increase, the average price elasticity of energy consumption among the sector's firms is higher in absolute terms the lower the level of transaction costs related to implementing the energy saving projects available to compensate for the growth of the firms' energy costs.
The conception is as follows. Assume that a typical firm in an energy-consuming sector faces a rise in the initial energy price pE by ΔpE. All n firms of this sector, which are assumed symmetrical, have access to an energy saving project that reduces the initial level of energy use E by ΔE and requires expenditure ΔC of a non-energy factor with price pc. As we are interested in the substitution effect itself, we simply assume there is no income effect or, equivalently, that we consider a conditional demand function for the energy factor and that the firms' volume of output does not change whether or not they take on the project. However, implementation of the project may incur additional transaction costs TC.
To decide whether to implement the project, the manager of a firm must compare the firm's costs in the two cases, that is, choose

min{ (pE + ΔpE)·E ; (pE + ΔpE)·(E − ΔE) + pc·ΔC + TC },

where the first term is the energy cost if the project is refused and the second is the cost if it is accepted and implemented. Thus, the project is implemented if

pc·ΔC + TC < (pE + ΔpE)·ΔE.

If TC is low, that is, if the level of transaction costs related to the project is small, the project will be implemented by all firms; if it is high, the project will be rejected as impractical.
To simplify this line of reasoning, assume that TC takes one of only two values: a low level TC_L, which makes the project profitable, and a high level TC_H, which leads to rejection. Suppose further that, in the given economic circumstances, k of the n firms in the sector face low transaction costs and consequently implement the project, while the remaining n − k face high costs and reject it. Then total energy use in the sector falls by k·ΔE, and the elasticity of the conditional energy demand function with respect to the energy price is

ε = − [ (k·ΔE) / (n·E) ] / (ΔpE / pE).

Its absolute value is clearly higher the larger the ratio k/n, which can also be read as the probability that a firm encounters low transaction costs, prob = k/n. This value depends, broadly, on the investment climate in the economy: the better it is, the less likely an economic agent is to face a high level of transaction costs. In effect, we are talking about the quality of the economic institutions that determine the bureaucratic burden on firms, the adequacy of laws and their enforcement, access to financing, and the development of infrastructure and information systems. Faults in the institutional environment create barriers for business and facilitate corruption and the shadow economy. What is more, weak institutions dull the incentives for energy saving behavior because of control and moral hazard problems. We propose that the indicated dependence between the price elasticity of energy consumption and the quality of the institutional environment is to some degree characteristic of most sectors of the economy, as long as they take part in market relations and their firms are therefore sensitive to price changes. A special case is the energy production and processing sector, where the income effect of energy price growth may be positive and lead to an increase in supply. However, since the extraction and processing of energy resources is itself highly energy intensive, the substitution effect will also be strong there. Reducing the costs caused by higher energy expenditure will require substantial investment and hence large investment projects, which may run into institutional barriers and related informational, infrastructural, and financial hurdles.
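A minimal simulation of this toy model, with all numerical values chosen purely for illustration, shows how a higher probability of facing low transaction costs (a proxy for better institutions) raises the absolute price elasticity:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000                  # firms in the sector (illustrative)
E, dE = 100.0, 10.0       # energy use and project saving per firm
pE, dpE = 1.0, 0.2        # initial energy price and its rise
pc, dC = 1.0, 1.5         # price and quantity of the non-energy input
TC_L, TC_H = 0.1, 20.0    # low/high transaction costs

for prob_low in (0.2, 0.5, 0.9):  # quality of institutions, low to high
    tc = np.where(rng.random(n) < prob_low, TC_L, TC_H)
    # Project implemented if pc*dC + TC < (pE + dpE)*dE
    k = np.sum(pc * dC + tc < (pE + dpE) * dE)
    elasticity = -(k * dE / (n * E)) / (dpE / pE)
    print(f"prob_low={prob_low:.1f}  k/n={k/n:.2f}  elasticity={elasticity:.3f}")
```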
We believe that the administrative sector also seeks to reduce costs insofar as the budget constraints of its organizations are hard. The existence of 'soft' budget constraints, meaning that the state is willing to cover the rising costs of budgetary organizations, is itself an institutional phenomenon characteristic of economies with bad institutions and unstable financial systems. In such a case, the impact of price shocks on energy cost reduction will be smaller than in economies with stable financial systems.
Specification
We considered two models, one for energy consumption in the production sphere as a whole and another for energy consumption in the industrial sector, using the common specification

ln e_it = α_i + β1·DISTE_i − (β2 + β3·INST_it)·ln p̃E_it + u_it,   (2)

where e is energy intensity, p̃E is the relative energy price, and INST is the institutional strength index. The combined influence of the real energy price and institutions is captured by the interaction term INST_it·ln p̃E_it, used after Polterovich and Popov (2003). If it proves significant, one can conclude that institutions affect energy intensity through the price mechanism. On the other hand, a simple transformation of (2) shows that the price elasticity of energy intensity for a particular economy is −(β2 + β3·INST_it), a function of the institutional strength index with β2, β3 > 0, in line with our arguments. Thus, direct calculation of the elasticity from the estimated model parameters clarifies the use of logarithms even though the price variables are relative.
The variable INST is convenient because it is negative for economies with poor institutions, with a larger absolute value in worse cases, and positive for economies with effective institutions. Thus, the absolute value of the elasticity is smaller than the coefficient β2 where institutions are poor and greater than it where institutions are effective. Through this variable, the reaction of energy consumers to energy price rises is shaped by both market and government institutions, because it combines the indices of government effectiveness and control of corruption. The former index relates to the quality of management at the level of government, while the latter mostly characterizes the market, and both reflect the interaction of government and business. In our opinion, this approach is also supported by the results obtained by other scholars, for instance Fredriksson et al. (2004).
Hence, if the price variable and the interaction term are sufficiently significant, the price elasticity of energy intensity for each economy at any particular moment will depend on the quality of institutions. The concept of 'energy intensity elasticity' differs from the 'price elasticity of energy demand' in that it excludes income effects and measures only substitution effects, which describes the results of energy saving much better.
RESULTS AND DISCUSSION
To calculate the price elasticity of energy consumption, we considered 69 countries over the period 2002-2010 that differ in their level of economic and social development. The number of countries in the sample is limited by the available statistics, particularly the data on relative energy prices. Based on the country data, we estimated the model parameters (equation 2) and calculated the price elasticity of energy consumption as a function of the institutional factor, both for the production sphere (model 1) and for industry (model 2). Based on the estimated coefficients of model 1, we calculated the price elasticity of energy for every year using the value of the institutional factor for each country.
The Hausman test for the fixed and random effects models, whose null hypothesis is that the random effects model is preferable to the fixed effects model (Greene, 2008), helped us choose the most appropriate specification. The test checks whether the unique errors are correlated with the regressors; the null hypothesis is that they are not. We reject the null hypothesis and therefore consider the fixed effects model preferable. To verify that the estimates are unbiased and consistent beyond the base specification, we also performed dynamic panel data analysis to remove unobserved heterogeneity by modelling a partial adjustment mechanism (Durlauf et al., 2009).
Cross-sectional dependence is quite common in macro panels with long time series (over 20-30 years); nevertheless, we applied the Pesaran CD (cross-sectional dependence) test to our micro panel (69 countries and 9 years). This test identifies whether the residuals are correlated across entities (Hoechle, 2007). Based on the estimates, we could not reject the hypothesis that the residuals are uncorrelated, which supports the results. To address groupwise heteroskedasticity, we used the Huber-White sandwich estimator, which gives heteroskedasticity-robust standard errors.
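The estimation workflow can be sketched with the Python linearmodels package; this is a minimal illustration using the variable names introduced above, not the authors' actual code, and the Hausman statistic is computed in its textbook form.

```python
import numpy as np
from linearmodels.panel import PanelOLS, RandomEffects

# df: panel data frame from the earlier sketch, with columns
# ln_e (log energy intensity), ln_pE (log relative energy price), INST
df = df.set_index(["country", "year"])
df["ln_pE_x_INST"] = df["ln_pE"] * df["INST"]
exog = df[["ln_pE", "ln_pE_x_INST"]]

fe = PanelOLS(df["ln_e"], exog, entity_effects=True).fit(cov_type="robust")
re = RandomEffects(df["ln_e"], exog).fit()

# Textbook Hausman statistic: (b_fe - b_re)' [V_fe - V_re]^{-1} (b_fe - b_re)
b = fe.params - re.params
V = fe.cov - re.cov
hausman = float(b.T @ np.linalg.inv(V) @ b)
print(fe.summary)
print("Hausman chi2 =", hausman)
```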
The results obtained from the regression equation are given in Table 2.
Thus, we showed that, according to equation (2), the relative-price elasticity of energy intensity equals −(0.303 + 0.0302×INST) and depends on the quality of institutions in the economy considered, namely on the GE (Government Effectiveness) and CC (Control of Corruption) indices. The higher the indices, the better the institutions and the higher the absolute price elasticity; hence, if the price grows, energy intensity decreases more significantly. In other words, the higher the INST value, the more effective price signals are for the economy's energy saving behavior. In addition, a rise in the temperature excursion between January and July by 0.1 results in growth of energy intensity by 0.003%.
For model 2, the industrial sector, we determined that the price elasticity of energy consumption also depends on the institutional factor, and its impact is greater than in the model for the economy's production sphere. Based on the obtained results, the elasticity of energy consumption with respect to the relative price of energy in the industrial sector is −(0.727 + 0.0383×INST). Compared with the production sphere, the coefficient on the institutional interaction differs only slightly, whereas the elasticity components that do not depend on the quality of institutions are quite distinct: without the institutional factor, industrial energy consumption is much more sensitive to changes in the relative energy price than the production sector as a whole, with coefficients of 0.727 and 0.303, respectively.
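Given these point estimates, per-country elasticities are a one-line computation; the INST values below are invented for illustration.

```python
import pandas as pd

inst = pd.Series({"CountryA": 3.1, "CountryB": -1.2})  # hypothetical INST values

elasticity_prod = -(0.303 + 0.0302 * inst)  # production sphere, model 1
elasticity_ind = -(0.727 + 0.0383 * inst)   # industrial sector, model 2
print(pd.DataFrame({"production": elasticity_prod, "industry": elasticity_ind}))
```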
Appendix 1 shows the price elasticity of energy intensity with the institutional factor, obtained from the panel data for each of the 69 countries. The results for different groups of countries in Table 3 demonstrate that the absolute elasticity for the OECD countries is higher than that for the CIS, Eastern European, and former socialist economies. In addition, the absolute elasticity for the OECD is higher than the world level, indicating that the price factor is a very efficient instrument for decreasing energy intensity there. In other words, government measures that increase energy prices for industry (through taxes or penalties) are more effective in the OECD economies than the world average. Given that our elasticity calculation incorporates the institutional factor, we conclude that the high effectiveness of energy policy in these economies is due to the high quality of their institutions.
The price elasticities for industry, with the institutional factor taken into account, are higher across the countries considered than those reported for the production sphere (Table 4). In our view, this can be explained by the higher sensitivity of agents in this sector to price changes, as well as by the greater uniformity of industrial producers compared with the broader production sphere in the countries under consideration (Table 4). Appendix 2 presents the elasticity calculations for every country's production sector from 2002 to 2010.
To account for the AR(1) process and to address endogeneity and unobserved heterogeneity, we also estimated dynamic panel data models. Both the energy price and the institutional factor are highly significant; the results are presented in Table 3. The significance of the institutional factor in both the short and the long run, for the production sphere and for industry, testifies to the stability of the results and thus supports our original supposition that it must be taken into account when assessing the contribution of the price factor to the regulation of energy consumption.
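As a simplified stand-in for the Arellano-Bond GMM estimator, the sketch below estimates a first-differenced dynamic equation by instrumental variables, instrumenting the lagged differenced dependent variable with its second lag in levels (the Anderson-Hsiao estimator). It assumes the panel data frame from the earlier sketches.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# df: MultiIndex (country, year) with ln_e, ln_pE, ln_pE_x_INST as above
g = df.groupby(level="country")
d = pd.DataFrame({
    "d_ln_e": g["ln_e"].diff(),
    "d_ln_e_lag": g["ln_e"].diff().groupby(level="country").shift(1),
    "d_ln_pE": g["ln_pE"].diff(),
    "d_inter": g["ln_pE_x_INST"].diff(),
    "ln_e_lag2": g["ln_e"].shift(2),   # instrument in levels
}).dropna()

ah = IV2SLS(dependent=d["d_ln_e"],
            exog=d[["d_ln_pE", "d_inter"]],
            endog=d["d_ln_e_lag"],
            instruments=d["ln_e_lag2"]).fit()
print(ah.summary)
```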
CONCLUSION
Using dynamic panel data and a model with fixed individual effects, we show that the quality of market institutions influences the level of energy intensity in both the short-run and the long-run perspective. The significance of the factors obtained in the panel data analysis allowed us to calculate the energy intensity elasticity for 69 economies over 2002-2010. The estimates are unbiased and consistent, as the explanatory factors are highly significant both in the fixed effects model and in the dynamic panel data model.
Empirically, we show that energy intensity is influenced not only by the price factor, but also by the quality of institutions, namely government effectiveness and control of corruption. High quality of institutions increases the sensitivity of energy intensity to energy price changes, which boosts the efficiency of price-based instruments.
The price elasticities of energy intensity calculated with our model for the OECD economies turned out to be the highest in absolute value, which indicates the greatest sensitivity of energy intensity to price rises and improves the effectiveness of regulation measures such as emission taxes (Table 5). During 2002-2010, the average elasticity for the CIS economies was 40% lower in absolute value than that for the OECD economies, with the Eastern European and Baltic economies 20% behind the developed economies. We believe this reflects weaker incentives for business sector agents to decrease energy consumption in the CIS, Eastern European, and Baltic economies compared to the developed countries. Regulation aimed at intensifying the use of energy saving technologies in those economies was not effective enough, owing to the low sensitivity of energy consumption to energy price changes.
We believe our analysis provides useful insight for energy saving policy decisions. Policies that impose costs (in the form of taxation, for example) appear not to be effective when they are undermined by market factors such as the quality of institutions. Similar conclusions were stated in Gillingham et al. (2009), where the authors emphasized the importance of market mechanisms in providing economic agents with stimuli for energy saving behavior.
Our analysis of energy consumption trends from 1991 to 2010 for countries and country groups shows that in periods of growing energy prices, energy saving efforts intensified, while in times of lower energy prices such efforts slackened without stopping completely. This corresponds well with the statement that special policy measures and institutional development have been the key drivers of efficiency improvements in recent years (Energy Efficiency Market Report, 2016).
The results seem particularly important for economies with weaker institutions, which must take the institutional factor into account and be aware of the lower efficiency of measures stimulating energy saving behavior if they try to decrease energy intensity without improving the quality of their institutions.
ACKNOWLEDGEMENT
The research is supported by the grant No. 19-010-00731 "Complex Analysis of Russian Regions' Heterogeneity and Assessment of its Impact on Socio Economic Development" from the Russian Foundation for Basic Research.
Towards non-perturbative matching of three/four-flavor Wilson coefficients with a position-space procedure
We propose a strategy to non-perturbatively match the Wilson coefficients in the three- and four-flavor theories, which uses two-point Green's functions of the corresponding four-quark operators at long distances. The idea is refined by combining it with the spherical averaging technique, which enables us to convert two-point functions calculated on the lattice into continuous functions of the distance $|x-y|$ between two operators. We also show the result of an exploratory calculation of two-point functions of the $\Delta S=1$ operators $Q_7$ and $Q_8$ that are in the $(8_L,8_R)$ representation of ${\rm SU(3)}_L\times{\rm SU(3)}_R$ and mix with each other.
Introduction
Lattice calculations of weak matrix elements play an important role in searching for physics beyond the Standard Model. Weak-boson exchanges in low-energy processes can be reduced to an effective weak Hamiltonian composed of four-quark operators by integrating out the weak bosons and the quarks heavier than the renormalization scale µ. The information at high energies > µ is then expressed in terms of the Wilson coefficients, the coefficients of the four-quark operators in the weak Hamiltonian.
For many processes, the corresponding Wilson coefficients are known to one- or two-loop level in perturbative QCD, both in the MS and RI/(S)MOM schemes. Therefore the four-quark operators need to be renormalized at the same renormalization scale and in the same scheme as the Wilson coefficients to construct the proper weak Hamiltonian. The RI/(S)MOM scheme is more straightforward than the MS scheme for actual lattice calculations.
We also need to match the number of flavors in the renormalization scheme of the Wilson coefficients and of the four-quark operators. The perturbative calculation of the Wilson coefficients in the three-flavor theory requires a conversion from those in the four-flavor theory at an energy scale below the charm threshold m_c ≈ 1.3 GeV, where perturbation theory is quite ambiguous. While the difference between the three- and four-flavor Wilson coefficients is not significant if the form of the four-quark operators is the same in the three- and four-flavor theories and the sea charm effect is small, the issue is more serious when the charm quark can appear in the four-quark operators of the four-flavor theory. In such a case, it is preferable to define the four-quark operators in the four- or five-flavor theory so that the three-flavor Wilson coefficients are not needed. However, if the lattice ensemble on which the matrix elements are calculated is too coarse (a^-1 of about 2 GeV or less) to introduce the charm quark, a non-perturbative matching of the Wilson coefficients between the three- and four-flavor theories is needed. The RBC and UKQCD collaborations face this issue in the calculation of direct CP-violating effects in K → ππ decays: their original result contained a 12% systematic uncertainty from the perturbative matching of the Wilson coefficients [1].
In this work, we formulate a strategy to non-perturbatively match the three- and four-flavor Wilson coefficients and perform some exploratory calculations. As explained in Section 2, the strategy uses the two-point functions of four-quark operators, which are gauge invariant and prevent mixing with gauge-noninvariant operators and operators that are forbidden by the equations of motion. In order to take the continuum limit of the matching matrix accurately, we propose to take the spherical average of the two-point functions [2], briefly explained in Section 3. Some exploratory results for the spherical average of two-point functions of the three-flavor operators in the (8_L, 8_R) representation are shown in Section 4.
Non-perturbative three/four-flavor matching of Wilson coefficients
We start with the weak Hamiltonian

H_W = Σ_i w_i^{S_{n_f}}(µ) O_i^{S_{n_f}}(µ) = (w^{S_{n_f}})^T O^{S_{n_f}},   (2.1)

where µ denotes the renormalization scale in the scheme indicated by the superscript S_{n_f}. The number of flavors n_f in the subscript of S is the number of sea quarks, while the n_f in the subscript of O and w is the number of valence quarks, which characterizes the concrete form of the operators. For simplicity, we use vector and matrix notation, as on the RHS of Eq. (2.1), omitting the operator index i; the superscript T denotes transposition of the vector or matrix. The weak Hamiltonian is independent of n_f in the sense that matrix elements calculated in QCD between states involving an energy scale E do not change when n_f is increased above n_f^eff, which is chosen so that the quark flavors indexed by n > n_f^eff have masses m >> E. If we calculate weak matrix elements with three-flavor operators on 2+1-flavor QCD ensembles, we need the Wilson coefficients w_3^{S_3}(µ) in the three-flavor theory to obtain the proper weak Hamiltonian. However, a perturbative calculation of w_3^{S_3}(µ) requires matching from w_4^{S_4}(µ'), performed below the charm threshold, which induces a large systematic error (about 12%) [1]. Therefore a non-perturbative matching in a non-perturbative scheme is desired. The RI/(S)MOM scheme is not suitable, since it cannot prevent mixing with irrelevant operators allowed by gauge fixing and with contact terms, which may become more important at low scales. A position-space scheme X is a reasonable scheme in which to implement the non-perturbative matching, since it prevents significant mixing with gauge-noninvariant operators and operators that are forbidden by the equations of motion.
We consider the equality of the two-point functions in the three- and four-flavor theories, Eq. (2.3), which is valid at long distances 1/|x − y| << m_c. From it we obtain the matching relation for the Wilson coefficients, Eq. (2.4), in which we define the matrix of two-point functions and introduce renormalization matrices satisfying Eq. (2.6). If the sea charm quark is neglected, the relation becomes simpler, so that renormalization matrices in the position-space scheme are not needed. In addition, if the three-flavor operators are the same as the four-flavor operators, O_3 = O_4, i.e. if the valence charm quark cannot appear in the operators, the three- and four-flavor Wilson coefficients are identical as long as the sea charm quark is neglected. On the other hand, if the charm quark is present in the four-flavor operators, as in the case of K → ππ decays, the matching of the Wilson coefficients is needed whenever the weak matrix elements are calculated with the three-flavor operators. Note that we actually need the lattice Wilson coefficients w_3^lat, which can be obtained from Eq. (2.7) by simply dropping the multiplication by (Z^{S_3/lat}_{O_3})^{-1} from its RHS, removing any reference to the scheme S_3.
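The linear algebra of the matching step can be illustrated as follows. The concrete matching condition used here, equating mixed two-point functions of the three- and four-flavor operator bases at a fixed long distance, is our simplifying assumption for illustration; the precise relations are those of Eqs. (2.3)-(2.7), and all numbers are invented.

```python
import numpy as np

# Hypothetical inputs at a fixed long distance |x|:
# G33[i,j] = <O3_i(x) O3_j(0)>, G43[i,j] = <O4_i(x) O3_j(0)>
G33 = np.array([[2.0e-3, 1.1e-4], [1.1e-4, 5.0e-4]])   # illustrative numbers
G43 = np.array([[1.9e-3, 1.0e-4], [1.2e-4, 4.8e-4]])
w4 = np.array([0.30, -0.05])                            # four-flavor coefficients

# Assumed matching condition: w3^T G33 = w4^T G43, solved for w3
w3 = np.linalg.solve(G33.T, G43.T @ w4)
print("w3 =", w3)
```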
We will choose S = S' = RI/SMOM, in which the Wilson coefficients in the four-flavor theory can be calculated perturbatively. To obtain the Wilson coefficients in the three-flavor theory, we need to calculate the four-flavor renormalization matrix Z(1/a; x − y). In the following sections, we fix y = 0 for simplicity and present our strategy to calculate the two-point functions with controlled discretization errors, together with results from a test calculation.
Spherical average of two-point functions
The three-flavor Wilson coefficients calculated with the strategy proposed in the previous section will depend on x, the relative distance between the two operators in the correlators. In an ideal calculation with no discretization errors and a sufficiently large m_c, this x dependence should be absent. In a practical calculation, x dependence arises because the charm quark is insufficiently massive and from finite cutoff effects. Although the distance scale satisfying 1/|x| << m_c is much longer than recently used lattice spacings, discretization effects on correlators at 1/|x| ≈ 400 MeV are more than 10%, depending on the lattice spacing, and much larger than the statistical errors. Thus, it is preferable to take the continuum limit to avoid such ambiguity. However, in order to take the continuum limit, we need to calculate correlators at a fixed physical distance for each lattice spacing, while correlators on the lattice have values only at discrete points that depend on the lattice spacing.
We apply the spherical averaging technique [2] to evaluate correlators at any physical distance as well as to reduce discretization errors. While correlators on the lattice violate O(4) symmetry and depend on the lattice points in a complicated way, this technique yields correlators that depend only on the distance |x|, as if they had O(4) symmetry. There are two steps in evaluating sphere-averaged correlators from the lattice correlators f_{a,n}: • Interpolation. In this step, we estimate the values of the correlators at any physical location x.
In the one-dimensional case, linear interpolation,

f_a(x) = (1 − r) f_{a,n} + r f_{a,n+1},   n = ⌊x/a⌋,  r = x/a − n,

is easily verified to cancel the O(a) discretization error arising from the Taylor expansions of f_{a,n} and f_{a,n+1} around x. In four dimensions, the interpolation generalizes to

f_a(x) = Σ_{s ∈ {0,1}^4} [ Π_µ ( (1 − s_µ)(1 − r_µ) + s_µ r_µ ) ] f_{a, n+s},

where n_µ = ⌊x_µ/a⌋, r_µ = x_µ/a − n_µ, and the shift n + s runs over the 16 corners of the hypercube containing x. It is also easy to verify that this interpolation is free from O(a) errors.
• Average over spheres. While the interpolated correlators in four dimensions have values at any physical location, they still violate rotational symmetry and depend on x in a complicated way. To obtain correlators as continuous functions of the distance |x| alone, we average them over the four-dimensional sphere U_{|x|} of radius |x| (Eq. (3.4)).
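Both steps can be sketched numerically with multilinear interpolation and Monte Carlo sampling of the three-sphere; the lattice data below are synthetic stand-ins for measured correlators.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 16
lat = rng.normal(size=(L, L, L, L))   # synthetic lattice correlator f_{a,n}

def interp4d(f, x):
    """Multilinear interpolation of f at physical point x (lattice units a=1)."""
    n = np.floor(x).astype(int)
    r = x - n
    val = 0.0
    for s in np.ndindex(2, 2, 2, 2):
        s = np.array(s)
        w = np.prod((1 - s) * (1 - r) + s * r)    # corner weight
        idx = tuple((n + s) % f.shape[0])          # periodic boundary
        val += w * f[idx]
    return val

def sphere_average(f, radius, nsamples=2000):
    """Average the interpolated correlator over a 4D sphere of given radius."""
    v = rng.normal(size=(nsamples, 4))
    v *= radius / np.linalg.norm(v, axis=1, keepdims=True)  # uniform on S^3
    return np.mean([interp4d(f, x % f.shape[0]) for x in v])

print(sphere_average(lat, 3.0))
```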
Exploratory calculation of two-point functions of four-quark operators
In this section, we show the result of a preliminary calculation of two-point functions of unrenormalized ∆S = 1 four-quark operators Q_i, Q_j in the three-flavor theory. In general, the calculation of these two-point functions requires all-to-all quark propagators, since there are diagrams containing a quark loop at the sink point. Thus, there may be power divergences from loop diagrams, which need to be eliminated before renormalizing the operators. Among the ∆S = 1 operators relevant for the K → ππ matrix elements, the operators Q_7 and Q_8 (whose definitions involve the electric charge e_q of a quark q, with the RHSs summed over the Lorentz index µ and the color indices α and β) enable us to investigate the simplest case of a mixing correlator matrix, since only these two operators belong to the (8_L, 8_R) representation of SU(3)_L × SU(3)_R symmetry [3,4].
In this article, we show the result for the contribution of the fully connected diagrams, in which all the quark propagators connect the source and sink points and there is no power divergence. (These are the only non-zero diagrams if we use the I = 3/2 components of O_7 and O_8.) We use 2+1-flavor domain-wall ensembles with three lattice cutoffs a^-1 ranging from 1.79 GeV to 3.15 GeV, generated by the RBC and UKQCD collaborations; pion masses range from 300 MeV to 370 MeV. Figure 1 shows the results for two-point functions of the (8_L, 8_R) operators. Since the correlator matrix is real and symmetric, we take the average of the 78 and 87 elements, shown as the 78 element (crosses) in the figure. The upper panel shows the results for G_ij(x) before taking the spherical average. Here, we distinguish lattice points that are not equivalent under 90° rotations or parity inversion in the four-dimensional hypercubic group; the results are averaged over sets of lattice points related by hypercubic transformations. The ambiguity due to the violation of O(4) symmetry can amount to more than a factor of 10 at |x| = 3a, since the numerical results there read G_77(0, 0, 0, 3a) = 1.12(1) × 10^-2 GeV^12 and G_77(0, a, 2a, 2a) = 3.41(3) × 10^-4 GeV^12. From the same observation, the ambiguity at |x| = 6a is about a factor of 3. The lower panel shows the results for the spherical average G_ij(|x|); the discretization errors in the spherical average appear to be much smaller than those in G_ij(x). Figure 2 shows the results for the spherical average calculated on finer lattices, a^-1 = 2.38 GeV (left panel) and a^-1 = 3.15 GeV (right panel). As mentioned in the previous section, the spherical averaging technique enables us to evaluate the values of correlators at any physical distance. Therefore, the matching matrix between the Wilson coefficients in the three- and four-flavor theories, Eq. (2.4) or (2.7), which is calculated from two-point functions, can easily be extrapolated to the continuum limit at any physical distance |x|.
Summary
We have formulated a non-perturbative strategy to match the three- and four-flavor Wilson coefficients of ∆S = 1 four-quark operators. We propose to use two-point Green's functions of four-quark operators and their spherical average to take the continuum limit of the matching matrix. As Eq. (2.7) indicates, we also need the renormalization matrix of the four-flavor operators in a scheme in which a perturbative calculation is available. The four-flavor operators, as well as the two-point functions, will be calculated in the near future.
Figure 2: Same as the lower panel of Figure 1, but for the finer lattices, a^{-1} = 2.38 GeV (left panel) and a^{-1} = 3.15 GeV (right panel).
Figure 1: Results for G_77(x) (circles), G_88(x) (squares) and G_78(x) (crosses) calculated on the coarsest lattice with a^{-1} = 1.79 GeV, before (upper panel) and after (lower panel) taking the spherical average.
Knowledge, Perceptions and Practices on Antiretroviral Therapy in Farming Communities in Ghana: A Study of HIV Positive Women
Low levels of knowledge of antiretroviral therapy (ART) and Prevention of Mother-To-Child-Transmission (PMTCT) among persons living with HIV present an unwanted window for transmission within the general population. The purpose of this study is to assess the level of knowledge, attitudes and perceptions of HIV positive women on antiretroviral therapy (ART) and Prevention of Mother-To-Child-Transmission (PMTCT). The study surveyed 211 HIV positive women from ART centres in two districts in the Ashanti region of Ghana. Data were collected through interviews using structured questionnaires and focus group discussions using interview guides. Qualitative and quantitative techniques were used to analyze the data. The study revealed that about 15% of the women exhibited no knowledge about the possibility of transmission of HIV from mother to child, whilst 36% had no knowledge of the mode of MTCT of HIV. Those who had knowledge of MTCT indicated that this could be intrauterine (88%), at delivery (69%) and through breastfeeding (82%). Mothers without comprehensive knowledge of ART were 2.5 times more likely to default ART (OR=2.5, p=0.002). Comprehensive knowledge was positively influenced by a high education level (OR=1.9; p=0.003). Social marketing campaigns should be developed and targeted at improving women's literacy on their health issues and getting more women to test for HIV in order to incorporate them into PMTCT programmes. Further research, however, needs to be conducted to ascertain the facility- and community-based factors that influence the women's knowledge of ART and PMTCT.
Introduction
HIV/AIDS remains a major cause of death worldwide, with the majority of cases coming from sub-Saharan Africa. AIDS has killed more than 25 million people since 1981, and an estimated 33.2 million (31.4 million - 35.3 million) people are living with HIV/AIDS worldwide, with 22.5 million of them from sub-Saharan Africa. In 2007, 2.1 million HIV-related deaths were recorded, with 1.6 million (76%) from sub-Saharan Africa [13].
In Ghana, HIV prevalence among adults in 2010 was 1.5%. An estimated 267,069 persons, made up of 95,206 males and 126,735 females, were living with HIV as at 2010, and the prevalence of HIV/AIDS among antenatal clients was 2.0% [14]. Prevalence of HIV among ANC women is therefore 0.5% higher than prevalence among the adult population, and the estimated number of pregnant women living with HIV in 2009 was 13,000. Currently, the Ashanti and Eastern regions are home to the greatest percentages of HIV positive people, with prevalences of 3.2% and 3.0% respectively [15].
The HIV epidemic is becoming increasingly feminized, with nearly 50% of people living with HIV being female globally as at 2010 [20]. HIV remains the leading cause of death among women of reproductive age, and HIV infection among children has mainly been through Mother-To-Child-Transmission (MTCT). However, the most effective way of preventing MTCT of HIV is to prevent infection in women of reproductive age.
As of December 2009, HIV testing and counselling services were accessed by 53% of all pregnant women in Ghana, 74% of whom were tested for HIV and given their results. The HIV prevalence among those tested was 1.7%, of whom 55% received antiretroviral drugs to prevent vertical transmission. The comparative proportion of babies born to HIV infected women who received antiretroviral drugs for prophylaxis was 30% [8].
The rapid incidence and fatality of HIV/AIDS globally, with its greatest impact in sub-Saharan Africa, has been a growing concern of world leaders and stakeholders in health, who continuously seek a remedy to this canker. In the light of this, there have been international and national efforts to improve care and support for PLHIV, including HIV Testing and Counselling (HTC) services, establishing ART centres and PMTCT services. Though awareness of HIV and AIDS has been high since 2003, when 98% of women and 99% of men were reportedly aware of HIV, comprehensive knowledge of HIV and AIDS, appropriate prevention and non-stigmatizing behaviour have been lagging behind [5]. As at 2007, 25.1% of young women and 33% of young men aged 15-24 years had comprehensive knowledge (i.e. correctly identified ways of transmitting HIV and rejected misconceptions about HIV transmission) of HIV and AIDS. In 2008, the Ghana Demographic and Health Survey (GDHS) showed that only 28.3% of female respondents aged 15-24 and 34.2% of men had comprehensive knowledge about HIV and AIDS. There has thus been little progress along this front [8].
Patients' knowledge, attitudes and practices on HIV/AIDS, PMTCT and ART influence their motivation and uptake of ARVs for PMTCT. A good level of understanding about HIV by the patient, a belief that ART is effective and prolongs life, and recognition that poor adherence may result in viral resistance and treatment failure all impact favourably upon a patient's ability to adhere. Conversely, a lack of interest in becoming knowledgeable about HIV and a belief that ART may in fact cause harm adversely affect adherence [18]. A study in Uganda to find out the barriers to accessing Highly Active Antiretroviral Therapy (HAART) by HIV positive women found that women who had not enrolled in the HAART-Plus programme had a remarkably lower level of knowledge of HIV/AIDS and HAART compared with those who had enrolled in the programme [4]. Other studies on the continent also found mothers' knowledge of PMTCT to be low [1], [7]. One's knowledge of HIV, ART and PMTCT is, however, influenced by an interplay of socio-economic and other cultural factors, including clients' educational level and marital status. A lower level of general education and poorer literacy may impact negatively on some patients' ability to adhere, while a higher level of education has a positive impact [2]. The purpose of this study is to assess the level of knowledge, attitudes and perceptions of clients on ART and PMTCT, and to determine the extent of influence of clients' knowledge level on accessing ART. Low levels of knowledge of HIV status among persons living with HIV present an unwanted window for transmission within the general population, in addition to sex with female sex workers, their clients, and non-paying partners [8].
Study Design
The study was a descriptive cross-sectional design. The methods were both qualitative and quantitative, and data were collected at the individual and facility levels.
Study Area
The study was conducted at the ART centres in two farming towns in the Ashanti Region, Ejura and Nyinahini. These are two farming-dominated towns in the Ejura-Sekyedumase and Atwima Mponua districts respectively. The agriculture sector in the Ejura-Sekyedumase District dominates all the other sectors of the economy in terms of employment, as is typical of a Ghanaian setting. It employs about 68.2% of the population, which is above the national rate of 60%. With respect to HIV prevalence, the Ashanti Region recorded the second highest in the country in 2011 (3.0%). Routine HIV testing and counselling are offered during antenatal care visits for pregnant mothers at both ART centres.
Sampling and Sample Size
The sample was selected in two (2) stages. Two ART centres were purposively selected from two farming communities in the Ashanti Region. These were the Nyinahin Hospital and the Ejura District Hospital. Systematic random sampling was used to select respondents for exit interviews and FGDs at the ART centres. Administrative records, which included the pharmacy refill register and medical consultation appointment visits, were also used to get information on respondents. A total of 211 respondents were involved in the study.
Data Collection and Tools
The data collection technique for the quantitative method was interviews, and the tool employed was structured questionnaires (open-ended and closed). Qualitative data were obtained using semi-structured interviews, focus group discussions (FGDs) and interviews with key informants, using tape recorders and interview guides as data collection tools. Interviews and the FGDs were carried out in quiet and discreet locations in a vacant room in the hospital's outpatient department. The interviews were conducted and audio-taped in the local language. Tapes were transcribed verbatim in Twi and then back-translated into English. Spot checks of interview and FGD transcripts and translations were regularly conducted to ensure the completeness of the transcription and the accuracy of the translation. Questionnaires and interview guides were pre-tested to check for clarity, consistency and acceptability of the questions to respondents. Following this, the necessary corrections were made and the questionnaires finalized for the actual field work.
Statistical Analysis
All questionnaires and interview results from the field were checked for completeness and internal errors. Questionnaires were then sorted, numbered and kept in files labelled per facility from which the participants were interviewed.
Responses to the various questions testing knowledge were coded as yes, no or don't know. The general knowledge level was computed from a respondent's total correct responses to the various items posed to test knowledge. Respondents who gave all correct responses were grouped as having "adequate knowledge", and vice versa.
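As an illustration of this scoring rule, the sketch below (Python, with hypothetical item names and codings; the actual questionnaire items differ) marks a respondent as having adequate knowledge only when every knowledge item is answered correctly.

```python
import pandas as pd

# Hypothetical answer key: item name -> correct coded response.
ANSWER_KEY = {
    "mtct_possible": "yes",        # HIV can pass from mother to child
    "mtct_breastfeeding": "yes",   # ...including through breastfeeding
    "hiv_bewitchment": "no",       # rejects the bewitchment misconception
}

def adequate_knowledge(df: pd.DataFrame) -> pd.Series:
    """True only when every knowledge item is answered correctly."""
    correct = pd.DataFrame(
        {item: df[item].str.strip().str.lower() == answer
         for item, answer in ANSWER_KEY.items()}
    )
    return correct.all(axis=1)
```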
Bivariate associations and 95% confidence intervals were used to assess the influence of certain socio-demographic characteristics on the knowledge level of the women, using STATA 11.
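A rough equivalent of this bivariate analysis in Python rather than STATA (a sketch only; the logistic fit below returns the odds ratio, its 95% CI and p-value for one exposure at a time, and the column names are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def odds_ratio(df, exposure, outcome):
    """Fit outcome ~ exposure (both coded 0/1) by logistic regression;
    return the odds ratio, its 95% CI and the p-value."""
    X = sm.add_constant(df[[exposure]].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    or_point = np.exp(fit.params[exposure])
    ci_low, ci_high = np.exp(fit.conf_int().loc[exposure])
    return or_point, (ci_low, ci_high), fit.pvalues[exposure]

# e.g. odds_ratio(data, "formal_education", "comprehensive_knowledge")
```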
Ethical Consideration
Ethical clearance for the study was obtained from the Committee on Human Research, Publications and Ethics (CHRPE) of the Kwame Nkrumah University of Science and Technology (KNUST) and Komfo Anokye Teaching Hospital (KATH). The participants' capacity to consent was considered. There was full disclosure and discussion of relevant information and questions. Translators were used for participants who could not read.
Background Characteristics
The research was conducted using 211 HIV positive women from the ART centres at Ejura and Nyinahini in the Ashanti Region. One hundred and twenty-one of the respondents, representing 57%, were from Nyinahini, and 90 (43%) were from the Ejura ART centre.
More than 50% had been on treatment for less than 24 months, with the maximum length on treatment being 156 months (figure 1). The mean and median lengths on treatment were 20 months and 21 months respectively. A regression analysis indicated a statistically significant association between one's months of being on ART and regularity at the ART centre (t=3.91, p<0.001).
The mean age of the respondents was 36 years (SD = 8), with the majority of the women aged below 35 years (55%). Forty-eight percent of the women were married and 63% were Christians. Among those married, the majority of their husbands had some form of formal education, with 22% having none. Thirteen percent of the women had schooled to the secondary level and 35% had no formal education. One hundred and fifty-three, representing 73%, were employed, and the most cited occupation was farming (70, 46%), as detailed in table 1. In general, the defaulter rate was 21%. This was inconsistent with estimates of average rates of adherence to ART in many different social and cultural settings, which range from 50% to 70% [11], [16], [18].
ART Defaulting Rate
As detailed in figure 2, the total defaulting rate among the women was 21% (45 out of 211 respondents). At Nyinahini, 28 out of the total of 121 had defaulted ART. Defaulting was higher among respondents from Nyinahini as compared to Ejura (23% vs. 19% respectively). Table 2 gives a summary of the responses of the women on their knowledge about MTCT. Seventy-two percent of them knew that HIV/AIDS could be transmitted through MTCT. About 15% of the women exhibited no knowledge about the possibility of transmission of HIV from mother to baby. The women who had knowledge of MTCT indicated that this could be intrauterine (88%), at delivery (69%) and through breastfeeding (82%). Thirty-six percent, however, had no knowledge of the mode of MTCT of HIV. Knowledge of ART and PMTCT was adequately high among the respondents. Ninety percent knew that MTCT was possible and 82% knew this could be through breastfeeding. Clients' high knowledge of PMTCT and ART as reported in this study could also be partly due to the institution of counselling as part of the programme, where new clients are taken through the benefits of adhering to ART, the problems associated with defaulting ART and issues relating to PMTCT. This was evident in their responses that ART works with optimal adherence (98%). The respondents who had achieved optimal adherence cited the need to adhere to ensure the effectiveness of the drug as a reason for always coming for ART appointments. Overall, knowledge of the cause of HIV/AIDS, modes of transmission, and importance of ART adherence was good in a study in South Africa [12].
Knowledge on PMTCT
The result was also consistent with a recent study [7], where knowledge of PMTCT was high among the women studied. In that study, the majority of the mothers knew that it was possible to reduce the risk of transmission during pregnancy (82.2%) and the breastfeeding period (71.6%); 88% knew vertical transmission is preventable and 85% knew it can be done through giving ART.
The majority (63%) of the women correctly indicated that MTCT of HIV is preventable and could be done through giving ART to the nursing mother, avoiding breastfeeding, and opting for caesarean delivery. The views of the respondents about ART were also investigated. Almost all the women (98%) in the study correctly knew that ART works effectively with optimal adherence to it. About 80% of the respondents were aware that ARTs are drugs to prolong the lives of people living with HIV as long as they do not default.
In the qualitative study, most of the respondents had good knowledge of PMTCT and how ARVs work. Their views about the benefit of ART had considerable influence on their use of the drug. Most of the women stated that the drugs make the virus weak and unable to attack their immune system. Some said the drug acts like a cup that covers the virus and prevents it from acting, so one needs to take it always to stay as they are, and one can live as long as God wants.
One client from Ejura reported: "When I was first coming I was carried like a mat. Now look at me. No one even realizes I am sick. The drug has helped me so I will never default" (never defaulted).
Another client from Nyinahini reported: "The drug cannot cure the HIV. It suppresses the virus so you always need to take it or the virus will become strong again" (never defaulted).
Although at least 72% of the respondents were aware of each of the possible causes of HIV/AIDS, some still held negative perceptions about the causes of HIV/AIDS. About 22% thought HIV is caused by bewitchment and had been going to prayer camps for spiritual intervention.
One patient reported:
"When the virus was first detected in my blood, I brought all my children and none had it. My husband also hasn't got it. So I believe I got it through spiritual means because I have never committed adultery" (defaulter).
Knowledge levels of PMTCT and ART were quite high among the respondents. One hundred and seventy-five women, constituting 83%, had adequate knowledge of PMTCT and ART. Comprehensive knowledge was measured as correctly identifying the modes of HIV transmission, the possibility of PMTCT, the modes of MTCT, the means of PMTCT and other related questions on ART, and rejecting all misconceptions about the spread of HIV. This contrasts with the GDHS report of 2008 [9], which showed that only 28.3% of female respondents aged 15-44 had comprehensive knowledge about HIV and AIDS.
A cross-tabulation analysis indicated a significant positive association between respondents' knowledge level and defaulting of ART. Mothers without comprehensive knowledge of ART were 2.5 times more likely to default ART (OR=2.5, p=0.002). Other factors exerting an independent effect on MTCT include preterm delivery and rupture of membranes (every hour of membrane rupture increases the risk of infection by 4%). Elective caesarean delivery before labour and rupture of the membranes reduces the transmission risk by approximately half [6], [19].
A bivariate analysis revealed a significant association between the knowledge level and the educational level of the women, as shown in table 3. Comparatively, the comprehensive knowledge level was higher among the women below 34 years than among those above (88% vs. 74%). HIV positive women with formal education were almost two times more likely to have comprehensive knowledge of PMTCT and ART (OR=1.9; p=0.003). Negative perceptions about ART were associated with low education level in the study by [12]. The women whose husbands had formal education were also more likely to have comprehensive knowledge of PMTCT and ART as compared to those with no formal education (78% vs. 73%; OR = 0.23). Consistent with [17] and a recent study in 2010 [4], knowledge level was significantly associated with the use of ART (p<0.001).
An increased level of education was associated with an increased level of knowledge of PMTCT and ART. The knowledge level was higher among those below 34 years and those whose husbands had formal education. Mothers with formal education were significantly more likely to have adequate knowledge of PMTCT and ART (OR=1.9; p=0.003). This is consistent with the finding of reference [2], which asserted that a lower level of general education and poorer literacy may impact negatively on some patients' ability to adhere, while a higher level of education has a positive impact. This could be due to the fact that most sensitization media, including billboards, TV adverts and leaflets used as part of the social marketing campaign strategies, are produced in the English language, making it difficult for the illiterate in society to understand.
Conclusions
It is evident that respondents' knowledge level plays an important role in their access to ART, which supports the findings of [14]. Superstition with respect to the causes of HIV is still high among the respondents. This could be attributed to the fact that education on HIV given to these women is not targeting misconceptions about the etiology of the disease. However, mothers' educational level is a key determinant of their knowledge of HIV/AIDS, ART and PMTCT.
Generally, the respondents understand that ART is effective and prolongs life. In addition, they are also aware that poor adherence may result in viral resistance. This conforms to the result of a study by [18]. Furthermore, the majority of the respondents are aware that MTCT is possible and could occur through breastfeeding. Hence they are likely to accept any measures that would prevent this mode of transmission, provided they can afford them and there is no stigma attached to them. The study revealed that counselling at the ART centre is very important in ensuring the respondents' regularity at the ART centres to pick up their medications [13].
The facility- and community-based educational interventions should therefore be scaled up and should be designed to be acceptable to both the literate and the illiterate in society. These must also seek to demystify the scientific nature of the disease and clear all misconceptions and possible attributions of spirituality in the etiology of the disease. Social marketing campaigns should also be developed and targeted at improving women's literacy on their health issues and getting more women to test for HIV in order to incorporate them into PMTCT programmes. Further research, however, needs to be conducted to ascertain the facility- and community-based factors that influence the women's knowledge of ART and PMTCT.
Detecting an Eavesdropper in QKD Without Public Bit Comparison
We present a method for determining the presence of an eavesdropper in QKD systems without using any public bit comparison. Alice and Bob use a duplex QKD channel and the bit transport technique for relays. The only information made public is the respective basis choices which must be revealed in standard QKD systems anyway. We find that every filtered bit can be used to determine the presence of errors without compromising the security. This is an improvement on using a random sample in the standard BB84 protocol.
Comparison of Bits in the BB84 Protocol
Quantum Key Distribution (QKD) is a method for establishing a secret key between two parties, conventionally labelled as Alice and Bob, in which the laws of physics guarantee the security.
The original technique, known as the BB84 protocol, works by exploiting quantum complementarity to encode information in the eigenstates of complementary operators. By randomly selecting which of these operators is used to encode the information an eavesdropper cannot retrieve that information without disturbing the integrity of some of the transmitted states. Other protocols have since been developed, but all rely on either complementarity or quantum correlation to ensure security. For an excellent review of the first decade or so of QKD see [1]. We focus here on the BB84 protocol, but with suitable adaptation, the technique can be extended to others.
Let σ_z and σ_x be the operators representing complementary bases and consider a spin system consisting of just two orthonormal states. The eigenstates of the operators σ_z and σ_x are |±⟩_z and |±⟩_x, respectively. The "+" states are given a bit value of 1 and the "−" states are given a bit value of 0.

9. The remaining timeslots can be renumbered for convenience so that T = 1, 2, 3, 4, ..., N, where N is the total number of successful timeslots.
Step 6 in this protocol represents the end of the quantum transmission and measurement processes. The subsequent steps are to do with the processing of the results of the transmission and measurement.
In order to ensure that Alice and Bob can match those timeslots for which they should have correlated bit values, the timeslots must be identified unambiguously when the basis choices are announced.
Detecting the Presence of an Eavesdropper without Bit Comparison
In order to detect errors without any public bit comparison, Alice and Bob must operate a duplex QKD channel. That is, Alice transmits photons to Bob according to the BB84 protocol, and Bob transmits photons to Alice according to the BB84 protocol. For convenience we shall imagine these to be interleaved, so that in odd timeslots a photon is transmitted by Alice whereas Bob transmits in even timeslots. These can, in fact, be two entirely separate transmissions; all that is required is that we can uniquely correlate a particular transmission with a particular measurement. This uses the "bit transport" technique developed in [2], where it was used to show how intercept/re-send relays can be used to extend the distance of QKD. The basic idea is that separate "good" channels are correlated by linking different timeslots. The overall effective number of channels that can be used for key exchange is unaffected by the introduction of extra relays. We can view the current duplex channel as being of the form Alice - Relay - Bob folded back on itself.
An example of such an interleaved transmission is shown in Table 1. We assume that each timeslot is occupied and that the photon reaches its destination. This is, of course, not true in practice, but it is easy to accommodate timeslots where nothing is transmitted or received.
Alice informs Bob of her basis choices for both transmission and measurement. Bob filters this data
into 3 sets. The first set is the data for which they expect no agreement because they have chosen different bases. In Table 1 this set consists of timeslots 1, 4, 7, 12, 13, 19 and 20. Bob informs Alice of these timeslots and they both discard this data. The second set consists of those remaining timeslots in which Alice is the transmitter, and in Table 1 consists of timeslots 3, 5, 9, 11, 15 and 17. The third set is the remaining data and represents data in which Bob has initiated the transmission and Alice has measured in the correct basis. In Table 1 this set consists of timeslots 2, 6, 8, 10, 14, 16 and 18.
In Table 2 we have written the bit value (BV) for each timeslot and separated the data into the two sets discussed above: Set 2 (Alice's transmission) and Set 3 (Bob's transmission). After the communication, Bob would send Alice a list of triples. The first value is from set 2, the second from set 3, and the extra bit tells Alice whether to flip the bit from set 3, with 1 taken to mean perform the flip and 0 taken to mean leave alone. Although for simplicity here we assume the rule that the flip is associated with set 3, all that is required is that one of the bits from set 2 or set 3 is flipped by Alice. An alternative way of viewing this is to think of the extra bit as a parity check. Eve thereby gains precisely one bit of extra information out of the two possible in these timeslots. She will know that timeslots 2 and 3 have bit values of 0,1 or 1,0, but not which of these is correct.
In order to eliminate this information gain by Eve, Alice and Bob could adopt the rule that the compared timeslots must be read as follows. If the bit values are 0,1 or 0,0 then this is read as a 0. If the bit values are 1,0 or 1,1 then this is read as a 1. Alice and Bob reduce their potential key size by a factor of 2, but as it is a duplex channel this amounts to the same potential key size as for the single-channel BB84 protocol. So in the above example, the first triple sent by Bob is to be understood as having the final bit value 1.
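To make the triple construction and the reading rule concrete, here is a minimal sketch (Python, with made-up timeslot data; note that what is published are timeslot indices and the parity bit, never the bit values themselves):

```python
def bob_publish(pairs, bob_bits):
    """For each (t2, t3) pair of timeslots, publish the timeslots and the
    parity of Bob's two bits (1 means 'flip the set-3 bit to match set 2')."""
    return [(t2, t3, bob_bits[t2] ^ bob_bits[t3]) for t2, t3 in pairs]

def alice_verify_and_extract(published, alice_bits):
    """Alice checks each parity against her own records; a mismatch flags an
    error (noise or Eve). The reading rule keeps the set-2 bit as key bit:
    (0,1)/(0,0) -> 0 and (1,0)/(1,1) -> 1."""
    key, errors = [], 0
    for t2, t3, parity in published:
        if (alice_bits[t2] ^ alice_bits[t3]) != parity:
            errors += 1
        else:
            key.append(alice_bits[t2])
    return key, errors

# Example with the timeslot sets from Table 1 (bit values are made up):
bob = {3: 0, 5: 1, 9: 1, 11: 0, 15: 1, 17: 0,
       2: 1, 6: 0, 8: 1, 10: 0, 14: 1, 16: 0}
alice = dict(bob)                        # no Eve, no noise: identical records
pairs = [(3, 2), (5, 6), (9, 8), (11, 10), (15, 14), (17, 16)]
key, errors = alice_verify_and_extract(bob_publish(pairs, bob), alice)
print(key, errors)                       # -> [0, 1, 1, 0, 1, 0] 0
```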
The fundamental difference between this protocol and BB84 is that the duplex channel and bit transport mechanism allow Alice and Bob to use their entire filtered transmission to check for errors, rather than just a random sample which must then be discarded.
Function and mechanism of microRNA-210 in acute cerebral infarction
Acute cerebral infarction (ACI) is a common cerebrovascular disease. Previous studies have indicated that microRNAs (miRs) are aberrantly expressed in patients with ACI. However, the functions of miRs in the pathogenesis of ACI still require further investigation. The aim of the present study was to investigate the function of miR-210 in ACI and its associated mechanism. The expression of miR-210 in the serum of 40 patients with ACI and 40 normal controls was examined using reverse transcription-quantitative polymerase chain reaction (RT-qPCR). Then, human umbilical vein endothelial cells (HUVECs) were treated with serum from patients with ACI or healthy volunteers, and a CCK-8 assay was performed to examine cell proliferation. Next, cells were stained with PI/Annexin V, and the apoptosis rate was examined using flow cytometry. Furthermore, cells were harvested and lysed, and RT-qPCR and western blotting assays were performed to compare the expression of vascular endothelial growth factor (VEGF), Notch1 and Hes1 in different groups. It was observed that the expression of miR-210 was significantly increased in the serum of patients with ACI compared with normal controls (P<0.01), and receiver operating characteristic curve analysis indicated that the area under the curve for miR-210 was 0.799 (95% confidence interval, 0.700–0.899), the optimum cut-off point was 1.397, and the sensitivity and specificity at the cut-off point were 62.5 and 87.5%, respectively. Furthermore, serum from patients with ACI induced a significant increase in proliferation (P<0.05 at 48 h, P<0.01 at 72 h) and a significant decrease in the apoptosis rate of HUVECs (P<0.01). In addition, serum from patients with ACI significantly increased the expression of VEGF, Notch1 and Hes1 at the mRNA and protein level (all P<0.01 with the exception of Notch1 mRNA expression, P>0.05). In conclusion, these results demonstrate that miR-210 is upregulated in the serum of patients with ACI, and miR-210 may be involved in the pathogenesis of ACI through regulating the proliferation and apoptosis of endothelial cells.
Introduction
In recent years, the incidence of cerebrovascular disease has increased worldwide (1). Stroke is the second leading cause of mortality among people >60 years old worldwide. In China, the incidence of new cases of stroke is 2.5 million/year (2). Acute cerebral infarction (ACI) is cerebral infarction that occurs within 6 to 24 h and is caused by sudden occlusion of the cerebral artery, which can induce death of brain tissue (3). If rapid, efficient medical treatment is not administered, ACI can lead to severe sequelae and an increased rate of mortality among patients (4). Therefore, it is critical to identify ACI-specific serum biomarkers for the early diagnosis of ACI, even before magnetic resonance imaging (MRI) or computed tomography tests, to allow physicians to make rapid clinical decisions.
Previous studies have indicated that brain damage after ACI can be reflected by changes in certain serum biomarkers (including S100 protein, neuron specific enolase and cystatin C) (5)(6)(7). However, as proteins, these biomarkers consist of complicated components and degrade easily. Therefore, identifying novel, specific and stable serum biomarkers for the early diagnosis of ACI is now a key challenge for researchers and physicians in the field of cerebrovascular disease.
MicroRNAs (miRs) are a class of single-strand, endogenous, non-coding RNAs that are ~22 nucleotides in length. In 1993, Lee et al (8) first discovered miRs in Caenorhabditis elegans. In recent years, miRs have been demonstrated to be involved in numerous biological processes, including cell proliferation, differentiation and apoptosis, and the incidence and progress of diseases (9)(10)(11). Previous studies have indicated that the expression of miRs in the serum is stable, and the method for detecting circulating miRs is sensitive and accurate. This implies that miRs have the potential to become novel biomarkers for the diagnosis of numerous diseases (12)(13)(14). In the case of ACI, aberrant expression of certain miRs in the brain tissue and peripheral blood of patients has also been observed, suggesting that miRs may be involved in the pathogenesis of ACI (15)(16)(17).
The aberrant expression of miR-210 has been identified in the blood of patients with ACI (18). However, the underlying mechanism of this requires further investigation. The present study aimed to elucidate the role of miR-210 in ACI and its associated mechanism. The serum expression of miR-210 was compared between patients with ACI and healthy controls, and the effects of miR-210 on proliferation and apoptosis of endothelial cells were investigated.
Materials and methods
Patients. In the present study, 40 patients were enrolled who had been diagnosed with ACI at the Department of Emergency, Taizhou People's Hospital (Taizhou, China), between January 2016 and October 2016. A total of 40 healthy volunteers were recruited from the Medical Examination Center of Taizhou People's Hospital and served as the controls. The clinical information of patients is presented in Table I. Venous blood samples (~5-8 ml) were drawn from all participants. The serum of patients and volunteers was isolated by centrifuging at 300 x g at 4˚C for 20 min, and then collected and stored at -80˚C until analysis. The present study was approved by the Ethics Committee of Taizhou People's Hospital, and all participants signed informed consent forms. The inclusion criteria were as follows: 1) Presence of ischemic lesion compatible with the pathological and imaging characteristics of the vasculature in the central nervous system (CNS) or the presence of clinical evidence for ischemic injury of the CNS; 2) presence of neurological deficits lasting more than 24 h due to ischemic lesions confirmed on conventional MRI of the brain. Patients were excluded if any other CNS disease was identified, based on medical history.
Cell culture. Human umbilical vein endothelial cells (HUVECs) were purchased from Bnbio (Beijing, China). Cells were cultured in RPMI-1640 (Thermo Fisher Scientific, Inc., Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS; Thermo Fisher Scientific, Inc.), 100 U penicillin/ml and 100 mg streptomycin/ml, in a humidified incubator at 37˚C with 5% CO 2 until 70-80% confluence. The control group was treated with RPMI-1640 supplemented with 10% FBS, the patient group was treated with RPMI-1640 supplemented with 10% serum from patients, and the healthy volunteer group was treated with RPMI-1640 supplemented with 10% serum from the healthy volunteers in a humidified incubator at 37˚C with 5% CO 2 for 48 h.
Cell proliferation analysis. The proliferation rate of HUVECs with different treatments was determined using a CCK-8 proliferation assay kit (Sigma-Aldrich; Merck KGaA, Darmstadt, Germany), according to the manufacturer's protocol.
Cell apoptosis analysis. For the apoptosis analysis, HUVECs in different groups were stained with a PI/Annexin V-FITC apoptosis detection kit (BD Biosciences, San Jose, CA, USA) and analyzed on a BD FACSCalibur flow cytometer (BD Biosciences) according to the manufacturer's protocols. The results of the flow cytometry analysis were analyzed using FlowJo software (version 9.7; FlowJo LLC, Ashland, OR, USA).
Reverse transcription-quantitative polymerase chain reaction (RT-qPCR).
Total RNA was isolated from the serum and harvested cells using TRIzol reagent (Thermo Fisher Scientific, Inc.). Then, RT-qPCR was performed using the One Step SYBR® PrimeScript™ RT-PCR kit (Takara Biotechnology Co., Ltd., Dalian, China) on an ABI 7300 Real-Time PCR system (Applied Biosystems; Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. The thermocycling conditions were as follows: 95˚C for 30 sec, followed by 40 cycles of 95˚C for 5 sec and 60˚C for 30 sec. All primers were synthesized by Sangon Biotech Co., Ltd. (Shanghai, China). The sequences of the primers were as follows: Hes1 forward, CTC CCG GCA TTC CAA GCT A and reverse, AGC GGG TCA CCT CGT TCA TG; Notch1 forward, CAC CCA TGA CCA CTA CCC AGT T and reverse, CCT CGG ACC AAT CAG AGA TGT T; vascular endothelial growth factor (VEGF) forward, CTT GCC TTG CTG CTC TAC CT and reverse, CTG CAT GGT GAT GTT GGA CT; GAPDH forward, GAA GGT GAA GGT CGG AGT C and reverse, GAA GAT GGT GAT GGG ATT TC. The relative expression of Hes1, Notch1 and VEGF in each sample was normalized to the level of GAPDH using the 2^-ΔΔCq method (19). The relative expression of miR-210 was examined using the Hairpin-it™ miRNAs qPCR Quantitation kit (GenePharma Co., Ltd.) according to the manufacturer's protocol, and U6 (RNU6B; GenePharma Co., Ltd.) was used for normalization.
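For reference, the 2^-ΔΔCq calculation reduces to a one-line formula; a small sketch with illustrative, made-up Cq values:

```python
def fold_change_ddcq(cq_target_sample, cq_ref_sample,
                     cq_target_control, cq_ref_control):
    """2^-DDCq relative expression: the target Cq is normalized to the
    reference gene (GAPDH or U6 here), then to the control group."""
    dcq_sample = cq_target_sample - cq_ref_sample
    dcq_control = cq_target_control - cq_ref_control
    return 2.0 ** -(dcq_sample - dcq_control)

# e.g. Cq(miR-210)=24.1 vs Cq(U6)=18.0 in a patient, 26.3 vs 18.1 in a control:
print(fold_change_ddcq(24.1, 18.0, 26.3, 18.1))   # ~4.3-fold up-regulation
```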
Statistical analysis. Statistical analysis was performed using SPSS 19.0 software (IBM Corp., Armonk, NY, USA). Data are presented as the mean ± standard deviation. Two independent sample t-test was performed to compare two groups, and one-way analysis of variance followed by Dunnett's post-hoc test was performed to compare multiple groups. A receiver operating characteristic (ROC) curve was used to evaluate the diagnostic performance of serum miR-210. P<0.05 was considered to indicate a statistically significant difference.
Expression of miR-210 is increased in the serum of patients with ACI.
In order to investigate the potential role of miR-210 as a diagnostic marker, the level of miR-210 was compared in the serum of patients with ACI and healthy controls using RT-qPCR. The expression of miR-210 was significantly increased in the serum of patients with ACI compared with the healthy controls ( Fig. 1A; P<0.001). Furthermore, ROC curve analysis was performed to determine whether the serum level of miR-210 could distinguish ACI patients from healthy volunteers (Fig. 1B). The area under the curve (AUC) of miR-210 was 0.799 [95% confidence interval (CI), 0.700-0.899], the optimum cut-off point was 1.397, and the sensitivity and specificity at the cut-off point were 62.5 and 87.5%, respectively.
Treatment of HUVECs with the serum from patients with ACI induces a significant increase in the expression of miR-210.
To further explore the function of miR-210 in the pathogenesis of ACI, HUVECs were treated with the serum from patients with ACI or healthy controls, and the expression of miR-210 in HUVECs was examined using RT-qPCR. As shown in the corresponding figure, treatment with serum from patients with ACI significantly increased the expression of miR-210 in HUVECs compared with the control treatments.
Discussion
If no efficient medical treatment is administered within the first few hours, ACI can lead to poor prognosis and an increased mortality rate among patients (20). Thus, effective early diagnostic biomarkers will help physicians to make rapid clinical decisions and improve clinical outcomes. The functions of circulating miRs as diagnostic markers of ACI have been discussed in numerous previous studies. Yuan et al (21) examined the plasma level of miR-26b, and its target calmodulin (CaM), in patients with ACI, and identified that miR-26b was upregulated and CaM was downregulated in the plasma of patients with ACI compared with healthy controls. The same study also demonstrated that elevated CaM and decreased miR-26b expression in the plasma of patients was associated with poor clinical outcomes. Zhou and Zhang (17) demonstrated that the plasma levels of miR-21 and miR-24 may act as diagnostic markers during the early stage of ACI. Weng et al (22) identified that the plasma concentration of miR-124 is a promising candidate biomarker for early detection of cerebral infarction.
In the present study, it was observed that the expression of miR-210 was significantly increased in the serum of patients with ACI compared with normal controls. Furthermore, ROC curve analysis indicated that the AUC of miR-210 was 0.799 (95% CI, 0.700-0.899), the optimum cut-off point was 1.397, and the sensitivity and specificity at the cut-off point were 62.5 and 87.5%, respectively. These results indicate that the serum level of miR-210 is an efficient biomarker that can distinguish patients with ACI from healthy individuals. In summary, these data suggest that circulating miR-210 has potential as a novel biomarker for the early diagnosis of ACI. Hypoxia and the production of excessive reactive oxygen species are frequently observed in the brain tissue of patients with cerebral ischemia (23,24), and these extreme conditions can lead to apoptosis of cells (including neurons and vascular endothelial cells) in the brain. It has been reported that the aberrant apoptosis of vascular endothelial cells in cerebral vessels can aggravate secondary brain injury after cerebral infarction (25). Thus, proliferation and angiogenesis of endothelial cells are key events for recovery of the brain after cerebral infarction. Previous studies have suggested that miRs can regulate the proliferation and migration of endothelial cells after cerebral infarction. Zhang et al (26) reported that miR-433 was downregulated in hypoxia conditions, and miR-433 could regulate the proliferation and migration of HUVECs. Chen et al (27) demonstrated that miR-145 could increase proliferation and migration of endothelial progenitor cells in a cerebral infarction mice model through regulating the c-Jun N-terminal kinase signal pathway. Yuan et al (28) reported that the polymorphism of MMP-9 at the miR-491-5p binding site was associated with the risk of cerebral infarction in a Chinese population. Lou et al (29) demonstrated that miR-210 could activate the Notch signaling pathway and participate in the process of angiogenesis following cerebral ischemia.
In the present study, the effect of miR-210 on the proliferation and apoptosis of HUVECs was evaluated. HUVECs were treated with serum from patients with ACI or healthy controls, and it was identified that treatment of HUVECs with serum from patients with ACI induced a significant increase in the expression of miR-210 compared with the controls. This suggested that the expression of miR-210 was significantly upregulated in HUVECs under ACI conditions. Furthermore, transient overexpression of miR-210 induced a significant increase in cell proliferation and a significant decrease in cell apoptosis. It also induced an increase in the expression of VEGF, Notch1 and Hes1 at the mRNA and protein levels. These results indicated that miR-210 may serve a protective function in ACI through facilitating the proliferation of vascular endothelial cells and angiogenesis.
In conclusion, the present study demonstrates that miR-210 is upregulated in the serum of patients with ACI, and miR-210 may be involved in the pathogenesis of ACI through regulating the proliferation and apoptosis of endothelial cells. This suggests that miR-210 has potential as a diagnostic tool and therapeutic target for the early diagnosis and management of ACI.
A Plan to Rule out Large Non-Standard Neutrino Interactions After COHERENT Data
In the presence of neutrino Non-Standard Interactions (NSI) with matter, the derivation of neutrino parameters from oscillation data must be reconsidered. In particular, along with the standard solution to neutrino oscillation, another solution known as "LMA-Dark" is compatible with global oscillation data and requires both $\theta_{12}>\pi/4$ and a certain flavor pattern of NSI with an effective coupling comparable to $G_F$. Contrary to conventional expectations, there is a class of models based on a new $U(1)_X$ gauge symmetry with a gauge boson of mass of a few MeV to a few 10 MeV that can viably give rise to such large NSI. These models can in principle be tested by Coherent Elastic $\nu$-Nucleus Scattering (CE$\nu$NS) experiments such as COHERENT and the upcoming reactor neutrino experiment, CONUS. We analyze how the recent results from the COHERENT experiment constrain these models and forecast the discovery potential with future measurements from COHERENT and CONUS. We also derive the constraints from COHERENT on lepton flavor violating NSI.
Introduction
When a particle or wave propagates through a medium, due to the collective forward scattering off the particles in the medium, it will feel an effective potential that changes its energy-momentum dispersion relation. In the case of photons, the effect is the well-known refraction phenomenon. Neutrinos propagating in matter undergo a similar effect but given that the interaction is via the weak nuclear force, the speed of neutrinos in matter will remain very close to their speed in vacuum. Nevertheless, the correction to the dispersion relation due to matter effects can impact the pattern of neutrino oscillations which is well-established within the Standard Model (SM) and is a dominant effect for solar neutrinos.
Neutrino oscillation data can also be used to test the possibility of neutrino interactions with matter fields arising from Beyond the SM (BSM) physics. Dubbed Non-Standard neutrino Interactions (NSIs), this new physics is typically parameterized by the dimension-6 effective interaction

$$\mathcal{L}_{\rm NSI} = -2\sqrt{2}\, G_F\, \epsilon^{f,V}_{\alpha\beta}\, (\bar{\nu}_\alpha \gamma^\mu P_L \nu_\beta)(\bar{f} \gamma_\mu f), \qquad (1.1)$$

where the parameter ε^{f,V}_{αβ} determines the strength of the non-standard neutral-current interaction between medium fermions f and neutrinos of flavors α and β, with α, β = (e, µ, τ). NSI was originally studied in the seminal paper by Wolfenstein on the matter effect [1], and has since been widely studied in a variety of settings (we refer the reader to the reviews in the literature [2][3][4]).
As a result of the impact on the matter potential, neutrino oscillation data have provided some of the strongest probes of NSI [1,[5][6][7][8]]. In fact, when neutrino oscillation data are analyzed in the presence of nonzero NSI, in addition to the standard Large Mixing Angle (LMA) solution with θ12 ≈ 34° and ε^f_αβ ≡ 0, another solution, known as LMA-Dark, appears with θ12 in the "dark" octant [9] (45° < θ12 < 90°) and large NSI, ε ∼ O(1). Distinguishing between the standard LMA solution and this LMA-Dark [10] regime requires going beyond oscillation data alone.
The most recent probe of NSI comes from the observation of Coherent Elastic ν-Nucleus Scattering (CEνNS) by the COHERENT experiment [11]. CEνNS is a process wherein a neutrino scatters coherently off an entire nucleus. While the cross section is large thanks to the coherent enhancement, ∝ [A − 2Z(1 − 2 sin²θ_W)]², it is challenging to detect this process due to the low nuclear recoil energies ∼ keV. The COHERENT collaboration [12] reported the first detection of CEνNS at 6.7σ [11]. The measurement is consistent with the SM expectations within 1.5σ and therefore offers a new probe of NSI [11,[13][14][15][16]]. Taking the effective interaction of the form (1.1), it has been argued that these data are already sufficiently strong to rule out the LMA-Dark solution [13]. Notice however that if the mass of the intermediate state leading to the effective coupling (1.1) is of the order of or smaller than the energy-momentum transfer in the scattering experiment, using the effective action formalism will not be viable.
In this paper, we revisit the question of whether or not large NSI can still be accommodated in light of COHERENT data. Our broad conclusion is that it can, though it requires a mediator that is light compared to the momentum transfers probed at COHERENT. We then investigate the possibility of tightening the constraint on LMA-Dark by future CEνNS results. The remainder of this paper is organized as follows. In section 2, we very briefly describe the class of models that can give rise to LMA-Dark solution and then in the next section we overview the LMA-Dark solution phenomenology. In section 4, we discuss the measurement of CEνNS by COHERENT and use it to constrain the LMA-Dark solution as well as lepton flavor violating NSI. In section 5, we estimate the future sensitivity to the LMA-Dark solution by both COHERENT and reactor neutrino CEνNS measurements such as CONUS. Conclusions are summarized in section 6.
General characteristics of models leading to large NSI with a light mediator
Similarly to the models developed in [4,[17][18][19]], let us consider an interaction of the following form between neutrinos and quark fields with a new U(1)_X gauge boson, Z′:

$$\mathcal{L} \supset Z'_\mu \Big( (g_\nu)_{\alpha\beta}\, \bar{\nu}_{L\alpha} \gamma^\mu \nu_{L\beta} + \sum_{q\in\{u,d\}} g_q\, \bar{q} \gamma^\mu q \Big). \qquad (2.1)$$

The coupling of Z′ to neutrinos can originate via (at least) two distinct mechanisms: (1) from gauging an arbitrary (not necessarily flavor universal) linear combination of lepton numbers of different generations [17,18]; or (2) from mixing of ν with a new electroweak singlet fermion charged under the new U(1)_X with mass of O(GeV) [19]. The couplings of the quarks to the Z′ boson are U(1)_X gauge couplings. Thus, the flavor structure of g_q is determined by the pattern of the U(1)_X charges assigned to different flavors. For each generation, the U(1)_X charge of the quark with electric charge 2/3 has to be equal to that of the quark with electric charge −1/3 to make the hadronic current coupled to W⁺_µ (i.e., ūγ_µ(1 − γ₅)d + c̄γ_µ(1 − γ₅)s + t̄γ_µ(1 − γ₅)b) invariant under the new U(1)_X. As a result, from a theoretical point of view we expect

g_u = g_d,  g_c = g_s  and  g_t = g_b. (2.2)

Moreover, because of the flavor violation in the mass mixing of quarks (i.e., the CKM mixing), any flavor non-universality (g_u ≠ g_c and/or g_u ≠ g_t) can induce dangerous flavor-changing neutral currents, so it will be safer to set g_u = g_c = g_t; but this aspect of the model is not relevant for neutrino oscillation in matter or for the CEνNS experiments in which we are interested in the present paper.
As long as the transferred energy-momentum is small compared to M_Z′, we can integrate out Z′ and arrive at an effective interaction of the form of Eq. (1.1) with

$$\epsilon^{q,V}_{\alpha\beta} = \frac{(g_\nu)_{\alpha\beta}\, g_q}{2\sqrt{2}\, G_F M_{Z'}^2}. \qquad (2.3)$$

In the literature analyzing the experimental data it is however sometimes assumed that ε^u = ε^d, although there is no theoretical justification for this assumption. As shown in [4,[17][18][19]], it is possible to reproduce the flavor structure required for the LMA-Dark solution. Moreover, there are viable mechanisms to produce off-diagonal lepton flavor violating as well as lepton flavor conserving (g_ν)_αβ [18,19]. For neutrino-nucleus scattering experiments (such as COHERENT), the contribution from the new interaction to the ν-N scattering amplitude scales as¹

$$\mathcal{A}_{\rm NSI} \propto \frac{(g_\nu)_{\alpha\beta}\, g_q}{q^2 + M_{Z'}^2}. \qquad (2.5)$$

Independently of the energy of the neutrino, the non-standard effective potential for neutrinos, induced by the forward scattering of neutrinos off the matter fields in the medium, is given by

$$V_{\alpha\beta} = \frac{(g_\nu)_{\alpha\beta} \sum_q g_q\, n_q}{M_{Z'}^2}, \qquad (2.6)$$

where n_q is the number density of quark q in the medium. Notice that in forward scattering the energy-momentum transfer is zero, q = 0. That is why, even if the energy of the neutrino beam is larger than the mass of the intermediate state (M_Z′), for the purpose of calculating the matter effects we can still use the four-Fermi interaction shown in Eq. (1.1). Comparing Eq. (2.5) and Eq. (2.6), we observe that in the limit M²_Z′/q² → 0 and g_ν g_q → 0 (but fixed g_ν g_q/M²_Z′), the effect on CEνNS will vanish but large NSI can still be achieved. For a general matter profile with a given neutron yield Y_n ≡ N_n/N_p = N_n/N_e, we can write²

$$\epsilon_{\alpha\beta} = (2 + Y_n)\, \epsilon^{u,V}_{\alpha\beta} + (1 + 2 Y_n)\, \epsilon^{d,V}_{\alpha\beta}. \qquad (2.7)$$

¹ Notice that, unlike the case of the scalar coupling studied in [16], with the vectorial interaction that we are considering in Eq. (2.1) there will be interference between the SM contribution and the new physics contribution.
² Throughout the text we distinguish the Lagrangian-level NSI terms (RHS of Eq. 2.7) from the Hamiltonian-level NSI terms (LHS of Eq. 2.7) by the presence of a quark superscript (q, u, or d) or its absence, respectively.
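To get a feel for the numbers, a small sketch based on the matching relation reconstructed above (Eq. 2.3) and the matter-level combination (Eq. 2.7); the example couplings are illustrative only:

```python
import math

G_F = 1.1663787e-5                      # Fermi constant, GeV^-2

def eps_quark(g_nu_g_q, m_zprime_gev):
    """Effective NSI strength per quark flavor from the reconstructed
    matching relation eps^q = g_nu*g_q / (2*sqrt(2)*G_F*M_Z'^2)."""
    return g_nu_g_q / (2.0 * math.sqrt(2.0) * G_F * m_zprime_gev**2)

def eps_matter(eps_u, eps_d, y_n):
    """Hamiltonian-level NSI for neutron yield Y_n (reconstructed Eq. 2.7)."""
    return (2.0 + y_n) * eps_u + (1.0 + 2.0 * y_n) * eps_d

# e.g. g_nu*g_q = 2.5e-9 and M_Z' = 10 MeV give eps^q ~ 0.76 per quark flavor:
eq = eps_quark(2.5e-9, 0.010)
print(eq, eps_matter(eq, eq, 1.0 / 3.0))   # ~0.76 per quark, ~3.0 in matter
```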
Before the release of the COHERENT results, it had been discussed in detail in [4,[17][18][19]] that across the mass window

5 MeV < M_Z′ < few × 10 MeV, (2.8)

viable models respecting all the existing bounds could be built, giving rise to ε ∼ 1 with

$$g_\nu\, g_q \sim 2\sqrt{2}\, G_F M_{Z'}^2\, \epsilon. \qquad (2.9)$$

The upper limit on the range (2.8) depends on the details of the model. The lower limit of this mass window comes from the bound on extra relativistic degrees of freedom from the CMB and Big Bang Nucleosynthesis (BBN). As shown in [20,21], the contribution from Z′ to δ(N_ν)_eff will violate the bounds if M_Z′ < 5 MeV and g_ν > 10⁻⁹ (M_Z′/MeV). This constraint is obtained by studying the thermalization and decay of the Z′. Even if the mass of the Z′ is large enough to make the Z′ nonrelativistic at the neutrino decoupling era, its subsequent decay into a neutrino pair can effectively heat the neutrino bath.
In the parameter range of our interest, the Z′ boson can be produced inside the supernova core and decay back to a neutrino/antineutrino pair within the core. This production cannot provide a new cooling mechanism for the star, but by providing a new neutrino scattering channel it can affect the duration of the neutrino emission. Any direct information from CEνNS on the Z′ coupling to ν would be an invaluable input for studies of supernovae and for predicting the neutrino emission duration.
We also note that both oscillation experiments and scattering experiments are only sensitive to the product g_ν g_q. It may be possible to constrain the g_ν term directly (and therefore constrain g_q through the combination) through Non-Standard neutrino Self-Interactions (NSSI) from the measurement of the neutrino spectra from a galactic supernova [22]. Moreover, rare meson decays can constrain g_ν [23].
In this work we restrict ourselves to vector NSI with quarks only and most of the time drop the superscript V from ε^V. Axial-vector NSI are fairly well constrained, at the ε^A ∼ 0.1 level, from SNO neutral current measurements [10].
LMA-Dark
In this section we review the theoretical derivation of the LMA-Dark solution and then describe the latest constraints from oscillation experiments determined in a global fit by Ref. [24].
LMA-Dark theory review
The CPT invariance implies the invariance of the neutrino Hamiltonian under H → −H*, leading to the Generalized Mass Ordering Degeneracy (GMOD) [25]. In vacuum this leads to the LMA-Dark solution wherein θ12 > 45°, degenerate with the standard LMA solution [9]. In matter the degeneracy is broken, but it can be restored with new physics in the form of NSI of the same magnitude as the weak scale, ε = O(1) [10]. In particular, if ε_ee = −2, the ee term of the matter potential changes sign, maintaining the degeneracy. Furthermore, adding any term proportional to the identity matrix to the 3×3 Hamiltonian of neutrinos does not affect neutrino oscillations. Thus, as far as neutrino oscillations are concerned, the SM is equivalent to (ε_ee, ε_µµ, ε_ττ) = (−2, 0, 0) as well as (0, 2, 2), or any expression of the form

$$(\epsilon_{ee}, \epsilon_{\mu\mu}, \epsilon_{\tau\tau}) = (x - 2,\; x,\; x) \qquad (3.1)$$

for arbitrary real x. Since the neutrino beam at the COHERENT experiment is composed of both ν_µ and ν_e fluxes, its sensitivity to x is almost flat, but the reactor CEνNS experiments, having only a ν̄_e beam, will lose sensitivity at x = 2. By looking at oscillations in different matter densities with different neutron to proton ratios, the GMOD can be broken again, except for the case where the neutron contribution is zero. From Eq. (2.7), we observe that a vanishing neutron contribution requires ε^{u,V}_{αβ} + 2ε^{d,V}_{αβ} = 0. Thus, in that case no oscillation experiment can distinguish between the standard LMA solution and the LMA-Dark solution. Notice however that within the models described in section 2, we expect ε^u = ε^d.

Table 1. Limits at 90% C.L. on NSI from a global fit to neutrino oscillation data while marginalizing over all other standard and NSI parameters, taken from [24]. The marginalizations are performed leaving NSI with one quark (q = u, d) at a time free. The ε ∼ −1 solutions correspond to the LMA-Dark solution with θ12 > 45°.
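To make the x-freedom in Eq. (3.1) explicit, here is a sketch of the standard one-line argument (stated for a constant-density medium; it extends slice-by-slice to a varying density). A shift of the diagonal NSI by x times the identity adds a multiple of the identity to the Hamiltonian, which only produces an overall phase:

$$H \;\to\; H + x\, V_{CC}\, \mathbb{1} \;\;\Longrightarrow\;\; e^{-iHt} \;\to\; e^{-i x V_{CC} t}\, e^{-iHt}, \qquad P_{\alpha\to\beta} = \big|\langle\beta|\, e^{-iHt}\, |\alpha\rangle\big|^2 \;\text{unchanged},$$

where V_CC = √2 G_F N_e is the charged-current matter potential.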
Scattering experiments are required to break these degeneracies. While oscillations constrain NSI for any mediator mass, scattering experiments can only constrain NSI when the transferred energy is less than the mediator mass, q ≲ M_Z′. Scattering experiments and oscillation experiments are therefore complementary: while the oscillation experiments can constrain NSI for any mediator mass but are insensitive to the x parameter of Eq. (3.1) and the GMOD, the scattering experiments can break these degeneracies but are only sensitive to certain mediator mass ranges.
Oscillation constraints on LMA-Dark
From a global fit to neutrino oscillation data, Ref. [24] obtains the 90% C.L. limits shown in Table 1. From Eq. (2.7), along with the one-at-a-time values in Table 1, we can observe that the LMA-Dark solution found in oscillations dominantly comes from data with Y_n < 1, implying that the solar data dominates the contribution to the LMA-Dark solution, as expected. Unless stated otherwise, from here on whenever we discuss the LMA-Dark solution we set x = 0 (i.e., ε_ee = −2 and ε_μμ = ε_ττ = 0).
As mentioned above, we focus on models with ε^u = ε^d. Since Y_n varies over the experimentally probed range [1/6, 1.05], we choose Y_n = 1/3, which is in the middle of the solar range (Y_n ∈ [1/6, 1/2]), because, as shown in Fig. 1, the solar data provides the main constraint on LMA-Dark. This gives our canonical definition of LMA-Dark, ε^{u,V}_{ee} = ε^{d,V}_{ee} = −1/2, although we also consider varying x as defined in Eq. (3.1).

Figure 1. The red line (marked LMA-D, Earth) and orange region (marked LMA-D, Sun) are the solutions to ε_ee = −2 for the relevant values of Y_n; the green regions are observational limits derived from the data in Table 1 from [24], confirming that solar data dominates the LMA-Dark constraint.

Notice that the uncertainties in the current atmospheric and long-baseline neutrino data are too large to allow sensitivity to matter effects. In fact, the observational constraint on the LMA-Dark solution comes mainly from solar data. This is confirmed by the overlap of the green regions (corresponding to the one-at-a-time global fit limits from Table 1) with the orange region, as well as by the absence of any overlap with the red line.
COHERENT constraints on the LMA-Dark solution
As was pointed out in [24,26], a Coherent Elastic ν-Nucleus Scattering (CEνNS, pronounced "sevens") experiment such as COHERENT can be used to constrain NSI for light mediators with masses of O(10) MeV. Above ∼1 GeV, additional Deep-Inelastic Scattering (DIS) constraints from CHARM [27] and NuTeV [28] apply, with the NuTeV constraints being particularly strong [24]. The recent COHERENT data has been used to constrain NSI for M_{Z′} > O(10) MeV [11,13,15]. We expand upon those analyses here with a focus on the LMA-Dark solution.
CEνNS is a process wherein a neutrino scatters elastically off an entire nucleus. The scattering cross section is thus given by the square of the sum of the scattering amplitudes off each nucleon in the nucleus, so it scales roughly with the square of the number of nucleons. Within the standard model, the cross section is enhanced by [A − 2Z(1 − 2 sin²θ_W)]² and is relatively large. It is nevertheless difficult to detect CEνNS due to the low nuclear recoil energies, ∼ keV. Recently the COHERENT collaboration [12] reported the first detection of CEνNS at 6.7σ [11]. COHERENT uses neutrinos from pion decay at rest (DAR) coming from the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory, detected in a low-threshold CsI detector.
We calculate the CEνNS event rates as a function of the NSI parameters as described in [24], using form factors from [29] and a detection threshold of 7 keV [30]. We assume the background to be 20% of the signal and a systematic uncertainty in the total flux of 20%, consistent with the uncertainties reported by COHERENT. We marginalize the χ² over the normalization uncertainty using the pull method [31].
The SNS beam is pulsed, which means that the ν_μ's from the prompt π⁺ decay can be distinguished from the delayed ν_e's and ν̄_μ's produced in the subsequent µ⁺ decay. We make use of two separate timing bins contributing to the χ², as first described in [24]: a prompt component and a delayed component. The numbers of prompt and delayed events as a function of flavor are

N_p = N_{ν_μ} + P_c (N_{ν_e} + N_{ν̄_μ}),   N_d = (1 − P_c)(N_{ν_e} + N_{ν̄_μ}),

where P_c is the contamination from early muon decay, determined by the pulse width p_w = 0.695 µs and the bin width b_w = 1 µs from the data presented by COHERENT. Note that our results are fairly insensitive to the value of P_c; as long as the prompt and delayed events can be largely separated, we get the full benefit of discriminating between the flavors. The contamination due to other backgrounds is suppressed by at least two orders of magnitude and is safely ignored here. The per-flavor event rates are then given by

N_α = N_t ∫_{E_{r,tr}} dE_r ∫ dE_ν φ_α(E_ν) dσ_α/dE_r,

where M_t is the mass of the target nuclei, N_t is the number of target nuclei in the detector, and E_{r,tr} is the threshold recoil energy. The electroweak charge entering the cross section is

Q_{wα} = Z (g^V_p + 2ε^{u,V}_{αα} + ε^{d,V}_{αα}) + N (g^V_n + ε^{u,V}_{αα} + 2ε^{d,V}_{αα}),

and the normalized per-flavor fluxes from πDAR are, to an excellent approximation, given by kinematics as

φ_{ν_μ}(E_ν) = δ(E_ν − (m_π² − m_μ²)/(2m_π)),
φ_{ν̄_μ}(E_ν) = (64/m_μ) (E_ν/m_μ)² (3/4 − E_ν/m_μ),
φ_{ν_e}(E_ν) = (192/m_μ) (E_ν/m_μ)² (1/2 − E_ν/m_μ),

where E_ν ∈ [0, m_μ/2]. In general we fix all off-diagonal NSI terms to zero unless otherwise specified. Note that there is a degeneracy in the weak charge between the SM and NSI, Q_{wα}(ε) = −Q_{wα}(0), which for ε^u = ε^d occurs at

ε^{q,V}_{αα} = −2 (Z g^V_p + N g^V_n) / [3(Z + N)].   (4.6)
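As an illustration, the following short Python sketch (the muon mass is an assumed PDG value; the flux formulas are the standard Michel spectra written above) checks that the delayed-flavor fluxes are normalized to one neutrino per pion decay:

    import numpy as np

    m_mu = 105.66  # muon mass in MeV (assumed PDG value)

    def phi_nue(E):
        # nu_e spectrum from mu+ decay at rest, E in [0, m_mu/2]
        return (192.0 / m_mu) * (E / m_mu) ** 2 * (0.5 - E / m_mu)

    def phi_numubar(E):
        # anti-nu_mu spectrum from mu+ decay at rest
        return (64.0 / m_mu) * (E / m_mu) ** 2 * (0.75 - E / m_mu)

    E = np.linspace(0.0, m_mu / 2.0, 2001)
    print(np.trapz(phi_nue(E), E), np.trapz(phi_numubar(E), E))  # both ~ 1.0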
For COHERENT, this corresponds to ε^{q,V}_{αα} = 0.18 in the heavy mediator limit, for g^V_n = −1/2 and g^V_p = 1/2 − 2 sin²θ_W ≈ 0.055. The current COHERENT constraints in the ee sector are shown in Fig. 1 for a heavy mediator at x = 0. Note that these results are stronger than those previously presented [11] due to the additional timing information used to separate electron and muon neutrinos. While the SM (ε = 0) is included within the blue bands, it is disfavored. A good fit with χ² = 0 can be obtained by varying more than just the ε^{q,V}_{ee} terms. For COHERENT to be sensitive to the details of the Z′, there must be nonzero momentum transfer. This leads us to define the generalized NSI coefficient

ε^{f,V}_{αβ}(q²) = (g_ν)_{αβ} g_f / [2√2 G_F (q² + M²_{Z′})],

which is related to the ε's relevant to oscillation physics by taking the q² → 0 limit, ε^{f,V}_{αβ} ≡ ε^{f,V}_{αβ}(q² = 0).
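A quick numerical check of the degeneracy value quoted above (a sketch; the couplings are taken directly from the text, and the nuclear compositions of Cs and I are standard):

    # SM/NSI weak-charge degeneracy of Eq. (4.6) evaluated for the CsI target.
    gp, gn = 0.055, -0.5  # vector couplings g_p^V and g_n^V from the text
    for name, Z, N in [("Cs-133", 55, 78), ("I-127", 53, 74)]:
        eps = -2.0 * (Z * gp + N * gn) / (3.0 * (Z + N))
        print(name, round(eps, 3))  # ~ 0.18 for both nuclei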
For M_{Z′} ∼ q, the values of both M_{Z′} and the product (g_ν)_{αβ} g_f can in principle be extracted by studying the energy dependence of the scattering cross section. Taking a flavor-universal coupling to neutrinos and using the released COHERENT data, Ref. [15] constrains √(g_ν g_q) for M_{Z′} ∼ few × 10 MeV. In principle, by using the timing information to discriminate between flavors, a similar analysis of the energy spectrum can be carried out for an arbitrary flavor structure of NSI, including the LMA-Dark flavor pattern of Eq. (3.1). Although the COHERENT collaboration has released information both on time (counts per arrival-time bin) and on energy (counts per number of photoelectrons), it has unfortunately not released the simultaneous dependence on both (counts per time bin per number of photoelectrons). In the absence of this information, we have resorted to using only the timing (or, equivalently, only flavor) information to derive bounds on M_{Z′}. In the event that COHERENT releases the energy spectrum in both timing bins, we expect that even stronger constraints could be placed by combining timing and energy information.
We construct a pull-method χ² of the form

χ² = min_α { Σ_{k∈{p,d}} [N_k^{obs} − (1 + α)(N_k^{th} + 0.2 N_k^{SM})]² / σ_k² + (α/σ_sys)² },

where k ∈ {p, d} labels the prompt and delayed signals, the 0.2 represents the 20% background rate, and we take σ_sys = 0.2 for the systematic normalization uncertainty. The event rates are defined in Eqs. (4.1-4.5). The χ² for the LMA-Dark solution as a function of the mediator mass is shown in Fig. 2. Notice that for fixed (ε_ee, ε_μμ, ε_ττ), M_{Z′} → 0 corresponds to the SM with g_ν g_q → 0. Had the best fit of the COHERENT data corresponded to the SM prediction, the χ² would have approached zero as M_{Z′} → 0. The SM prediction, however, deviates slightly (at the 1.5σ level) from the results of COHERENT, and this justifies the convergence to a nonzero value of χ² at M_{Z′} → 0. From Fig. 2 we observe that for all values of x considered there are dips, which means the corresponding NSI can provide a better fit to the data than the SM (the limit ε(q²) → 0). For x = 3/2 and x = 1, the χ² can even vanish, at M_{Z′} = 38 MeV and 18 MeV respectively. The solid black curve is the result of marginalizing over x. As seen from this figure, COHERENT excludes the LMA-Dark NSI for mediator masses M_{Z′} > 48 MeV at 95% C.L. after marginalizing over x. This constraint is dominated by x ≈ 3/2, i.e., (ε_ee, ε_μμ, ε_ττ) = (−1/2, 3/2, 3/2). If we fix x = 0, the constraint improves to 17 MeV. The multiple-dip structure results from the fact that the event rate scales roughly like [g_SM + ε(q)]², where ε(q) is a function of both M_{Z′} and x (through ε(0)); see the expressions above.
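For concreteness, a minimal numerical implementation of such a pull-term χ² might look as follows (a sketch: the per-bin variances, the background treatment, and the example counts are simplified assumptions, not the exact analysis choices):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def chi2_pull(n_obs, n_th, n_bkg, sigma_sys=0.2):
        # n_obs, n_th, n_bkg: observed counts, predicted signal, and background
        # in the prompt and delayed bins; alpha is the flux normalization pull.
        n_obs, n_th, n_bkg = (np.asarray(a, float) for a in (n_obs, n_th, n_bkg))
        def f(alpha):
            pred = (1.0 + alpha) * n_th + n_bkg
            var = n_obs + n_bkg  # assumed statistical variance per bin
            return np.sum((n_obs - pred) ** 2 / var) + (alpha / sigma_sys) ** 2
        return minimize_scalar(f, bounds=(-0.9, 0.9), method="bounded").fun

    print(chi2_pull([60, 90], [55, 85], [11, 17]))  # hypothetical bin counts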
Additional COHERENT constraints
Beyond constraining large NSI in the form of LMA-Dark, COHERENT can also constrain the NSI parameters directly. Maintaining ε^u = ε^d, COHERENT can constrain the ee and μμ elements as shown in Fig. 3. COHERENT has no sensitivity to the τ sector, but constraints can be inferred by including oscillation information (see Table 1), which constrains |ε^{q,V}_{μμ} − ε^{q,V}_{ττ}| ≲ 0.03, so the bounds on ε^{q,V}_{μμ} are essentially the same as those on ε^{q,V}_{ττ}. Note that there are four points where χ² = 0. These are related to the degeneracy mentioned in Eq. (4.6), but are not quite at exactly 0.18 since COHERENT did not measure exactly the SM. Had COHERENT measured the SM, all four points would sit at ε^{q,V}_{αα} = 0 or 0.18. The COHERENT experiment also constrains the off-diagonal NSI terms ε^{q,V}_{eτ}, ε^{q,V}_{μτ} and ε^{q,V}_{eμ}, as shown in Fig. 4. One-at-a-time constraints are listed in Table 2. COHERENT is able to constrain all the NSI parameters except for the ττ term. Constraining the ττ element is possible by combining the bound on the μμ component from COHERENT with the |ε_{μμ} − ε_{ττ}| ≲ 0.03 constraint from oscillations listed in Table 1. Assuming COHERENT's CsI detector continues at its current rate and collects data ∼ half the time, the expected future sensitivity of COHERENT to M_{Z′} for the LMA-Dark solution is shown in Fig. 5, which also includes a marginalization over x. Two features are of note. The first is the sharp improvement in the sensitivity. This is due to the non-trivial shape of the exclusion plot shown in Fig. 2: when the dip in the χ² rises past the threshold, the sensitivity suddenly improves considerably. The other feature is that the current projected limit is slightly worse than the actual current limit. This is because for the sensitivity we have assumed that COHERENT will exactly measure the SM, ε = 0, while their current measurements are slightly higher than the SM, leading to slightly different limits.
Reactor: CONUS
Reactor neutrinos will also help to constrain NSI [16,33,34], and numerous such experiments are in various stages of progress, from running to proposed, including TEXONO, NOSTOS, CONUS, GEMMA, CONNIE, MINER, and others [35][36][37][38][39][40][41]. One such experiment is the COherent NeUtrino Scattering experiment (CONUS), a proposed experiment to measure CEνNS from reactor neutrinos with a Germanium detector and an ultra-low threshold of ∼ 0.1 keV. They anticipate ∼ 10⁵ events assuming standard physics over five years [38].

Figure 5. The horizontal axis shows the real time, and we assume 50% uptime. The blue region is the current exclusion limit as shown in Fig. 2. The red region is the predicted future exclusion range assuming a true value of ε = 0, which saturates at ∼ 10 MeV. The sharp drop occurs when the higher-mass minimum seen in Fig. 2 passes the threshold. The orange region is the exclusion limit coming from BBN and CMB constraints [20]. Future measurements from reactor experiments like CONUS will reach the ∼ 1 MeV level, and this figure will be completely covered.

We simulate the expected signal for the SM and for LMA-Dark with different mediator masses. We take the ²³⁵U flux from [42] and form factors from [29], although the suppression from form factors is negligible since F(q²) ∼ 1 for the relevant energies. We conservatively estimate the systematic uncertainty from various reactor-neutrino and detector uncertainties to be 10%, to account for nuclear uncertainties, the reactor anomaly [43], and the 5 MeV bump [44], and we consider a count-only analysis. With 10⁵ events the result is completely dominated by systematics. Assuming these detectors measure the SM (ε = 0), their ability to constrain LMA-Dark with x = 0 (i.e., (ε_ee, ε_μμ, ε_ττ) = (−2, 0, 0)) is shown in Fig. 6. The Si and Ge detectors respectively impose M_{Z′} < 0.45 MeV and M_{Z′} < 1.3 MeV at 95% C.L. The difference is dominated by the choice of detector nuclear recoil thresholds, 0.1 keV and 0.6 keV for Si and Ge respectively. Recall that at x = 3/2 the constraint from COHERENT was the weakest, providing an upper bound M_{Z′} < 48 MeV. At x = 3/2, CONUS with Si and Ge detectors can constrain the LMA-Dark solution with a light mediator to M_{Z′} < 0.9 MeV and 2.6 MeV respectively, both of which are well below the constraint from BBN and the CMB, covering the gap. In addition, for comparison, in the event that the flux uncertainties can be reduced to an optimistic level of 1%, the constraints improve to 0.15 and 0.45 MeV for Si and Ge respectively. We note that these results are quite general and apply to a wide range of possible detectors, limited mainly by the flux uncertainties.
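The statement that 10⁵ events are systematics-dominated follows from simple counting (a sketch):

    # Fractional statistical precision with N ~ 1e5 events vs the assumed
    # 10% flux/detector systematic: the latter dominates by a wide margin.
    N = 1.0e5
    stat = N ** -0.5  # ~0.3%
    print(f"{100 * stat:.2f}% statistical vs 10% systematic")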
The various constraints in the coupling-M_{Z′} plane are shown in Fig. 7, along with the location of the LMA-Dark solution. For the left figure we have turned on only the ee term and have taken (g_ν)_{ee} g_q < 0, in agreement with the LMA-Dark solution at x = 0; for the right figure we have turned on only the μμ and ττ terms and taken (g_ν)_{μμ} g_q = (g_ν)_{ττ} g_q > 0, in agreement with the LMA-Dark solution at x = 2. The current COHERENT constraint is shown in blue. The thin sliver of no sensitivity in the right figure is the result of the degeneracy of Eq. (4.6). Using energy and/or timing information may be enough to rule out this sliver in the future, but whether this happens is rather sensitive to the future systematics that COHERENT can reach. Since that degeneracy only occurs for ε^{q,V}_{αα} > 0, it does not appear in the left figure of Fig. 7. COHERENT's expected future sensitivity, shown in red, is for ten years of running CsI assuming 50% uptime and ε = 0. Note that, as shown in Fig. 5, at this point COHERENT is dominated by systematics. The orange region is the constraint from the CMB and BBN, and the green region is the expected sensitivity of CONUS, conservatively taken to use the Germanium detector design. As seen from these figures, even after COHERENT, LMA-Dark with a mediator in the range 5.3 MeV < M_{Z′} < 12 MeV survives; the CONUS bounds (combined with the BBN and CMB bounds) can fully test the LMA-Dark solution except for the special case x → 2.
Conclusions
Oscillation data provide excellent constraints on new interactions in the neutrino sector, parameterized as Non-Standard Interactions (NSI), for any mediator mass. There are, however, two degeneracies in oscillation data: flavor-universal contributions (parameterized as x throughout this text) and the Generalized Mass Ordering Degeneracy (GMOD). The GMOD leads to the LMA-Dark solution, which requires an interaction strength comparable to that of the weak interactions: g²/M²_{Z′} ∼ G_F. While scattering experiments can constrain both of these, they are only sensitive for mediators heavier than the characteristic energy of the experiment. Large NSI with very light mediators, ≲ 5 MeV, is constrained by CMB and Big Bang Nucleosynthesis (BBN) measurements.

Figure 7. Constraints on NSI respectively from the present COHERENT data and the forecast for 10 more years of COHERENT running with CsI assuming no NSI. The sliver in the right panel is a result of the degeneracy in Eq. (4.6). The constraint from BBN and the CMB is shown in orange [20]. The CONUS constraint (see section 5.2), in green, conservatively takes the Germanium detector and assumes that they will measure the SM. CONUS cannot constrain the μμ or ττ terms. The black line in the left (right) panel corresponds to the LMA-Dark solution with x = 0 (x = 2). Note that g_ν g_q is taken to be negative (positive) for the left (right) panel to give the LMA-Dark solution at x = 0 (x = 2). Solid lines are current bounds, dashed lines are future bounds.
Thanks to COHERENT's measurement of Coherent Elastic ν-Nucleus Scattering (CEνNS) with a new low-threshold CsI detector, more stringent upper bounds on the mass of the mediator for NSI can be placed than was previously possible. We find that the COHERENT data rule out LMA-Dark for M_{Z′} > 48 MeV at 95% C.L., and future measurements should improve this constraint to ∼ 10 MeV, which is not enough to close the gap with the constraints from the CMB and BBN. However, it is possible to reach the ∼ MeV scale, for NSI in the ee sector, using future high-statistics reactor neutrino experiments measuring CEνNS. With a combination of CEνNS measurements from COHERENT and reactor data, along with BBN and CMB information, LMA-Dark in the ee sector (x ≠ 2) will be ruled out over many orders of magnitude of mediator masses. MeV-scale NSI will still be viable, even after reactor measurements, for LMA-Dark NSI in the μμ, ττ sector. Notice that from a model-building point of view the special case x = 2 is not necessarily a fine-tuned limit and can be justified by symmetries. For example, if the new sector is electrophobic, we expect ε_ee = ε_eμ = ε_eτ = 0 but still ε_μμ, ε_ττ ≠ 0. Until such data arrives, however, LMA-Dark will remain viable in the ∼ 10 MeV range for any x and will continue to play a role in our ability to move neutrino physics into the precision era.
ORIGINAL ARTICLE. DOI: 10.3904/kjim.2010.25.4.356. Usual Dose of Simvastatin Does Not Inhibit Plaque Progression and Lumen Loss at the Peri-Stent Reference Segments after Bare-Metal Stent Implantation: A Serial Intravascular Ultrasound Analysis
Background/Aims: The aim of this study was to assess the effects of a usual dose of simvastatin (20 mg/day) on plaque regression and vascular remodeling at the peri-stent reference segments after bare-metal stent implantation.

Methods: We retrospectively investigated serial intravascular ultrasound (IVUS) findings in 380 peri-stent reference segments (184 proximal and 196 distal to the stent) in 196 patients (simvastatin group, n = 132 vs. non-statin group, n = 64). Quantitative volumetric IVUS analysis was performed in 5-mm vessel segments proximal and distal to the stent.

Results: IVUS follow-up was performed at a mean of 9.4 months after stenting (range, 5 to 19 months). No significant differences were observed in the changes in mean plaque plus media (P&M) area, mean lumen area, and mean external elastic membrane (EEM) area from post-stenting to follow-up at both proximal and distal edges between the simvastatin and non-statin groups. Although lumen loss within the first 3 mm from each stent edge was primarily due to an increase in P&M area rather than a change in EEM area, and lumen loss beyond 3 mm from each stent edge was due to a combination of increased P&M area and decreased EEM area, no significant differences in changes were observed in P&M, EEM, and lumen area at every 1-mm subsegment between the simvastatin and non-statin groups.

Conclusions: A usual dose of simvastatin does not inhibit plaque progression and lumen loss and does not affect vascular remodeling in peri-stent reference segments in patients undergoing bare-metal stent implantation.
INTRODUCTION
Stent-edge and reference-segment changes are composed of the evolution of plaque and/or vessel area, which can be visualized with intravascular ultrasound (IVUS) before and after stenting [1][2][3][4][5][6][7]. Serial IVUS examination of the plaque is particularly valuable because it can reveal the mechanism of plaque evolution with relative precision.
Recent trials have demonstrated that lipid-lowering therapy with statins improves clinical outcomes [8,9] and reduces the progression of atherosclerosis [10]. The beneficial effects of statins, beyond their lipid-lowering actions, mostly rely on their anti-inflammatory properties [11]. Simvastatin has also been shown to inhibit smooth muscle cell proliferation [12].
To the best of our knowledge, few data are available regarding the effects of statins on plaque regression and vascular remodeling in peri-stent reference segments. In the present study, we assessed the effects of a usual dose of simvastatin on plaque regression and vascular remodeling in peri-stent reference segments after bare-metal stent (BMS) implantation, using serial IVUS observations. Our hypothesis was that a usual dose of simvastatin would not affect plaque regression and vascular remodeling in peri-stent reference segments after BMS implantation.
METHODS

Study population
From January 2004 through December 2005, 196 patients who were treated with BMS implantation under the guidance of IVUS at Chonnam National University Hospital, Gwangju, Korea, were analyzed retrospectively. The patients were divided into two groups: the simvastatin group (n = 132) and the non-statin group (n = 64). In the simvastatin group, a 20 mg/day schedule of simvastatin was started just after stent implantation and maintained through the follow-up period without discontinuation.
Among 392 peri-stent reference segments, 12 segments proximal to the stent edge were excluded because of their ostial location. Therefore, 380 peri-stent reference segments were available for analysis, which consisted of 184 segments proximal to the stent edges and 196 segments distal to the stent edges.
Cases of stent thrombosis, ostial stenting, far distal stenting with < 2.5 mm of reference diameter, and inadequate IVUS quality were excluded from the analysis. The protocol was approved by the institutional review board of Chonnam National University Hospital. Hospital records of patients were reviewed to obtain clinical and demographic variables.
Laboratory analysis
In all patients, serum was collected before stent implantation for measurement of lipid profiles and high-sensitivity C-reactive protein. All laboratory values were measured after an overnight fast. The serum levels of total cholesterol, low-density lipoprotein-cholesterol, high-density lipoprotein-cholesterol, and triglycerides were measured using standard enzymatic methods. High-sensitivity cardiac C-reactive protein reagent (Beckman Coulter, Fullerton, CA, USA) was used for the quantitative determination of C-reactive protein in serum samples on a fully automated IMMAGE® Immunochemistry System (Beckman Coulter), which utilizes proven rate nephelometry methodologies to provide specific, reproducible, quantitative protein results. Serum lipid profiles and high-sensitivity C-reactive protein were measured at baseline and at follow-up.
Stent implantation procedure
Patients received BMS implantation for de novo lesions in native coronary arteries having a reference diameter between 2.5 and 4.0 mm. Stent implantation was performed as previously described [13]. If residual stenosis occurred after stent implantation, adjunctive balloon angioplasty using a balloon with the same size as, or a larger size than, the stent was performed.
Quantitative coronary angiography (QCA)
Angiograms were analyzed with a validated QCA system (Phillips H5000 or Allura DCI program; Philips Medical Systems, Best, The Netherlands). Using the outer diameter of a contrast-filled catheter as the calibration standard, the minimal lumen diameter and reference diameter were measured in diastolic frames from orthogonal projections.
In-stent restenosis
Patients were examined for in-stent restenosis during the follow-up period. Angiographic restenosis was defined as ≥ 50% stenosis in the stented segment, including peri-stent reference segments within 5 mm from each stent edge, at follow-up, or at least a 50% loss of the original gain in the minimal luminal diameter.
IVUS imaging protocol
IVUS examinations were performed at post-stenting and at follow-up after intra-coronary administration of 300 µg nitroglycerin using a commercially available IVUS system (Boston Scientific Corporation/SCIMed, Minneapolis, MN, USA). This system allows for digital storage of pullback sequences. The IVUS catheter was advanced distally to > 5 mm from the distal stent edges, and imaging was performed using retrograde pullback at an automatic pullback speed of 0.5 mm/sec proximally to > 5 mm from the proximal stent edges.
IVUS analysis
We performed IVUS analysis over the entire 5-mm segments proximal and distal to the stent edges. Both proximal and distal vessel segments were divided into 1-mm subsegments and analyzed. Using planimetry software (TapeMeasure; INDEC Systems Inc., Mountain View, CA, USA), volumetric analysis was performed for each subsegment. External elastic membrane (EEM) and lumen areas were measured, and the plaque plus media (P&M) area (EEM minus lumen area) and plaque burden (P&M area divided by EEM area) were calculated from each cross-sectional slice and expressed as mean values (the sum of the values measured at the 1-mm subsegments divided by 5). Area changes (Δ values) for each measurement were calculated as follow-up minus post-stenting values.
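As an illustration of this bookkeeping, the short sketch below computes the mean areas and Δ values for one stent edge (the area values are hypothetical; TapeMeasure itself would export the per-slice measurements):

    import numpy as np

    def ivus_means(eem, lumen):
        # eem, lumen: areas (mm^2) for the five 1-mm subsegments of one edge.
        eem, lumen = np.asarray(eem, float), np.asarray(lumen, float)
        pm = eem - lumen      # plaque plus media (P&M) area per slice
        burden = pm / eem     # plaque burden per slice
        return eem.mean(), lumen.mean(), pm.mean(), burden.mean()

    post = ivus_means([15.1, 14.8, 14.2, 13.9, 13.5], [8.2, 8.0, 7.7, 7.5, 7.2])
    fup = ivus_means([15.0, 14.7, 14.0, 13.6, 13.2], [7.6, 7.4, 7.1, 6.9, 6.6])
    deltas = [f - p for f, p in zip(fup, post)]  # follow-up minus post-stenting
    print(deltas)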
Statistical analysis
SPSS version 15.0 (SPSS Inc., Chicago, IL, USA) was used for all analyses. Continuous variables are presented as mean ± 1 SD and were compared using paired or unpaired Student t tests, or a nonparametric Wilcoxon test if the normality assumption was violated. Discrete variables are presented as percentages and relative frequencies; comparisons were conducted using a chi-square test or Fisher's exact test, as appropriate. A p value < 0.05 was considered statistically significant.
RESULTS

Baseline characteristics and changes in serum lipid profiles and high-sensitivity C-reactive protein
No significant differences in patient demographic variables and medications, except for statin use, were observed (Table 1). At follow-up, total cholesterol, low-density lipoprotein-cholesterol, and triglyceride levels had decreased significantly, and the high-density lipoprotein-cholesterol level had increased significantly, in the simvastatin group as compared to the non-statin group.
High-sensitivity C-reactive protein levels were also significantly lower in the simvastatin group as compared to the non-statin group during follow-up (Table 2).
QCA results and restenosis
No significant differences in baseline coronary angiographic findings and procedural results were observed between the simvastatin group and the non-statin group (Table 3). At follow-up, binary in-stent restenosis was present in 16% of the simvastatin group (21/132) and 20% of the non-statin group (13/64), and repeat revascularization was performed in 14% of patients in the simvastatin group (18/132) and 17% in the non-statin group (11/64). However, these differences were not significant (p = 0.3 and p = 0.4, respectively).
Although lumen loss within the first 3 mm from each stent edge was primarily due to an increase in P&M area rather than a change in EEM area, and lumen loss beyond 3 mm from each stent edge was due to a combination of increased P&M area and decreased EEM area, no significant differences were observed in the changes in P&M, EEM, and lumen area at every 1-mm subsegment between the simvastatin and non-statin groups (Table 5).
DISCUSSION
The results of this study demonstrate that usual-dose simvastatin therapy does not affect disease progression (plaque increase and lumen loss) or vascular remodeling in peri-stent reference segments in patients undergoing BMS implantation.
Several IVUS studies have demonstrated the effects of statins on plaque regression and vessel remodeling. Suzuki et al. [14] reported that plaque area decreased by 12% in patients who received a statin, as compared to a 13% increase in plaque area in those who did not. Additionally, vessel area was not enlarged in patients treated with a statin, but showed positive remodeling in those who had plaque progression without a statin. Jensen et al. [15] reported a significant reduction in the lesion EEM area (4.6%) and in the lesion plaque area (5.9%), but no change in reference measurements, after 12 months of simvastatin treatment. As a result, the remodeling index was reduced by simvastatin from 1.01 ± 0.12 to 0.95 ± 0.09. Petronio et al. [16] reported that therapy with 20 mg/day of simvastatin did not prevent intimal hyperplasia or in-stent restenosis, but promoted atherosclerotic regression at both stented and non-stented sites in normocholesterolemic patients who underwent coronary stenting. However, the main objective of these previous studies [14-16] was not to assess the effects of statins on plaque regression and vascular remodeling in peri-stent reference segments in patients who underwent BMS implantation. In the present study, we sought to assess the effects of a usual dose of simvastatin (20 mg/day) on plaque regression and vascular remodeling at peri-stent reference segments; therapy with 20 mg/day of simvastatin did not regress plaque at either the proximal or distal edges from post-stenting to follow-up, and did not prevent in-stent restenosis at a mean of 9.4 months of follow-up after stenting. The response of adjacent reference segments not covered by the stent is of major interest. Several studies have demonstrated lumen loss adjacent to the stent edge after BMS implantation. Hoffmann et al. [2] performed serial IVUS analysis at the most normal-looking cross section within a 10-mm segment proximal or distal to the stent, at another slice midway between this cross section and the stent edge, and at the proximal or distal edge of the stent. In that study, the more distant reference segments showed a greater degree of remodeling (decrease in EEM area) than of tissue growth, whereas sections sampled closer to the edge of the stent showed a similar amount of remodeling and a greater degree of cellular proliferation (increase in P&M area) compared with the more distant reference segments. Mudra et al. [3] reported no relevant progression of disease adjacent to the stent, despite a considerable plaque burden within the reference segments. Weissman et al. [4] analyzed reference segments 10 mm proximal and distal to the stent at the index procedure and at follow-up. In that study, lumen loss in the adjacent reference segments was most pronounced within the first 2 mm of the stent edge, and lumen loss within 2 mm of the stent edge was due primarily to intimal proliferation; in contrast, beyond 2 mm, negative remodeling contributed more to lumen loss. In the present study, lumen loss within the first 3 mm from each stent edge was primarily due to an increase in P&M area rather than a change in EEM area, while lumen loss beyond 3 mm from each stent edge was due to a combination of increased P&M area and decreased EEM area.
The present study has some limitations. First, it is retrospective and therefore subject to the limitations inherent to this type of clinical investigation. Second, this single-center study included only a small number of patients. Third, we did not assess changes in EEM, lumen, and plaque areas more distant from the stent edges, i.e., in segments not affected by the stent or balloon. Fourth, we did not compare the effects of low-dose statin therapy with moderate- or high-dose statin therapy on plaque regression and vascular remodeling. Therefore, further prospective, randomized, large-scale studies are needed.
In conclusion, a usual dose of simvastatin does not inhibit plaque progression and lumen loss and does not affect vascular remodeling in peri-stent reference segments in patients undergoing BMS implantation.
Conflict of interest
No potential conflict of interest relevant to this article was reported.
Bayesian analysis for a class of beta mixed models
Generalized linear mixed models (GLMM) encompass a large class of statistical models with a vast range of application areas. GLMM extend linear mixed models by allowing for different types of response variables. The three most common data types are continuous, counts, and binary, and the standard distributions for these response types are Gaussian, Poisson, and Binomial, respectively. Despite this flexibility, there are situations where the response variable is continuous but bounded, such as rates, percentages, indexes, and proportions. In such situations the usual GLMMs are not adequate because the bounds are ignored, and the beta distribution can be used instead. Likelihood and Bayesian inference for beta mixed models are not straightforward and demand a computational overhead. Recently, a new algorithm for Bayesian inference called INLA (Integrated Nested Laplace Approximation) was proposed. INLA allows computation of many Bayesian GLMMs in a reasonable amount of time, allowing extensive comparison among models. We explore Bayesian inference for beta mixed models by INLA. We discuss the choice of prior distributions, sensitivity analysis, and model selection measures through a real data set. The results obtained from INLA are compared with those obtained by an MCMC algorithm and a likelihood analysis. We analyze data from a study on a life quality index of industry workers, collected according to a hierarchical sampling scheme. Results show that the INLA approach is suitable and fast for fitting the proposed beta mixed models, producing results similar to the alternative algorithms while handling modeling alternatives more easily. Sensitivity analysis, measures of goodness of fit, and model choice are discussed.
Introduction
There has been increasing interest in the class of Generalized Linear Mixed Models (GLMM). One possible reason for such popularity is that GLMM combine Generalized Linear Models (GLM) (Nelder and Wedderburn, 1972) with Gaussian random effects, adding flexibility to the models and accommodating complex data structures such as hierarchical, repeated measures and longitudinal designs, which typically induce extra variability and/or dependence.
GLMMs can also be viewed as a natural extension of Mixed Linear Models (Pinheiro and Bates, 2000), allowing a wider class of probability distributions for the response variable. Common choices are Gaussian for continuous data, Poisson and Negative Binomial for count data, and Binomial for binary data. These three situations cover the majority of applications within this class of models. Examples can be found in Breslow and Clayton (1993) and Molenberghs and Verbeke (2005).
Despite this flexibility, there are situations where the response variable is continuous and bounded above and below, such as rates, percentages, indexes and proportions. In such situations the traditional GLMM based on the Gaussian distribution is not adequate, since the bounds are ignored. An approach that has been used to model this type of data is based on the beta distribution. The beta distribution is very flexible, with a density function that can display quite different shapes, including left or right skewed, symmetric, J-shaped, and inverted J-shaped (da Silva et al., 2011).
Regression models for independent and identically distributed beta variables were proposed by Paolino (2001), Kieschnick and McCullough (2003) and Ferrari and Cribari-Neto (2004). The basic assumption is that the response follows a beta law whose expected value is related to a linear predictor through a link function, similarly to GLMs. Extensions of the basic model were considered by Cepeda (2001), Cepeda and Gamerman (2005) and Simas et al. Beta regression is implemented in the betareg package (Cribari-Neto and Zeileis, 2010) for the R environment for statistical computing (R Development Core Team, 2012). Extended functionality is available for bias correction, recursive partitioning and latent finite mixtures (Grün et al., 2012). Mixed and mixture models are further discussed by Verkuilen and Smithson (2012).
For non-independent data, developments have been proposed in time series analysis by McKenzie (1985), Grunwald et al. (1993) and Rocha and Cribari-Neto (2008). Figueroa-Zúñiga et al. (2013) extended the model of Ferrari and Cribari-Neto (2004) using a Bayesian approach. The authors considered two distributions for the random effects (Gaussian and t-Student) and several specifications for the prior distributions of the parameters in the model. Bonat et al. (2013) extend the beta model proposed by Ferrari and Cribari-Neto (2004) with the inclusion of Gaussian random effects, under a GLMM approach. Likelihood inference is based on two algorithms: the first uses the Laplace approximation to solve the integral in the likelihood function, and the second uses an algorithm proposed by Lele et al. (2010) called data cloning. The authors analyzed two real data sets with different structures for the random effects. Likelihood inference under GLMM is non-trivial because of the presence of random effects, and several procedures have been proposed. Approximate likelihood methods are adopted by Breslow and Clayton (1993) and a Monte Carlo approach is adopted by Chen et al. (2002); both come with a computational overhead. A popular approach is based upon a Bayesian framework using Markov Chain Monte Carlo (MCMC) algorithms, with attempts to set non-informative priors. Figueroa-Zúñiga et al. (2013) perform Bayesian inference for beta mixed models using an MCMC algorithm. The Bayesian approach is attractive but requires specification of prior distributions, which is not straightforward, in particular for variance components. The main goal of this paper is to describe Bayesian inference for beta mixed models using INLA. We discuss the choice of prior distributions and measures of model comparison. Results obtained from INLA are compared to those obtained using a Bayesian MCMC algorithm and a purely likelihood-based analysis. The modelling is illustrated through the analysis of a real dataset from a study on a life quality index of industry workers, with data collected according to a hierarchical sampling scheme. Additional care is given to the choice of prior distributions for the precision parameter of the beta law.
The structure of this paper is as follows. In Section 2 we define the Bayesian beta mixed model, and in Section 3 we describe the Integrated Nested Laplace Approximation (INLA). In Section 4 the model for the motivating example is introduced and the results of the analyses are presented. We close with concluding remarks in Section 5.
Bayesian beta mixed model
Bayesian beta mixed regression extends the beta regression model, as proposed by Ferrari and Cribari-Neto (2004), by adding Gaussian distributed random effects to the linear predictor. Consider the response Y_ij from group i = 1, ..., N and replication j = 1, ..., n_i, and let Y_i be the n_i-dimensional vector of measurements on the i-th group. Given a q-dimensional vector b_i of random effects, the responses Y_ij are conditionally independent with density function given by

f(y; μ_ij, φ) = [Γ(φ) / (Γ(μ_ij φ) Γ((1 − μ_ij)φ))] y^{μ_ij φ − 1} (1 − y)^{(1 − μ_ij)φ − 1}, 0 < y < 1,   (1)

where 0 < μ < 1 is the mean of the response variable and φ > 0 is a dispersion parameter. Let g(·) be a known link function with g(μ_ij) = x_ij^T β + z_ij^T b_i, where x_ij and z_ij are vectors of covariates with dimensions p and q, respectively, and β is a p-dimensional vector of unknown regression parameters. Assume that b_i ∼ N(0, Q(τ)^{-1}), where the precision matrix Q(τ) depends on parameters τ. The model specification is completed by assuming prior distributions for all parameters in the model, say θ = (β, φ, τ).
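A direct transcription of the density in Eq. (1) (a sketch using the mean/precision shape parameters a = μφ and b = (1 − μ)φ):

    import numpy as np
    from scipy.special import gammaln

    def beta_logpdf(y, mu, phi):
        # Log-density of Eq. (1); equivalent to a Beta(mu*phi, (1-mu)*phi) law.
        a, b = mu * phi, (1.0 - mu) * phi
        return (gammaln(phi) - gammaln(a) - gammaln(b)
                + (a - 1.0) * np.log(y) + (b - 1.0) * np.log1p(-y))

    print(beta_logpdf(0.7, 0.65, 50.0))  # example evaluation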
A flat improper prior is assumed for the intercept β_0. All other components of β are assumed to be independent N(0, σ²) with fixed precision σ⁻² = 0.0001. For the parameters in the precision matrix (τ) we follow the approach adopted by Fong et al. (2010), based on Wakefield (2009). The basic idea is to specify a range for the more interpretable marginal distribution of b_i and use this to derive the specification of the prior distributions. The approach is based on the result that if b|τ ∼ N(0, τ⁻¹) and τ ∼ Ga(a_1, a_2), then b ∼ t(0, a_2/a_1, 2a_1). To decide upon a prior, we define a range for a generic random effect b, specify the degrees of freedom d, and then solve for a_1 and a_2. The solution for a generic 95% range, say (−R, R), is a_1 = d/2 and a_2 = (d/2) R²/t²_{d,0.975}, where t_{d,0.975} is the 97.5% quantile of the Student t distribution with d degrees of freedom. In the linear mixed effects model b is directly interpretable, while for beta models it is more appropriate to think in terms of the marginal distribution of exp(b). A flat Ga(a_1 = 1, a_2 = 0.001) prior is chosen for φ, as no result is known to aid its specification. The sensitivity to prior assumptions on the precision parameters of the beta distribution and of the random effects is potentially a delicate issue in beta mixed models. Figueroa-Zúñiga et al. (2013) consider several choices of prior distributions for φ, but no sensitivity analysis is performed. The idea here is to specify this Gamma distribution as the default choice and then to assess the sensitivity.
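This recipe is easy to automate. The sketch below (function name is ours) reproduces the Ga(0.5, 0.001487) prior used in Section 4 when d = 1 and R = log 2, which suggests that prior corresponds to a 95% marginal range of exp(b) ∈ (1/2, 2):

    from math import log
    from scipy.stats import t

    def gamma_prior(R, d, coverage=0.95):
        # Ga(a1, a2) prior for a precision tau such that the marginal of b is
        # t(0, a2/a1, d) with central `coverage` range (-R, R).
        a1 = d / 2.0
        tq = t.ppf(0.5 + coverage / 2.0, df=d)
        a2 = a1 * (R / tq) ** 2
        return a1, a2

    print(gamma_prior(log(2.0), d=1))  # ~ (0.5, 0.001487)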
The Markov Chain Monte Carlo (MCMC) technique is the standard approach to fit such models (Figueroa-Zúñiga et al., 2013). In practice, this approach comes with a wide range of problems in terms of convergence and computational time. Moreover, the implementation itself can be problematic, especially for end users who might not be experts in programming. Software platforms for fitting generic random effects models via MCMC include JAGS (Plummer, 2003), among others. Roos and Held (2011) develop a general sensitivity measure based on the Hellinger distance to assess the sensitivity of posterior distributions with respect to changes in the prior distributions of precision parameters. Such a measure is adopted here to assess the sensitivity to the choice of the prior distribution for φ and for the precision of the random effects.
Let

S(pri, post) = H(post(θ), post'(θ)) / H(pri(θ), pri'(θ))

denote the relative change in the posterior distribution with respect to changes in the prior distribution, as measured by the Hellinger distance H, where pri(θ) is the prior distribution, post(θ) is the corresponding posterior distribution, and pri'(θ) and post'(θ) are an alternative prior and its posterior. The Hellinger distance between two densities f and g is

H(f, g) = √(1 − BC(f, g)), with BC(f, g) = ∫ √(f(θ) g(θ)) dθ,

where BC is the Bhattacharyya coefficient. The Hellinger distance is symmetric and measures the discrepancy between the two densities. It takes its maximal value of 1 if BC is equal to 0, and it equals 0 if and only if both densities are equal. The maximal value occurs whenever the density f assigns probability 0 to every set to which the density g assigns positive probability, and vice versa. For a more detailed description see Roos and Held (2011).
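The Hellinger distance between two Gamma priors, for instance, can be computed by quadrature, as in this sketch (function name and the example hyperparameters are ours):

    import numpy as np
    from scipy.stats import gamma

    def hellinger_gamma(a1, b1, a2, b2):
        # H = sqrt(1 - BC) between Ga(a1, rate=b1) and Ga(a2, rate=b2).
        lo = min(gamma.ppf(1e-10, a1, scale=1/b1), gamma.ppf(1e-10, a2, scale=1/b2))
        hi = max(gamma.ppf(1 - 1e-10, a1, scale=1/b1), gamma.ppf(1 - 1e-10, a2, scale=1/b2))
        x = np.linspace(max(lo, 1e-12), hi, 200001)
        bc = np.trapz(np.sqrt(gamma.pdf(x, a1, scale=1/b1) * gamma.pdf(x, a2, scale=1/b2)), x)
        return np.sqrt(max(1.0 - bc, 0.0))

    print(hellinger_gamma(1.0, 0.001, 1.0, 0.01))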
Income and life quality of Brazilian industry workers
The Brazilian industry sector worker's life quality index (IQVT, acronym in Portuguese) is computed by combining 25 indicators from eight thematic areas: housing, health, education, integral health and safety in the workplace, development of skills, value attributed to work, corporate social responsibility, and stimulus to engagement and performance. The index is constructed following the same premises as the United Nations Human Development Index. The resulting values lie in the unit interval, and the closer to one, the higher the worker's life quality in the industry.
A poll was conducted by the Industry Social Service (Serviço Social da Indústria, SESI) in order to assess workers' life quality in Brazilian industries. The survey included 365 companies in eight Brazilian federative units, out of the total of 26 states plus the Federal District. The data analysis considers two covariates related to the companies whose impact on the IQVT is of particular interest, namely, company average income and size. The first is given by the total of salaries divided by the number of workers, expressing the capacity to fulfill individual basic needs such as food, health, housing and education. The second can be indirectly related to the capability of managing and providing quality of life.
The relevant question for the study, and the main goal here, is to specify a suitable model to assess the influence of these two covariates on the IQVT. The federative unit where the company is based is expected to influence the index, considering varying local legislation, taxation and further economic and political conditions. This is accounted for by including a random effect, regarding the eight states as a sample of the federative units.
Relations between the IQVT and the covariates income and size, and with the states included in the survey, are shown in Figure 1, which suggests all are potentially relevant. The income is expressed in logarithmic scale centered around the average. The Bayesian beta random effects model for the IQVT is given by

Y_ij | b_i ∼ Beta(μ_ij φ, (1 − μ_ij) φ),
g(μ_ij) = (β_0 + b_{i1}) + β_1 M_ij + β_2 S_ij + (β_3 + b_{i2}) income_ij,

where M_ij and S_ij indicate medium and small company size, so that β_0 is associated with large size companies, with differences β_1 and β_2 to the medium and small sizes, respectively. Random effects include an intercept b_{i1} and a slope b_{i2} associated with the covariate income. The model parameters are the regression coefficients (β_0, β_1, β_2, β_3), the random effects covariance parameters (τ²_1, τ²_2, ρ) and the dispersion parameter φ of the beta law. The logit link function g(μ_ij) = log{μ_ij/(1 − μ_ij)} is used. The specification of the Bayesian beta mixed model is completed by specifying the prior distributions for the model parameters. Following the remarks in Section 2, a flat improper prior is assumed for β_0. All other components of β are assumed to be independent zero-mean N(0, σ²) with fixed precision σ⁻² = 0.0001. For the parameter φ we assumed a flat Ga(a_1 = 1, a_2 = 0.0001) distribution. For the parameters indexing the random effects covariance Σ = Q⁻¹, we assumed Q ∼ W_q(r, S), where W_q(r, S) denotes the Wishart distribution, with r and S chosen as in the univariate case. Specifically, we assumed r = 5 and a diagonal S with elements 0.001487 and 0.005, reducing to a Ga(a_1 = 0.5, a_2 = 0.001487) prior when fitting the random intercept model.
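To fix ideas, a draw from the random-intercept version of this model can be simulated as follows (a sketch with hypothetical parameter values, not the fitted ones):

    import numpy as np

    rng = np.random.default_rng(2014)

    def simulate_iqvt(beta, tau, phi, n_states=8, n_firms=45):
        # beta = (b0, b1_medium, b2_small, b3_income); tau = random-effect precision.
        b = rng.normal(0.0, tau ** -0.5, size=n_states)           # state intercepts
        income = rng.normal(0.0, 1.0, size=(n_states, n_firms))   # centered log income
        size = rng.integers(0, 3, size=(n_states, n_firms))       # 0 large, 1 medium, 2 small
        eta = (beta[0] + b[:, None] + beta[1] * (size == 1)
               + beta[2] * (size == 2) + beta[3] * income)
        mu = 1.0 / (1.0 + np.exp(-eta))                           # logit link
        return rng.beta(mu * phi, (1.0 - mu) * phi)

    y = simulate_iqvt(beta=(0.8, -0.1, -0.2, 0.15), tau=60.0, phi=90.0)
    print(y.shape, float(y.mean()))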
A sequence of nested sub-models is defined in order to assess the effects of interest. Model 1 is a null model with just the intercept. Models 2 and 3 add the covariates size and income, in this order. Models 4 and 5 add random effects related to the states to the intercept and to the income coefficient, respectively; the latter is the largest model considered here. Large size companies are taken as the baseline for the categorical covariate size. Table 1 shows the posterior means for the model parameters and the model fitting measures given by the deviance information criterion (DIC), the log marginal likelihood (LML) and the conditional predictive ordinate (CPO), all obtained with INLA.
Results for models 1-3 confirm the relevance of the covariates. The increasing posterior mean of φ, from 53.92 in model 1 to 72.16 in model 3, confirms that the covariates explain part of the data variability. The random intercept clearly improves the model fit, capturing the variability of the IQVT among the states. The addition of a random slope did not prove relevant. All model fitting measures favor model 4, for which we report further analysis. Figure 2 shows posterior distributions from INLA and from an MCMC output obtained with JAGS, running three chains of 500,000 samples with a burn-in of 10,000 iterations and saving one of every 100 simulations. We also compared the INLA results with likelihood point estimates and profile intervals. Figure 2 suggests that all approaches produced similar results; this is also confirmed by the results in Table 2.

We conclude the analysis by assessing the sensitivity to the choice of the prior distributions. Following Roos and Held (2011), we investigate the sensitivity by measuring the Hellinger distance, focusing only on the parameters φ and τ since the choice of prior is standard for the regression coefficients. Table 3 shows the hyperparameters obtained for priors with Hellinger distances from the default prior of about 0.1 to 0.6, together with the corresponding Hellinger distances between priors, between posteriors, and the sensitivity measure S(·, ·). The distributions are plotted in Figure 3.
The results show that the models are more sensitive to the choice of prior for the parameter τ. For the parameter φ, even with the rather large distance of 0.6 between prior distributions, the corresponding distance between the posterior distributions is substantially reduced, to 0.1628. The same distance between priors for the parameter τ reduces only to 0.2827. The posterior distributions in Figure 3 are similar for all prior distributions. Comparatively, the parameter τ is more sensitive to the choice of prior distribution, yet still with similar posterior distributions even for large differences between priors. Table 4 compares summary results of the models under the default prior and under the prior with the largest Hellinger distance from the default. For φ the posterior mean changed from 93.37 to 90.18, a difference of only 3.53%, whereas for τ it changed from 63.65 to 41.86, a difference of 52.05%. Despite such differences, the practical conclusions on the effects of relevance are unchanged, since the changes are very small for the regression parameters, and the random effects remain relevant in the model.
Conclusion
This paper reports the results of a Bayesian analysis of beta mixed models, comparing results obtained with the INLA method with those obtained with an MCMC algorithm and a purely likelihood-based analysis. Emphasis is placed on the specification of, and the sensitivity to, priors for the beta dispersion parameter and the precision of the random effects.
The results of the analysis of the life quality index for workers in the Brazilian industrial sector indicate that company size and average income are both relevant for quality of life, as is the effect of the states, captured by adding a random intercept to the regression model. The analysis consisted of fitting several models, with one final model chosen according to three criteria of model comparison: LML, DIC and CPO. All criteria point to the same model choice. Summary results obtained with INLA are similar to the ones obtained with MCMC and likelihood analysis, showing that the substantial reduction in computational burden makes INLA an attractive choice for inference, allowing several modeling alternatives to be investigated.
The sensitivity analysis was conducted for the dispersion parameters in the Bayesian beta mixed model using the Hellinger distance as a measure of the discrepancy between prior and posterior distributions. Our results show that inference for the beta dispersion parameter φ is insensitive to the choice of prior. Slightly more sensitive is the parameter τ related to the random effects, but the overall results and conclusions remain unchanged under the alternative priors.
Asymmetric gain–loss reference dependence and attitudes toward uncertainty
This paper characterizes a model of reference dependence, where a state-contingent contract (act) is evaluated by its expected value and its expected gain–loss utility. The expected utility of an act serves as the reference point; hence, gains (resp., losses) occur when the act provides an outcome that is better (worse) than expected. The utility representation is characterized by a belief regarding the state space and a degree of reference dependence; both are uniquely identified from behavior. We establish a link between this type of reference dependence and attitudes toward uncertainty. We show that loss aversion and reference dependence are equivalent to max–min and concave expected utility.
Introduction
In many circumstances, a decision maker (DM) may evaluate an uncertain prospect not only in absolute terms but also in relative relation to some reference point. Kahneman and Tversky (1979) first introduced the notion of reference dependence, in the seminal Prospect theory, to explain experimental violations of expected utility. Within Prospect theory, deviations from the reference point are weighted by a gain-loss value function, which has the feature, referred to as loss aversion, that losses have more negative value than equal sized gains have positive value.
A different resolution of empirical deviations from expected utility proposes models of multiple priors, in particular MaxMin Expected Utility (MMEU). MMEU, axiomatized by Gilboa and Schmeidler (1989) as an explanation of the Ellsberg Paradox, considers a DM who holds a family of beliefs regarding the likelihood of events. She evaluates uncertain prospects by the minimum expected utility consistent with any of her beliefs. As such, a MMEU DM displays uncertainty aversion (or ambiguity aversion), the feature that she prefers to minimize her exposure to uncertainty.
At a purely intuitive level, there seems to be a connection between loss aversion and uncertainty aversion; both behaviors characterize some form of pessimism in comparison with a subjective expected utility (SEU) maximizer. A loss averse DM places more weight on the utility of "bad" events but leaves the probabilities undistorted, whereas an uncertainty averse DM places more weight on the probability of "bad" events but leaves the utilities undistorted. We show in this paper that this connection is more than superficial; there exists a formal connection between reference dependence and attitude toward uncertainty. In particular, we axiomatize a simple class of reference-dependent preferences, called asymmetric gain-loss (AGL) preferences, which can be equivalently represented by a MMEU functional. Within our framework, loss aversion and uncertainty aversion produce identical choice data.
AGL preferences
In addition to formalizing the connection between reference dependence and ambiguity aversion, AGL preferences provide a simple model of endogenous reference dependence. Our object of choice is a state-contingent contract, or act, which is an assignment of consumption (in utility terms) to each state of the world. AGL preferences evaluate an act according to two components: consumption utility and gain-loss utility. The expected consumption utility of an act is as in the standard SEU model, where the decision maker holds a subjective belief, μ, over the state space. Her expected consumption utility of an act f : S → R is

E_μ[f] = Σ_{s∈S} μ(s) f(s),   (1.1)

where S is the state space and s ∈ S is a generic state. The AGL DM takes this assessment of acts both as the reference point by which gains and losses are measured, and as the baseline level of utility on which gains and losses act as distortions. The main result of this paper is the behavioral characterization of asymmetric gain-loss preferences, which are preferences that can be represented by the functional

V(f) = E_μ[f] + λ Σ_{s∈S} μ(s) max{E_μ[f] − f(s), 0},   (1.2)

in which the first term is the expected consumption utility and the second is the expected gain/loss utility. The first term is the DM's subjective expected utility without any reference considerations, and the second captures the reference effects. When λ < 0, the DM is loss averse and receives a utility penalty when the realized utility falls short of her expectation. This utility penalty is linearly scaled by λ. In our representation results, all the elements are identified from choice behavior: μ and λ are identified uniquely.
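For finite state spaces, Eq. (1.2) is immediate to implement. The following sketch (the function name is ours, not the paper's) is also used below to recheck the bidding example:

    import numpy as np

    def agl_value(f, mu, lam):
        # AGL utility of act f under belief mu: E[f] + lam * E[(E[f] - f)^+].
        # With lam < 0, shortfalls relative to the expectation are penalized.
        f, mu = np.asarray(f, float), np.asarray(mu, float)
        ev = float(mu @ f)
        return ev + lam * float(mu @ np.maximum(ev - f, 0.0))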
Reference point formation
There are two alternative views on the formation of endogenous reference points. In the first, the DM forms a reference point before making a choice, based on the set of options that she faces. 3 As such, her beliefs about her own actions will affect the reference point, leading these papers to generally require an equilibrium condition to account for the mutual relationship between reference points and choices.
In the second, the DM's chosen action completely determines the reference point. Thus, each element of a choice set is associated with its own reference point. 4 AGL preferences fall into this second category. In particular, the AGL representation is closely related to the notion of choice acclimating personal equilibrium defined by Kőszegi and Rabin (2007), except that there the domain of uncertainty is objective risk. A main contribution of our paper is that, by considering the case where probabilities are subjective, we show how it is possible to simultaneously identify both the reference attitude and the beliefs of the DM.
It is worth noting that when the reference point is defined using equilibrium conditions, as in Kőszegi and Rabin (2006), the joint identification of beliefs and reference effects is generally not possible. The feedback loop between choices and the reference point can lead to intransitivity of the revealed preference, as shown in Gul and Pesendorfer (2006).
Because of this identification problem, and the intrinsic complexity surrounding equilibrium conditions, the latter notion or reference point determination has proved more suitable for applications. Indeed, many applications use AGL preferences: Lange and Ratan (2010) explore how reference dependence can increase the optimal bid in sealed bid auctions (to be more in line with empirical evidence); Herweg et al. (2010) show that loss aversion can explain prevalence of binary incentive schemes (i.e., bonuses) in moral hazard environments; Abeler et al. (2011) show that, in an effort provision experiment, expectations-based reference dependence best explains their data; Karle and Peitz (2014) consider the competition of differentiated firms when buyers exhibit loss aversion. Each of the above-mentioned papers assumed a kinked, piecewise linear gain/loss function and assumed the reference point was the expected consumption utility-exactly the characterization given here. Our work provides the foundational restrictions for such consumer behavior, and shows that beliefs regarding uncertainty and reference effects can be jointly identified.
A simple example of AGL preferences
We employ the following numerical example to explain the intuition behind the representation, and show how asymmetric gain-loss preferences can explain different types of behavior regarding uncertainty.
Consider the environment of a seller selling a single good to a buyer who makes a take-it-or-leave-it offer. The value to the seller is ṽ, which the buyer believes takes the values {5, 3, 2} with probability .2, .3, and .5, respectively. The buyer has an independent private value for the object given by v = 10. The buyer will submit her offer, b, and the seller will accept or reject the offer. The seller will accept any offer which (weakly) exceeds her value. The buyer's utility associated with the bid b is given by

u(b) = (10 − b)·1{ṽ ≤ b}.

It is obvious that the optimal bid will always be in {5, 3, 2}. If the buyer is a risk-neutral expected utility maximizer, her optimal bid solves

max_{b∈{5,3,2}} (10 − b)·μ(ṽ ≤ b).

The optimal bid is b_RN = 3, which has an expected utility of 5.6 before the bid is placed. Now suppose that the buyer has gain-loss preferences: in addition to the expected value she wants to avoid losses, so she subtracts any expected losses from the expected consumption utility to determine the valuation (this corresponds to the parametrization λ = −1). A loss for her is any outcome where her ex-post utility is worse than the expected value; therefore, outcomes in which she does not obtain the item are considered losses.
Her expected AGL utility of making the bid b = 3, taking into account her gain-loss preferences, is

V(3) = 5.6 + λ·E_μ[max{5.6 − u(3), 0}] = 5.6 − (.2)(5.6) = 4.48.

The optimal AGL bid, however, is b_AGL = 5, which provides a constant utility, and hence an AGL utility of 5.
When the buyer in this simple take-it-or-leave-it example takes into consideration expected gains and losses in addition to the standard expected utility, she is better off increasing her bid. Intuitively, she sacrifices her payoff in good outcomes (where she obtains the item) in order to decrease the chance of bad outcomes (not obtaining the item). While her payoff is smaller contingent on obtaining the good, the outcome is favorable more often and she increases her ex-ante utility.
Now consider instead an ambiguity-averse MMEU buyer whose set of priors C consists of the distortions of μ = [.2, .3, .5] described in Sect. 3. As such, the buyer chooses a bid according to

max_{b∈{5,3,2}} min_{ν∈C} E_ν[u(b)].

It is straightforward to check that the optimal bid is b_MMEU = 5. Therefore, an AGL bidder with μ = [.2, .3, .5] and λ = −1 and an MMEU bidder with multiple priors given by C will make the same optimal bid. While at first glance this connection may seem contrived, in fact, the AGL and MMEU DMs choose identically not only in this bidding game but in all decision problems: they have identical preferences. We show in Sect. 3 that AGL behavior can always be equivalently described by a MMEU decision maker.
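The equivalence can be checked numerically. The sketch below is ours, not the paper's: it assumes the distorted priors μ_E from the representation section (μ_E(s) = (1 + λμ(E^c))μ(s) on E and (1 − λμ(E))μ(s) off E) and verifies that, for each bid, the worst-case expected utility over these priors coincides with the AGL value.

```python
import itertools
import numpy as np

mu, lam = np.array([0.2, 0.3, 0.5]), -1.0      # belief over seller values (5, 3, 2)
seller_values = np.array([5.0, 3.0, 2.0])

def act(b):
    """Buyer's state-contingent utility from bidding b (object worth 10)."""
    return np.where(seller_values <= b, 10.0 - b, 0.0)

def agl(f):
    e = mu @ f
    return e + lam * (mu @ np.maximum(e - f, 0.0))

def priors():
    """The distorted priors mu_E, one per non-trivial event E of positive states."""
    n = len(mu)
    for r in range(1, n):
        for E in itertools.combinations(range(n), r):
            inE = np.isin(np.arange(n), E)
            yield np.where(inE, (1 + lam * mu[~inE].sum()) * mu,
                                (1 - lam * mu[inE].sum()) * mu)

def mmeu(f):
    return min(nu @ f for nu in priors())

for b in (5, 3, 2):
    print(b, round(agl(act(b)), 4), round(mmeu(act(b)), 4))
# prints: 5 5.0 5.0 / 3 4.48 4.48 / 2 2.0 2.0 -- both criteria are maximized at b = 5
```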
Structure of the paper
The rest of the paper is structured as follows. Section 2 provides an axiomatic characterization of the preferences, discusses the concept of the alignment of acts, which is instrumental for the endogenous determination of a reference point, and formally defines the utility representation. Section 3 explores the link between gain-loss and ambiguity attitudes. Section 4 puts forth comparative statics results. Section 5 contains a literature review. All proofs are contained in the "Appendices."
Axiomatization
In this section, we formally present the choice environment and a set of axioms which prove to be necessary and sufficient for the representation presented later in this section. Let S = {s_1, s_2, …, s_n} be a finite set of states of the world that represent all possible payoff-relevant contingencies for the DM; any E ⊆ S is called an event. Define 𝓔 = 𝒫(S)\{∅, S} as the set of all non-trivial events. Denote by F = ℝ₊^S the set of all acts, that is, functions f : S → ℝ₊ (endowed with the standard Euclidean topology). We interpret the act f as providing the payoff f(s) in state s ∈ S and assume it is the utility received by the DM when f is chosen and s is realized. (We tacitly assume the decision maker's cardinal utility has already been identified via standard means, i.e., the examination of preferences over objective lotteries. One could add a second stage of objective randomization into acts, à la Anscombe and Aumann (1963), but this would require additional notation, and the elicitation of utility values is not central to the model.) Take the mixture operation on F as the standard pointwise mixture, where for any f, g ∈ F and α ∈ [0, 1],

(αf + (1−α)g)(s) = αf(s) + (1−α)g(s) for all s ∈ S.

Abusing notation, any c ∈ ℝ₊ can be identified with the constant act c(s) = c for all s ∈ S. Let F_c ≅ ℝ₊ be the set of constant acts. Preferences on F are denoted by the binary relation ≿; ≻ and ∼ represent, respectively, the asymmetric and symmetric components of ≿. For each f ∈ F, if there is some c_f ∈ F_c such that f ∼ c_f, then call c_f the certainty equivalent of f. Before we can specify the behavioral restrictions on preference that correspond to the AGL utility representation, we need to consider some particular structures in the choice domain.
Balanced pairs of acts
A particularly important type of act for studying AGL preferences is given by those that provide perfect hedges against uncertainty. Hedging gets rid of uncertainty, and therefore it also removes all possible gain-loss considerations from the act. Call a pair of acts (f, f̄) balanced if they provide a perfect hedge and are indifferent to each other. (Siniscalchi (2009) calls a pair of acts that provide perfect hedging complementary acts; we strengthen that definition by further requiring the acts to be indifferent.) The importance of balanced acts is that eliminating subjective gain-loss considerations allows an analyst to identify beliefs from preferences.

Definition 1 Two acts f and f̄ are balanced if f ∼ f̄ and, for any states s, s′ ∈ S,

f(s) + f̄(s) = f(s′) + f̄(s′).

If there exists e_f ∈ F_c such that e_f = ½f(s) + ½f̄(s) for all s ∈ S, we call e_f the hedge of f. (f, f̄) is referred to as a balanced pair, and f̄ is a balancing act of f (and vice versa).
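As a worked illustration (our own numbers, reusing the bidding example of Sect. 1.4): for f = (0, 7, 7) with μ = (.2, .3, .5), the act f̄ = (11.2, 4.2, 4.2) satisfies f(s) + f̄(s) = 11.2 in every state, so ½f + ½f̄ is the constant act e_f = 5.6, the hedge of f. Under the AGL functional with λ = −1, both acts are worth 5.6 − 1.12 = 4.48, so f ∼ f̄ and (f, f̄) is a balanced pair: hedging the two acts against each other removes all gain-loss considerations and leaves exactly the expected consumption utility 5.6.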
When the notation f̄ is used, it is always in reference to the balancing act of f ∈ F. The conditions imposed on preferences below guarantee that c_f and e_f are unique and well defined for each f.
Act alignment: separating positive and negative states
Balanced acts will provide a behavioral way of separating gains and losses. We require that when the outcome in state s is considered a gain for f, the outcome in state s is considered a loss for f̄. This is a natural requirement given that f and f̄ provide a perfect hedge to the DM. Hence, f̄ has the exact opposite gain-loss composition of f. For an act, define positive states as those states that deliver gains, and negative states as those states that deliver losses.
Definition 2 Let (f, f̄) be a balanced pair. Say s ∈ S is a positive state for f if f(s) ≥ f̄(s), and a negative state for f if f̄(s) ≥ f(s). If a state is both positive and negative (i.e., f(s) = f̄(s)), say s is a neutral state for f.

Any balanced pair of acts induces a set of partitions, each of which splits the state space into two events: one event that contains only positive states for f ({s ∈ S | f(s) ≥ f̄(s)}) and one event that contains only negative states for f ({s ∈ S | f̄(s) ≥ f(s)}). We use the convention that neutral states can be labeled as either positive or negative (but not both). When there are no neutral states, each act has a unique way of partitioning the states into positive and negative. These partitions associated with each act are called the alignment of the act. We use the convention that the alignment of the act is represented by the event E that contains the positive states (the complement contains the negative states), rather than by the partition {E, E^c}.
For every E ∈ 𝓔, there is a set of acts that are aligned with E.
Definition 4 Given any event E ⊂ S, define F_E to be the set of acts whose positive states are contained in E, i.e.,

F_E = {f ∈ F : f(s) ≥ f̄(s) for all s ∈ E and f̄(s) ≥ f(s) for all s ∈ E^c},

where f̄ is the balancing act of f.
Note that any constant act is its own balancing act, and therefore, constant acts are aligned with all partitions of the state space. It is useful to consider acts that have only one alignment, which are called single-alignment acts. These acts are important because they are acts where small perturbations on outcomes do not change the alignment.
Definition 5 Let (f, f̄) be a balanced pair. Then f is a single-alignment act if there is no s ∈ S with f(s) = f̄(s).
If the event E represents an alignment of f, every subset of E or of E^c is called a non-overlapping event. These are events whose states are either all positive or all negative for f, so there is no overlap between positive and negative states within them. Non-overlapping events provide a way of specifying situations where there is no tradeoff between positive and negative states, only tradeoffs within one type of state.

Definition 6 Given f ∈ F, an event F ⊂ S is a non-overlapping event for f if every state in F is aligned in the same way.

It follows that F ⊆ E or F ⊆ E^c, where E is an alignment of f.

Abusing terminology, we say that F is non-overlapping for E whenever F is non-overlapping for all f ∈ F_E.
With these definitions in mind, we can now specify the behavioral restrictions that are necessary and sufficient to be represented by the AGL functional, as given by Eq. (1.2).
Standard axioms
The first three conditions, A1-A3, are standard axioms in the literature of choice under uncertainty.

A1. (Weak Order). ≿ is complete and transitive.

A2. (Continuity). For all f ∈ F, the sets {g ∈ F : g ≿ f} and {g ∈ F : f ≿ g} are closed.

A3. (Strict Monotonicity). If f(s) > g(s) for some s and f(s) ≥ g(s) for all s, then f ≻ g.
New axioms: mixture conditions
The standard subjective expected utility model from Anscombe and Aumann (1963) is characterized by some version of A1-A3, plus the independence axiom. Independence requires that f ≿ g if and only if αf + (1−α)h ≿ αg + (1−α)h for any h ∈ F and any α ∈ (0, 1).
The independence axiom does not hold for AGL preferences because it does not allow for gains and losses to be evaluated differently; convex combinations of acts can change the gain-loss composition of acts, therefore changing the assessments as well. AGL preferences relax independence, but impose three consistency requirements for mixtures of acts.
The first new axiom states that as long as the alignment of acts remains the same when mixing, independence is preserved. If two acts have the same alignment, taking any mixture of them does not change the composition of gains and losses, so the tradeoff between gains and losses should not change.

A4. (Alignment Independence). For any E ∈ 𝓔, any f, g, h ∈ F_E, and any α ∈ (0, 1), f ≿ g if and only if αf + (1−α)h ≿ αg + (1−α)h.
When considering the families of acts that have the same alignment, Axiom 4 implies the regular independence axiom holds, imposing an expected utility representation over such acts. Further, because constant acts are mutually aligned with all other acts, this provides a connection between the different expected utility representations.
Under the full independence axiom, the preference between an α-mixture of f and h or of f and h′ would depend only on the preference between h and h′. Here, on the other hand, mixing acts may change valuations in nonlinear ways. This could happen if either (i) the mixture changes the alignment of states, or (ii) the mixture provides an opportunity to hedge by improving loss states and worsening gain states (or the opposite, if the DM is gain seeking).

The next axiom, Local Mixture Consistency, states that these are the only two sources of nonlinearity: Axiom A5 states that the effect of adding a small amount of noise, entirely contained within the positive or negative alignment of an act, depends only on the expected consumption utility of the noise. Because of continuity, a sufficiently small amount of noise will not change the alignment of a single-alignment act. Moreover, if the noise is added only to states that are positive or only to states that are negative, the mixture does not allow for the possibility of hedging (since any decrease in loss utility in one negatively aligned state must be offset in another negatively aligned state).
A5. (Local Mixture Consistency). For any single-alignment acts f ∈ F_E and g ∈ F_{E′}, any event F which is non-overlapping for both E and E′, and any h, h′ ∈ F that coincide with their hedges outside of F, there exists ᾱ ∈ (0, 1) such that for all α ∈ (ᾱ, 1),

αf + (1−α)h ≿ αf + (1−α)h′ if and only if αg + (1−α)h ≿ αg + (1−α)h′.

To see how the mechanics capture the intuition above: because h and h′ are equal to their hedge except on a subset of E ∩ E′, they do not contribute any gain/loss considerations except on F. Since all of the variation of h and h′ takes place within F, αf + (1−α)h and αf + (1−α)h′ will be aligned the same way in any state s ∉ F. Of course, we might still worry that mixing f with h or h′ will alter the alignment differently within F. However, because F ⊂ E and f is a single-alignment act, for s ∈ F, f(s) is strictly better than the expected consumption utility of f. Hence, mixing with very little weight on h or h′ will still not distort the alignment. All of the same considerations hold for g, indicating that the final preference over mixtures depends only on h and h′.
The last axiom imposes a consistency condition on mixtures of acts when the roles of gains and losses are reversed. Intuitively, the condition requires that the effect of mixing h with f is the opposite of the effect of mixing h with f̄. This condition is called Antisymmetry (A6).

Since f is indifferent to g, the DM has a strict preference between the mixtures only if h changes the gain-loss component of utility. Consider the case where the DM is loss-biased. Then αh + (1−α)f ≻ αh + (1−α)g whenever f "smooths" out the consumption of h more than g does: f provides a better hedge against the loss states of h. Of course, when f and g are replaced with f̄ and ḡ, the losses and gains are reversed, and so f̄ now exaggerates the loss states of h, breaking the indifference in the opposite direction. The same intuition applies when h is replaced with h̄. Notice that when h is constant, there is no room for hedging, and so the mixtures with f and g will be indifferent (as dictated by Alignment Independence).
Representation results
This section provides the main representation results of the paper. Theorem 2.1 introduces the AGL representation as characterized by the above axioms. This section also outlines important preliminary results that highlight the role of particular axioms and elucidate the relation between the AGL representation and other decision theoretic models.
Theorem 2.1 The preference ≿ satisfies A1-A6 if and only if there exist a probability μ in the interior of Δ(S) and a constant λ with |λ| < (1 − min_{s∈S} μ(s))⁻¹ such that ≿ is represented by

V(f) = E_μ[f] + λ E_μ[max{E_μ[f] − f(s), 0}].   (AGL)

Moreover, μ and λ are unique.
The bound on λ is a consequence of strict monotonicity: increasing the payoff in any state must increase the valuation of the act. The bound ensures that the marginal increase in consumption utility outweighs any negative marginal change in gain/loss utility. The parameter λ captures the difference between the weight placed on gains and the weight placed on losses, and it is unique in the representation. An important application of the representation result of Theorem 2.1 is that it provides an index of reference dependence (λ) that is decoupled from risk attitudes and can be easily estimated. The fact that μ lies in the interior of Δ(S) is also a consequence of strict monotonicity.
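To see the role of the bound, here is a small numerical check of our own; it presumes the form of (AGL) and the bound as reconstructed above. With μ = (.2, .3, .5) the bound requires |λ| < 1/(1 − .2) = 1.25; taking λ = −1.3 violates it, and then raising the payoff in a gain state can lower the act's value, contradicting Strict Monotonicity.

```python
import numpy as np

def agl(f, mu, lam):
    f, mu = np.asarray(f, float), np.asarray(mu, float)
    e = mu @ f
    return e + lam * (mu @ np.maximum(e - f, 0.0))

mu = [0.2, 0.3, 0.5]
f, f_up = [10, 0, 0], [11, 0, 0]   # raise the payoff in the (gain) state s1

print(agl(f, mu, -1.0) < agl(f_up, mu, -1.0))   # True: monotone, bound respected
print(agl(f, mu, -1.3) < agl(f_up, mu, -1.3))   # False: bound violated, V decreases
```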
Sketch of the proof and preliminary results
The result is proven in two steps. First, Lemma 2.2 provides an SEU representation on each F_E, established by Axioms A1-A4 (that is, excluding Local Mixture Consistency and Antisymmetry), which can be extended to aggregate preferences across families of mutually aligned acts. Then, we utilize the properties of Axioms A5 and A6 to generate the final result.

Lemma 2.2 The preference ≿ satisfies A1-A4 if and only if there exists a family of probability distributions {μ_E}_{E∈𝓔} over S such that for every E ∈ 𝓔 and all f, g ∈ F_E,

f ≿ g if and only if E_{μ_E}[f] ≥ E_{μ_E}[g].

Moreover, the set {μ_E}_{E∈𝓔} is unique.
Every prior in {μ_E}_{E∈𝓔} may be different, and Alignment Independence does not impose any structure relating the priors. To derive the main result from the representation of Lemma 2.2, Local Mixture Consistency and Antisymmetry are used to guarantee that every prior in the set {μ_E}_{E∈𝓔} can be written as a function of one unique prior μ.

If we mix f with a small amount of noise (either the act h or h′), where the noisy acts exhibit variation only on an event F non-overlapping with E, then Local Mixture Consistency guarantees that the preference over the two mixtures depends only on the expected consumption utility of the noise. Hence, for all E ∈ 𝓔, the conditional distributions, conditioned on F non-overlapping with E, are the same for all μ_E. That is, for E and E′ ∈ 𝓔, μ_E(·|F) = μ_{E′}(·|F) whenever F is non-overlapping for E and E′. In fact, there is a single μ such that μ(s|F) = μ_E(s|F) under the same condition on F.
Since E is non-overlapping with itself, this last point implies that for any s, s′ ∈ E,

μ(s)/μ(s′) = μ_E(s)/μ_E(s′).

Then the distribution μ_E can be written in the following way:

μ_E(s) = γ⁺_E μ(s) for s ∈ E, and μ_E(s) = γ⁻_E μ(s) for s ∈ E^c,

where γ⁺_E represents how the original prior is perturbed on the positive states (i.e., E), and γ⁻_E represents how the original prior is modified on the negative states. Both γ⁺_E and γ⁻_E are positive numbers by monotonicity of ≿. Antisymmetry implies a particular relationship between distributions indexed by complementary alignments, namely that for any E, F ∈ 𝓔,

μ_E + μ_{E^c} = μ_F + μ_{F^c}.

Using this observation, we show that for s ∈ E the distributions μ_E and μ_{E\s} differ only on s; in particular, whenever s ∈ E ∩ F,

μ_E − μ_{E\s} = μ_F − μ_{F\s}.

Then, using these conditions on the family {μ_E}_{E∈𝓔}, we show that the difference between the distortion on positive states and on negative states is always the same constant:

γ⁺_E − γ⁻_E = λ for all E ∈ 𝓔.

Therefore, it is possible to characterize any μ_E as a function of μ and this constant λ, which captures the difference between the positive and negative distortions. That is,

μ_E(s) = (1 + λμ(E^c))μ(s) for s ∈ E, and μ_E(s) = (1 − λμ(E))μ(s) for s ∈ E^c.

Finally, this representation of μ_E is used to rewrite the representation from Lemma 2.2 in terms of μ, which yields the desired result.
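For instance (a numerical check of our own, continuing the bidding example and presuming the distortion formula above): with μ = (.2, .3, .5), λ = −1 and E = {s₂, s₃}, we get γ⁺_E = 1 + λμ({s₁}) = .8 and γ⁻_E = 1 − λμ(E) = 1.8, so μ_E = (.36, .24, .40), which indeed sums to 1. Evaluating the bid b = 3, whose payoff vector (0, 7, 7) lies in F_E, against this prior gives .24·7 + .40·7 = 4.48, exactly the AGL value computed in Sect. 1.4.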
Maxmin expected utility
According to Theorem 2.1, the DM who abides by the AGL axioms is probabilistically sophisticated but displays some reference effect. That is, she holds some unique belief, μ, regarding the state space, and evaluates each act according to this belief and her preferences for outcomes. Nonetheless, Lemma 2.2 states that the same preferences can be represented by a family of distributions, {μ_E}_{E∈𝓔}, each of which is a distortion of the original belief, μ. This alludes to a possible relationship between reference effects and attitudes toward uncertainty, which has classically been modeled by a DM who considers a (non-singleton) set of priors.
Definition 7 ≿ has an MMEU representation if there exists a convex set of priors C ⊆ Δ(S) such that

V_MM(f) = min_{ν∈C} E_ν[f]

represents ≿.
MMEU, axiomatized by Gilboa and Schmeidler (1989), is characterized by two key conditions: certainty independence and uncertainty aversion. Certainty independence requires that f ≿ g if and only if αf + (1−α)c ≿ αg + (1−α)c for all α ∈ [0, 1] and c ∈ F_c. Mixing two acts with a common constant act does not reverse the preference between them. Since constant acts are aligned with every E ∈ 𝓔, Alignment Independence implies certainty independence. Uncertainty aversion requires that for all f, g such that f ∼ g, αf + (1−α)g ≿ f for any α ∈ (0, 1). If uncertainty aversion is exchanged for uncertainty seeking, then the representation is a Maxmax representation, where the DM evaluates an act according to the prior that maximizes her expectation. It is clear from the representation that if λ ≤ 0 then the DM is uncertainty averse, and if λ ≥ 0 then she is uncertainty seeking.
Uncertainty aversion can be characterized as a preference for hedging, as hedging reduces the exposure to uncertainty. Moreover, hedging reduces the exposure to negative states. Pushing the utility value in each state closer to the average has more effect on the negative states (because of the loss bias) and hence weakly improves the act.
The formal connection is captured by the following result, which states that asymmetric gain-loss preferences always admit a Maxmin or Maxmax representation and that the set of priors C has a specific structure that is related to the distortion of the (unique) beliefs of the DM.

Theorem 3.1 Suppose ≿ admits an AGL representation (μ, λ). Then ≿ admits an MMEU representation (if λ ≤ 0) or a Maxmax representation (if λ ≥ 0) with set of priors C = conv({μ_E}_{E∈𝓔}), where μ_E(s) = (1 + λμ(E^c))μ(s) for s ∈ E and μ_E(s) = (1 − λμ(E))μ(s) for s ∈ E^c.

Theorem 3.1 has several implications. First, it shows that this form of reference dependence is always tied to a particular attitude toward uncertainty. So, preferences studied in this paper will always be either uncertainty averse or uncertainty seeking. Second, it gives a precise form to the belief distortion that takes place when gain-loss considerations affect a probabilistically sophisticated DM.
While every AGL representation can be faithfully captured within the MMEU framework, the converse is not true. In the AGL framework, the distorted beliefs keep the relative likelihoods of states unchanged among gains and among losses but, depending on the sign of λ, increase or decrease the total weight given to gains (and losses) proportionally to the baseline belief. This distortion is a function only of the degree of reference dependence, λ, and the baseline prior μ. In addition, Antisymmetry implies the set of priors is symmetric with respect to all hyperplanes (in the |S| − 1 dimensional simplex) which divide the state space into positive and negative states and which pass through the baseline prior. See Fig. 1; the dashed lines show such symmetries.

Intuitively, this additional symmetric structure imposed on MMEU stems from the fact that reference effects distort utility relative to a reference point. Hence, when translating the utility distortions of the AGL model into the equivalent probabilistic distortions, the symmetries around the baseline prior are preserved. An arbitrary convex set of priors would not necessarily admit such a baseline prior, and so could not be translated into a model of reference effects.
Concave expected utility
Alignment Independence imposes more structure than certainty independence, and therefore AGL also shares a connection to a class of ambiguity models outside of MMEU. In particular, any loss averse AGL preference is also a concave expected utility (cavEU) preference. CavEU is a capacity-based model, which considers all possible decompositions of an act into bets over events (where a bet of magnitude a_E ∈ ℝ₊₊ on E is an act that is constant on E and 0 off E, i.e., a_E·1_E, where 1_E is the characteristic function of E). The preference is cavEU if it can be represented by the concave integral introduced by Lehrer (2009).
Definition 8 ≿ has a cavEU representation if there exists a capacity v : 2^S → [0, 1] such that the concave integral

V_cav(f) = max{ Σ_E a_E v(E) : Σ_E a_E 1_E ≤ f, a_E ≥ 0 }

represents ≿.

The concave integral returns the maximum value over all such decompositions, when bets are aggregated according to the capacity v. Lehrer and Teper (2015) show that ≿ is cavEU if and only if it satisfies A1-A3 plus uncertainty aversion, independence with respect to the constant act 0, and co-decomposable independence. This last requirement states that for every non-bet act f there exist a bet a_E and an act f′ such that (i) f = αa_E + (1−α)f′ for some α ∈ (0, 1), and (ii) ≿ satisfies independence over {αa_E + βf′ | α, β ∈ ℝ₊}.
Theorem 3.2 Suppose ≿ admits an AGL representation (μ, λ) with λ < 0. Then ≿ admits a cavEU representation with v : E ↦ min_{F∈𝓔} μ_F(E).

The fact that AGL preferences admit cavEU representations stems from the fact that each act f ∈ F_E can always be decomposed into a bet on E and another act in F_E. Since all these acts share the same alignment, independence holds within the convex cone they generate. As with the set of priors in the MMEU representation, the capacity v is characterized by the lower envelope of the distorted beliefs arising from Lemma 2.2. Of course, this must be so, since these functionals represent the same preferences! Note that cavEU and MMEU are not nested models; AGL preferences reside in their non-trivial intersection.
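As a quick illustration (ours, again with μ = (.2, .3, .5) and λ = −1, and presuming the distortion formula of Sect. 2): the minimum of μ_F(E) over F is attained at F = E, since priors aligned with E downweight E the most, so v(E) = (1 + λμ(E^c))μ(E). For λ = −1 this collapses to v(E) = μ(E)²; e.g., v({s₃}) = .25 and v({s₂, s₃}) = .64, both well below the additive values .5 and .8, reflecting the DM's uncertainty aversion.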
Comparative gain/loss attitudes
This section advances comparative statics results relating behavior to elements of the AGL representation. For an act f, recall that the hedge, e_f, is the constant act which provides the expected consumption utility in every state; the constant equivalent, c_f, is the constant act which provides the expected total utility in every state, in other words taking gain/loss considerations into account.

A natural measure for the degree and direction of reference effects is the gap between e_f and c_f, the hedge and the constant equivalent. For a loss averse DM, the difference between the hedge and the constant equivalent is how much, in utility terms, she is willing to sacrifice to avoid having to feel a loss. In the standard SEU model, e_f = c_f, so the SEU model is the baseline case for reference effects.

Definition 9 Let ≿ be a preference over F. Say ≿ is gain-biased if for all f ∈ F, c_f ≿ e_f. Say ≿ is loss-biased if for all f ∈ F, e_f ≿ c_f.

Remark 1 An AGL preference with representation (μ, λ) is loss-biased if and only if λ ≤ 0, and gain-biased if and only if λ ≥ 0.

Remark 1 follows immediately from the observation that E_μ[f] = e_f and examination of the representing functionals.
Problematically, however, the hedge and constant equivalent of an act depend on the DM's beliefs, so if we want to compare two DMs' degrees of reference dependence we must disentangle reference dependence from beliefs. To do this, we define f∨f̄, the join of a balanced pair (f, f̄), as the act that gives the DM the better outcome of f and f̄ in each s ∈ S.

Definition 10 Given any balanced pair (f, f̄), define the act f∨f̄, the join of (f, f̄), as

(f∨f̄)(s) = max{f(s), f̄(s)} for all s ∈ S.

From the AGL representation, gain-loss utility depends on how much the act deviates state by state from e_f. The join records these deviations: f∨f̄ = e_f + |f − e_f|. Thus the hedge of the join, e_{f∨f̄}, measures the average deviation of f from e_f. Then, to capture reference dependence behaviorally across DMs, we focus on acts that have the same hedge: if acts have different hedges, the reference effects can be confounded by the beliefs.
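Concretely (our numbers from Sect. 1.4): for f = (0, 7, 7) with f̄ = (11.2, 4.2, 4.2) and μ = (.2, .3, .5), the join is f∨f̄ = (11.2, 7, 7) = e_f + |f − e_f|, and its hedge is e_{f∨f̄} = .2(11.2) + .8(7) = 7.84 = e_f + E_μ[|f − e_f|]. The gap e_{f∨f̄} − e_f = 2.24 thus measures the act's average spread around its hedge.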
The intuition behind our comparative notion of "more loss-biased" is that, holding the hedge constant, the DM prefers an act f with smaller expected losses. Conversely, a gain-biased DM prefers acts with larger gains. Since we want to consider acts that have the same hedge, the comparative notions of "more gain-biased" and "more loss-biased" involve a possibly different act for each DM: f for DM 1 and g for DM 2. We use the notation e^i_f to denote the hedge of f for DM i. If e^1_f = e^2_g and, in addition, e^1_{f∨f̄} = e^2_{g∨ḡ}, then f (evaluated according to μ_1) has the same dispersion as g (evaluated according to μ_2). So we say DM 1 is more loss-biased than DM 2 if exposure to the same dispersion, keeping the expected consumption utility the same, produces a harsher utility penalty.
Definition 11
Given two preference orders ≿₁ and ≿₂, say that ≿₁ is more loss-biased than ≿₂ (and ≿₂ is more gain-biased than ≿₁) if for any f, g with e^1_f = e^2_g and e^1_{f∨f̄} = e^2_{g∨ḡ}, and any c ∈ F_c, f ≿₁ c implies g ≿₂ c and f ≻₁ c implies g ≻₂ c.

Theorem 4.1 Let ≿_i admit an AGL representation given by (μ_i, λ_i) for i = 1, 2. Then ≿₁ is more loss-biased than ≿₂ if and only if λ₁ ≤ λ₂.
These additional equivalences stem from the fact that when the DMs hold the same belief, e^1_f = e^2_f for every f ∈ F. In such circumstances, the degree of loss bias is equivalent to the comparative notion of ambiguity aversion from Ghirardato and Marinacci (2002). Remark 2 furthers this link: whenever both DMs are gain-biased, or both are loss-biased, the notion of loss bias is consistent with the representation of comparative ambiguity aversion derived from Gilboa and Schmeidler (1989) (the more ambiguity averse DM has a larger set of priors). This observation establishes a clear connection between the idea of "loss aversion" that has been prevalent since prospect theory and uncertainty aversion.
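For a quick numerical illustration of Theorem 4.1 (our own, with a common belief μ = (.2, .3, .5) and the act f = (0, 7, 7)): a DM with λ₁ = −1 has constant equivalent c_f = 5.6 − 1.12 = 4.48, while a DM with λ₂ = −0.5 has c_f = 5.6 − 0.56 = 5.04. The more loss-biased DM (λ₁ ≤ λ₂) accepts a smaller sure amount in exchange for the same uncertain act.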
These comparative statics results establish an unexplored link between the absolute and comparative notions of gain or loss bias and existing notions of uncertainty aversion, a link which is worth exploring further. The initial motivation for studying uncertainty was the Ellsberg (1961) idea that DMs are not able to formulate unique probabilities over uncertain events. Many models with multiple priors have been developed to capture so-called "Ellsbergian behavior." Nonetheless, even if the DM is able to form a unique prior, gain-loss considerations can appear to contaminate her prior in a way that gives rise to behavior embodied by some multiple-priors model. Hence, for AGL preferences, a probabilistically sophisticated DM can appear to have multiple priors due to gain-loss asymmetry.
Related literature
This paper links reference dependence and attitudes toward ambiguity. We show that the notion of choice acclimating personal equilibrium (CPE) from Kőszegi and Rabin (2007), where the reference point is the expectation of consumption utility, provides a clean way to link these two concepts in the domain of choice under uncertainty. Bell (1985), Loomes and Sugden (1986) and Kőszegi and Rabin (2006) also provide various models where the DM is loss averse with respect to a reference point given by her expected consumption utility of an uncertain prospect.
In many decision theory models, the status quo has been interpreted as a reference point. Giraud (2004a), Masatlioglu and Ok (2005), Sugden (2003), Sagi (2006), Rubinstein and Salant (2007), Apesteguia and Ballester (2009), Ortoleva (2010), Riella and Teper (2014) and Masatlioglu and Ok (2013) provide models of reference dependence where the reference point is exogenously given. Along with Kőszegi and Rabin (2006) and Kőszegi and Rabin (2007), other papers that tackle the problem of endogenous reference point determination are Giraud (2004b), Sarver (2011), Ok et al. (2014), and Werner and Zank (2017). The approach in Ok et al. (2014) investigates the reference point determination problem under a very general framework, in which no equilibrium condition is needed to characterize reference dependence. Nonetheless, in their framework it is impossible to identify reference points and reference effects uniquely.
In Gul (1991), outcomes of an (objective) lottery are considered either a disappointment or an elation depending on whether they fall below or above the certainty equivalent. The DM suffers a utility penalty when an outcome is considered disappointing. In contrast, we deem an outcome disappointing if it is dispreferred to the hedge, rather than to the certainty equivalent. Blavatskyy (2010) extends this to a domain where certainty equivalents need not exist. Dillenberger (2010) shows that Gul's disappointment averse preferences satisfy negative certainty independence and so admit a cautious expected utility representation à la Cerreia-Vioglio et al. (2015). The latter paper also shows that cautious expected utility is, in the objective risk domain, the analogue of MMEU in the subjective uncertainty domain. The connection between AGL and MMEU is therefore the subjective counterpart to the connection between disappointment aversion and cautious EU; indeed, AGL preferences (with λ < 0) satisfy negative certainty independence. In a similar spirit to our paper, Masatlioglu and Raymond (2016) provide a complete characterization of CPE within the domain of objective risk. They show that CPE is exactly the intersection of quadratic preferences and rank-dependent expected utility preferences.
For AGL preferences, the evaluation of acts depends on the state by state variation of the act. Although some papers have studied attitudes toward variation in the context of risk and uncertainty, none relates such attitudes to reference dependence. In the risk domain, Chambers (1998, 2004) measure attitudes toward risk, which depend on the expectation of the lottery and a risk index of the lottery that depends on the variation of the distribution.
From the vantage of attitudes toward ambiguity, AGL preferences are a clear special case of mean-dispersion preferences: Grant and Polak (2013) axiomatize a very general model of mean-dispersion preferences, where an act is evaluated by the representation

V(f) = μ − ρ(d),

where μ is the expected consumption utility of f with respect to a given probability, d is the vector of state-by-state utility deviations from the mean, and ρ(·) is a measure of (aversion to) dispersion.
Many well-known families of preferences, such as Choquet EU (Schmeidler 1989), Maxmin EU (Gilboa and Schmeidler 1989), invariant biseparable preferences (Ghirardato et al. 2004), variational preferences (Maccheroni et al. 2006), and Vector EU (Siniscalchi 2009), belong to this family of preferences. Our paper (under loss aversion) corresponds to the specification ρ = λE(min{d(s), 0}). The interest in studying this special case is twofold. First, mean-dispersion preferences are so general that it is predominantly not possible to identify the DM's baseline prior (although some authors do provide various additional restrictions that facilitate identification). The additional structure imposed in this paper precipitates not only the identification of beliefs, but also the comparative statics results presented in Sect. 4. By taking a stand on the way dispersion affects utility (i.e., via linear loss aversion), we can more thoroughly relate the parameters of the representation to behavioral patterns. The second motivation is the ubiquity of AGL (or very similar) preferences in applications. As outlined in Sect. 1.2, linear loss aversion with respect to expected consumption utility has proven to be a popular way of representing reference dependence in applied work. This paper, therefore, precisely outlines the tacit assumptions made in such applications.

(Negative certainty independence, adapted to our domain, requires for f ∈ F, c ∈ F_c, g ∈ F and α ∈ (0, 1) that f ≿ c implies αf + (1−α)g ≿ αc + (1−α)g. When λ ≤ 0 the AGL functional is concave, so V(αf + (1−α)g) ≥ αV(f) + (1−α)V(g) ≥ αV(c) + (1−α)V(g) = V(αc + (1−α)g), where the last equality is a consequence of A4: independence is preserved over similarly aligned acts, and in particular over constant acts.)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
A Appendix: Proofs of the main results
This section provides proofs for the main results. The proofs for the auxiliary lemmas and propositions are in Appendix B.
Proof of Lemma 2.2
This is an immediate consequence of the Herstein and Milnor (1953) Mixture Space Theorem. Fix E ∈ 𝓔. ≿ satisfies Alignment Independence, F_E is convex, and it includes all the constant acts. Therefore, ≿ and F_E define a mixture space, so the conditions for an SEU representation of ≿ restricted to F_E are satisfied. Therefore, there exists a cardinally unique expected utility function U_E : ℝ → ℝ and a unique probability distribution μ_E : 2^S → [0, 1] such that for all f, g ∈ F_E,

f ≿ g if and only if E_{μ_E}[U_E(f)] ≥ E_{μ_E}[U_E(g)].

By strict monotonicity, μ_E(s) > 0 for all s, so every state is non-null. Moreover, by strict monotonicity, U_E = id_ℝ clearly represents ≿ over the constant acts, and therefore such a normalization is without loss. Since any constant c ∈ F_c is in F_E for all E ∈ 𝓔, and every f ∈ F has a certainty equivalent c_f (and ≿ is complete and transitive), the representations across different alignments are linked through the constant acts and can be aggregated, proving the result.
Proof of Theorem 2.1
Start with the representation of Lemma 2.2, which is guaranteed by A1-A4. Hence, there is a set of probability distributions over S indexed by the non-trivial events: {μ_E}_{E∈𝓔}.
Step 1: Show that for every E, E′ the conditional distributions of μ_E and μ_{E′}, conditional on any event F which is non-overlapping for E and E′, are the same. And show that there is a unique distribution μ over S that generates all the conditionals.
Proof In Appendix B.
By Proposition A.1, given E ∈ 𝓔, for any s, s′ ∈ E, μ(s)/μ(s′) = μ_E(s)/μ_E(s′). This holds if and only if there is γ ∈ ℝ₊₊ with μ_E(s) = γμ(s) for all s ∈ E. Then the distribution μ_E can be written in the following way:

μ_E(s) = γ⁺_E μ(s) for s ∈ E, and μ_E(s) = γ⁻_E μ(s) for s ∈ E^c,

where γ⁺_E represents how the original prior is perturbed on the positive states (i.e., E), and γ⁻_E represents how the original prior is modified on the negative states (both positive by monotonicity).
Lemma A.2 Given E, E′ ∈ 𝓔 with E ≠ E′, if μ_E(s) = μ_{E′}(s) for some s ∈ S, then μ_E = μ_{E′}.
Proof In Appendix B.
Step 2: Adding Antisymmetry yields consistency between the distributions induced on F_E and F_{E^c}: the average probability attached to each s is always the same for the pair (μ_E, μ_{E^c}), or in other words, the distortions on E and E^c exactly balance out.

Proposition A.3 Let ≿ satisfy A1-A6. Then for any E, F ∈ 𝓔,

μ_E + μ_{E^c} = μ_F + μ_{F^c}.
Proof In Appendix B.
Lemma A.4 Let ≿ satisfy A1-A6. Then for all E ∈ 𝓔,

μ = ½(μ_E + μ_{E^c}),

where μ_E is the distribution from Lemma 2.2 that represents preferences over F_E.
Proof This is an immediate consequence of Propositions A.1 and A.3.
From Lemma A.4, further conclude that γ⁺_E + γ⁻_{E^c} = 2 for all E ∈ 𝓔. A more relevant implication is that μ uniquely pins down e_f for every f ∈ F. Recall that e_f is defined as the constant with e_f = ½f(s) + ½f̄(s) for all s ∈ S. Let f ∈ F_E, and hence f̄ ∈ F_{E^c}.
Proposition A.5 Let ≿ satisfy A1-A6. Then for every f ∈ F, the hedge satisfies e_f = E_μ[f].
Proof In Appendix B.
Step 3: Show that the distributions induced on F_E and F_F can differ only through the states that E and F do not have in common.
Proposition A.6 Let ≿ satisfy A1-A6. Then for any E, F ∈ 𝓔 such that |E|, |F| ≥ 2 and s ∈ E ∩ F,

μ_E − μ_{E\s} = μ_F − μ_{F\s}.

Step 4: Based on the previous results, provide a characterization of the distortions γ⁺_E and γ⁻_E as functions of μ and μ_E. Further, show that for any particular E ∈ 𝓔, the difference between the negative and the positive distortion is always the same constant.
Proposition A.7 If ≿ satisfies A1-A6, then for any E, F ∈ 𝓔, γ⁺_E − γ⁻_E = γ⁺_F − γ⁻_F.

The next step is to characterize λ.

Proposition A.8 If ≿ satisfies A1-A6, then there is a constant λ = γ⁺_E − γ⁻_E such that for every E ∈ 𝓔,

μ_E(s) = (1 + λμ(E^c))μ(s) for s ∈ E, and μ_E(s) = (1 − λμ(E))μ(s) for s ∈ E^c.
Proof In Appendix B.
Step 5: Substitute the characterization of μ_E from Proposition A.8 into the representation from Lemma 2.2.

The representation follows from the observation that, for any f ∈ F_E,

E_{μ_E}[f] = E_μ[f] + λ E_μ[max{E_μ[f] − f(s), 0}].

Step 6: Establish the claims on μ and λ.

The uniqueness of μ and of every μ_E follows from the uniqueness in the SEU model. λ = γ⁺_E − γ⁻_E is unique as well, from the definition of the γ's. Finally, the bound on λ is given by the following proposition.

Proposition A.9 If ≿ satisfies A1-A6 and admits the representation above, then |λ| < (1 − min_{s∈S} μ(s))⁻¹.

Proof In Appendix B.
Proof of Theorem 3.1 Let C = conv({μ_E}_{E∈𝓔}). Then, from Eq. (6.3), we know that for any f ∈ F_E, the prior in C that minimizes (for λ ≤ 0; respectively, maximizes, for λ ≥ 0) the expected value of f is μ_E itself, so that min_{ν∈C} E_ν[f] = E_{μ_E}[f] (respectively, max_{ν∈C} E_ν[f] = E_{μ_E}[f]). In either case, this is exactly (AGL).
Proof of Theorem 3.2
That ≿ satisfies A1-A3, uncertainty aversion, and independence with respect to the constant act 0 is immediate. So it remains to show that ≿ satisfies co-decomposable independence. Fix some non-bet act f ∈ F_E and assume without loss of generality that E includes all neutral states for f. It is clear that the bet a_E = e_f·1_E (equal to e_f on E and 0 otherwise) is also in F_E. For each α ∈ (0, 1), let g be the act with f = αa_E + (1−α)g. For s ∈ E, g(s) ≥ e_g; likewise, for s ∈ E^c, g(s) < e_g. Hence g ∈ F_E, so by Alignment Independence ≿ satisfies independence over {αa_E + βg | α, β ∈ ℝ₊}. Finally, to characterize v, notice that the capacity is fully determined by its valuation over all bets and that this valuation is unique. Further, by Theorem 3.1, V_MM(a_E) = min_{F∈𝓔} μ_F(E)·a_E for any bet a_E. Hence v : E ↦ min_{F∈𝓔} μ_F(E) induces the same ranking over bets as V_MM(·), and therefore represents ≿.
Proof of Theorem 4.1 Use the notation that superscripts denote the DM; e.g., e^i_f is the hedge of f for DM i.

(i) ⇒ (ii). Let ≿₁ be more loss-biased than ≿₂. Consider any f, g ∈ F such that e^1_f = e^2_g and e^1_{f∨f̄} = e^2_{g∨ḡ}. By Proposition A.5 and the definition of the join, f∨f̄ = e_f + |f − e_f|, so

e^1_{f∨f̄} = e^1_f + E_{μ_1}[|f − e^1_f|] and e^2_{g∨ḡ} = e^2_g + E_{μ_2}[|g − e^2_g|].

Since e^1_f = e^2_g, this implies E_{μ_1}[|f − e^1_f|] = E_{μ_2}[|g − e^2_g|]. Suppose further that, for any c ∈ F_c, f ≿₁ c implies g ≿₂ c. Clearly, this is true if and only if V_2(g) ≥ V_1(f). Because expected losses equal half the expected absolute deviation from the hedge, we can write V_i as in (AGL) as V_i(·) = e^i + (λ_i/2)E_{μ_i}[|· − e^i|], so V_2(g) ≥ V_1(f) becomes

e^2_g + (λ_2/2)E_{μ_2}[|g − e^2_g|] ≥ e^1_f + (λ_1/2)E_{μ_1}[|f − e^1_f|].

Canceling the common terms yields λ_2 ≥ λ_1, that is, λ_1 ≤ λ_2.

(ii) ⇒ (i). Let λ_1 ≤ λ_2, and take f, g with e^1_f = e^2_g and e^1_{f∨f̄} = e^2_{g∨ḡ}. Suppose that for some c ∈ F_c, f ≿₁ c. Then, using Proposition A.5 and the same computation as above,

V_2(g) − V_1(f) = ((λ_2 − λ_1)/2)·E_{μ_1}[|f − e^1_f|] ≥ 0,

so g ≿₂ c. The argument for strict preference is identical.
B Proofs of lemmas and propositions
Proof of Proposition A.1 Consider any E, E′ ∈ 𝓔 and single-alignment acts f ∈ F_E, g ∈ F_{E′}. Let F be a non-overlapping event for both E and E′. Consider two distinct h, h′ ∈ F such that h(s) = h̄(s) = h′(s) = h̄′(s) for all s ∉ F, which means that their alignment is neutral on F^c. Moreover, suppose that for every s ∈ F, h(s) > h̄(s) or h(s) < h̄(s), and h′(s) > h̄′(s) or h′(s) < h̄′(s), so that every state in F is strictly positive or strictly negative for h and for h′. Further assume that for all s ∈ F, 0 < h(s) < h′(s) or 0 < h′(s) < h(s) (hence on F the acts always differ). By Strong Monotonicity and Continuity, it is always possible to find such acts h and h′.

Local Mixture Consistency guarantees that for f, g, h, h′ there exists some α_{hh′} such that for any α ∈ (α_{hh′}, 1),

αf + (1−α)h ∼ αf + (1−α)h′ if and only if αg + (1−α)h ∼ αg + (1−α)h′.   (7.1)

Moreover, since f and g are single-aligned, continuity of preferences implies that alignment does not change for small perturbations around f and g. Hence, for α close to one, αf + (1−α)h ∈ F_E and αg + (1−α)h ∈ F_{E′}. From the representation of Lemma 2.2, (7.1) implies that E_{μ_E}[h − h′] = 0 exactly when E_{μ_{E′}}[h − h′] = 0, which, by linearity and the fact that h(s) = h′(s) for all s ∉ F, is a restriction on F alone. Normalize μ_E and μ_{E′} conditional on F to be probability distributions over F; the resulting condition (7.2) then says the following. Since all states are non-null, μ_E(·|F) and μ_{E′}(·|F) are strictly positive |F|-dimensional vectors, both normal to (h − h′) ∈ ℝ^{|F|}, which consists of nonzero elements by the assumption that h and h′ differ on all of F. Therefore, μ_E(·|F) and μ_{E′}(·|F) are collinear vectors in ℝ^{|F|} with norm 1, and so for all s ∈ F,

μ_E(s|F) = μ_{E′}(s|F).   (7.3)

It remains to show that if (7.3) holds, there exists a unique distribution μ that generates the conditional distributions, i.e., such that for all E ∈ 𝓔 and F non-overlapping for E, μ(·|F) = μ_E(·|F). (7.4) It suffices to show that for a family of events {E_i}_{i=1,…,n}, a unique distribution exists such that (7.4) holds. The conditional equalities imply, for any i = 1, 2, …, n − 1, a linear relation between μ(s_i) and μ(s_{i+1}); these n − 1 equations, together with the condition Σ_i μ(s_i) = 1 necessary to be a probability distribution, give n equations in n unknowns (the μ(s_i)'s), which can be written as a linear system with coefficient matrix A_n. (7.6) Equation (7.6) has a unique solution if and only if A_n is invertible. We prove the stronger condition det(A_n) > 0 by induction on |S|. For |S| = 3, one verifies det(A_3) > 0 directly. Expanding along the first row, det(A_m) = det(A_{m−1}) + a_{12}(a_{23} det(A_{m−3})) > 0, by the induction hypothesis that det(A_k) > 0 for all k < m. Hence, the system (7.6) has a unique solution, μ. From the previous result, for any E ∈ 𝓔 such that |E| > 2, μ_E is also generated by μ. Hence, there exists a unique μ : 2^S → [0, 1] such that every conditional distribution of μ (conditional on an event F) is the same as the conditional distribution of μ_E, provided that F is non-overlapping for E.
Proof of Lemma A.2
Suppose there exists s ∈ S such that μ_E(s) = μ_{E′}(s) for some E ≠ E′. Then, by Proposition A.1, there are two cases. For the second case, the argument is symmetric (replacing γ⁺_E with γ⁻_E).
Proof of Proposition A.3
Consider some single-alignment f ∈ F_E and g ∈ F_F such that f ∼ g, where f̄ ∈ F_{E^c} and ḡ ∈ F_{F^c} are the respective balancing acts. Given s ∈ S, consider some h ∈ F such that h(t) = 0 for all t ≠ s and h(s) > 0. From the definition of alignment and continuity of ≿, for α close to 1 the mixtures αh + (1−α)f and αh + (1−α)g retain the alignments of f and g; by the representation result of Lemma 2.2 and Antisymmetry, μ_E(s) > μ_F(s) if and only if μ_{F^c}(s) > μ_{E^c}(s).

Suppose μ_E + μ_{E^c} ≠ μ_F + μ_{F^c}. Then there are states s, s′ at which the corresponding inequalities must go in opposite directions, scaled by some θ < 1. (7.7) Let h′ ∈ F be such that h′(t) = 0 for all t ≠ s, s′, and h′(s), h′(s′) ≠ 0. According to the above argument, for single-alignment f ∈ F_E and g ∈ F_F with f ∼ g and α close to 1, we can appeal to Antisymmetry to obtain

μ_{E^c}(s)h(s) + μ_{E^c}(s′)h(s′) > μ_{F^c}(s)h(s) + μ_{F^c}(s′)h(s′).   (7.8)

In other words, there is no solution to the system obtained from Eqs. (7.7) and (7.8), a contradiction.
Proof of Proposition A.5
Recall e_f is defined as the constant such that, for a balanced pair (f, f̄), ½f(s) + ½f̄(s) = e_f for all s ∈ S. Since f ∼ f̄, Lemma 2.2 gives E_{μ_E}[f] = E_{μ_{E^c}}[f̄] = E_{μ_{E^c}}[2e_f − f] = 2e_f − E_{μ_{E^c}}[f], so E_{μ_E}[f] + E_{μ_{E^c}}[f] = 2e_f. By Lemma A.4, E_μ[f] = ½(E_{μ_E}[f] + E_{μ_{E^c}}[f]) = e_f.

Proof of Proposition A.6 We prove the claim in steps. First, for all E and F such that E ∪ F = S, E ∩ F^c ≠ ∅, F ∩ E^c ≠ ∅ and E ∩ F = I ≠ ∅, we claim μ_E − μ_{E\I} = μ_F − μ_{F\I}. Indeed, by definition F\I = E^c ≠ ∅ and E\I = F^c ≠ ∅, so μ_{E\I} = μ_{F^c} and μ_{F\I} = μ_{E^c}. Also, from Proposition A.3, μ_E + μ_{E^c} = μ_F + μ_{F^c} for all E, F ∈ 𝓔. This and the above observation imply μ_E − μ_{E\I} = μ_F − μ_{F\I}.

Next, we claim that for all E and F such that E ∩ F^c ≠ ∅, F ∩ E^c ≠ ∅, and E ∩ F = I ≠ ∅, μ_E − μ_{E\I} = μ_F − μ_{F\I}. To see this, notice that (E, E^c ∪ I), (E^c ∪ I, F^c ∪ I), and (F^c ∪ I, F) all satisfy, as pairs of subsets, the conditions needed to apply the first claim; chaining the resulting identities gives the equality.

Finally, we use this second claim to prove the proposition. Let E and F be such that s ∈ E ∩ F. Notice, if E ∩ F = {s} we can apply the second claim directly. So assume {s} ⊊ E ∩ F. There are two cases. (i) E^c ∩ F^c ≠ ∅. Then (E, (E^c ∩ F^c) ∪ {s}) and ((E^c ∩ F^c) ∪ {s}, F) satisfy the conditions of the second claim, so μ_E − μ_{E\s} = μ_{(E^c∩F^c)∪{s}} − μ_{E^c∩F^c} = μ_F − μ_{F\s}. (ii) E^c ∩ F^c = ∅. Then (E, E^c ∪ {s}) satisfies the conditions for the second claim: μ_E − μ_{E\s} = μ_{E^c∪{s}} − μ_{E^c}. Now notice that it must be that E^c ∪ {s} ⊂ F, hence (E^c ∪ {s})^c ∩ F^c ≠ ∅. Applying case (i) gives μ_{E^c∪{s}} − μ_{E^c} = μ_F − μ_{F\s}. This completes the proof.
Proof of Proposition A.7
Consider three different cases: (i) E = F^c, (ii) F ⊂ E, and (iii) E ∩ F ≠ ∅, E^c ∩ F ≠ ∅, and E ∩ F^c ≠ ∅. It suffices to consider these three conditions, since whenever E ∩ F = ∅ and E^c ∩ F^c ≠ ∅, Lemma A.4 yields the result for E and F from that for E^c and F^c.

First note that in the case E = F^c the result follows straightforwardly from Proposition A.3. For cases (ii) and (iii), notice that there exists some s ∈ E ∩ F. It is without loss of generality to assume that |E|, |F| ≥ 2; for this s and any t ∈ S, we can divide (6.1) (from Proposition A.6) by μ(t) > 0 and obtain (7.10). Now consider the case where F ⊂ E. By definition, γ⁺_E = μ_E(s)/μ(s) for s ∈ E, and γ⁻_E = μ_E(s)/μ(s) for s ∈ E^c. Suppose s ∈ E ∩ F; then the result follows using (7.10) and the classification of states as positive or negative when viewed from E, F, E\s, and F\s. Since F ⊂ E, there exist some s ∈ E ∩ F and t ∈ E^c ∩ F^c.
Define φ : ℝ → ℝ as φ(x) = max{0, x}. Since φ is the maximum of two linear, hence convex, functions, it is convex. Notice that we can rewrite V in terms of φ, so V is convex. From this perspective, we show that every element of ∂V(f), the subdifferential at f, is strictly positive (for f in the interior of ℝⁿ₊), where φ′₊ = 1 and φ′₋ = 0 are the maximum and minimum elements of ∂φ, respectively.
|
Clathrin Heavy Chain Expression and Subcellular Distribution in Embryos of Drosophila melanogaster
Introduction
Tubular organs are essential for organisms to establish transport systems for nutrients, liquids and gases. The development of tubes requires endocytosis of bound ligands, receptors and proteins at the plasma membrane (Bonifacino and Traub, 2003; Nelson, 2009). Clathrin coated vesicles (CCVs) organize major routes of cargo selective endocytosis in higher eukaryotic cells (Conner and Schmid, 2003). The formation of CCVs requires clathrin molecules. During CCV budding, clathrin molecules assemble to form a cage-like coat around the nascent vesicle membrane. Clathrin assembly is assisted by numerous adaptor proteins. After inward budding, CCV scission from the membrane is mediated by the large GTPase Dynamin. Released CCVs diffuse from the membrane and undergo uncoating, whereby Clathrin molecules disassemble from the vesicles. The uncoating process is mediated by the ATPase function of the Heat shock cognate protein (Hsc70), which interacts with Chc and DnaJ adaptor proteins. The released Clathrin molecules reassemble for subsequent rounds of endocytosis while vesicles fuse with acceptor compartments, such as early endosomes (Conner and Schmid, 2003; Kirchhausen, 2000; Ungewickell and Hinrichsen, 2007).
Clathrin is a three-dimensional array of so-called triskelia that possesses the intrinsic ability to form a cage-like lattice around the vesicles (Brodsky et al., 2001). The Clathrin triskelion, a three-legged structure, is composed of three Clathrin heavy chain (Chc) and three Clathrin light chain (Clc) subunits. Thus, Chc provides a basic component of the Clathrin coat (ter Haar et al., 1998; Kirchhausen, 2000). Evolutionarily, Chc and Clc are highly conserved from yeast to human (Wakeham et al., 2005). In the human genome, two isoforms of chc and clc have evolved by gene duplication (Wakeham et al., 2005). For example, the human clathrin heavy chain comprises CHC17 (genomic location 17q23.2) and CHC22 (genomic location 22q11.21), which show distinct expression patterns (Dodge et al., 1991; Sirotkin et al., 1996; Kedra et al., 1996; Long et al., 1996).
Drosophila melanogaster is a well-established model organism to study gene and protein expression and function in tubular organs. During development of the Drosophila respiratory system, tracheal tube lumina undergo airway liquid clearance to enable the liquid-air transition at the end of embryogenesis. This occurs also in the vertebrate lung (Behr, 2010; Olver et al., 2004). Previously, we demonstrated in Drosophila the requirement of clathrin-mediated endocytosis for airway clearance and air-filling at the end of embryogenesis (Behr et al., 2007). However, though Drosophila chc gene function has been analyzed in a number of other genetic studies (reviewed in Fischer et al., 2006), Chc expression, localization and dynamics remained elusive. Recently, we characterized chc mRNA and protein expression throughout Drosophila development (Wingen et al., 2009). Consistent with data on vertebrate Chc (Kirchhausen, 2000), we showed, using a specific purified anti-Chc antibody, that Chc overlaps with the trans-Golgi network and co-localizes with markers for early endocytosis (Wingen et al., 2009). In summary, the anti-Chc antibody is a new tool to analyze Clathrin heavy chain positive vesicles in Drosophila.
In order to analyze subcellular Clathrin distribution, we performed fluorescence labeling studies of endogenous Chc in Drosophila embryos. Immunofluorescent co-labeling studies demonstrate asymmetrical Chc distribution in epidermal cells and in cells of tubular organs, such as the tracheal system, the salivary glands, and the gut. We show that Chc is enriched at the apical cell cortex and at the apical cell membrane, where it overlaps with the apical membrane organizer Crumbs (Crb). Consistently, we observed Chc mis-localization in airway cells of crb null and tracheal-specific crb knock-down mutants. Furthermore, we show that the Crb-mediated apical membrane organization is involved in Chc-mediated airway clearance at the end of embryogenesis. As Chc and Crb are highly conserved and broadly expressed in epithelial tissues (Wingen et al., 2009; Bulgakova and Knust, 2009), this new molecular mechanism of crb controlling apical Chc endocytosis is of general importance.
Results and discussion
In order to characterize Chc expression in Drosophila embryos, we used the anti-Chc antibody for immunofluorescent stainings on whole-mount embryos. At late embryogenesis, stage 14 until stage 16, Chc was strongly enriched in the epidermis and tube forming organs, such as the foregut, the hindgut, the tracheal system and the salivary glands (Fig. 1A-D).

At the end of embryogenesis, additional Chc enrichment was found in other organs, such as the midgut and the secretory prothoracic glands (Fig. 1E,F). In Drosophila, foregut, hindgut, trachea, salivary glands and epidermis are of ectodermal origin. These organs are primary epithelia, which receive their epithelial character from the blastoderm epithelium (Tepass et al., 2001). Ectodermal epithelial cells display an asymmetric architecture of apical-basal polarity, where the apical cell membrane faces the tube lumen (Tepass et al., 2001).

In order to investigate subcellular Chc distribution, we analyzed immunofluorescent stainings using the anti-Chc antibody. In confocal sections of late wild-type embryos, Chc was found in a vesicle-like punctate pattern in the cell cortex as well as at distinct sites at the plasma membrane. This pattern was characteristic for cells of the foregut, hindgut, trachea, and salivary glands (Fig. 2A-D). Next, we generated confocal Z-stacks to create three-dimensional projections of those organs. These projections revealed Chc accumulation at the apical cell cortex and plasma membrane (Fig. 2A'-D').

[Fig. 1 caption: All pictures here and in other Figures show anterior at the left. Immunofluorescent stainings using the anti-Chc antibody revealed strong Chc enrichment in the ectodermally derived epidermis (ep) and tube forming epithelial organs, such as the tracheal system (ts), the hindgut (hg), the foregut (fg) and the salivary glands (sg). (E,F) At the end of embryogenesis, at stage 17, additional Chc enrichment was detectable in the midgut (mg) and the prothoracic glands (pg).]

In summary, the asymmetrical distribution suggests that CCVs are most prominent at the apical membrane of tubular organs at late embryogenesis.
As Chc was apically enriched in tubular organs, we performed double immunofluorescent labeling studies using anti-Chc together with an anti-Crb antibody, an apico-lateral cell membrane marker (Tepass and Knust, 1990; Wodarz et al., 1995). We analyzed single confocal sections of late wild-type embryos. In the cells of tubular organs, Chc accumulated adjacent to the Crb expressing apical cell membranes of foregut, hindgut, trachea, and salivary glands (Fig. 3A-D). Next, we generated confocal Z-stacks, which were used for orthogonal projections and reconstruction of the tube lumen and surrounding cells. Crb function during tracheal development has been studied recently: Crb is involved in determining apical polarity, apical membrane growth, cell invagination, cell intercalation, tube size control and airway liquid clearance (Kerman et al., 2008; Laprise et al., 2010; Letizia et al., 2011; Stümpges and Behr, 2011). The orthogonal projection showed Chc enrichment at the Crb expressing membrane (Fig. 3A'-D'). In summary, we have strong evidence that Chc positive vesicles are asymmetrically distributed and accumulate at the apical cell cortex and cell membrane, which faces the tube lumen.
As crb null mutants show severe developmental defects of the tracheal system and other ectodermal tissues (Tepass and Knust, 1990), we tested tracheal-specific crb knock-down embryos for Chc localization in tracheal cells. In Drosophila, organ-specific expression experiments can be performed by the use of the UAS-GAL4 system (Brand and Perrimon, 1993).
In order to generate crb knock-down mutants, we mated flies bearing a UAS-RNAi-crb transgene with flies bearing the tracheal driver line breathlessGAL4 (btlG4). This crossing resulted in a tracheal-specific knock-down of crb (Stümpges and Behr, 2011) in the offspring. In wild-type embryos, Chc staining is enriched at distinct sites towards the apical membrane of tracheal cells (Fig. 4A). In contrast, the tracheal crb knock-down led to intracellular accumulation of the Chc staining in tracheal cells (Fig. 4B). Consistently, an intracellular accumulation of Chc staining was also observed in crb null mutant tracheal cells (Fig. 4C). Next, we tested Chc localization upon tracheal Crb overexpression, using the btlG4 driver and the UAS-crb full-length transgene. Crb overexpression resulted in strong co-localization of Crb and Chc (Fig. 4D). These findings indicate that Crb is involved in apical Chc localization.
As Chc and Crb are involved in airway liquid-clearance and air-filling (Behr et al., 2007; Stümpges and Behr, 2011), we tested whether they act together in this process. At the end of embryogenesis, airways undergo lumen clearance, which is accompanied by air-filling in order to enable respiration and to conduct oxygen from the spiracular openings to the internal tissues (Behr et al., 2007; Stümpges and Behr, 2011; Tsarouhas et al., 2007). The transition from liquid- to air-filled airways can be monitored in vivo by bright-field microscopy in wild-type embryos (Fig. 5A-A''''; Stümpges and Behr, 2011). In contrast, in chc and crb null mutants air-filling is defective (Fig. 5B; Behr et al., 2007; Stümpges and Behr, 2011). Next, we tested chc and crb for genetic interaction in air-filling. One test for genetic interaction is the analysis of trans-heterozygous mutants: if two genes interact in a common process, a 50% reduction of both gene doses results in a phenotype that cannot be observed in the individual heterozygous animals. In contrast to wild-type and chc or crb heterozygous mutant embryos (Fig. 5A,D, not shown; Stümpges and Behr, 2011), severe air-filling defects were observed in the trans-heterozygous chc and crb mutants (Fig. 5C,D). In summary, we provide evidence that chc and crb act in a common process for airway liquid-clearance and air-filling. Thus, Crb-mediated Chc localization is involved in airway clearance, and Chc mis-localization in crb mutants may account for their air-filling defects.
Immunofluorescent labeling and confocal microscopy
For immunostainings, embryos were dechorionated with 2.5% sodium hypochlorite (5 min) and fixed in 2 ml 4% PFA (paraformaldehyde) and 3 ml heptane for 20 min. Embryos were devitellinized in a mixture of 3 ml heptane and 10 ml methanol and stored in methanol at -20°C. Afterwards, embryos were washed in PBT (PBS with Tween 20). Primary antibodies were incubated at 4°C overnight and secondary antibodies were incubated at room temperature for two hours. Finally, embryos were mounted in Vectashield (Vector Laboratories) and analyzed with a Zeiss LSM 710 confocal microscope (Zeiss MicroImaging GmbH, Jena, Germany). For confocal sections we used standard settings (Zeiss ZEN software, pinhole 1 Airy unit). Sequential scans of individual fluorochromes were performed to avoid cross-talk between the channels. Subcellular studies were analyzed using a Zeiss 63x LCI Plan-Neofluar objective. The confocal areas were scanned 16 times using the minimum scan time suggested by the Zeiss ZEN software. Z-stacks were acquired using the suggested optimized distance (between 0.5 and 1 µm). The ZEN software was used for the projection of the orthogonal sectioning. Images were cropped and analyzed in Adobe Photoshop CS5; figures were designed with Adobe Illustrator CS5.
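For readers who wish to reproduce such orthogonal views outside of the ZEN software, a minimal sketch of an orthogonal (XZ) re-slice of a confocal Z-stack is given below. The file name, array layout, and voxel spacings are illustrative assumptions, not part of the original protocol.

```python
# Minimal sketch: orthogonal (XZ) re-slice of a confocal Z-stack.
# Assumptions (not from the paper): the stack is a TIFF with axes (z, y, x),
# z-spacing 0.5 µm and lateral pixel size 0.1 µm.
import numpy as np
import tifffile

stack = tifffile.imread("zstack.tif")      # shape: (nz, ny, nx)
z_um, xy_um = 0.5, 0.1                     # voxel spacing (assumed)

y_row = stack.shape[1] // 2                # y position of the cut through the tube
xz_view = stack[:, y_row, :]               # orthogonal section at that y position

# Stretch z so the orthogonal view is roughly isotropic, as in ZEN's projection.
zoom = int(round(z_um / xy_um))
xz_iso = np.repeat(xz_view, zoom, axis=0)  # nearest-neighbour stretch along z

tifffile.imwrite("xz_view.tif", xz_iso)
```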
Airway liquid-clearance and air-filling assay
Embryos were collected for 3 hours and grown at 25°C until stage 17. Embryos were dechorionated in 2.5% sodium hypochlorite for 5 min, washed in distilled water and transferred to a thin apple-juice-agar layer. The living embryos were monitored for gas filling by bright-field microscopy (Zeiss Axiovert) and documented with the Zeiss Axiovision software (release 7.1). The statistical analysis was performed with Microsoft Excel 2010. P-values were determined using the standard setting (2;2) in Excel 2010, i.e., a two-tailed, two-sample t-test assuming equal variance.
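The same test can be reproduced outside Excel. The sketch below uses SciPy; the air-filling percentages are made-up placeholders, not the measured values from this study.

```python
# Minimal sketch: the Excel TTEST(...; 2; 2) setting corresponds to a two-tailed,
# two-sample t-test assuming equal variance. Values below are placeholders,
# not data from the paper.
from scipy import stats

wildtype = [96.0, 98.5, 94.0, 97.5]    # % air-filled airways (hypothetical)
trans_het = [41.0, 38.5, 52.0, 45.5]   # chc/+; crb/+ trans-heterozygotes (hypothetical)

t_stat, p_value = stats.ttest_ind(wildtype, trans_het, equal_var=True)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```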
Fly stocks
The following fly stocks were obtained from the Bloomington stock center and are described in FlyBase (http://flybase.bio.indiana.edu/): w1118 (here referred to as wild-type), btlG4, chc1, crb2, and UAS-crb wt30.12e. The UAS-crb RNAi line 39178 was obtained from the Vienna Drosophila RNAi stock center (Dietzl et al., 2007). For overexpression experiments, we used the Gal4/upstream activator sequence system and the tracheal-specific btlG4 driver. For all experiments, adequate balancer strains (FM7 and TM3) carrying a GFP transgene were used to recognize individual genotypes. For genetic interaction experiments, heterozygous chc mutant females, bearing FM7-actinGFP, were mated with TM3-twistGFP-balanced crb2 heterozygous males in order to recognize the non-GFP-expressing trans-heterozygous animals.
Conclusion
We have analyzed the subcellular distribution of Chc in epithelial tube organs in Drosophila embryos. Our confocal analysis and three-dimensional reconstructions demonstrate the specific apical accumulation of Chc from stage 14 of embryogenesis onwards, when the tracheal system, foregut, hindgut and salivary glands differentiate and mature for their physiological functions. Genetic analysis shows that the apical membrane organizer Crb is involved in apical Chc distribution in tracheal cells and that normal Chc localization is required for airway liquid-clearance and air-filling at the end of embryogenesis. This is consistent with previous observations (Behr et al., 2007; Tsarouhas et al., 2007; Stümpges and Behr, 2011), suggesting that apical Clathrin-mediated endocytosis is essential for airway clearance. Important roles of Clathrin-dependent endocytosis for the internalization of the cystic fibrosis transmembrane conductance regulator (CFTR) and for the activity of the epithelial sodium channels (ENaCs), which are involved in liquid-clearance in the vertebrate lung, have been shown (Lukacs et al., 1997; Shimkets et al., 1997). Thus, up-regulation and apical accumulation of Chc-positive vesicles are essential for the development of the tracheal system and other tube-forming organs. As Chc and Crb are highly conserved and broadly expressed in epithelial tissues (Wingen et al., 2009; Bulgakova and Knust, 2009), this new molecular mechanism of crb-controlled apical Chc endocytosis is of general importance.
Fig. 1. Chc is enriched in tube-forming organs at late embryogenesis. (A-D) Confocal images of whole mount late embryos between stage 14 and stage 16. The left panels illustrate lateral views, the right panels dorsal views of different embryos. All pictures here and in other figures show anterior at the left. Immunofluorescent stainings using the anti-Chc antibody revealed strong Chc enrichment in the ectodermally derived epidermis (ep) and tube-forming epithelial organs, such as the tracheal system (ts), the hindgut (hg), the foregut (fg) and salivary glands (sg). (E,F) At the end of embryogenesis, at stage 17, additional Chc enrichment was detectable in the midgut (mg) and the prothoracic glands (pg).
Fig. 2. Chc vesicles are apically enriched in cells of tubular organs. (A-D) The left panels show confocal images of the tubes of the foregut (A), hindgut (B), tracheal dorsal trunk (C), and salivary gland (D) of embryos at stage 15 (A,C) and 16 (B,D). (A'-D') The right panels illustrate three-dimensional projections of confocal Z-stacks across the tube of the foregut (A'), hindgut (B'), tracheal dorsal main trunk (C'), and salivary gland (D'). Using anti-Chc (red) and anti-α-Spectrin (green; Pesacreta et al., 1989; cell membrane marker), images and projections show apical accumulation of Chc vesicles in the tubular organs. Arrows point to the apical membrane. Scale bar = 10 µm.
Fig. 3. Chc vesicles are enriched at the apical cell membrane and apical cell cortex. (A-D) The left panels show confocal images across the tubes of the foregut (A), hindgut (B), tracheal dorsal trunk (C), and salivary gland (D) of embryos at stage 15. Arrows point to the apical membrane of the tube lumina. The vertical white bars indicate the selected regions which were used for orthogonal projections across the entire tube. Inlays in A-D show single Chc stainings in grey. (A'-D') The right panels illustrate three-dimensional reconstructions of orthogonal sections of confocal Z-stacks across the tube of the foregut (A'), hindgut (B'), tracheal dorsal trunk (C'), and salivary gland (D'). Using anti-Chc (red) and anti-Crb (green) antibodies, which mark the apical cell membrane, images and orthogonal projections show apical accumulation of Chc-positive vesicles (arrows) in the tubular organs. Yellow lines mark the basal cell membrane. Single orthogonal projections of Chc and Crb are shown in grey in the right panels. Scale bar = 10 µm.
Fig. 4. Chc mis-localization in crb knock-down and crb null mutants. Confocal immunofluorescent images of tracheal cells using anti-Chc (red), anti-α-Spectrin (green) and anti-Crb (green) antibodies. α-Spectrin marks cell membranes and Crb indicates apical cell membranes. (A) In stage 17 wild-type embryos, Chc (red) is distributed towards the apical cell membrane. (B,C) In stage 17 btlG4-driven UAS-RNAi-crb knock-down embryos and crb null mutant embryos, Chc showed intracellular mis-localization in tracheal cells (arrows). (D) Tracheal Crb overexpression led to intensive Chc co-localization with Crb (arrows). Scale bars = 10 µm.
Recent Updates of Immunotherapy for Allergic Rhinitis in Children
Purpose of Review Allergen immunotherapy (AIT) is a novel treatment approach with disease-modifying and preventative benefits that are not shared by other strategies for treating allergic illnesses. It has been demonstrated to be safe and effective in children. This review provides the most recent information on AIT in children as well as any pertinent updates. Recent Findings Although there is no standard way to begin AIT, there are clear indications for it. Each case needs to be evaluated on its own by weighing the pros and cons. AIT has been proven to significantly improve symptoms and quality of life in children with allergic illness, reduce medication use, prevent the development of new allergen sensitizations, and stop the progression of allergic rhinitis to asthma. Novel approaches are under investigation to overcome some known AIT disadvantages. Summary This review provides a thorough summary of the most recent research and updates on AIT in children.
Introduction
Around the world, reports of allergic disorders such as allergic rhinitis, asthma, and atopic dermatitis have increased, and these conditions are highly prevalent [1-4]. According to epidemiologic research, 10 to 30% of adults and up to 40% of children are affected [3]. Pharmacotherapy, allergen immunotherapy, and education about allergen-specific avoidance precautions are possible treatment options for these illnesses [5••, 6]. To provide a more comprehensive overview, a common clinical diagnosis and management algorithm is summarized in Fig. 1. Pharmacotherapy is usually the first step in the management of pediatric patients with allergic rhinitis. However, each treatment option has advantages and disadvantages; the pros and cons of current treatment modalities are listed in Table 1.
For individuals with these interrelated allergic disorders, allergen immunotherapy (AIT), which has been used as a treatment for allergic disease for more than a century, has been shown to be safe, efficient, and potentially disease-modifying. Patients with moderate to severe allergic rhinitis who do not respond well to medical treatment are candidates for AIT. The risks and benefits of each case should be carefully weighed. The use of fewer medications, a considerable improvement in symptoms and quality of life, the prevention of the emergence of new allergen sensitizations, and the prevention of progression of allergic rhinitis to asthma are all advantages of AIT in children with allergic illness. Severe systemic allergic reactions are a rare but possible risk of AIT.
Mechanism
AIT normalizes allergen-specific T and B cells, controls IgE and IgG production, and modifies mast cells, basophil activation thresholds, and dendritic cell phenotypes through general processes of immunological tolerance to allergens. To decrease type 2 immune responses and allergic inflammation, the major objectives are to sustain regulatory T cells (Tregs), regulatory B cells (Bregs), and several other regulatory cell types [7•]. Because AIT operates in an antigen-specific manner, the regulation of antigen-specific immune cells, including T and B cells, was assumed to be its main mechanism of action. However, recent research indicates that non-antigen-specific immune cells, such as innate lymphoid cells, monocytes/macrophages, natural killer cells, and dendritic cells, may also be modulated by AIT. These effects may also contribute to the amelioration of symptoms following AIT [7•]. A possible mechanism of allergen immunotherapy is illustrated in Fig. 2.
Indications
Patients who exhibit allergen-specific IgE antibodies as determined by serum specific IgE laboratory testing or skin prick testing and have allergic rhinitis with or without conjunctivitis, allergen-induced asthma, or stinging insect hypersensitivity should consider AIT [8,9]. Children with allergic rhinitis frequently acquire asthma over time since the two diseases are closely related. However, there are still a lot of unanswered questions regarding whether allergen immunotherapy for allergic rhinitis can prevent asthma. These questions concern the age groups, how to prepare allergens, how to administer AIT, and how long to treat patients [10].
Contraindications
Communication difficulties and a few medical illnesses are contraindications to AIT. A rare but potential risk of AIT is the development of severe systemic allergic reactions [11,12]. Patients chosen for AIT should be able to verbally and physically express to the medical care team any discomforts and symptoms that might point to an adverse reaction.
Starting AIT with children under the age of 5 is a topic of some discussion. Although there is a benefit to starting AIT before the age of 5 due to the preventative effect of AIT on the development of new aeroallergen sensitizations and the progressive march to asthma, the decision to start AIT should be assessed case by case by evaluating disease severity and the benefit/risk ratio. Because there is a higher risk of systemic reactions to AIT injections in individuals with uncontrolled, labile asthma, allergen immunotherapy is not advised for these patients. According to survey studies, people with uncontrolled and/or labile asthma were more likely to die from AIT; hence, asthma control must be attained before beginning immunotherapy [13]. Medical conditions that make it more difficult for the patient to overcome a systemic allergic reaction or its subsequent treatment are also relative contraindications for AIT. Heart disease, significantly reduced lung function, and conditions requiring beta-blockers or angiotensin-converting enzyme inhibitors (ACEI) are among these disorders. These comorbidities are present in children, even if they are less common than in adults.
Route for Administration
AIT can be given sublingually or subcutaneously, and new delivery methods including intra- and epicutaneous routes are continuously being researched. AIT attempts to alter innate and adaptive immunologic responses to induce allergen tolerance. Induction of diverse functional regulatory cells, such as regulatory T cells (Tregs), follicular regulatory T cells (Tfr), B cells (Bregs), dendritic cells (DCregs), innate lymphoid cells (IL-10+ ILCs), and natural killer cells, is the primary mechanism by which AIT controls type 2 inflammatory cells. Subcutaneous delivery (SCIT) has been the usual route of administration for AIT. The typical SCIT regimen for allergen extracts involves dose titration by once-weekly injection, followed by maintenance dose injections at intervals of 4 to 8 weeks, continuing for at least 3 to 5 years. The build-up period can be shortened by using cluster or rush protocols to help patients reach maintenance sooner [14]. These accelerated AIT protocols offer patients quicker relief from allergy symptoms while maintaining safety comparable to standard regimens. Compared to typical schedules, these protocols require a greater time commitment initially, but they ultimately save time and money in the long term. In order to reduce the frequency of systemic allergic reactions during accelerated AIT, premedication, which typically only requires an H1 antihistamine 1 h before the treatment, is advised. In appropriately selected patients, the risk of severe systemic reactions during accelerated AIT is low, but life-threatening reactions can occur.
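To make the conventional timetable described above concrete, the sketch below generates injection dates for a weekly build-up followed by maintenance injections. The 16-week build-up, 4-week maintenance interval, and 3-year duration are illustrative choices within the ranges cited here, not a prescribed regimen.

```python
# Minimal sketch: injection calendar for a conventional SCIT regimen.
# Assumptions (illustrative only): 16 weekly build-up injections, then
# maintenance every 4 weeks for 3 years of total treatment.
from datetime import date, timedelta

def scit_schedule(start: date, buildup_weeks: int = 16,
                  maintenance_interval_weeks: int = 4,
                  total_years: int = 3) -> list[date]:
    visits = [start + timedelta(weeks=w) for w in range(buildup_weeks)]
    end = start + timedelta(days=365 * total_years)
    t = visits[-1] + timedelta(weeks=maintenance_interval_weeks)
    while t <= end:
        visits.append(t)
        t += timedelta(weeks=maintenance_interval_weeks)
    return visits

schedule = scit_schedule(date(2024, 1, 8))
print(f"{len(schedule)} visits, last on {schedule[-1]}")
```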
Sublingual immunotherapy (SLIT) tablets serve as another allergen immunotherapy option for clinicians. Currently, five SLIT tablets have been licensed for the treatment of allergic rhinoconjunctivitis in North America; these tablets are directed against house dust mites, ragweed, Timothy grass, and other allergens. On the other hand, the FDA has not yet approved any SLIT drop products. In SLIT, allergens are typically given daily under the tongue. Large, double-blind, placebo-controlled trials involving both monosensitized and polysensitized patients found that SLIT tablets consistently demonstrated therapeutic efficacy [15]. Treatment with house dust mite SLIT tablets has also shown success in patients who are allergic to pollen during their individual pollen seasons [15]. Efficacy studies of SLIT drops demonstrate substantial heterogeneity of treatment effect, in contrast to SLIT tablets [15,16]. Although data are limited, studies that compared the efficacy of SLIT tablets versus pharmacotherapy generally indicated that SLIT tablets had a greater benefit than pharmacotherapy when compared with placebo, particularly for perennial allergic rhinoconjunctivitis. When compared with subcutaneous immunotherapy, the results showed that SLIT tablets were superior in terms of safety but somewhat less so in terms of efficacy [15]. Additionally, there is no build-up phase necessary with SLIT, and it may be done safely and successfully at home. An intricate immunological network that includes the mouth mucosa and local lymph nodes is a necessary requirement for SLIT [17]. The effective dosing range is another obvious distinction between SCIT and SLIT: for many allergens, SCIT employs a small effective dose range of 5 to 25 μg of allergen per injection, whereas SLIT needs at least 50 to 100 times more allergen than SCIT to be equally effective [18].
Direct injection of allergens into the lymphatic system is known as intra-lymphatic immunotherapy (ILIT). By reducing the number of treatment applications and the length of therapy, achieving good compliance and quick symptom relief, and demonstrating safety, ILIT tends to increase the efficiency of AIT. Only three low-dose allergen injections into the inguinal lymph nodes under ultrasound guidance, spaced 1 month apart, are needed for ILIT. Compared to SCIT, the cumulative allergen dose can be reduced 1000-fold [19,20]. The main drawback of ILIT is the need for experienced professionals to perform the injections under ultrasound guidance, which may make this procedure less practical.
A novel therapy currently under investigation is epicutaneous immunotherapy (EPIT). EPIT involves repeatedly applying allergens to the skin, targeting antigen-presenting cells in its superficial layers. Electronic spreading, ablative fractional laser, and microneedle arrays are examples of epidermal allergen powder delivery technologies [21]. In contrast to mast cells or the vasculature, epidermal Langerhans cells are the focus of EPIT, which can lessen both local and systemic side effects [22]. The following benefits have been noted for EPIT: (1) a high safety profile due to the application of the allergen into the non-vascularized epidermis and subsequent delivery of the allergen to the less-vascularized dermis, (2) increased patient convenience due to the non-invasive (needle-free) and self-administrable application method, likely improving compliance, (3) absence of additional potentially irritant constituents (e.g., alum, preservatives), and (4) lower cost. Regarding patients with AR and indoor allergen sensitivity, further information is required.
Local nasal immunotherapy (LNIT) appears to be beneficial only for rhinitis symptoms, according to considerable research conducted over the past 40 years. LNIT, however, is not well accepted by patients due to its difficulty of use and local adverse effects that must be prevented using topical nasal premedication [23]. LNIT is not advised for clinical use at this time.
Efficacy
Pediatric immunotherapy has been demonstrated to be both effective and well tolerated. By reducing symptoms and medication use, SCIT and SLIT have been shown in numerous clinical trials to be helpful for allergic rhinitis and asthma. One study in children aged 5 to 10 years found that both SCIT and SLIT significantly reduced the overall score for rhinitis and asthma symptoms, the overall medication score, and skin reactivity to house dust mites when compared to pharmacotherapy [24]. Another study from 2017 showed that patients with AR who received AIT for 3 years had a considerably lower probability of developing asthma [25]. The effect persisted for up to 2 years after the end of treatment, but the study could not draw any meaningful conclusions about whether it would last longer. According to several studies, there might be a lower prevalence of allergy in children born to mothers who underwent AIT during pregnancy. AIT's effectiveness is influenced by the allergen dose and the length of treatment. The clinical findings revealed a significant amount of heterogeneity in individual responsiveness: the individual dose was associated with the immunological response, and the length of treatment was related to lasting benefit after stopping it. Current practice advises doctors to stop AIT if there is no clinical response after 18 to 24 months, because there are no reliable diagnostic methods or markers for identifying responder patients [26]. Each country's extracts vary in strength, allergen dosage, allergen combinations, and adjuvants.
Safety
Although AIT is regarded as a safe treatment, it can have unfavorable side effects, including local reactions, large local reactions (LLRs), systemic reactions, and, in rare instances, anaphylaxis. The majority of severe systemic reactions will manifest within 30 min following injection. The medical care team administering AIT injections must be able to promptly identify severe systemic reactions such as anaphylaxis. Because SLIT has fewer systemic adverse effects than SCIT and no fatalities have been documented, it offers a higher safety profile [27]. One prospective study that looked at the safety of AIT in children under the age of 5 reported that out of 6689 injections in 239 individuals, there was just one systemic reaction; the authors concluded that AIT is a safe treatment for children under the age of 5 [28]. The side effects of AIT are frequently localized. In a survey study of 249 individuals receiving AIT, 71% of the participants said their AIT caused a local reaction, and 96% of those who reported local reactions indicated that the local reactions would not induce them to cease AIT. Individual local reactions do not necessarily portend future systemic or local reactions [29].
Duration of AIT
Many randomized controlled trials show long-term efficacy, with sustained clinical and immunological changes following SCIT and SLIT. When AIT was used for less than 3 years, allergy symptoms typically returned 1 year after treatment ended. In a thorough 5-year prospective controlled trial comparing 3 and 5 years of HDM SCIT, it was found that after 3 years, both groups had significantly lower rhinitis severity scores, asthma severity scores, and visual analog scale scores, and both groups continued to receive the treatment benefit after 5 years [30]. For long-term clinical benefit, both SCIT and SLIT should be continued for at least 3 years. AIT adherence is affected by numerous factors, including the inconvenience of repeated injection visits, unfavorable side effects, and expense, which are the main causes of cessation [27].
Particular Considerations
AIT has a number of drawbacks, including the prolonged duration of therapy necessary to attain better efficacy, high cost, systemic allergic reactions, and the lack of a biomarker for identifying treatment responders. To address these issues, supplementary medicines, vaccine adjuvants, and innovative vaccine technologies are currently being researched, although not all are at the same developmental stage. For instance, allergoids have not yet received US FDA approval despite being used in clinical trials in Europe. Since the effects of using biologics to minimize systemic reactions have been modest, the expense is not justified. In Europe, modified recombinant proteins and peptides are being developed, but thus far their level of efficacy has been disappointing [31•]. All require additional research before being ready for future use or regulatory approval.
COVID-19 Pandemic Attack
COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and AR is not a risk factor for severe disease. There is currently no immunologic or clinical proof of an interaction between AIT and SARS-CoV-2. Patients who have been diagnosed as confirmed COVID-19-positive cases should stop receiving AIT, and those who have recovered from COVID-19 and are asymptomatic can resume receiving AIT as planned. With SLIT, patients can self-treat at home rather than traveling to or staying at an allergy hospital or clinic. Regarding patients who receive AIT and contract COVID-19 infection, more information is required.
Conclusion
In practice, allergen-specific immunotherapy has been advised for the treatment of patients with severe AR who do not respond to standard medication therapies. AIT produces allergic immunological tolerance by increasing many regulatory cell types, thereby reducing type 2 inflammation. AIT has been demonstrated to be helpful in easing allergic symptoms, decreasing the need for medication, lowering allergen reactivity, enhancing quality of life, and preventing the onset of asthma. However, the drawbacks of conventional SCIT include the need for many injections and clinic visits, high cost, and systemic allergic reactions. In terms of safety, SLIT tablets outperformed SCIT, although with a slightly lower benefit in terms of efficacy. AIT can be administered through a variety of routes, which offers options and enhances patient compliance and safety. To further increase the efficacy of AIT, new approaches, adjuvants, adjunctive therapies, biologicals, and novel technologies are being investigated.
Conflict of Interest
The authors declare no competing interests.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Chemical Structure and Morphology of Magnetic Ultrafine Particles Synthesized from a Ternary Gaseous Mixture Involving Cobalt Tricarbonyl Nitrosyl
From a ternary gaseous mixture of cobalt tricarbonyl nitrosyl (Co(CO)3NO), iron pentacarbonyl (Fe(CO)5), and 2-propenyltrimethylsilane (allyltrimethylsilane) (ATMeSi), magnetic black fibrous material composed of amorphous ultrafine particles was produced under irradiation with intense Nd:YAG laser light at 355 nm. Chemical structures were studied from FT-IR and Raman spectra. It was shown that Co(CO)3NO and Fe(CO)5 molecules evolved terminal C≡O groups, and Co and Fe atoms were connected via bridging C=O groups. ATMeSi also coordinated to Co atoms via the C=C double bond of the allyl group. The chemical compositions and the morphology of the magnetic particles were analyzed by scanning electron microscopy/energy dispersive spectroscopy (SEM-EDS) and HRTEM images. Small amorphous particles with sizes of less than 50 nm joined together to form fibers, and crystalline spheres similar to the structure of Co0.7Fe0.3 were involved in some particles. Magnetization of the ultrafine particles was measured with a SQUID magnetometer. The magnetic susceptibility, χ, of the ultrafine particles was evaluated to be ~2×10⁻² emu/g, and the temperature dependence of χ supported the ferromagnetic behavior of the particles. Under a magnetic field of 1-5 T, super-paramagnetic ultrafine particles were also produced in addition to ferromagnetic particles. The existence of several kinds of crystalline spheres was responsible for the magnetic properties of the ultrafine particles.
Introduction
Metal-containing ultrafine particles are suitable for fabricating nano-patterns in nano-lithography and can be utilized as building blocks of nano-devices [1,2]. These particles were synthesized from organometallic compounds such as iron pentacarbonyl (Fe(CO)5) and cobalt tricarbonyl nitrosyl (Co(CO)3NO) [3,4] using the photochemical method, where photochemical reactions of reactive molecules initiated nucleation reactions during aerosol particle formation [5,6]. 2-Propenyltrimethylsilane (allyltrimethylsilane) (ATMeSi) can ligate to metal atoms via π-coordination of the allyl group, and can be incorporated into the nucleation reaction during aerosol particle formation [7,8].
From a ternary gaseous mixture of Co(CO)3NO, Fe(CO)5, and ATMeSi, spherical aerosol particles with a mean diameter of 0.36 µm were produced under UV light irradiation [9]. Addition of ATMeSi accelerated the chemical reactions of Co(CO)3NO to produce aerosol particles efficiently, and decelerated those of Fe(CO)5 to inhibit the formation of crystalline deposits which were mainly composed of the Fe2(CO)9 structure involving an Fe-C(=O)-Co bond. ATMeSi molecules played an essential role in forming spherical particles rich in Co species.
Magnetic ultrafine particles have been produced successfully under irradiation with intense Nd:YAG laser light [6]. From a gaseous mixture of Fe(CO)5 and trimethylsilyl azide (TMSAz), magnetic ultrafine particles which were composed of ferromagnetic and super-paramagnetic particles were prepared [10,11]. From a ternary gaseous mixture of Co(CO)3NO, tetraethylgermane (TEG), and ATMeSi, ferromagnetic ultrafine particles were also produced under irradiation with intense Nd:YAG laser light [12]. From the analysis of HRTEM images, these ultrafine particles were composed of several kinds of particles which involved both polycrystalline micro-domains and amorphous micro-domains. The crystalline micro-domains were mainly composed of Co atoms and were responsible for the ferromagnetic properties of the particles.
In the present study, magnetic ultrafine particles were prepared from a ternary gaseous mixture of Co(CO)3NO, Fe(CO)5, and ATMeSi under intense laser light irradiation with an Nd:YAG laser. The chemical structure of the magnetic particles was studied from FT-IR and Raman spectra, and the chemical compositions and the morphology of the magnetic particles were analyzed by scanning electron microscopy/energy dispersive spectroscopy (SEM-EDS) and HRTEM images. Magnetic properties are discussed briefly on the basis of the measured magnetic susceptibility of the ultrafine particles.
Experimental
Co(CO)3NO (Gelest, 95%), Fe(CO)5 (Kanto, 95%), and ATMeSi (Tokyo Kasei, G.R. grade) were degassed by freeze-pump-thaw cycles in the dark and purified by vacuum distillation immediately before use. To prepare a gaseous mixture of Co(CO)3NO, Fe(CO)5, and ATMeSi, each vapor was introduced successively into a cross-shaped Pyrex cell (volume 168 cm³) having long (length 160 mm, inner diameter 35 mm) and short (length 80 mm, inner diameter 20 mm) arms, or into a small cylindrical Pyrex cell (length 160 mm, inner diameter 20 mm, volume 50 cm³) equipped with a pair of quartz windows, through a vacuum line equipped with a capacitance manometer (Edwards Barocel Type 600). The background pressure of the irradiation cell was less than 1×10⁻⁴ Torr (1 Torr = 133.3 Pa). The partial pressures of Co(CO)3NO, Fe(CO)5, and ATMeSi in the irradiation cell were determined from the diagnostic band intensities of FT-IR spectra at 2108 cm⁻¹ for Co(CO)3NO, 645 cm⁻¹ for Fe(CO)5, and 854 cm⁻¹ for ATMeSi.
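Determining a partial pressure from a diagnostic band intensity amounts to applying the Beer-Lambert law with a calibration constant measured for each band at a fixed path length. A minimal sketch is given below; the calibration absorbances per Torr are hypothetical placeholders, not values from this work.

```python
# Minimal sketch: partial pressure from a diagnostic FT-IR band intensity,
# assuming Beer-Lambert linearity (A = k * p at fixed path length).
# The calibration constants below are placeholders, not measured values.
CALIBRATION_A_PER_TORR = {
    "Co(CO)3NO @ 2108 cm-1": 0.12,
    "Fe(CO)5   @ 645 cm-1":  0.05,
    "ATMeSi    @ 854 cm-1":  0.03,
}

def partial_pressure(band: str, absorbance: float) -> float:
    """Return the partial pressure (Torr) from the measured band absorbance."""
    return absorbance / CALIBRATION_A_PER_TORR[band]

print(partial_pressure("Co(CO)3NO @ 2108 cm-1", 0.42))  # -> 3.5 Torr
```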
The gaseous samples were irradiated with the third harmonic (355 nm) of pulsed Nd:YAG laser light (Continuum Surelite I-10, pulse width 6 ns, repetition rate 10 Hz) (energy, 35-38 mJ/pulse). The absorbance of 1 Torr of Co(CO)3NO and of Fe(CO)5 is 0.19 and 0.07, respectively, at 355 nm over a 10 cm light path length. ATMeSi does not absorb any light at wavelengths longer than 220 nm in the case of one-photon excitation. Sedimentary particles were deposited on a glass plate and/or a Cu substrate placed at the bottom of the irradiation cell.
Scanning electron microscope (SEM) images were recorded with a JEOL JSM 6060 scanning electron microscope. SEM-EDS analyses were performed using a Philips XL30 CP EDAX scanning electron microscope, and HRTEM images were recorded with a JEOL JEM 3010 high resolution transmission electron microscope with a LaB6 cathode operating at an accelerating voltage of 300 kV. FT-IR spectra of the gaseous mixtures and of the deposited particles embedded in KBr pellets were measured with a Nicolet NEXUS 470 FT-IR spectrometer. Magnetization of the deposited particles was measured with a SQUID magnetometer (Quantum Design MPMS-5S). The magnetic field was applied by a helium-free superconducting magnet (Toshiba TM-5SP).
Magnetic particle formation under light irradiation with an Nd:YAG laser
Under irradiation with intense Nd:YAG laser light at 355 nm for 3 min, a ternary gaseous mixture of Co(CO)3NO (3.5 Torr), Fe(CO)5 (1.4 Torr), and ATMeSi (8.0 Torr) produced coagulated black fibrous material composed of ultrafine particles with sizes of less than 50 nm (Fig. 1A). The black particles were magnetic, as discussed later. The FT-IR spectrum of the deposited particles is shown in Fig. 2A. Under Nd:YAG laser light irradiation at 355 nm through a concave lens with a focal length of 40 mm, the same ternary gaseous mixture of Co(CO)3NO (3.5 Torr), Fe(CO)5 (1.4 Torr), and ATMeSi (8.0 Torr) produced spherical particles with a mean diameter of 0.60 µm (Fig. 1B). The FT-IR spectrum of these deposited particles is shown in Fig. 2B. The morphological change of the deposits clearly showed that multiphoton processes due to intense laser light irradiation governed the formation of the ultrafine particles. Compared to the spectrum of the deposited particles shown in Fig. 2B, the black ultrafine particles showed strong bands at 1867 and 1799 cm⁻¹ assigned to bridging C=O groups, whereas the C-O stretching band at 2026 cm⁻¹ assigned to the terminal C≡O group of Co(CO)3NO almost disappeared. Moreover, three bands characteristic of the trimethylsilyl group at 1254, 841, and 756 cm⁻¹ [13] became stronger, and a new broad band appeared at 1030 cm⁻¹. The 1030 cm⁻¹ band was assignable to ν(Si-O) of a siloxane structure [14]. From the FT-IR spectra, it was strongly suggested that the terminal C≡O groups of Co(CO)3NO and Fe(CO)5 were evolved under intense laser light irradiation, and that Co and Fe atoms were connected through bridging C=O groups. During ultrafine particle formation, ATMeSi coordinated to Co atoms via the C=C double bond of the allyl group [9,15].
The Raman spectrum of the black ultrafine particles produced from a ternary gaseous mixture of Co(CO)3NO (3.5 Torr), Fe(CO)5 (1.4 Torr), and ATMeSi (8.0 Torr) is shown in Fig. 3A. Due to the small amount of ultrafine particles deposited on a glass plate, the background signal coming from light scattering on the glass plate was relatively large (Fig. 3B). However, reproducible peaks were observed at 379 and 161 cm⁻¹, assignable to ν(Co-CO) and ν(Co-Co), respectively, suggesting that Co-(CO)-Co and Co-(CO)-Fe structures, such as those observed in Co2(CO)8 species, were involved in the ultrafine particles. SEM-EDS analysis showed that the atomic ratio of Co to Fe, Si, C, and O atoms of the ultrafine particles deposited from the ternary gaseous mixture of Co(CO)3NO (3.5 Torr), Fe(CO)5 (1.4 Torr), and ATMeSi (8.0 Torr) was 1 : 0.06 : 0.23 : 1.78 : 1.18. Compared to the atomic ratio of Co to Fe and Si atoms (1 : 0.14 : 0.04) observed for the particles produced under irradiation with weaker YAG laser light through a concave lens with a focal length of 120 mm [9], the magnetic ultrafine particles involved many more Co atoms.
Morphological characteristics of magnetic ultrafine particles
To investigate the morphological characteristics of the magnetic ultrafine particles in more detail, HRTEM measurements were carried out for the ultrafine particles deposited from a ternary gaseous mixture of Co(CO)3NO (3.5 Torr), Fe(CO)5 (1.4 Torr), and ATMeSi (8.0 Torr). As shown in Fig. 4, small amorphous particles with sizes less than 50 nm joined together to form fibers, and crystalline spheres were involved in some particles. The crystalline spheres are shown in Fig. 5. The distance between crystal planes was observed to be 0.193, 0.198, or 0.204 nm at several crystalline spheres, showing that the atomic compositions and crystal structures were slightly different from each other.
The electron diffraction pattern of the crystalline phase was recorded and fitted with data from XRD databases. The best fit was achieved with one of the Fe-Co alloys, i.e., Co0.7Fe0.3 (Fig. 6), although the fit was not fully perfect. The results showed that the crystalline phase was mainly composed of Co atoms and had not yet been included in the database.
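As a rough consistency check of this assignment, the plane distances quoted above can be compared with the d-spacings expected for a cubic Fe-Co alloy. The sketch below does this for a bcc lattice with an assumed lattice constant of 0.285 nm, a typical literature-like value for Fe-Co and not a value from this work.

```python
# Minimal sketch: d-spacings of a cubic lattice, d_hkl = a / sqrt(h^2 + k^2 + l^2),
# compared with the measured plane distances. The lattice constant below is an
# assumed value for bcc Fe-Co, not taken from this paper.
from math import sqrt

a_nm = 0.285                       # assumed bcc Fe-Co lattice constant (nm)
measured_nm = [0.193, 0.198, 0.204]

def d_spacing(h: int, k: int, l: int, a: float = a_nm) -> float:
    return a / sqrt(h * h + k * k + l * l)

d110 = d_spacing(1, 1, 0)          # ~0.202 nm, close to the measured range
for d in measured_nm:
    print(f"measured {d:.3f} nm, deviation from d(110) = {d - d110:+.3f} nm")
```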
HRTEM-EDS measurements were also carried out at several crystalline spheres. The average atomic ratio of Co to Fe, Si, C, and O atoms was 1 : 0.15 : 0.21 : 0.13 : 0.16. HRTEM-EDS analysis was free from the contribution of the surface contamination of the Cu substrate frequently observed in SEM-EDS analysis; hence, the values for the C and O atoms were reasonable. The results confirmed that the terminal C≡O groups of Co(CO)3NO and Fe(CO)5 were evolved almost completely under intense laser light irradiation and that bridging C=O groups were involved in connecting Co and/or Fe atoms.
Magnetic properties of magnetic ultrafine particles
In order to investigate the magnetic properties of the deposited particles, magnetic ultrafine particles were produced from a gaseous mixture of Co(CO)3NO (3.9 Torr), Fe(CO)5, and ATMeSi. The magnetic susceptibility, χ, of the ultrafine particles was ~2×10⁻⁶ emu for the measured sample of ~3 mg. In the present experiment, ~5 mg of the ultrafine particles were deposited on a piece of Scotch tape of 18×146 mm, and the Scotch tape was then cut to 18×76 mm (~90 mg) after the deposition of the ultrafine particles to prepare a sample for SQUID measurement. Therefore, we could not determine precisely the amount of particles used for the SQUID measurement. As discussed in previous papers [11,12], the magnetization of the magnetic particles synthesized under irradiation with intense Nd:YAG laser light saturated at a low magnetic field of around 500 Oe. Hence, the χ value measured under a magnetic field of 1.5 T was extrapolated to the value at 500 Oe, giving ~6×10⁻⁵ emu for the amount of ~3 mg. Thus, the χ value was only roughly evaluated to be ~2×10⁻² emu/g or larger, suggesting that the deposited ultrafine particles were ferromagnetic, as discussed in a previous paper [11]. As shown in Fig. 7, the χ value gradually increased with decreasing temperature, as is usually observed for ferromagnetic particles. In the low temperature region below 30 K, a temperature-sensitive component appeared. This can be attributed to the existence of super-paramagnetic species in addition to the ferromagnetic species, as in the case of the magnetic particles produced from a gaseous mixture of Fe(CO)5 and TMSAz [11], showing that the magnetic ultrafine particles were composed of two kinds of magnetic particles. Crystalline spheres having slightly different crystal plane distances may be responsible for the ferromagnetic and super-paramagnetic properties of the magnetic particles.
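The per-gram estimate above follows directly from the extrapolated moment and the approximate sample mass; a minimal sketch of that arithmetic is shown below, using only the values quoted in the text.

```python
# Minimal sketch: specific susceptibility estimate from the quoted values.
moment_500oe_emu = 6e-5    # moment extrapolated to 500 Oe (emu)
sample_mass_g = 3e-3       # approximate particle mass on the tape (g)

chi_per_gram = moment_500oe_emu / sample_mass_g
print(f"chi ≈ {chi_per_gram:.1e} emu/g")   # -> ~2e-02 emu/g
```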
Conclusions
From a ternary gaseous mixture of Co(CO)3NO, Fe(CO)5, and ATMeSi, magnetic amorphous ultrafine particles were produced under irradiation with intense Nd:YAG laser light at 355 nm. From FT-IR and Raman spectra, Co and Fe atoms were found to be connected via bridging C=O groups, and ATMeSi coordinated to Co atoms via the C=C double bond of the allyl group. From SEM-EDS and HRTEM analyses, it was shown that small amorphous particles with sizes of less than 50 nm joined together to form fibers, and crystalline spheres similar to the structure of Co0.7Fe0.3 were involved in some particles.
The magnetic susceptibility, χ, of the ultrafine particles measured with a SQUID magnetometer was evaluated to be ~2×10⁻² emu/g, and the temperature dependence of χ supported the ferromagnetic behavior of the particles. Under a magnetic field of 1-5 T, super-paramagnetic ultrafine particles were also produced in addition to ferromagnetic particles. The deposition of several kinds of crystalline spheres was responsible for the magnetic properties of the ultrafine particles.
Allergic dermatitis after knee arthroscopy with repeated exposure to Dermabond Prineo™ in pediatric patients: Two case reports
BACKGROUND Allergic contact dermatitis (ACD) secondary to Dermabond Prineo™ is rare, but documented. To our knowledge, there are no described reports of this ACD reaction within the pediatric population following arthroscopic surgery. CASE SUMMARY We report two cases of pediatric ACD upon second exposure to Dermabond Prineo™ after knee arthroscopy. Both cases presented within two weeks of the inciting second exposure. The cases resolved with differing described combinations of sterile cleaning, diphenhydramine, and antibiotic administration. No long-term sequelae were found. CONCLUSION This case report elucidates the rare complication of allergic dermatitis secondary to Dermabond Prineo™ repeat exposure use in pediatric arthroscopy.
INTRODUCTION
Efforts to decrease total operative time during a given surgical procedure are becoming more critical as both surgeons and administrators consider cost savings for hospital systems and surgical centers. It is estimated that one minute in the operating room can cost as much as $130, depending on the facility [1,2]. With the advent of rapid wound closure products such as Dermabond™ and Dermabond Prineo™ (Ethicon Endo-Surgery, Cincinnati, OH), operative times can be shortened, resources saved, and operative efficiency and post-operative patient comfort increased [3-5].
Prineo™ is a wound closure system that utilizes a self-adhering polyester-based mesh in combination with a monomeric 2-octyl cyanoacrylate formulation and the colorant D&C Violet No. 2. The wound closure system is intended to be used in conjunction with deep dermal stitches. Reported benefits of Prineo™ include a protective microbial barrier, greater skin holding strength when compared to skin staples or subcuticular sutures, more evenly distributed tension away from wound edges, easy removal, and reduction in overall wound closure time [3,6-8].
While there are reported cases of post-operative allergic contact dermatitis (ACD) with the use of Dermabond™, there are few reported cases of such dermatitis associated with the Prineo™ wound closure system, and even fewer within the pediatric age group [9-11]. This case report describes instances of ACD following exposure to Prineo™ in a pediatric age group.
Chief complaints
Case 1: Six days after an arthroscopic left medial meniscus repair and bone marrow aspirate injection, a 15-year-old female reported increasing itching and a burning sensation around the incision sites that progressed to feeling like her left knee was "on fire."
Case 2:
The second patient is a 12-year-old female who presented one week after her left medial meniscal allograft transplantation and reconstruction of anterior cruciate ligament (ACL), posterior cruciate ligament (PCL), and medial collateral ligament (MCL) with complaints of two days of itching around her operative sites.
History of present illness
Case 1: The patient underwent an arthroscopic left medial meniscus repair and bone marrow aspirate injection in which the portal incision sites were closed with Prineo™. Thrombo-Embolus-Deterrent (TED) hose were applied after the surgical drapes were taken down. The procedure was uncomplicated. Upon the patient's return for her one-week postoperative follow-up appointment, she was noted to have large blisters covering the anterior portal sites (Figure 1A). The Prineo™ mesh dressing was removed and it was noted that there were large blisters on the anterior left knee.

Case 2: All incisions and portal sites were closed with Prineo™. Surgical drapes were taken down and TED hose were applied bilaterally. The procedure was uncomplicated. The patient then returned for her one-week postoperative follow-up appointment with a red papular rash surrounding the anterior knee and surgical sites. She complained of itching around these sites.
History of past illness
Case 1: This patient had a right knee ACL reconstruction two years prior in which the incisions were closed with Prineo™. There was no allergic reaction to the closure device at that time. She then sustained a left knee injury while playing softball. She was found to have a medial meniscus tear that was subsequently treated surgically as presented in this case.
Case 2:
She had previously undergone a right medial meniscal allograft transplantation with ACL and MCL reconstruction a year and a half prior for congenital absence of these structures, performed by the senior author. Dermabond™ was used for wound closure during her first surgical procedure and Prineo™ was used in this case.
Personal and family history
Case 1: This patient had an unremarkable personal and family medical history.
Case 2:
This patient had an unremarkable family medical history with a personal medical history of congenital absence of bilateral ACL, MCL and medial meniscus.
Physical examination
Case 1: The blisters were intact and raised. She also had pruritic scattered papules on the thigh and lower leg. She had a negative Homan sign and the remainder of her physical exam was unremarkable for her postoperative course.
Case 2:
One week post-operatively, the dressings covering the operative knee were removed and she was noted to have significant skin inflammation with blisters and welts along the entirety of her surgical incisions (Figure 2A). She also had scattered papules from her groin to her left ankle that were erythematous but not draining nor pustular. The surgical incisions and portal sites were noted to be well approximated with no evidence of drainage.
FINAL DIAGNOSIS
The above cases demonstrate the occurrence of pediatric ACD upon second exposure to Prineo™. Both cases presented within one week of the surgical procedure.
TREATMENT
Case 1: She was prescribed diphenhydramine 25 mg twice daily and placed on doxycycline 100 mg daily for seven days for treatment of concurrent folliculitis. Her wounds were cleaned with sterile water and patted dry. They were then redressed with a nonadherent dressing over the blistering area followed by soft dressings over top. Her TED hose were discontinued until the blisters dried up.
Case 2:
The patient's TED hose were discontinued on the operative side (left) and the skin was cleaned above and below the incisions. The incision sites were then redressed with a non-adhesive dressing followed by soft dressings. She was placed on diphenhydramine 25 mg twice daily which was increased to four times per day as needed for persistent itching. She underwent daily dressing changes through postoperative day nine when the blisters became flaccid.
OUTCOME AND FOLLOW-UP
Case 1: At her 2-wk post-operative visit, the raised plaques had flattened, the erythema had decreased, and her pruritis had resolved ( Figure 1B). She was followed weekly and noted to have significant improvement of the contact dermatitis at her three-week postoperative visit. Her blisters had resolved and no active drainage was appreciated on exam ( Figure 1C). The patient was followed two years postoperatively and had no recurrence of any skin reaction surrounding the surgical incisions or elsewhere on her body.
Case 2:
On postoperative day fifteen, all blisters had drained and epithelialization of the underlying skin was appreciated. At the patient's three-week postoperative appointment, she was instructed to shower and refrain from using any lotions or creams as scabbing of the blisters was noted. At her six-week postoperative appointment, the allergic dermatitis was completely resolved ( Figure 2B). The patient was followed regularly for two years postoperatively and had no recurrence of any skin reaction surrounding the surgical incisions or elsewhere on her body.
DISCUSSION
The occurrence of allergic reactions to Dermabond™ and Dermabond Prineo™ is rare and infrequently reported in the literature. Durando et al [12] reported an incidence rate of 1.7% (15 of 912 patients) over a two-year span involving 912 total knee arthroplasty (TKA) cases using Dermabond™. Of these 15 patients who developed a suspected ACD, three agreed to participate in patch testing to determine if they were allergic to Dermabond™ or 28 other possible allergens. Prineo™ was not used in these patients' cases and as such was not studied. Of the three who agreed to participate, two developed a positive reaction to Dermabond™ [12]. Chan et al [13] reported 3 cases of allergic reaction to Prineo™ out of 366 patients (1.8%) that were managed by a single surgeon following TKA. Each of the cases presented within 4-9 d postoperatively and the reaction resolved between 4 wk and 12 wk postoperatively. Each patient was referred to a dermatologist and 2 of the 3 patients received a course of topical corticosteroids. Similar to our cases, no long-term sequelae, including recurrence or superficial or deep joint infection, occurred when patients were followed for at least one year [13].
In a study examining wound complications after 2-octylcyanoacrylate skin closure following total joint arthroplasty, Michalowitz et al [14] found a 19.2% superficial wound complication rate in hip and knee arthroplasty cases when Dermabond Prineo™ was used. Because this was a retrospective cohort study, the specifics of what defined a superficial wound complication were not described [14].
Davis and Stuart [15] reported a single case of a 72-year-old woman who was found to have severe ACD following a left TKA and who subsequently showed an extreme reaction to Prineo™ upon patch testing. This patient reported a similar but milder rash a year prior when she underwent right TKA. This case provides further evidence that the occurrence and severity of ACD to Dermabond Prineo™ may be related to second exposure. As in other reported cases, the patient's symptoms resolved over a 3-4 wk course of topical corticosteroids [15].
Regarding current treatment standards, once surgical site infection is ruled out, the treatment of ACD requires an accurate severity assessment. In the post-operative setting, orthopaedic surgeons need to have a high index of suspicion for any dermatitis following the use of skin adhesives and should treat immediately based on the severity of the dermatitis. In accordance with the International Contact Dermatitis Research Group classification, a mild reaction (1+ grade) has light erythema and is nonvesicular [16]. Mild reactions can be monitored for progression, and consideration can be given to removing the Prineo™ dressing [17]. Conservative treatment entails dressing removal and oral antihistamines for pruritus. A moderate reaction (2+ grade) has edema, erythema, and discrete vesicles [16]. The removal of the Prineo™ dressing is necessary, and the ACD is treated with topical steroids and oral antihistamines [17]. A severe reaction has coalescing vesiculobullous papules and is treated the same as a moderate reaction, with the additional consideration of oral steroids [16,17]. The current guidance from the American Academy of Allergy, Asthma, and Immunology is to use 0.5 to 1 mg/kg daily oral steroids for 7 d when more than 20% of the body surface area is affected [16]. The selection of topical steroid potency is based on the location of the dermatitis, the lesion size, and the severity of the reaction [16]. In orthopaedic cases that do not involve flexural surfaces, mid- to high-potency topical steroids, such as triamcinolone 0.1% or clobetasol 0.05%, are appropriate [18]. Topical steroids should be applied after hydration of the skin for optimal effectiveness [16]. In our cases, we did not use corticosteroids due to concerns with wound complications that have previously been reported with steroid use and postoperative incisions [19,20].
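To make the weight-based recommendation concrete, the sketch below computes the daily dose range implied by the cited 0.5-1 mg/kg guidance for a given patient weight. This is an arithmetic illustration only, not clinical advice, and the example weight is hypothetical.

```python
# Minimal sketch: daily oral steroid dose range per the cited guidance
# (0.5-1 mg/kg daily for 7 days when >20% of body surface area is affected).
# Illustrative arithmetic only, not clinical advice; the weight is hypothetical.
def daily_dose_range_mg(weight_kg: float) -> tuple[float, float]:
    return (0.5 * weight_kg, 1.0 * weight_kg)

low, high = daily_dose_range_mg(40.0)   # e.g., a 40 kg pediatric patient
print(f"{low:.0f}-{high:.0f} mg daily for 7 days")
```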
In a case report by Dunst et al [21], a 44-year-old woman who underwent reduction mammoplasty with Prineo™ wound closure presented 10 d postoperatively complaining of severe itching with an extensive skin reaction in the vicinity of the Prineo™ skin closure device. She was referred to dermatology and underwent allergy testing where a moderate positive allergic reaction to both components of the Prineo™ wound closure device was noted. The authors described a noticeable reduction in operating time in their use of Prineo™ in over 50 cases of excisional body contouring procedures with this case being the only instance of any dermatitis complication [21].
While there are some reports of Prineo™ reactions, several studies demonstrate the benefits of shorter operative times. Shippert [1] performed a randomized controlled trial showing decreased operative time, leading to decreased costs. Another randomized study concluded that Prineo™ allows significantly faster closure and increases post-operative patient comfort [8]. The low risk of adverse reaction to Prineo™, combined with the benefits of increased patient comfort and operative efficiency, provides a rationale for its continued use. In the current era, with a focus on cost savings, Prineo™ can significantly decrease operative times, leading to overall cost savings for hospital systems and surgical facilities. Any previous occurrence of allergic dermatitis following use of Dermabond™ or Prineo™, however, should prompt a thorough history, and further use of Prineo™ should be carefully considered, if not completely avoided.
Women want male partner engagement in antenatal care services: A qualitative study of pregnant women from rural South Africa
Introduction Evidence strongly shows that a supportive, involved male partner facilitates maternal HIV testing during pregnancy, increases maternal antiretroviral (ART) adherence and increases HIV-free infant survival. Partner engagement in antenatal care (ANC) is influential; however, the most effective strategy to engage male partners is currently unknown. Engaging pregnant women to understand whether male partner involvement is welcome in ANC, what this involvement entails and how best to invite their partner is an important first step in determining how best to engage male partners. Methods We interviewed 36 pregnant women receiving ANC services at a district hospital in rural Mpumalanga, South Africa to assess the strengths and weaknesses of their current relationship, the type of partner support they receive, whether they would like their male partner to be involved in their ANC, and how best to invite their male partner to their appointments. We conducted a thematic analysis of the qualitative interviews using MAXQDA software. Results Financial, emotional, and physical support were noted as important aspects of support currently provided by male partners, with most pregnant women wanting their partners to engage in ANC services during pregnancy. Preferred engagement strategies included participation in couple-based HIV testing and counseling, regular ANC appointment attendance, and delivery room presence. Women who reported a positive relationship with their partner were more likely to prefer inviting their partner without health facility assistance, while those who reported challenges in their relationship preferred assistance through a letter or a community health worker. Pregnant women perceived regular business hours (due to their partner being employed and unable to take time off work) and having a partner involved in multiple relationships as barriers to getting their partner to attend ANC services. Discussion Rural South African women, even those in unsatisfactory relationships, want their male partners to attend their ANC visits and birth. To make this possible, health facilities will have to tailor male partner engagement outreach strategies to the preferences and needs of the pregnant woman.
Introduction

A supportive, involved male partner facilitates maternal HIV testing during pregnancy, improves maternal antiretroviral therapy (ART) initiation and adherence, HIV status disclosure, and HIV prevention within couples, and decreases vertical HIV transmission [1][2][3][4][5]. Couples HIV testing and counseling has consistently led to improved maternal and infant outcomes among pregnant women in sub-Saharan Africa (SSA) because it allows the counselor time to address issues of concern (e.g., trust) and creates a space where both partners can be educated about the necessary treatment required for the person with newly diagnosed HIV [6][7][8][9][10][11][12][13]. In South Africa, clinical services in antenatal care (ANC) and maternity wings of most public hospitals do not include partner involvement in clinical care. Male partners are not invited to attend ANC services or be present for the birth, regardless of the pregnant woman's wishes.

Pregnancy can stress intimate partner relationships, resulting in relationship conflict [14] and parental stress [15]. New couples often experience a decline in relationship satisfaction in the years following childbirth, in part because they may be focusing on the child at the expense of their intimate relationship [16,17]. A perceived decline in relationship dedication by one partner can result in decreases in personal confidence and relationship dedication by the other partner [17]. There has been considerable work in South Africa documenting how power differentials among young women and their male partners impact relationship behaviors [18,19], but this inequality also extends to older women and can worsen during pregnancy [20]. Power disparities between couples can exacerbate gender inequalities and hinder access to HIV testing and treatment, resulting in poorer health outcomes for both mother and child [10,21,22].
While studies have compared HIV testing and treatment outcomes among pregnant women randomized to different strategies (verbal invitations, invitation letters from the clinic, community health worker (CHW) outreach to male partners, and non-financial incentives all increase couples HIV testing and counseling during pregnancy [5,6,8,13,[23][24][25][26]), little is known about how women prefer their male partners to be engaged [27]. We hypothesize that women in difficult relationships with poorer communication may prefer different outreach strategies than women in stable relationships with more supportive male partners.
Patient choice in clinical care decisions can have a positive effect on reducing loss to follow up, improving treatment retention, and clinical outcomes in specific patient populations [28,29]. Furthermore, patient preferred treatment has been associated with improved care outcomes including lower treatment non-adherence and an increased therapeutic alliance (i.e., agreement on goals of treatment, tasks, and reciprocal positive feelings between the provider and client) [28][29][30][31]. While male partner engagement in ANC is strongly correlated with increased uptake of HIV testing and counseling, as well as treatment (if applicable), the best strategy for engaging a male partner in pregnancy is likely context dependent and relationship dependent, therefore, women likely know which strategy will be most effective for them and their male partner.
In this qualitative study of women attending ANC services in rural South Africa, we elicited perspectives on the strength of their relationship, if they were interested in their male partners attending ANC appointments, and which-if any-engagement strategy was preferred. We aimed to gain a deeper understanding of how to deliver personalized, patient-centered care to pregnant women in rural South Africa.
Ethical approvals
We have approval from the Human Research Ethics Committee (Medical) at the University of the Witwatersrand (No. M200984, approved 20/11/2020) and the Vanderbilt University Institutional Review Board (IRB #202035, approved 11/9/2020). The CEO of the study hospital approved our research protocol before recruitment. Written informed consent was obtained from all participants in the study, including consent for publication.
Data collection
We recruited pregnant women attending ANC services at Tintswalo Hospital, an acute care hospital in Bushbuckridge, Mpumalanga province, South Africa, that sees 800 women for ANC services each month (31.1% with HIV). Pregnant, adult women (≥ 18 years old) attending ANC services were invited to participate in one in-depth interview after their ANC appointment. We approached and recruited 36 pregnant women from May 8, 2021, to June 8, 2021, using convenience sampling. None refused to participate.
We conducted interviews until we reached data saturation, initially estimated to occur after 30 interviews [32]. We collaborated with local partners (authors TS and GHN) to avoid inadvertently stigmatizing participants with interview questions. Interviews initially assessed a participant's views on the strengths and weaknesses of their relationship with their male partner and the types of support their partner provides. We then asked participants if they would like their male partner to accompany them to their ANC appointments and, if so, how they would prefer to invite their partner. We offered three options: 1) verbally inviting their male partner; 2) a letter of invitation sent from the health facility; or 3) a community health care worker (CHW) visit for counseling and invitation. Additionally, participants were encouraged to suggest other approaches. Finally, we asked how participants would like the health facility to include their partners during pregnancy. We suggested four options: 1) couples counseling, 2) participation during ANC visits, including education from the nurse/physician, 3) couple-based HIV testing and counseling, and 4) having their male partner in the delivery room. Notes were taken during each interview. Interviews ranged from 18 to 36 minutes.
We interviewed pregnant women receiving ANC services at a public health facility that does not currently encourage male partners to attend ANC, in part due to a lack of suitable facilities (private space) to facilitate male partner presence. While this health facility is open to change, more information from women was thought to be useful to informing future clinical practice.
Eligible participants were interviewed face-to-face in a private room by an interviewer fluent in the participant's preferred language (generally xi-Tsonga). Interviews were recorded. Only minor children were allowed to accompany participants in the interview room. Author GHN (male) and interviewer EN (female) conducted all interviews. Neither interviewer had previously worked at the health facility, nor were they acquainted with any participants. Both were employed as interviewers for the Agincourt Rural Health Unit and had prior experience conducting interviews. GHN identifies as a man and EN identifies as a woman. Neither had a specific interest in the topic, but both are from the community and were familiar with the difficulties women face during pregnancy. Interviews were then transcribed and translated, which GHN and EN performed independently to ensure reliable translations. Authors JH, CM, and CMA read through the transcriptions multiple times to familiarize themselves with the data and assess data saturation.
Author reflexivity. The authors recognize that they have different lived experiences in their relationships than the participants. Apart from GHN, the authors are not from this province of South Africa, and several of the authors are from the US and Canada. Given that the authors' perceptions of a "satisfactory" or "unsatisfactory" relationship were important in the development of our relationship codes, ensuring that they reflected local meaning was paramount. Team meetings were vital for discussing cultural norms and expectations of male partners and for understanding how those fed into women's choices and preferences. Weekly meetings were held where codes were developed, and additional meetings were later held to double-check the meaning of codes and themes, with full deference to the authors and interviewers from South Africa.
Interview analysis. After repeated reading of transcribed semi-structured interviews, authors CMA, CM, and JH used principles from thematic analysis to independently code interviews in MAXQDA 2022 [22]. Interview analysis sought to explore the perceived relationship strength, if relationship strength impacted a woman's interest in male partner engagement in ANC services, and how relationship strength impacted a woman's preferred strategy to invite her male partner into ANC. Authors CM and JH collaboratively generated 31 deductive codes from previous research about male engagement preferences and 33 inductive codes and placed them within 14 themes. The final framework had > 85% inter-rater reliability after seven meetings.
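The paper reports inter-rater reliability only as a percentage; for readers unfamiliar with the convention, the sketch below shows one common way two coders' labels can be compared, using percent agreement and, as a chance-corrected alternative, Cohen's kappa. The code labels and data are invented for illustration, and scikit-learn is assumed to be available.

```python
# Hedged illustration: percent agreement and Cohen's kappa between two
# coders over the same coded segments. The labels below are invented;
# the paper does not specify its exact reliability procedure.
from sklearn.metrics import cohen_kappa_score

coder_a = ["financial", "emotional", "physical", "financial", "emotional"]
coder_b = ["financial", "emotional", "financial", "financial", "emotional"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = 100 * agreements / len(coder_a)

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"percent agreement: {percent_agreement:.1f}%")  # 80.0%
print(f"Cohen's kappa: {kappa:.2f}")
```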
Results
We interviewed 36 pregnant women receiving ANC services at a rural South African district hospital. Participants had a median age of 28 years (IQR: 24, 34), had completed a median of 12 years of formal education (IQR: 11, 12), and 91% self-identified as Black or African (three declined to respond). Most lived with their parents/family (59%, n = 21), a third (33%, n = 12) lived with their male partners, and three lived alone (8%). Participants described themselves as single (58%, n = 21) or married (42%, n = 15). All except one participant were still in an intimate relationship with the father of their fetus.
Current relationship quality
Most participants spoke of their relationships in positive ways. Expectations for a good relationship focused on four components: 1) financial support to cover the costs of food, transportation, baby clothes, and/or electricity, 2) emotional support through communication, trust, and encouragement, 3) physical support through shared household chores and childcare responsibilities, and 4) a good relationship with in-laws. One woman, who spoke positively about her relationship, explained why she was happy with her partner.
Pregnant Woman: Everything is very well. My mother-in-law is very good to me as well as my sisters-in-law. Also, my husband is very focused he is not the kind of person to be all over the street [drinking or cheating].
Interviewer: How does the father of your child support you in this situation?
Pregnant Woman: He takes good care of me. We are staying together in Pretoria. I only came back [to Mpumalanga] because I want to give birth at home. When I don't feel well, he gets very worried. Even now I don't feel okay sometimes I feel like vomiting, even when I can call and tell him the doctor said I'm not okay I know he will get worked up.
(Married, 25 years old)

Another woman reported being happy with her partner and highlighted the importance of communication and monogamy in her relationship.
"We have never had serious problems, every time when we have a problem, we are able to speak about it and solve it. The relationship is good the way I see it, I am happy with him, and he is also happy, we haven't had issues of cheating ever since we started dating." (Single, 23 years old)

While most women spoke positively about their relationships, a substantial minority, about 25%, reported being unhappy with their partner. Difficulties arose around finances, emotional and physical support, substance abuse, and issues with the partner's extended family. One woman described the daily frustrations she has with her partner.
You have to nag, when you want him to do something, you must fight. If you don't fight, nothing will happen. Just like this morning I wanted money for transport, he says "why didn't you tell me yesterday". How would I have told him? When I got back from work, he wasn't home, since he left in the morning, and he didn't go to work. On the other side [at the house we share] there is no electricity and he went to his home [where his children from a previous relationship live], they called him [because] there are other issues there. But since he was there until I came back from work [name redacted] and I arrive at home, I found out that electricity wasn't fixed. On the other side we are out of gas, I am sure I have been telling him for a while and we use the gas when there is no electricity but because he went home to his kids, he didn't refill the gas. What would I have used to cook? Yooh it is difficult.
Relationship expectations
Financial support. Most women reported receiving money from their partner when they specifically asked for financial assistance, whether for food, transportation, or items needed once the baby arrived. Nevertheless, with high rates of poverty, about half of our participants relied on more than just their partner for financial stability. One woman explained,

I would not say he supports me well financially because he does not work well. . . my brothers. . . buy us food. . . and also give the children money to carry to school.
(Single, 30 years old)

There were a few women in more financially stable situations. One woman was married and expecting her first child with her partner; he worked out of town and only visited one weekend a month. Financially, she indicated satisfaction with the support he provided, explaining,

I am the one handling finances and money, when his money comes in, it comes into my hands. I am happy and I don't stress, and I don't consult when I do something I just do what the money allows me to do.
(Married, 31 years old)
This level of control over family finances was unusual in our sample; only one other woman (married, aged 34 years) reported control over the money as it came in.
Emotional support. Even when their partners were living apart, most women reported that their emotional needs were met. Emotional needs included listening to her challenges and offering support, caring for her when she is feeling stressed, and being supportive of mood swings experienced during pregnancy. For example, one 21-year-old woman, who was expecting her second child, described her partner as supportive. She explained, "The relationship is good, we love each other, and we communicate well, we help each other to solve problems, when I tell him about my problems he assists and there is no problem" (Married, 21 years old).
Physical support. Physical support focused on assistance with household chores, including washing clothing, cooking food, and cleaning the house. Given that many women did not live with their male partners, physical support was only relevant for a subset of participants. Some noted that their partners were willing to "clean, wash clothes and . . . cook" (Married, 21 years old). Others noted that their partners were less inclined to physically help them but did not want to see them struggle. One woman explained, "I once complained about my back hurting and I could not bend anymore, so he bought me a washing machine to replace his laziness. . ." (Single, 32 years old).
Preferences for male partner engagement in ANC and HIV testing
We asked participants if they were interested in their male partner engaging in four services: (1) ANC appointments, (2) couples HIV testing and counseling, (3) relationship counseling, and (4) the delivery. Eighty-nine percent of participants (n = 32) were interested in having their partner attend ANC services with them, 92% (n = 33) wanted couples counseling during the ANC period, 85% (n = 28, excluding those who reported already knowing their partner's status and disclosing their own) wanted couples HIV testing and counseling, and 81% (n = 29) wanted their partner to attend the birth of their child. Women believed that having their male partner present during clinical visits would provide an opportunity to complete couple-based HIV testing and counseling, which would improve trust, provide the partner an understanding of what happens during pregnancy, and give him the opportunity to better understand her struggles during pregnancy. Among women who wanted their partner to participate in ANC services, 50% (n = 18) preferred to verbally invite their partner without assistance, 41% (n = 15) preferred an invitation letter from the health facility, and 27% (n = 10) preferred a trained male CHW undertaking counseling with her partner before that CHW issued an invitation (some women indicated a preference for more than one approach).
The selection of a particular engagement strategy tended to reflect the strength of a couple's relationship (Fig 1). Those who reported a positive relationship with their partner typically preferred inviting him to attend ANC services without the assistance of the health system. One woman explained, "I prefer number 2 [verbal invite] because when I talk to him, he listens" (Married, 31 years old). Another woman stated,

When there is communication there will not be a need for a community health worker or for a letter. When he does not understand [will not attend ANC services] then you [the health facility] can write him a letter or send a community health worker.
(Married, 27 years old)

Pregnant women in unhappy, difficult relationships reported that if they extended the invitation to attend ANC visits, their partner might not attend the clinic. A letter from the health facility, however, was seen as an official invitation that could not be easily ignored. A pregnant woman who reported being in an unhappy relationship with her partner indicated that she preferred an invitation letter "so that he can believe" [he should attend clinic with her] (Single, 35 years old). Another reported that "a letter can be very helpful, so if he does not show up, then he can come and explain himself why he did not honor the letter" (Single, 42 years old).
Factors other than relationship strength also played a role for some women who preferred the invitation letter. For example, several participants noted that South African cultural norms do not encourage men to take time off work to attend ANC services. One woman elaborated that a letter from the health facility would provide her husband written documentation to show to his employer: "I would prefer that you write a letter, the problem that my husband has is that he works at the farms, and they would need proof that it is a matter of clinic. . ." (Married, 21 years old). Women not as comfortable speaking directly with their partner expressed a preference for having a CHW invite their partner to ANC. Three situations emerged in which women preferred a CHW-initiated partner invitation: a partner with a history of violence, a partner who had abandoned them, or a new relationship in which the couple had not yet established a strong bond. One woman described her tumultuous relationship:

Things are difficult I don't want to lie, he left me when I was 2 months pregnant. I had no money to come here I had to try somewhere. On the side I buy baby clothes, I just live a trying life. Nothing is going well, I am suffering. . . . you can send a community health worker to go and talk to him.
(Single, 25 years old)

Women in this situation hoped that the CHW could convince the partner to provide some type of support, or at least get him to the health facility, where health care workers would provide guidance on how to support the woman during the pregnancy.
Will a partner attend an ANC appointment? A woman's perspective
Most women believed their partners would attend their ANC appointments if invited; however, financial and clinical structural barriers could limit male partner participation, even among interested partners. For example, given that clinical appointments are offered during typical working hours, local employers would have to allow men to miss work to attend appointments. One woman explained,

I like them [the ideas of male partner participation in ANC and birth] but most of what you said, like coming with me to [ANC] checkups, will be an issue because of the distance and he only gets paid when he clocks in, although he keeps saying that he is doing it for the child and his off days are usually on weekends and does not correspond with my hospital appointments. (Single, 20 years old)

While we did not specifically ask women where their partners worked, several women revealed that their partners worked in the mining or farming industries some distance from both the hospital and their homes, which resulted in the male partner only returning home once per month. Despite potential interest, the partner would likely not be able to attend ANC appointments unless clinical service times were adapted.
Others highlighted the challenges they experienced being the newest partner in polygamous relationships. The first wife was perceived as having control over the man's behavior, reducing the likelihood of his participation in ANC services for the wife being interviewed. One woman explained,

It would not be possible because I am not always with him. The truth is that he supports me with everything, but I am his second wife, and we [do] get along with the sister wife, [but] there are fights. Before I became pregnant, he informed his wife there is another wife. . . I don't have my own house and I still live at home, sometimes he gets to be with me for a few days; sometimes we go to his home. If it was up to me, I would say I am interested [in him attending ANC appointments] but I can't be able to be with him when I am supposed to be. . . he won't have the time for them to be with me.
Discussion
Most of the interviewed pregnant women would like to engage their male partners in ANC services, even when their relationships are, in their own estimation, unsatisfactory. Issues that drove women to report frustrations with their partners included poor communication, a lack of financial or emotional support, polygamy, and misuse of alcohol. These issues reflect the results of similar studies among couples in South Africa [20,[33][34][35]. Women who report difficult relationships may benefit the most from male partner engagement in ANC; couples with good communication skills may need little encouragement to test together or discuss their HIV status. Disclosure is correlated with improved health outcomes, condom use, and a good relationship with an intimate partner [36,37]. Facilitating couple-based HIV testing and counseling in a supportive environment, particularly among those experiencing relationship challenges, would likely lead to increased disclosure, and support, within intimate partner relationships [27,38]. It may also reduce the risk of intimate partner violence (IPV) [39,40].
More than 80% of interviewed pregnant women wanted their partners to attend ANC appointments, complete couples HIV testing and counseling together, and provide support during the birth of their child. This support presents the health system with an opportunity to develop a partner engagement protocol, one where women can opt their male partner into participation in ANC, delivery, and post-natal care. Evidence-based couple-engagement programs that reflect the requests of our study population have been successfully implemented as clinical trials in South Africa, leading to increased uptake of male partner participation [26]. The remaining challenge is integration of these programs into the national health system.
Given the overwhelming interest in partner engagement, clinical practices will need to adjust. Effective implementation will require investment in infrastructure to ensure privacy for all women receiving ANC services or in active labor, and will require additional staff training to provide effective couple-based care. For women with migrant worker partners, clinical hours may need to be offered in the evenings or on weekends to accommodate their needs. Programs expanding services to allow men to attend clinic after hours have been successfully implemented in Kenya and South Africa [41][42][43]. For women in self-reported difficult relationships, a counselor may need to be available to guide discussions, including around partner support and HIV testing and counseling.
Unsatisfactory relationships are likely to come under increased pressure during the pregnancy and post-partum period, in South Africa and around the world [15,33], leaving women with few avenues to secure their partner's emotional and financial support. With 49% of the population living on less than 1,183 Rand (USD 70.90 in 2015) per month [44], women are particularly vulnerable during periods when employment is interrupted. Worldwide, women make only 77 cents for every dollar earned by men, and this inequity is exacerbated by restrictive parental leave, which can push women into part-time or informal employment [45]. In South Africa, much like in other parts of the world, maternity leave support is limited or nonexistent, particularly in the informal employment sector. The need for support may lead some women to stay in unsatisfactory relationships, even if the benefits are limited and the risks are high. Pregnant women around the world would benefit from access to psychosocial support during ANC, including relationship counseling, to help them make difficult decisions during a time already characterized by change.
Conclusions
Rural South African women, even those in unsatisfactory relationships, want their male partners to attend their ANC visits and delivery. To make this possible, health facilities need to contextualize male partner engagement outreach strategies to the preferences and needs of the pregnant woman. Future studies are required to assess how personalized care for pregnant women can be delivered most cost-effectively to ensure they receive the support they need. To further partner involvement in clinical care, additional training in couples HIV testing and counseling for nurses, as well as expanding and updating health facilities to accommodate partners in appointment rooms, will likely be necessary.
A putative E3 ubiquitin ligase substrate receptor degrades transcription factor SmNAC to enhance bacterial wilt resistance in eggplant
Abstract Bacterial wilt caused by Ralstonia solanacearum is a severe soil-borne disease globally, limiting production in Solanaceae plants. SmNAC negatively regulates eggplant resistance to bacterial wilt (BW) through restraining salicylic acid (SA) biosynthesis. However, other mechanisms through which SmNAC regulates BW resistance remain unknown. Here, we identified an interaction factor, SmDDA1b, encoding a substrate receptor for an E3 ubiquitin ligase, from the eggplant cDNA library using SmNAC as bait. SmDDA1b expression was promoted by R. solanacearum inoculation and exogenous SA treatment. Virus-induced gene silencing of SmDDA1b suppressed the BW resistance of eggplants, whereas SmDDA1b overexpression enhanced the BW resistance of tomato plants. SmDDA1b positively regulates BW resistance by inhibiting the spread of R. solanacearum within plants. The SA content and the expression of the SA biosynthesis gene ICS1 and of signaling pathway genes decreased in SmDDA1b-silenced plants but increased in SmDDA1b-overexpression plants. Moreover, the SmDDB1 protein interacted with SmCUL4 and SmDDA1b, and protein degradation experiments indicated that SmDDA1b reduces SmNAC protein levels through proteasome degradation. Furthermore, SmNAC could directly bind the SmDDA1b promoter and repress its transcription. Thus, SmDDA1b is a novel regulator functioning in the BW resistance of solanaceous crops via the SmNAC-mediated SA pathway. These results also revealed a negative feedback loop between SmDDA1b and SmNAC controlling BW resistance.
Introduction
As a soil-borne bacterial disease, bacterial wilt (BW) is triggered by members of the Ralstonia solanacearum species complex (RSSC) [1]. It infects about 200 host plant species in 50 families, especially the Solanaceae family [2]. Generally, R. solanacearum secretes extracellular polysaccharides and proteases and self-reproduces in the plant vascular bundle; consequently, water transport is blocked, which leads to plant death [3]. During crop production, bacterial wilt is difficult to control because R. solanacearum spreads through irrigation water and infected plant materials. Therefore, investigating the genes involved in BW resistance is crucial for crop breeding.
Several genes regulating BW resistance have been identified in various plants. The first BW resistance gene identified was RRS1-R in Arabidopsis; it interacts with the matching PopP2 effector secreted by R. solanacearum, resulting in BW resistance [4]. In Arabidopsis thaliana ecotype Wassilewskija, RRS1 and RPS4 were involved in BW resistance in cruciferous crops [5]. When the elongation factor-Tu (EF-Tu) receptor (EFR) is ectopically expressed in potato (Solanum tuberosum) and in tomato (Solanum lycopersicum), the transgenic plants show reduced BW symptoms [6,7]. Histone deacetylase (HDAC)-mediated histone deacetylation also suppresses BW resistance in tomatoes [8]. In tomatoes, BW resistance is elevated by overexpression of potato StNACb4 [9]. In tobacco (Nicotiana tabacum), the transcription factor bHLH93 boosted BW resistance by interacting with the R. solanacearum effector RipI [10].
Ubiquitination has vital functions in plant disease resistance. In eukaryotes, protein degradation is mainly regulated by the conserved ubiquitin/26S proteasome system (UPS). A ubiquitin-activating enzyme (E1) activates ubiquitin, and the ubiquitin then binds to a ubiquitin-conjugating enzyme (E2) through a thiol ester bond. Target proteins are recruited by a ubiquitin ligase (E3), which transfers the ubiquitin onto them, marking them for degradation [11]. E3 ligases comprise three major groups: homologs to E6-associated protein C-terminus (HECT), really interesting new gene (RING), and plant U-box (PUB) [12].
Although most solanaceous crops are susceptible to BW, several eggplant (Solanum melongena) cultivars have shown high levels of BW resistance, making them ideal crops for BW resistance analysis. Some BW resistance-related genes or loci, including EBWR9 [22], SmSPDS, and SmMYB44 [23], have been identified in eggplants. SmNAC plays a negative role in resistance to BW by repressing SmICS1 expression in eggplants [24].
In this study, the SmNAC protein was used as bait for screening interactors in the eggplant cDNA library. The E3 ubiquitin ligase substrate receptor SmDDA1b was identified and found to positively regulate BW resistance and SA contents in eggplants. SmDDA1b also interacted with SmNAC to form a negative feedback loop (SmDDA1b-SmNAC) that regulates SA production, thus enhancing BW resistance in eggplant.
SmDDA1b physically interacts with SmNAC
Our previous study demonstrated that SmNAC negatively regulates the BW resistance of eggplants by inhibiting SA biosynthesis [24]. As the bait protein, the 139-amino acid N-terminal portion of SmNAC (SmNAC1-139), containing a non-self-activating NAM domain, was used to screen for interaction factors in a cDNA library of eggplant leaves after BW inoculation. A putative E3 ubiquitin ligase substrate receptor, LOC102586503 (SmDDA1b hereafter), encoding a protein of 167 amino acid residues, was shown to interact with SmNAC (Fig. 1A); based on its phylogeny and protein structure, we named it SmDDA1b. The interaction was confirmed by yeast two-hybrid (Y2H) assay (Fig. 1A). Bimolecular fluorescence complementation (BiFC) and CoIP assays also confirmed the interaction between SmDDA1b and SmNAC (Fig. 1B and D), implying that SmNAC indeed interacts with SmDDA1b.
SmDDA1b is a homolog of AtDDA1, a substrate receptor protein of the CUL4-DDB1-type E3 ubiquitin ligase (CRL4) [16]. We retrieved DDA1 homologs from 15 representative dicotyledonous plants, and phylogenetic analysis showed that the DDA1 proteins clustered into two clades: the DDA1a lineage, whose proteins contain only the DDA1 domain, and the DDA1b lineage, whose proteins contain both the DDA1 and SAP domains (Fig. S1A and B and Table S1, see online supplementary material). SmDDA1b and its homologs in solanaceous plants clustered into the DDA1b lineage (Fig. S1A, see online supplementary material). Moreover, green fluorescent protein (GFP)-tagged SmDDA1b was targeted to the nucleus (Fig. 1C). These results implied that SmDDA1b functions as a substrate receptor in eggplant.
Transcriptional analysis of SmDDA1b in eggplant
Because SmNAC regulates BW resistance in eggplants via the SA pathway [24], we evaluated whether SmDDA1b could also be involved in resistance to BW. No differential nucleotide sites were found between the SmDDA1b cDNA and genomic DNA sequences of the BW-resistant line E31 (R) and the BW-susceptible line E32 (S) of eggplants (Fig. S2A and C, see online supplementary material). However, a 270 bp segment containing three NAC-binding cis-acting elements was absent from the SmDDA1b promoter of E31 compared with E32 (Fig. S2B, see online supplementary material), and this difference was conserved in another four resistant and six susceptible materials (Fig. S2D, see online supplementary material). The qRT-PCR results showed that SmDDA1b had high transcript accumulation in the leaves of both E31 (R) and E32 (S) plants, but lower expression in the root, stem, and leaf of E32 plants compared with E31 (Fig. 2A; Figs S3 and S4A, see online supplementary material). The SmDDA1b protein level was also higher in E31 stem and root than in E32 (Fig. S4D, see online supplementary material). SmDDA1b was downregulated in both E31 and E32 plants from 1 h to 12 h after R. solanacearum inoculation. Notably, SmDDA1b expression increased drastically in E31 plants but remained reduced in E32 plants 24 h after R. solanacearum inoculation (Fig. 2B; Fig. S4B, see online supplementary material). SmDDA1b was also induced in E31 plants but suppressed in E32 plants 48 h after treatment with exogenous SA (Fig. 2C; Fig. S4C, see online supplementary material). Thus, these results demonstrated that SmDDA1b might be involved in BW resistance.
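The promoter comparison above rests on counting NAC-binding cis-elements; purely as an illustration, a scan like the one below could flag candidate sites. The CACG core motif is a commonly cited NAC recognition core, but the exact element definition the authors used is not stated, and the toy sequence is invented.

```python
# Hedged sketch: scanning a promoter region for putative NAC-binding
# cis-elements. The CACG core and the toy sequence are assumptions;
# the paper does not give its motif definition.
import re

promoter = "TTGACACGTTAGCACGAATTCCACGGTA"  # placeholder stand-in for the 270 bp region

hits = [m.start() for m in re.finditer("CACG", promoter)]
print(f"{len(hits)} putative NAC core motifs at positions {hits}")  # 3 hits
```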
SmDDA1b positively regulates BW resistance
To evaluate the function of SmDDA1b in BW resistance, we generated 10 lines of SmDDA1b-silenced plants from the BW-resistant line E31 by virus-induced gene silencing (VIGS) in eggplant. SmDDA1b expression was reduced in the SmDDA1b-silenced eggplant plants (pTRV2-SmDDA1b) compared to the control plants (pTRV2) (Fig. 2D). All SmDDA1b-silenced eggplant lines displayed typical wilt symptoms with a high disease index after inoculation with R. solanacearum, while the control plants showed no significant wilt symptoms (Fig. 2E; Fig. S5A, see online supplementary material). To further determine the function of SmDDA1b, we overexpressed SmDDA1b in BW-susceptible tomato plants. Seven independent transgenic tomato lines highly expressing SmDDA1b were obtained and self-crossed to produce another generation for seed propagation and phenotypic characterization (Fig. S5B and C, see online supplementary material). Three representative transgenic tomato lines (OET1-2, OET1-4, OET1-8) were selected from the new generation for further analysis (Fig. S5D, see online supplementary material). The WT tomato plants exhibited a wilted phenotype 7 d after inoculation with R. solanacearum, while the transgenic OET1-2, OET1-4, and OET1-8 lines only displayed slight wilt in several leaves (Fig. 2G; Fig. S5D, see online supplementary material). We also measured the dynamic disease index and morbidity of WT and OE-SmDDA1b transgenic tomato plants over 14 days after R. solanacearum inoculation. The results showed that the transgenic OET1-2, OET1-4, and OET1-8 lines invariably had lower disease index values and morbidity than the WT plants (Fig. 2H and I; Fig. S5E, Table S2, see online supplementary material). When 100 μM 1-aminobenzotriazole (ABT, a salicylic acid inhibitor) was pre-sprayed 24 h before R. solanacearum inoculation, the resistance of OET1-4 plants to BW was weakened (Fig. S5F and G, see online supplementary material). These results indicated that SmDDA1b has a positive role in regulating BW resistance.
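The paper reports disease index values without defining the computation; a common convention in bacterial wilt assays scores each plant on an ordinal wilt scale and aggregates the scores as below. The 0-4 scale and the function are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a disease index as commonly computed in bacterial wilt
# assays; the 0-4 wilt scale is an assumption, since the paper does not
# define its formula.

def disease_index(scores, max_scale=4):
    """Disease index (%) = sum of per-plant ordinal wilt scores
    / (max scale * number of plants) * 100."""
    return 100 * sum(scores) / (max_scale * len(scores))

# Example: 10 plants scored from 0 (no wilt) to 4 (complete wilt).
scores = [0, 1, 1, 2, 4, 4, 3, 0, 2, 4]
print(f"disease index: {disease_index(scores):.1f}%")  # 52.5%
```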
SmDDA1b inhibits the spread of R. solanacearum
Because self-reproduction and spread of R. solanacearum occur in the xylem of plants [3], we analysed R. solanacearum colonization in the root, lower stem, and upper stem of WT and transgenic plants after inoculation to investigate the SmDDA1b resistance mechanism. Consistent with the BW-susceptible phenotype of the pTRV2-SmDDA1b eggplant plants, higher in vivo R. solanacearum concentrations were detected in the root, lower stem, and upper stem of SmDDA1b-silenced eggplant plants compared with the control plants (Fig. 2J). Interestingly, when pTRV2-SmDDA1b eggplant plants were almost completely wilted (14 dpi), the bacterial concentration in their lower stems was 10^7.42 CFU/g, significantly higher than at 1 dpi and 7 dpi (Fig. 2E and J). Conversely, the control plants (pTRV2) remained robust, with extremely low bacterial concentrations in their stems. Low bacterial concentrations were also observed in the stems of SmDDA1b-overexpressing tomatoes (OET1-4 plants), which only showed minor wilting symptoms (Fig. 2G and K). However, the bacterial concentration in the WT tomato stems increased over time after inoculation, accompanied by severe wilting (Fig. 2G and K). These results indicated that SmDDA1b positively regulates BW resistance by inhibiting the spread of R. solanacearum within plants.
SmDDA1b positively regulates SA content and signaling pathway
Considering the important role of SA in BW resistance, we analysed SA levels in transgenic plants. SA contents were repressed in SmDDA1b-silenced eggplant plants compared with the control eggplants (pTRV2) (Fig. 3A) but elevated in SmDDA1b-overexpressing tomato plants (lines OET1-2, OET1-4, OET1-8) compared to the WT tomato plants (Fig. 3A). Moreover, the level of SA increased in control plants but declined in the SmDDA1b-silenced eggplant plants after R. solanacearum inoculation (Fig. 3A). These results showed that SmDDA1b positively controls SA levels in plants. We also examined the expression of the SA biosynthesis gene (ICS1) and signaling pathway-related genes (SmEDS1, SmGluA, SmNPR1, SmSGT1, SmPAD4). Expression of SmICS1, SmEDS1, SmGluA, SmNPR1, SmSGT1, and SmPAD4 decreased in the SmDDA1b-silenced eggplant plants compared with the control plants (Fig. 3B and C; Fig. S6, see online supplementary material). Conversely, SlICS1 and the SA signaling genes were upregulated in the OE-SmDDA1b tomato plants compared with the WT plants (Fig. 3B and D; Fig. S7, see online supplementary material). These results demonstrated that SmDDA1b positively regulates the SA pathway.
SmDDA1b suppresses SmNAC protein level through degradation
Because SmDDA1b is a CRL4 substrate receptor, we tested the interaction between SmDDB1 and SmDDA1b or SmCUL4. The Y2H and BiFC assays indicated that SmDDB1 interacts with both SmCUL4 and SmDDA1b (Fig. 4A and B), implying a possible ubiquitin ligase role of SmDDA1b in eggplants. SmDDA1b also interacted with SmNAC in the nucleus (Fig. 1B). After treatment with the proteasome inhibitor MG132 [25], the YFP fluorescence signal increased in the nucleus (Fig. 1B), suggesting that the SmNAC recognized by SmDDA1b in the nucleus is turned over by the proteasome.
To further confirm whether SmDDA1b degrades SmNAC through the 26S proteasome, we performed a degradation assay in vivo. Tobacco leaves expressing pEAQ-Firefly-SmNAC showed normal firefly fluorescence signals, which weakened after infiltration with pEAQ-SmDDA1b (Fig. 4C). However, when the proteasome inhibitor MG132 was co-infiltrated with pEAQ-Firefly-SmNAC and pEAQ-SmDDA1b, the firefly fluorescence signal increased again (Fig. 4C). The firefly luciferase activity exhibited the same patterns (Fig. 4D). The Western blot (WB) results showed that when pEAQ-SmDDA1b and pEAQ-Firefly-SmNAC were co-infiltrated into the tobacco leaf, only SmDDA1b protein bands were displayed (Fig. 4E). However, when the proteasome inhibitor MG132 was co-infiltrated, SmNAC protein bands (anti-LUC) appeared, demonstrating that SmDDA1b degrades the SmNAC protein via the 26S proteasome (Fig. 4E). Additional in vivo degradation assays were performed with different mixture ratios of the solutions carrying the GFP-fusion constructs. The SmNAC-GFP fluorescence signal significantly weakened when the concentration of SmDDA1b was increased (Fig. 4F). However, the addition of MG132 enhanced the GFP fluorescence signal of SmNAC-GFP (Fig. 4F). The WB results showed that as the SmDDA1b protein level increased, the level of SmNAC protein decreased (Fig. 4G). However, after the addition of MG132, the level of SmNAC protein increased. These results show that SmDDA1b can suppress the SmNAC protein level through degradation.
Discussion
DDA1 has been widely studied in Arabidopsis (referred to as AtDDA1 in the present work) [16] and rice (OsDDA1) [26]. DDA1 negatively regulates endogenous ABA-mediated developmental responses in plants and can also interact with COP10 to inhibit photomorphogenesis [15]. However, few studies have reported the involvement of DDA1 in regulating the SA pathway. The present study found that Arabidopsis AtDDA1 and eggplant SmDDA1b are evolutionarily distant (Fig. S1A, see online supplementary material), and close homologs of SmDDA1b have not been studied. We also found that SmDDA1b targets SmNAC for degradation through the UPS, thus positively regulating the SA pathway and BW resistance. Our study therefore enriches the current understanding of the function of CRL4 E3 ubiquitin ligases and emphasizes the significance of the UPS in regulating the SA pathway and defense responses.
Many E3 ubiquitin ligases participate in plant disease resistance. For example, the E3 ubiquitin ligases MIEL1 and GhPUB17 play negative roles in defense responses in Arabidopsis [27] and cotton (Gossypium spp.) [28], respectively, whereas the E3 ligase NbUbE3R1 and PUB4 play positive roles in immune responses in tobacco [29] and Arabidopsis [30], respectively. The E3 ligase NtRNF217 and the ATL family gene StACRE play positive roles in BW resistance in tobacco [31] and potatoes [32], respectively. Our study found that SmDDA1b expression was significantly induced by both R. solanacearum and SA treatment (Fig. 2B and C; Fig. S4B and C, see online supplementary material), an expression pattern resembling those of pattern-triggered immunity (PTI) and effector-triggered immunity (ETI) in plant disease resistance (reviewed in [33] and [34]).
SmDDA1b was not expressed in BW-susceptible eggplants (E32) after 24 h of inoculation with R. solanacearum or after 48 h of SA treatment (Fig. 2B and C; Fig. S4B and C, see online supplementary material). Based on the differential expression of SmDDA1b in the BW-resistant and susceptible materials after treatment with R. solanacearum and SA, we hypothesized that SmDDA1b regulates BW resistance via the SA pathway in eggplants. Indeed, SmDDA1b-silenced plants showed reduced BW resistance; their SA contents, and the expression of ICS1 and SA pathway signaling-related genes, also decreased. In contrast, SmDDA1b-overexpression plants showed increased BW resistance, with increased SA content and increased expression of ICS1 and SA pathway signaling-related genes (Fig. 3; Figs S6 and S7, see online supplementary material). Thus, these results supported the hypothesis that SmDDA1b positively regulates BW resistance in an SA-dependent manner, and they further highlight the complexity and precision of the SA signaling pathway and disease resistance regulatory networks in plants.
CRL E3 ubiquitin ligases regulate the expression of SA pathway signaling genes. In Arabidopsis, CRL3 recognizes and degrades the SA pathway NPR proteins [35,36]. In addition, the constitutive degradation of NPR3 monomers by CRL1 prevents autoimmunity in the absence of pathogen threat [37]. HOS15, a substrate receptor of CRL1, interacts with and degrades NPR1; additionally, NPR1 may interact with a CRL4 E3 ligase in Arabidopsis. In this study, SA pathway genes such as NPR1 showed differential expression in the SmDDA1b-silenced and -overexpression plants (Fig. 3C and D; Figs S6B and D, S7B, see online supplementary material). Besides regulating SA synthesis via ICS1, the possible interaction between SA pathway signaling genes and the CRL4 E3 ligase, and the mutual regulation between CRLs, may be one way of balancing the SA pathway under normal conditions and biotic stress. NAC transcription factors control gene expression and are also associated with SA signaling; for example, the expression of ONAC122, ONAC131 [38], CaNAC035 [39], and StNACb4 [9] can be induced by SA. SA is generally considered a major plant hormone associated with disease resistance, including to bacterial wilt. Similar to endogenous SA, exogenous SA can enhance BW resistance [40]. In our previous study, SmNAC reduced BW resistance in eggplant by repressing the SA synthesis gene ICS1 [24].
NAC transcription factors can also interact with E3 ubiquitin ligases. For example, the RING-type E3 ligase SINAT5 ubiquitinates Arabidopsis NAC1 [41], and SINA recognizes and degrades NAC1 in tomatoes through the UPS [42]. In this study, we found that the ubiquitin ligase component SmDDA1b interacts with SmNAC (Fig. 1A and B). Previous studies hypothesized that DDA1 acts as a substrate receptor for the multisubunit E3 ligase CRL4, promoting target protein recognition by CRL4 [15]. We confirmed that SmDDA1b is a component of CRL4 (Fig. 4A and B) and a homolog of AtDDA1 [16] (Fig. S1 and Table S1, see online supplementary material); thus, SmDDA1b can reasonably be inferred to act as a substrate receptor for CRL4. E3 ubiquitin ligases have specificity in recognizing target proteins, so identifying the target proteins is critical for dissecting the function of E3 ubiquitin ligases. We found that SmDDA1b interacts with its target protein SmNAC (Fig. 1A and B). Moreover, target proteins recognized by E3 ubiquitin ligases are degraded by the 26S proteasome [11]. In addition, it is interesting to observe that SmNAC targeted the SmDDA1b promoter and repressed its expression (Fig. 5). The ability of NACs to bind E3 ligase promoters has also been reported in other studies; in banana, MaNAC1 and MaNAC2 directly bind to the promoter of MaXB3 and repress its expression [43].
In general, all the results support the hypothesis that SmDDA1b improves the BW resistance of eggplants via the SmNAC-mediated SA pathway. In disease-resistant plants under R. solanacearum stress, SmDDA1b is induced by R. solanacearum; SmNAC is recognized by SmDDA1b and then degraded by the SmDDA1b-mediated ubiquitin/26S proteasome system (UPS). Consequently, the feedback repression of SmDDA1b by SmNAC fails, the suppression of ICS1 by SmNAC is relieved, SA accumulates, the SA signaling genes are activated, and systemic acquired resistance (SAR) is induced. In susceptible plants under R. solanacearum stress, SmDDA1b is restrained, and SmNAC cannot be recognized and degraded by the UPS; the released SmNAC protein in turn inhibits the expression of SmDDA1b, the suppression of SmICS1 by SmNAC is enhanced, and SA and the SA signaling pathway are repressed (Fig. 6). Similar molecular regulatory patterns have been reported in other species. In Populus, PalWRKY77 is degraded by the U-box E3 ligase PalPUB79, and PalWRKY77 directly represses PalPUB79 transcription [44]. In Tartary buckwheat (Fagopyrum tataricum), FtMYB11 is targeted by the E3 ligase FtBPM3, and FtMYB11 also represses FtBPM3 expression [45]. In banana, the RING-type E3 ligase MaXB2 degrades the transcription factors MaNAC2 and MaNAC3 as well as the ethylene biosynthesis proteins MaACS1 and MaACO3, while MaNAC2 and MaNAC3 in turn inhibit the expression of MaXB2 [43], indicating a feedback regulatory mechanism that helps maintain a balance of gene expression levels.
The molecular mechanism by which SmDDA1b regulates eggplant resistance to BW ultimately comes down to regulation of the SA pathway. Therefore, SmDDA1b may also have other functions, such as regulating plant cold stress resistance. SA has been proven to alleviate and regulate various physiological and biochemical changes in plants caused by cold stress [46][47][48]. It is therefore speculated that cold stress may induce SmDDA1b expression, leading to increased expression of SA pathway-related genes and thus resistance to cold stress.
Our study identified 22, 34, 42, 89, 49, 31, and 27 putative NAC elements in the promoters of SmGluA, SmNPR1, SmPAD4, SmSGT1, SmTGA, SmEDS1, and ICS1, respectively (Tables S4-S9, see online supplementary material). Previous studies have shown that NAC transcription factors bind the promoters of ICS1, EDS1, and PAD4 [49]. SmNAC may also directly bind the promoters of SA pathway signaling-related genes; however, further clarification of whether SmNAC directly binds SA pathway-related genes is necessary.

(Fig. 6 caption: The SmDDA1b regulatory module enhances plant resistance to BW; the working model is as described in the preceding paragraph.)
Experimental materials
BW-resistant E31 (R) and BW-susceptible E32 (S), two inbred eggplant (S. melongena) lines, were used in this study (Fig. S9 and Table S10, see online supplementary material). Nicotiana benthamiana, S. lycopersicum cultivar 'Money Maker', and R. solanacearum strain GMI1000 were also used.
Gene expression analysis
For plant total RNA isolation and complementary DNA (cDNA) synthesis, the Promega RNA extraction kit (Promega, Shanghai, China) and the EZB reverse transcription kit (EZBioscience, Roseville, MN, USA) were used. For qRT-PCR, the Vazyme mix (Vazyme, Nanjing, China) was used. The qRT-PCR primers are listed in Table S11 (see online supplementary material). The reference genes used in eggplant were SmActin and SmCyclophilin; the reference genes used in tomato were SlActin and SlGAPDH.
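The relative-quantification method is not stated; the standard 2^(-ΔΔCt) approach with the listed reference genes is sketched below as an assumption, with invented Ct values for illustration.

```python
# Hedged sketch of relative expression by the Livak 2^(-ddCt) method,
# assuming this standard approach was used with the reference genes
# listed above (e.g., SmActin in eggplant). Ct values are invented.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene in a sample vs. a calibrator,
    each normalized to a reference gene."""
    d_ct_sample = ct_target - ct_ref              # normalize the sample
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # normalize the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# e.g., SmDDA1b in inoculated vs. mock leaves, normalized to SmActin:
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_cal=26.5, ct_ref_cal=18.2)
print(f"fold change: {fold:.2f}")  # ~4.59
```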
Yeast two-hybrid assay
The SmDDA1b and SmCUL4 CDS sequences were cloned into the pGADT7 vector. The N-terminal portion of SmNAC (1-139 aa) and the full-length SmDDB1 ORF, with the stop codon removed, were cloned into the pGBKT7 vector. The specific primers are shown in Table S12 (see online supplementary material). The experiment was performed according to the manufacturer's instructions (Cat. No. 630489; Clontech, Mountain View, CA, USA).
Bimolecular fluorescence complementation analysis
The CDS sequences of SmDDA1b
Subcellular localization analysis
The SmDDA1b CDS sequence without the stop codon was cloned into the pEAQ-EGFP vector and then introduced into A. tumefaciens strain GV3101 (pSoup). A mixture (v:v, 1:1) with A. tumefaciens cells expressing the DsRed protein was infiltrated into N. benthamiana leaves. The plants were cultivated in the dark at 22 °C for 3 d. A confocal fluorescence microscope (Carl Zeiss, Oberkochen, Germany) was used to detect the green fluorescent protein (GFP) fluorescence. The assays were repeated three times. The primers used are listed in Table S12 (see online supplementary material).
Phylogenetic analysis and sequence alignment
DDA1-containing sequences from 15 dicotyledonous plants (Table S1, see online supplementary material) were retrieved by searching the whole-genome protein sequences in the NCBI RefSeq database using hmmsearch v3.3. The sequences were then aligned with the "--auto" parameter of MAFFT v7.455 and visualized with DNAMAN. The phylogenetic tree was built with the default parameters of IQ-TREE v1.6.12.
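The exact command lines are not given in the paper; a hedged sketch of the described pipeline (hmmsearch retrieval, MAFFT alignment with --auto, IQ-TREE with default parameters) might look as follows, where all file names and the DDA1 HMM profile are placeholders rather than details from the study.

```python
# Hedged sketch of the described pipeline: hmmsearch -> MAFFT (--auto) -> IQ-TREE.
# File names and the DDA1 HMM profile are placeholders, not from the paper.
import subprocess

# 1) Retrieve DDA1-containing sequences from the whole-genome proteomes
subprocess.run(["hmmsearch", "--tblout", "dda1_hits.tbl",
                "DDA1.hmm", "proteomes.fasta"], check=True)

# 2) Align candidate sequences with MAFFT's automatic strategy selection
with open("dda1_aligned.fasta", "w") as out:
    subprocess.run(["mafft", "--auto", "dda1_candidates.fasta"],
                   stdout=out, check=True)

# 3) Build the tree with IQ-TREE default parameters
subprocess.run(["iqtree", "-s", "dda1_aligned.fasta"], check=True)
```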
Pathogen inoculation
R. solanacearum inoculation was performed according to our previous study [23]. The experiment was conducted with three biological replicates under controlled conditions (30 °C).
Hormonal treatment
Eggplant seedlings at the four-leaf stage were sprayed with 1 mM SA every 12 h for two days, until all leaves of the plants were covered with hormone droplets [40,50]. Water treatment served as the control. The plants were cultivated under normal conditions (26 °C during 16 h of light and 22 °C during 8 h of dark). For qRT-PCR analysis, leaf samples were taken from three biological replicates at 0 h, 3 h, 6 h, 12 h, 24 h, and 48 h after SA treatment.
Virus-induced gene silencing assays
A 300 bp fragment of SmDDA1b was cloned into the pTRV2 vector. The pTRV1, pTRV2, and pTRV2-SmDDA1b vectors were transformed into A. tumefaciens strain GV3101. A mixture of pTRV1 and pTRV2 or pTRV2-SmDDA1b (v:v, 1:1) was infiltrated into the leaves of eggplant seedlings at the four- or five-leaf stage. The plants were maintained at 16 °C in the dark for 1 d and then cultivated under normal conditions for one to two weeks (26 °C during 16 h of light, 22 °C during 8 h of darkness). There were 10 biological replicates for each treatment. The primers used are shown in Table S12 (see online supplementary material).
SmDDA1b overexpression vector construction and transformation process
The full-length CDS of SmDDA1b was amplified and ligated into the pCAMBIA-1380 vector. The Agrobacterium strain GV3101 carrying the pCAMBIA-1380-SmDDA1b overexpression vector was used to transform the tomato cultivar 'Money Maker' [51].
Extraction of total plant protein and Western blot assay
A plant protein extraction kit (Solarbio, BC3720) was used to obtain total plant protein; see [52] for the specific western blot procedure. Anti-SmDDA1b, anti-LUC, and anti-GFP antibodies were used for the in vivo ubiquitination assay. The peptide sequence selected for the SmDDA1b antibody was MEDTSSSIPPNNATTSGAAKYLAGLPSRGLFSSNVLSSTPGGMRVYICDHETSPPEDQFIKTNQQNILIRSLMLKKQRGDHSSKDGKGISSNDNGRKRAAEKTLDSRTSNKKATTSNQVASPQETSRIRTPDIQNMTVEKLRALLKEKGLSLRGRKDELIARLRGDT, and the catalog numbers of the anti-Actin, anti-LUC, and anti-GFP antibodies were AB_764433, AB_934495, and AB_950071, respectively.
Salicylic acid extraction and quantification
Leaves of SmDDA1b-silenced plants, control plants, SmDDA1b-overexpressing lines, and WT plants, before and after inoculation with R. solanacearum, were collected for SA extraction and quantification [53,54]. The catalog number of the SA standard was 69-72-7 (Tianjin Damao Chemical Reagent Co. Ltd, Tianjin, China).
SA inhibitor treatment of SmDDA1b-overexpressing plants
WT and SmDDA1b-overexpressing plants at the four-leaf stage were pre-sprayed with 100 μM 1-aminobenzotriazole (ABT, a salicylic acid inhibitor) 24 h before inoculation with R. solanacearum. R. solanacearum inoculation was performed according to our previous study [23].
R. solanacearum isolation and quantification
Whole eggplants inoculated with GMI1000 were collected at 1, 7, and 14 d post-inoculation with R. solanacearum. The roots, lower stems, and upper stems were washed successively. The samples were soaked in 75% ethanol for 30 s and washed twice with sterile water (ddH2O) under sterile conditions. Samples were homogenized with sterile quartz sand and ddH2O, and the homogenate was brought to 10 mL with ddH2O in 50 mL tubes. The solution was then serially diluted from 10^1 to 10^6, and 100 μL of each dilution was spread on TTC medium containing 50 mg/L rifampicin. Colonies were counted after incubation at 30 °C for 2 d. A. tumefaciens did not grow on the TTC plates under these conditions (Fig. S10, see online supplementary material), and PCR was additionally used to confirm the identity of putative R. solanacearum isolates (Table S12, see online supplementary material). At least three biological replicates were performed per treatment.
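Converting colony counts from the dilution series into bacterial loads is standard plate-count arithmetic; a minimal sketch is below. The 100 μL plating volume follows the protocol above, while the colony count and dilution chosen are illustrative.

```python
# Standard plate-count arithmetic for the TTC dilution series (example values).
def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=0.1):
    """CFU/mL of the original 10 mL homogenate.

    colonies:          colonies counted on one plate
    dilution_exponent: e.g. 4 for the 10^4-fold dilution
    plated_volume_ml:  100 uL spread per plate (0.1 mL)
    """
    return colonies * (10 ** dilution_exponent) / plated_volume_ml

# Example: 87 colonies counted on the 10^4 plate from a root homogenate
load = cfu_per_ml(colonies=87, dilution_exponent=4)
print(f"{load:.2e} CFU/mL")  # -> 8.70e+06 CFU/mL
```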
Protein degradation assay

Mixed Agrobacterium cells carrying pEAQ-GFP-SmNAC and pEAQ-SmDDA1b were infiltrated into N. benthamiana leaves. The amount of pEAQ-GFP-SmNAC infiltrated into the leaves was fixed, while that of pEAQ-SmDDA1b was gradually increased; the infiltration ratios of pEAQ-SmDDA1b to pEAQ-GFP-SmNAC were 0, 0.25, 0.5, and 1, respectively. At 36-48 h after the treatment, the same amounts of pEAQ-GFP-SmNAC, pEAQ-SmDDA1b, and MG132 were injected into the treatment group. Luminescence was observed by fluorescence microscopy 3 d after the second treatment. A chemiluminescence imager (Bio-Rad ChemiDoc XRS+, USA) was used to image leaf luminescence, and a microplate reader (BioTek Cytation 5, Winooski, VT, USA) was used to measure firefly luciferase activity.
Yeast one-hybrid assay (Y1H)
The SmNAC CDS sequence was cloned into the pGADT7 vector, and the SmDDA1b promoter sequence was cloned into the pAbAi vector. The Y1H assay was performed according to the manufacturer's protocol (Clontech, USA). The primers used are listed in Table S12 (see online supplementary material).
Figure 1. Interaction between SmDDA1b and SmNAC and the subcellular localization analysis of SmDDA1b. (A) Yeast two-hybrid (Y2H) assays of SmNAC and SmDDA1b. Co-transformed BD-53 and AD-T in the Y2H Gold strain served as the positive control, while co-transformed BD-Lam and AD-T served as the negative control. SmNAC1-139 indicates the N-terminal 139 aa of SmNAC. (B) Bimolecular fluorescence complementation (BiFC) assays between SmDDA1b and SmNAC. YFP indicates the interaction between the two proteins. NLS marks the nucleus location. (C) Subcellular localization analysis of SmDDA1b. GFP and NLS indicate the subcellular location of SmDDA1b in the nucleus. (D) Co-immunoprecipitation (Co-IP) analysis of SmDDA1b and SmNAC. Scale bars in (B-C) represent 50 μm.
Figure 2. SmDDA1b expression and phenotypic analysis of SmDDA1b-silenced and SmDDA1b-overexpressing plants inoculated with Ralstonia solanacearum. (A) The expression pattern of SmDDA1b in E31 and E32 tissues. Data are expressed as mean ± SD values (n = 3) (*P < 0.05; **P < 0.01, Student's t-test). The reference gene was SmActin. (B) The expression pattern of SmDDA1b in E31 and E32 after inoculation with R. solanacearum. The reference gene was SmActin. (C) Relative expression of SmDDA1b in E31 and E32 after salicylic acid treatment. Data are expressed as mean ± SEM values of three biological replicates. Different letters indicate statistically significant differences among the groups (Tukey's honest significant difference test, P < 0.05). The reference gene was SmActin. (D) Relative expression of SmDDA1b in control plants and SmDDA1b-silenced plants. CK represents the group treated with water, while pTRV2 indicates the group treated with an empty-vector solution. pTRV2-SmDDA1b indicates the virus-induced gene silencing (VIGS)-treated plants. Each treatment had at least 10 biological replicates. Data are expressed as mean ± SEM values of three biological replicates. Different letters indicate statistically significant differences among the groups (Tukey's honest significant difference test, P < 0.01). The reference gene was SmActin. (E) The phenotypes of the control (pTRV2) and SmDDA1b-silenced plants (pTRV2-SmDDA1b) at 1, 7, and 14 d post-inoculation with R. solanacearum in eggplant. Scale bars indicate 5 cm. (F) The disease index of control plants and SmDDA1b-silenced plants 10 d after inoculation with R. solanacearum in eggplant. The ordinate represents the percentage of plants at each disease level. A total of ten eggplant seedlings were silenced. (G) The phenotypes of WT and SmDDA1b-overexpressing plants (OET1-4) at 1, 7, and 14 d post-inoculation with R. solanacearum in tomato. Scale bars indicate 5 cm. (H-I) The morbidity (H) and disease index (I) of WT and OE-SmDDA1b seedlings over 14 d after infection with R. solanacearum in tomato. (J-K) R. solanacearum colonization of control plants (pTRV2) and SmDDA1b-silenced plants (pTRV2-SmDDA1b) in eggplant (J), and of WT and SmDDA1b-overexpressing plants (OET1-4) in tomato (K). Samples (root, lower stem, and upper stem) were obtained at 1, 7, and 14 d post-inoculation with R. solanacearum. Data are expressed as mean ± SEM values of three biological replicates. Different letters indicate statistically significant differences among the groups (Tukey's honest significant difference test, P < 0.05).
Figure 3. SmDDA1b-mediated positive regulation of SA content and the SA signaling pathway. (A) The salicylic acid content of the control (pTRV2) and SmDDA1b-silenced eggplant plants (VIGS), and of the WT and SmDDA1b-overexpressing tomato seedlings, with or without Ralstonia solanacearum inoculation. Samples (leaves) obtained 7 d after inoculation with R. solanacearum were used for analysis. Data are shown as mean ± SEM values of three biological replicates. Different letters indicate statistically significant differences among the groups (Tukey's honest significant difference test, P < 0.05). (B) Expression of ICS1 in SmDDA1b-silenced plants (VIGS) and OE-SmDDA1b plants. (C) Expression of SA signaling pathway-related genes in the SmDDA1b-silenced and control plants. pTRV2 represents the control plants, while VIGS represents SmDDA1b-silenced plants. Data are shown as mean ± SEM of three biological replicates (*P < 0.05; **P < 0.01, Student's t-test). (D) Expression of SA signaling pathway-related genes (SlEDS1, SlGluA, SlNPR1, SlTGA, SlSGT1, and SlPAD4) in OE-SmDDA1b and WT tomato plants. OET1 represents the T1-generation overexpression plants, including the OET1-2, OET1-4, and OET1-8 lines. Data are expressed as mean ± SEM of three biological replicates (*P < 0.05; **P < 0.01, Student's t-test). The reference gene was SmActin in eggplant and SlActin in tomato.
Figure 4. SmDDB1 interacts with both SmCUL4 and SmDDA1b, and SmDDA1b degrades SmNAC through the proteasome. (A) Y2H assays indicating the interaction of SmDDA1b with SmDDB1 and of SmCUL4 with SmDDB1. AD-T co-transformed with BD-53 or BD-Lam in the Y2H Gold strain was used as the positive or negative control, respectively. (B) BiFC assays between SmDDA1b and SmDDB1, and between SmCUL4 and SmDDB1. Scale bars indicate 50 μm. (C) SmDDA1b-mediated proteasome degradation of SmNAC. For the four treatments of each tobacco leaf, the white dotted line indicates the outline of the leaf. The pEAQ-Firefly-SmNAC+pEAQ and pEAQ+pEAQ-SmDDA1b treatments were used as the positive and negative controls, respectively. MG132 is a proteasome inhibitor that blocks protein degradation via the 26S proteasome. (D) Firefly luciferase activity assay. The '+' or '-' symbol indicates that a sample was added or omitted in each experiment, respectively. Data are expressed as mean ± SEM values of three biological replicates. (E) Western blot results. The '+' or '-' symbol indicates that a sample was added or omitted in each experiment, respectively. Anti-SmDDA1b represents the SmDDA1b protein antibody, anti-LUC the firefly luciferase antibody, and anti-Actin the plant Actin antibody. (F) SmDDA1b-mediated proteasome degradation of SmNAC visualized via Merge 1 and Merge 2. Different numbers represent different injection ratios. The '+' or '-' symbol indicates that a sample was added or omitted in each experiment, respectively. NLS indicates nucleus localization; Merge 1 is the combination of the NLS and GFP images, and Merge 2 is the combination of all the above images. The scale bar indicates 1 mm. (G) Western blot results. Different numbers represent different injection ratios. The '+' or '-' symbol indicates that a sample was added or omitted in each experiment, respectively. Anti-GFP represents the GFP protein antibody.
Figure 5. The binding of SmNAC to the SmDDA1b promoter represses SmDDA1b expression. (A) The accumulation of SmDDA1b in SmNAC-overexpressing (OE-SmNAC) lines. E31 indicates the wild type, and EGT0-43, EGT0-87, EGT0-145, and EGT0-204 represent T0-generation OE-SmNAC plants. Data are indicated as mean ± SEM values of three biological replicates (**P < 0.01, two-way analysis of variance (ANOVA)). (B-C) Y1H assays between SmNAC and the SmDDA1b promoter. AD-53 and pAbAi-p53 co-transformed in yeast cells (Y1H Gold) served as the positive control, while co-transformed pAbAi-p53 and AD served as the negative control. (D-E) Repression of the SmDDA1b promoter by SmNAC. Promoter activity was assessed by the ratio of LUC to REN. The '+' or '-' symbols indicate that a sample was added or omitted in each experiment, respectively. EV indicates an empty vector, and MG132 is a proteasome inhibitor that blocks protein degradation through the 26S proteasome. Data are expressed as mean ± SEM values of five biological replicates. Different letters indicate statistically significant differences among the groups (Tukey's honest significant difference test, P < 0.05).
Figure 6. The SmDDA1b regulatory module enhances plant resistance to BW. In disease-resistant plants, SmDDA1b is induced by Ralstonia solanacearum. SmNAC is recognized by SmDDA1b and degraded by the SmDDA1b-mediated ubiquitin/26S proteasome system (UPS). Consequently, the feedback repression of SmDDA1b by SmNAC fails and the suppression of ICS1 by SmNAC is relieved; SA accumulates, the SA signaling genes are activated, and systemic acquired resistance (SAR) is induced. In susceptible plants, SmDDA1b is restrained under R. solanacearum stress, so SmNAC cannot be recognized and degraded by the UPS. The released SmNAC protein in turn inhibits SmDDA1b expression and strengthens the suppression of SmICS1, so SA accumulation and SA signaling are repressed.
The Relationships between Physical Activity, Self-Efficacy, and Quality of Life in People with Multiple Sclerosis
Regular physical activity (PA) can enhance the physical and mental health of people with Multiple Sclerosis (MS) because of its impact on muscular strength, mobility, balance, walking, fatigue, pain and health-related quality of life (HRQoL). Previous studies have hypothesized that the relationship between PA and HRQoL is mediated by self-efficacy. The aim of this research is to evaluate whether self-efficacy in goal setting and self-efficacy in the management of symptoms mediate the relationship between PA and HRQoL, in a similar way to exercise self-efficacy. A sample of 28 participants with MS (18 females) and different levels of physical activity was recruited and completed the following measures: (a) physical activity (GLTEQ); (b) health-related quality of life (SF-12); (c) self-efficacy in the management of Multiple Sclerosis (SEMS); and (d) exercise self-efficacy (EXSE). The statistical analysis highlighted that self-efficacy in goal setting mediated the relationship between PA and mental health better than exercise self-efficacy. Our findings suggest that self-efficacy in goal setting can contribute to the adoption and maintenance of regular physical activity over extended periods, supporting and increasing the mental quality of life of people suffering from MS.
Introduction
Multiple Sclerosis (MS) is a chronic, immune-mediated disease of the central nervous system (CNS), with neurodegenerative processes characterised by the loss of the myelin sheath in multiple areas of CNS and consequent formation of scar tissue or sclerosis.
Different symptoms and dysfunction are associated with MS, such as fatigue, muscle weakness, balance and motor disorders, pain, cognitive impairment, mood disturbances, and depression [1][2][3].
Participation in Physical Activity (PA), particularly exercise training, represents the single most effective non-pharmacological approach for managing symptoms and improving the health-related quality of life (HRQoL) of people with MS [4-6]. However, people with MS do less PA than non-diseased people [7-9], though a similar amount to people suffering from other chronic diseases [7]. Thus, according to several studies, people with MS do not reach even the minimal amount of daily activity that sedentary adults without neurological injury or disease are able to achieve [10-13].
Participants and Procedure
A sample of 28 participants (18 women and 10 men) was recruited from the Regional Reference Centre for Multiple Sclerosis, "Binaghi" Hospital, Cagliari (Italy). The predominance of females in our sample reflects the higher prevalence of MS associated with gender: the F:M ratio has been estimated within the range 1.9 to 2.7, depending on geographical latitude [38]. Ages ranged from 26 to 74 years, with a mean of 51.6 (SD = 14.9). The majority of participants were married (60.7%), while 32.1% were single; 53.3% were high school graduates, 32.1% had stopped at middle school, and 14.3% had completed university. About half of the sample was employed (46.4%), while 39.3% were retired. The sample was characterized by a predominance of individuals with moderate disability (median Expanded Disability Status Scale score 4.2) [39] who were able to walk independently with or without assistive devices. Participants were invited to answer questionnaires that took approximately 15 min to complete. Data collection took place on four occasions between 20 September 2018 and 5 October 2018.
Ethics
The study was carried out in compliance with the ethical principles for research involving human subjects expressed in the Declaration of Helsinki and was approved by the Ethics Committee of ATS Sardegna (approval no. 102/2018/CE, 11 September 2018). Written informed consent was obtained from all participants.
Measures
A leaflet composed of a cover page with demographic information (gender, age, marital status, education and employment), the Godin Leisure-Time Exercise Questionnaire (GLTEQ), the Self-Efficacy for Multiple Sclerosis Scale (SEMS), the Exercise Self-Efficacy (EXSE) and the SF-12 Health Survey were administered to all participants.
The Godin Leisure-Time Exercise Questionnaire (GLTEQ) [40] consists of two questions assessing physical activity. The first question asks the participant to report the number of times that he or she completed at least 15 min of physical activity during the last 7 days. It has three open-ended items that measure the frequency of strenuous (e.g., jogging), moderate (e.g., fast walking), and mild (e.g., easy walking) exercise. The weekly frequencies of strenuous, moderate, and mild activities are multiplied by 9, 5, and 3 metabolic equivalents, respectively, and summed to form a measure of total leisure activity. The second question asks how many times during the week the participant engages in activities that make them sweat, with three possible answers: "always", "sometimes", and "never". In accordance with other studies (e.g., [23,41]), we chose not to include this answer in the data analysis because autonomic nervous system disturbances cause sweating problems in people with MS. The GLTEQ is a simple, reliable, and valid measure of physical activity that has been widely used in epidemiologic, clinical, and behavioral change studies, including those concerning people with MS [23,25,42-45]. Since only the first question of the GLTEQ was used, Cronbach's alpha was not calculated.
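A minimal sketch of the GLTEQ scoring rule just described is shown below; the weights 9, 5, and 3 come directly from the questionnaire description above, while the example frequencies are made up.

```python
# GLTEQ total leisure activity score: 9*strenuous + 5*moderate + 3*mild
# (weekly frequencies of >=15-minute bouts; example values are illustrative).
def gltq_score(strenuous: int, moderate: int, mild: int) -> int:
    return 9 * strenuous + 5 * moderate + 3 * mild

# Example: 1 strenuous, 2 moderate, and 4 mild sessions in the last 7 days
print(gltq_score(1, 2, 4))  # -> 31
```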
The Self-Efficacy for Multiple Sclerosis Scale (SEMS) [31] assesses self-efficacy related to the management of MS. It comprises 15 items starting from the root statement "I am confident that I can . . . " and is rated on a 5-point Likert scale from 0 (Not at all confident) to 4 (Very confident). The items provide two sub-scores: "Goal setting" (items 2-8, 13, and 14) and "Symptom management" (items 1, 9-12, and 15). According to the authors [37], the scale is characterized by good item functioning, measurement invariance, and good concurrent validity (positive correlations with positive affect, sense of coherence, and coping strategies, and negative correlations with depression and negative affect). In the current research, the symptom management and goal setting subscales showed good internal consistency (Cronbach's alpha = 0.83 and 0.86, respectively).
The Exercise Self-Efficacy scale (EXSE) [46] comprises six items commonly used to assess self-efficacy for physical activity. Participants reported their level of confidence, from 0% (not at all confident) to 100% (completely confident), in the statement "I am capable of continuing to do moderate-intensity physical activity three times a week for more than 20 minutes, without interruption, for the next week". The number of weeks increases by one in each subsequent item, so the last item asks how confident the participant is about doing moderate-intensity physical activity three times a week for more than 20 min without interruption for the next six weeks. In the current research, the scale showed excellent internal consistency (Cronbach's alpha = 0.99).
The SF-12 Health Survey [47] is the short version of the SF-36, already used to assess quality of life in people with MS [37]. It is a self-report questionnaire composed of 12 items that assess two components: physical health (physical component summary, PCS) and mental health (mental component summary, MCS). Response formats vary across items. For example, the first item asks "In general, would you say that your health is . . . ", with answers ranging from 1 (Excellent) to 5 (Poor); the second asks "Does your health now limit you in doing moderate activities?", with possible answers "Yes, limited a lot", "Yes, limited partially", and "No, not limited at all"; the fourth asks "Have you accomplished less than you wanted at work and in other activities because of your health status, in the last 4 weeks?", answered "Yes" or "No"; other items have responses ranging from 1 (Always) to 6 (Never). Scores were calculated using specific automated algorithms [48]. In the current research, internal consistency was good for the global scale (Cronbach's alpha = 0.83) and for the physical health (PCS, alpha = 0.80) and mental health (MCS, alpha = 0.79) subscales.
Statistical Analysis
D'Agostino-Pearson tests of normality were performed on all test scores. Spearman rho rank-order correlations were used to examine the associations between variables. Based on the results of the correlation analysis, mediation analysis was carried out to examine the hypothesized mediation models. Significance was set at p < 0.05. Descriptive statistics and Spearman's rho correlations were calculated using IBM SPSS Statistics version 23.0 (IBM Corp, Armonk, NY, USA). Mediation analysis was carried out by means of the Advanced Mediation Models suite (jAMM), based on the R lavaan package [49] and included in Jamovi version 1.0.5 [50].
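The mediation models themselves were fitted with jAMM/lavaan in Jamovi; purely as an illustration of the quantity being tested (the indirect effect a*b of PA on mental HRQoL through a mediator), the sketch below bootstraps it with ordinary least squares on synthetic data. Variable names and data are placeholders, not the authors' procedure.

```python
# Illustrative percentile-bootstrap estimate of an indirect effect a*b
# (PA -> mediator -> mental HRQoL). Synthetic data; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
n = 28
pa = rng.normal(size=n)                  # physical activity (GLTEQ score)
med = 0.4 * pa + rng.normal(size=n)      # e.g., self-efficacy in goal setting
mcs = 0.6 * med + rng.normal(size=n)     # SF-12 mental component summary

def coefs(X, y):
    """OLS coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

boot = []
for _ in range(5000):
    i = rng.integers(0, n, n)            # resample cases with replacement
    a = coefs(pa[i].reshape(-1, 1), med[i])[1]               # path a: PA -> mediator
    b = coefs(np.column_stack([med[i], pa[i]]), mcs[i])[1]   # path b, adjusting for PA
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```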
Results
Means and standard deviations of the raw item scores of the study variables are shown in Table 1. The GLTEQ scores confirm the low mean level of physical activity of people suffering from MS, coupled with high heterogeneity in PA behaviors and exercise self-efficacy, as shown by the standard deviations of both variables. Spearman's rho intercorrelations among the studied variables are presented in Table 2. Physical activity, as measured by the GLTEQ, was significantly and positively related to exercise self-efficacy (rho = 0.519), but not significantly associated with self-efficacy in symptom management or quality of life; self-efficacy in symptom management was significantly and positively associated with self-efficacy in goal setting (rho = 0.764), exercise self-efficacy (rho = 0.435), and the mental component of quality of life (rho = 0.472); self-efficacy in goal setting was positively related to the mental component of quality of life (rho = 0.533), as well as to self-efficacy in symptom management. The correlation analyses revealed that the physical component of quality of life (SF-12 PCS) was not significantly associated with physical activity, exercise self-efficacy, or self-efficacy for Multiple Sclerosis. Therefore, this variable was not included in the subsequent mediation effect analyses.
Using a generalized linear model, we examined whether self-efficacy in symptom management, self-efficacy in goal setting and exercise self-efficacy all serve as individual mediators in the relationship between PA and mental component of quality of life (Figure 1).
The results indicated that PA did not directly predict the mental component of quality of life (β = −0.07), but it did indirectly, through the mediation of self-efficacy in goal setting. Exercise self-efficacy, self-efficacy in goal setting, and self-efficacy in symptom management were all appreciably affected by PA (β = 0.46, β = 0.37, and β = 0.32, respectively), but only self-efficacy in goal setting, which significantly affects the mental component of quality of life (β = 0.61; p < 0.001), appears to act as a mediator of the relationship between PA and the mental health of people suffering from MS (β = 0.23; p = 0.06).
Discussion
The aim of the current study was to examine, in people with MS, the relationships between PA and HRQoL, as mediated by different forms of self-efficacy, namely self-efficacy in goal setting, self-efficacy in symptom management, and exercise self-efficacy.
The results of the correlation analyses confirm that different forms of self-efficacy can play a different role on mental and physical components of quality of life. As already reported in previous studies, which considered self-efficacy for control and self-efficacy for functioning [26], self-efficacy for control was positively related with physical and psychological quality of life, while self-efficacy for functioning was significantly correlated with physical quality of life but not with psychological health-related quality of life. In our study, both self-efficacy in goal setting and self-efficacy in symptom management were positively related with psychological quality of life, while exercise self-efficacy was more related with The results indicated that PA did not directly predict mental component of quality of life (β = −0.07), but it did indirectly, through the mediation of self-efficacy in goal setting. Exercise self-efficacy, self-efficacy in goal setting and self-efficacy in symptom management are all appreciably affected by PA (respectively β = 0.46; β = 0.37; β = 0.32), but only self-efficacy in goal setting, which significantly affects the mental component of quality of life (β = 0.61; p < 0.001), seems to act as a mediator of the relationship among PA and the mental health of people suffering from MS (β = 0.23; p = 0.06).
Discussion
The aim of the current study was to examine, in people with MS, the relationships between PA and HRQoL, as mediated by different forms of self-efficacy, namely self-efficacy in goal setting, self-efficacy in symptom management, and exercise self-efficacy.
The results of the correlation analyses confirm that different forms of self-efficacy can play different roles in the mental and physical components of quality of life. As previously reported in studies considering self-efficacy for control and self-efficacy for functioning [26], self-efficacy for control was positively related to both physical and psychological quality of life, while self-efficacy for functioning was significantly correlated with physical but not psychological health-related quality of life. In our study, both self-efficacy in goal setting and self-efficacy in symptom management were positively related to psychological quality of life, while exercise self-efficacy was more related to physical quality of life. Moreover, in accordance with other studies (e.g., [25]), exercise self-efficacy was also moderately associated with PA. As for self-efficacy in MS, namely self-efficacy in goal setting and self-efficacy in symptom management, it showed a stronger relationship with the mental component of quality of life than with the physical one. Previous studies [31,37] indicated that both forms of self-efficacy in MS correlate with adjustment, but the goal-setting dimension has the highest inverse correlation with depression, a notable relationship with sense of coherence, and, above all, with well-being.
People suffering from MS have to overcome more barriers to be physically active; self-efficacy and goal setting can help them be more purposeful in planning their activities and in engaging in regular lifestyle PA.
In fact, when considering the indirect relationship between PA and quality of life, as mediated by self-efficacy [27], only mental health appeared to be affected by self-efficacy, and specifically by self-efficacy in goal setting. This result suggests the relevance, for MS patients, of planning meaningful and realistic goals for their everyday life. This finding is partially consistent with previous literature stating that self-efficacy in goal setting could account for a significant amount of variance in the mental health scores of people suffering from MS [37]. Moreover, self-efficacy in goal setting has already been correlated with adaptive problem-solving strategies [31], and the ability to overcome daily barriers to lifestyle PA has been shown to support people with chronic diseases, which requires a regular and long-lasting commitment [15]. A low adherence rate to lifestyle PA is often related more to self-efficacy in coping with the disease than to limitations due to functional incapacity, pain, or other symptoms from which the person suffers [51-55].
The study has some limitations. First of all, it involved a small convenience sample of people suffering from MS, which limited the possibility of finding stronger and more significant relationships. Secondly, the study was essentially exploratory, because little is known about the relationships between PA and quality of life as mediated by specific forms of MS self-efficacy other than exercise self-efficacy; the study must therefore be replicated with a larger sample to evaluate whether the results observed in this preliminary study are confirmed. Thirdly, the cross-sectional design calls for caution in the causal interpretation of direct and indirect effects: PA, exercise self-efficacy, and self-efficacy in MS are likely to affect physical and mental health-related quality of life, but it cannot be excluded that the direction of this relationship is reversed. A longitudinal research design, coupled with more specific statistical procedures, would allow a deeper analysis of the reciprocal relationships among these variables. Fourthly, concerning measurement, it should be noted that many participants had difficulty assessing their level of leisure PA using the GLTEQ and needed additional instructions: some of the activities used as examples (e.g., skiing, snowmobiling, hockey) were unusual for our sample, who had difficulty comparing them with their usual activities. Although the GLTEQ has a long tradition of use in PA studies conducted with people suffering from MS, we suggest coupling it with more objective measures of PA, as recommended for more structured exercise training interventions [42].
Future studies should take advantage of objective assessment of the amount and intensity of PA, based on data collected by wearable devices (e.g., triaxial accelerometers). Such an approach, which is suitable for long-term measurement, has already been successfully tested in individuals with MS over the past decade [56].
Conclusions
People suffering from MS can benefit from regular PA; however, their rate of PA is lower than that of people suffering from other non-communicable diseases. In addition to exercise self-efficacy, already investigated in the literature, it seems appropriate to take into account more specific forms of MS self-efficacy. Our results indicate that self-efficacy in goal setting can mediate the relationship between PA and the mental component of HRQoL, but further large-scale studies are needed.
Fleet Management and Charging Scheduling for Shared Mobility-on-Demand System: A Systematic Review
Challenged by urbanization and increasing travel needs, existing transportation systems call for new mobility paradigms. In this article, we present the fleet management and charging scheduling of a shared mobility-on-demand system, whereby electric vehicle fleets are operated by a centralized platform to provide customers with mobility service. We provide a comprehensive review of system operation based on operational objectives. Fleet scheduling strategies are categorized into four types: i) order dispatching; ii) order dispatching and rebalancing; iii) order dispatching, rebalancing, and charging; and iv) extended objectives. Specifically, we first identify the mathematical modeling techniques implemented in the transportation network, then analyze and summarize solution approaches including mathematical programming, reinforcement learning, and hybrid methods. The advantages and disadvantages of different models and solution approaches are compared. Finally, we present the research outlook in various directions.
I. INTRODUCTION
With the decarbonization trend in the transportation sector, electric vehicles (EVs) have become an important part of the road transportation system [1], [2]. Meanwhile, shared mobility-on-demand (MoD) systems, such as Uber, Lyft, and Didi, can fulfill urban travel demand more efficiently than private vehicles. The utilization rate of vehicles, roads, and parking facilities in shared transportation systems is much higher than that of private cars [3] and conventional taxi fleets [4]. For example, a shared MoD system could meet the mobility needs of the same population with roughly 60% of the conventional taxi fleet [4] by adopting a cloud-based fleet management and navigation platform. Therefore, the EV-based shared MoD system could play a more important role in the future urban transportation system. The shared MoD system is composed of three key components: platform, customer, and driver. The platform connects customer requests and driver-owned vehicles, dramatically changing transportation conditions in real time [5]. Passengers submit their travel demand to the platform; the trip information includes pick-up and drop-off locations, departure time, and travel mode. Typically, there are two travel options for customers: ride-sharing and ride-splitting [6]. To unify terminology, we define ride-sharing as the situation where passengers specify their departure and destination locations and wait for pick-up by the driver. In contrast, ride-splitting (or car-pooling) indicates that multiple passengers with similar routes and time schedules share a single vehicle; fewer vehicles can thus satisfy more customer demand, making the fleet more affordable, sustainable, and time-effective [7].
The proliferation of a shared MoD system relies on fleet management and charging scheduling schemes. Multiple challenges, ranging from the macroscopic to the microscopic scale, must be resolved to enable efficient fleet management and charging scheduling. We summarize three key operational challenges as follows.
(i) Efficiently match the travel request with available EVs.
(ii) Efficiently route the EV to a desire location either for serving its current order or preparing for future dispatch.
(iii) Choose the right time and location to recharge the EV.

In general, the shared MoD system addresses these challenges from the perspective of the fleet platform operator. The fleet operator manages the MoD system by assigning passengers to EVs and routing them, rebalancing the idle fleet by relocating vehicles to reduce the asymmetrical demand distribution, and scheduling vehicle charging locations and times [8]. These scheduling decisions are highly spatially-temporally coupled with each other. Specifically, EVs are rebalanced to high-demand areas in advance to fulfill orders appearing in the future [9]. Meanwhile, vehicles are dispatched to the optimal charging station at the appropriate time, and the charging demand profile varies sequentially in time and space. These fleet management strategies are coordinated by the operator centrally and simultaneously, aiming at maximizing revenue or minimizing costs [10]. In this paper, we classify research into four categories based on the operational objective types.
The rest of this paper is organized as follows. The operation modeling of the shared MoD system is presented in Section II. The solution approaches are discussed in Section III. The research outlook is presented in Section IV, and a summary of the paper is given in Section V.
II. PROBLEM MODELLING
We classify the modeling methods of the shared MoD system into four categories based on different operation objectives, which include 1) order dispatching, 2) order-dispatching and rebalancing, 3) order-dispatching, rebalancing and charging, and 4) other extended objectives, as shown in Fig. 1. The details are discussed in the following.
A. ORDER-DISPATCHING
In the order-dispatching problem, the shared MoD system optimizes matching decisions dynamically in the face of time-varying demand and stochastic scenarios. Matching decisions in the current period strongly affect demand and supply in subsequent periods. The platform should consider multiple short-term and long-term objectives, such as instant rewards from passenger pick-up time and fares charged, passenger and driver satisfaction, and long-term platform profit, which may conflict with each other. In addition, since the matching process considers each vehicle and request individually, the scale of the problem can be huge, leading to the curse of dimensionality. In the real world, order matching is usually performed in real time, which requires algorithms with fast solution times.
We can divide the order-dispatch problem in mobility service into two categories: matching, and matching combined with the vehicle routing problem (VRP). For the former, there are three types of driver-passenger matching strategy:
1) PLATFORM MATCHING
The platform assigns requests to vehicles centrally based on the vehicle distribution and travel demand. When requests are processed as soon as they occur, some studies follow the first-come-first-served rule (or first-in-first-out, FIFO) [11], [12]: passengers with earlier requests receive priority in the response process and are assigned the nearest available vehicle [11]. Other research formulates the problem as an optimization over a directed graph [13], [14], [15], [16]. Bipartite graph matching is implemented with the aim of minimizing the maximal-cost edge [14] or maximizing a system utility function that combines the total net profits of all taxis and the waiting time of passengers [15]. Meanwhile, a fluid model and a circle-region-based model are proposed in [13] and [16], respectively.
However, rather than responding to customer requests immediately, many ride-sharing platforms collect requests within a short time window and solve the problem at the end of each window, a practice called request batching [17], [18], [19], [20]. Queueing theory constructs the batching model in [17] and [18]: a queue aggregates customer requests, and the same number of vehicles is dispatched to the queue when it reaches a threshold size [17]. A bipartite graph models batched riding as an integer program with the aim of maximizing welfare for drivers and passengers [19]. [20] formulates the batch matching problem as a multi-objective optimization and develops an adaptive matching policy that can achieve a target-based optimal solution.
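As a concrete illustration of the bipartite-matching formulations discussed above, the sketch below assigns a batch of requests to idle vehicles so as to minimize total pick-up cost via the Hungarian algorithm; the cost matrix is illustrative, and the code is not drawn from any specific cited paper.

```python
# Toy batch matching: assign requests to idle vehicles minimizing total
# pick-up cost (e.g., estimated pick-up time in minutes). Illustrative data.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = pick-up cost of serving request i with vehicle j
cost = np.array([[ 4.0,  9.5,  3.2],
                 [ 7.1,  2.8, 11.0],
                 [ 5.6,  6.3,  4.9]])

rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
for req, veh in zip(rows, cols):
    print(f"request {req} -> vehicle {veh} (cost {cost[req, veh]})")
print("total pick-up cost:", cost[rows, cols].sum())
```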
2) ORDER-GRABBING
Different from matching dominated by the platform, drivers in the order-grabbing mode choose their orders from those offered by the platform in a decentralized manner [21], [22], [23]. In this mode, the behavior patterns of drivers affect the matching results. The combinatorial optimization in [21] optimizes overall traffic efficiency and delivers the best user experience. [23] employs a multi-network flow model to obtain the sampling probability matrix of each vehicle with the aim of minimizing flow cost.
3) MUTUAL MATCHING
Few studies have addressed passenger satisfaction from the passengers' own perspective. If user satisfaction is not handled well, the platform will lose users and eventually earn lower revenue. Therefore, [24] implements a hidden-points-based bipartite graph to design matching and allocation mechanisms that significantly improve passenger satisfaction.
The latter problem, which emphasizes the routing strategy in the order-dispatch process, is formulated as a dynamic vehicle routing problem (DVRP), part of the larger family of vehicle routing problems (VRPs). VRPs are usually solved as static routing problems, whereby the origins and destinations of trips are known in advance. However, customer demand in MoD systems is dynamic, leading to DVRPs. [25] presents a queueing approach to the task allocation and dynamic routing strategies of vehicles. [26], [27], [28] formulate vehicle routing with pick-up and drop-off as a Markov decision process (MDP).
For the ride-splitting scenario, the policies need to provide assignments and routes that handle multiple pick-up and drop-off locations and time-window constraints. Graph-based modeling formulates the routing and matching strategy of each vehicle and request as a mixed-integer programming problem [29], [30], [31], [32]. [29] optimizes the ride-splitting problem with the aim of maximizing total profit while respecting pick-up and drop-off times as well as the maximum ride time.
B. ORDER-DISPATCHING AND REBALANCING
In addition to order dispatching, a critical operational objective for the MoD system operator is the repositioning of empty vehicles awaiting new passengers, which implicitly includes the vehicle routing process. Supply-demand mismatch challenges the shared MoD system, and vehicle rebalancing is an efficient way to reduce the geographic asymmetry of demand. Empty vehicles drive to high-demand areas in advance to fulfill customer requests in time and reduce passenger waiting time. Repositioning the empty vehicles awaiting new passengers from a system-wide perspective is important for increasing system efficiency.
We categorize the related literature based on the modelling approaches. Basically, they can be divided into three groups: i) graph-based, ii) queueing theory-based and iii) grid-based. These three models are illustrated in Fig. 2.
1) GRAPH-BASED
For the graph-based models, the transportation network is often modeled as a directed graph of arcs and nodes. A node represents a location such as a station or an area, and an arc represents a combination of roads between two locations. These graph-based models fall into three major types of formulation: network flow, vehicle-centric, and other techniques.
a: NETWORK FLOW FORMULATION
The vehicle fleet and passengers are modeled as flows, and the fluid-dynamic approach is often adopted: fleets and customers are represented not individually but as flows between nodes [33], [34], [35], [36], [37], [38], [39]. The main constraints are flow conservation and consistency, which require the number of vehicles flowing into a node to equal the number flowing out of it at the same time. This modeling approach reduces the problem size, but routes cannot be obtained directly for a specific vehicle. The problem can be divided into the subproblems of rebalancing and order assignment thanks to total unimodularity [34], which extends the approach to large scale and leads to a computationally efficient scheduling algorithm for the vehicles.
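To make the conservation constraint concrete, a generic time-expanded form is given below; the notation (x for order-serving flows, r for rebalancing flows, s for idle vehicles, tau for travel times) is ours, not taken from a specific cited paper.

```latex
% Generic vehicle-flow conservation at node i and time t (illustrative notation):
% s_i^t : idle vehicles at node i;  x_{ij}^t : order-serving flow i -> j;
% r_{ij}^t : rebalancing flow i -> j;  \tau_{ji} : travel time from j to i.
s_i^{t+1} = s_i^{t}
  + \sum_{j \neq i} \left( x_{ji}^{\,t-\tau_{ji}} + r_{ji}^{\,t-\tau_{ji}} \right)
  - \sum_{j \neq i} \left( x_{ij}^{t} + r_{ij}^{t} \right),
  \qquad \forall i \in \mathcal{N},\ \forall t .
```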
Though vehicle repositioning reduces customer waiting time and increases passenger throughput, there is concern that shared vehicles may cause worse congestion than personal vehicles due to empty repositioning trips. Thus, many papers consider endogenous congestion, which is affected by the operation of the shared MoD system itself. Congestion can be modeled by capacity constraints on the total traffic flow on each road [33], [36], [38]. Road-utilization-dependent travel times are captured via a piecewise affine approximation of the Bureau of Public Roads (BPR) model [38]. Within a capacitated transportation network, research shows that rebalancing vehicles do not increase congestion if properly coordinated, under relatively mild assumptions [33].
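The BPR relation maps link utilization to travel time; the sketch below uses the conventional parameters alpha = 0.15 and beta = 4 (assumed defaults, not values from [38]) and shows one simple way to build a piecewise-affine surrogate from tangent lines.

```python
# BPR travel-time function and a crude piecewise-affine under-approximation.
# alpha = 0.15 and beta = 4 are conventional BPR defaults, not values from [38].
def bpr_time(t0, flow, capacity, alpha=0.15, beta=4):
    """Travel time on a link as a function of flow (BPR formula)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

def bpr_affine(t0, flow, capacity, alpha=0.15, beta=4,
               breakpoints=(0.0, 0.8, 1.0, 1.2)):
    """Max over tangent lines of the convex BPR curve at chosen utilizations.

    Tangents minorize a convex function, so this slightly underestimates
    the exact BPR time; more breakpoints tighten the approximation.
    """
    values = []
    for u0 in breakpoints:
        t_at_u0 = bpr_time(t0, u0 * capacity, capacity, alpha, beta)
        slope = t0 * alpha * beta * u0 ** (beta - 1) / capacity
        values.append(t_at_u0 + slope * (flow - u0 * capacity))
    return max(values)

print(bpr_time(10.0, 900, 1000))    # ~10.98 minutes at 90% utilization
print(bpr_affine(10.0, 900, 1000))  # ~10.92, a close affine surrogate
```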
b: VEHICLE-CENTRIC FORMULATION
Decision variables represent the operational schedule of a vehicle, which will 1) wait at a node, 2) serve a customer, or 3) rebalance to another node [4]. In the simplest setting, the routing of a vehicle is optimized via binary decision variables, taking value 1 if and only if the vehicle is assigned to the corresponding road link [40], [41].
c: OTHER FORMULATION TECHNIQUES
Traffic flow is represented by the cell transmission model in [42] and [43], where each road is divided into an ordered set of cells, i.e., discrete spatial intervals that vehicles travel through. The sending and receiving flows are transition flows constrained by kinematic wave theory. Different from the node-based models in the references above, [44], [45] investigate a region-based model where the fleet operating regions are partitioned and discretized with demand estimation in ride-splitting mode. Taking [45] as an example, at the across-region level, the idle mileage induced by rebalancing vehicles is optimized and a robust dispatch strategy is designed; within each region, pick-up and drop-off schedules for real-time requests are obtained for each vehicle with the objective of minimizing total mileage delay while serving as many requests as possible. [46] ignores the transportation network and models only AVs and customers.
2) QUEUEING THEORY-BASED
A queueing network is used to represent critical performance metrics such as the availability of vehicles at stations and customer waiting time [4], [47], [48], [49], [50]. When road congestion is not considered, the road network is modeled as an abstract queueing network with infinite-server road queues. A queueing-theoretic model formulates the matching process in [48]. [49] models the mobility-on-demand system as two coupled closed Jackson networks with passenger loss. [50] resolves non-myopic idle vehicle relocation using queue delay as an approximation of the conditional expected cost under ride-splitting.
For the queueing-based research, congestion is typically considered through capacity constraints on the queues [4], [47]. [4] proposes a finite-server queueing network model within a Jackson network framework. In [47], the MoD system is cast within the framework of closed, multi-class BCMP queueing networks. The framework captures stochastic passenger arrivals, vehicle routing on a road network, and congestion effects.
3) GRID-BASED
For the grid-based techniques, hexagonal grids are deployed to represent the transportation network, and vehicle scheduling can be described by the following actions [51], [52]. The order-serving action picks up an available order from the platform and transports the passenger from the current location to the destination grid cell. The repositioning action moves the vehicle to an adjacent grid cell or lets it wander in the current cell. [52] realizes transfers between regions for order dispatching and routing based on hexagonal grid modeling.
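For such a hexagonal-grid action space, adjacency is cheap to compute in axial coordinates; the sketch below is a generic hex-grid utility, not code from [51] or [52].

```python
# Neighbors of a hexagonal grid cell in axial coordinates (q, r).
# Generic hex-grid utility; not taken from the cited works.
AXIAL_DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def hex_neighbors(q: int, r: int):
    """The six cells a repositioning vehicle could move to from (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

print(hex_neighbors(0, 0))
# -> [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
```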
C. ORDER-DISPATCHING, REBALANCING, AND CHARGING
Following Section II.B, vehicle energy refueling is necessary after completing trips. An intelligent fleet charging policy ensures that vehicles have an adequate level of energy for future actions and virtually eliminates the "range anxiety" issue, a major barrier to EV adoption. Moreover, when vehicles are not fulfilling trip requests, they can be routed to charging stations either to absorb excess generated energy when power demand is low (G2V) or to inject power into the network when power demand is high (V2G). If charging scheduling is well managed, it will not only benefit EV drivers with lower electricity costs but also provide grid operators with flexibility for load balancing and renewable energy integration.
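As a toy illustration of threshold-style charging decisions (not a policy from any cited work), the sketch below sends an idle EV to the cheapest reachable charger once its state of charge falls below a reserve level.

```python
# Toy threshold charging rule for an idle EV (illustrative; not from any cited work):
# charge at the cheapest reachable station once SOC drops below a reserve level.
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    price_per_kwh: float   # current electricity price at the station
    distance_km: float     # deadhead distance to reach it

def charging_decision(soc_kwh, reserve_kwh, kwh_per_km, stations):
    if soc_kwh >= reserve_kwh:
        return None  # enough energy; stay available for dispatch/rebalancing
    reachable = [s for s in stations if s.distance_km * kwh_per_km < soc_kwh]
    return min(reachable, key=lambda s: s.price_per_kwh, default=None)

stations = [Station("A", 0.30, 2.0), Station("B", 0.22, 5.0)]
print(charging_decision(soc_kwh=6.0, reserve_kwh=10.0,
                        kwh_per_km=0.2, stations=stations))
# -> Station B (cheaper than A and still reachable with 6 kWh on board)
```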
1) For vehicle-centric modeling in the transportation network, the battery level of each individual vehicle and the energy availability in the power grid are accounted for in [53]. Fleet charging/discharging and vehicle-to-grid (V2G) services are optimized at the energy layer with the aim of minimizing electricity cost over a long time scale [54], subject to constraints on the vehicles' travel distances. [7] proposes joint rebalancing and V2G coordination with the aim of maximizing vehicle utilization.
2) For the network flow model, a time-expanded network flow model is developed in [8], considering road congestion and operational constraints in the distribution and transmission power networks. Compared with Section II.B, charging characteristics are introduced at each node, so there are three kinds of vehicle flow in the transportation network: order-serving flow, rebalancing flow, and charging flow. These flows satisfy flow continuity and consistency constraints; that is, the number of flows leaving a node equals the number arriving at that node with the same charge level.
3) An agent-based model considers the potential of vehicle charging to supply operating reserve in [55].
The articles that do not consider the interaction between the power and transportation networks can be divided into three similar categories [56], [57], [58], [59], [60], [61], [62]. a) For vehicle-centric modeling, different time scales are considered when deciding vehicle scheduling [57]: charging is optimized over longer time scales to minimize both approximate waiting time and electricity costs, while routing and relocation are optimized at shorter time scales to minimize waiting times, with the results of the long-time-scale optimization serving as charging constraints. b) For network flow models, differential equations are used to capture the dynamic behavior of customers and vehicles [61]: the numbers of vehicles and customers at a node obey nonlinear time-delayed differential equations. In addition, the charging and routing problems can be decoupled under the assumption that electricity prices at the destination nodes of all current trips are unknown to the operator [59]. An electric traveling salesman problem with time windows is developed in [58] to solve customer routing and recharging, with the aim of minimizing the total distance of the selected arcs and recharging paths. c) For the agent-based model, [56] predicts the battery range and charging infrastructure requirements of the EV fleet operating on Manhattan Island, and [60] optimizes charging scheduling during rebalancing.
It is worth clarifying that some studies consider order assignment and vehicle charging while ignoring vehicle rebalancing. For queue-based modeling, [63] formulates the dispatching problem as a stochastic queueing network and employs the Lyapunov optimization technique, aiming to minimize vehicle dispatch cost and customer waiting time. For vehicle-centric modeling, [64] optimizes routing and charging strategies with a given origin location, aiming to maximize energy efficiency.
D. EXTENDED OBJECTIVES
In addition to system operation and fleet management, there are extended operational objectives relevant to the shared MoD system. We classify these works into five categories: intermodal operation, pricing, planning, battery swapping, and the interaction between the transportation and power distribution networks.
1) INTERMODAL
Operating a MoD system to cover the complete city-wide transportation demand would inexorably increase the number of operated vehicles and cause congestion again through induced demand, as customers shift from public transit to shared vehicles. The MoD system should instead cooperate intelligently with other modes of transportation, such as the public transportation network or private vehicles, in order to reduce overall travel time and secure congestion-free urban mobility. Against this backdrop, some studies develop modeling and optimization methods to realize the benefits of an intermodal transportation system [65], [66], [67].
A multi-commodity network flow model is employed in [66] and [67] to capture the joint operation of the MoD system and public transit, with the aim of reducing customers' travel time. Furthermore, the joint intermodal congestion-aware routing and rebalancing formulation of the vehicle fleet is extended in [67] to a mixed-traffic setting capturing the interaction between MoD users and private vehicles.
2) PRICING
Trip pricing policies play an important role, as they modulate the inflow of customers traveling between regions in the network. The operator therefore chooses prices such that the induced demand ensures a balanced load of customers and vehicles arriving at each location. Additionally, selecting prices enables the operator to shape demand so that the system can operate with a smaller or larger fleet size [68], [69], [70], [71]. Joint dynamic pricing, dispatching, and rebalancing strategies are optimized in [69] and [70].
From the perspective of the charging network operator (CNO), optimized charging prices guide vehicles to charge at appropriate times and locations given the electricity price purchased from the grid, so that the charging station network can be operated efficiently. [68] proposes a spatial-temporal charging pricing strategy to improve the operational efficiency of the integrated charging and transportation system.
3) PLANNING
Planning problems can be classified into fleet planning and charging infrastructure planning. Fleet planning optimizes the size of the EV fleet, the battery capacity of each vehicle class in a heterogeneous fleet, and the initial fleet distribution, including charge level and vehicle location [72], [73]. Charging infrastructure planning determines charging station siting and the number of charging bays with different charging rates; it requires optimization methods that consider the coupling between the transportation and power networks. The impact of vehicle charging behavior on fleet operation and charging system planning is effectively evaluated in the joint fleet sizing and charging system planning model of [74].
Crucially, the operation of the shared MoD system is strongly influenced by the available charging infrastructure, which in turn should be designed to accommodate the EVs' charging activities in the best possible way. Operational-level scheduling is therefore essential in planning problems, so that the investment costs at the planning stage and the operating costs in the future can be balanced. At the operational level, scheduling strategies such as routing, dispatching, and charging are considered. Station siting and fleet operation can be jointly optimized using an expanded network flow model of the transportation network [75].
4) BATTERY SWAPPING
Compared with long charging times, battery swapping allows an EV to exchange its depleted battery (DB) for a fully charged battery (FB) at a battery swapping station (BSS) within several minutes. If battery swapping is adopted as an alternative energy refueling method in the shared MoD system, it not only benefits drivers with a fast refueling service but also lets available drivers match more demand during travel time, increasing the operating efficiency of the fleet. Battery swapping is therefore well suited to fleets, which handle more customer requests and trip demand than private cars. [76] proposes an operational framework for an integrated shared MoD system and battery swapping station, determining fleet scheduling and battery charging strategies.
5) INTERACTION BETWEEN TRANSPORTATION NETWORK AND POWER DISTRIBUTION NETWORK
As the installed capacity of charging infrastructure keeps increasing, the coupling between the transportation network and the power distribution network becomes an important factor. Power system operation can be significantly influenced by the fluctuating charging loads at vehicle charging or battery swapping stations, which are determined by the transportation network. It is therefore necessary to consider the interdependence of transportation flow and power flow and to coordinate the optimization of the coupled operation. For the operational model of the power distribution network, AC optimal power flow (OPF) or convex-relaxation OPF models can be formulated. To coordinate and optimize the operation of the transportation and power distribution networks simultaneously, related studies usually combine suitable traffic models with optimal power flow models to describe the operational problems of the coupled networks [77]. Appropriate electricity pricing schemes are used to influence traffic flows in the MoD system and achieve economic energy dispatch [1], [8]. A joint rebalancing and V2G coordination strategy for the transportation system is proposed in [7], where vehicle-to-grid services are facilitated by parking lots.
E. DISCUSSION
The three modeling approaches are analyzed as follows. Graph-based models represent the road network topology clearly, and vehicle routes correspond to routes in the real world. Vehicle operation is obtained by solving for the traffic flow on road arcs with three major types of formulation: network flow, vehicle-centric, and other techniques. Taking the network flow model as an example, each node in the graph corresponds to a tuple with three dimensions (time, location, and state of charge), which models the time-varying characteristics and battery charge level of the MoD fleet.
With the queue-based modeling method, each trip is modeled as a queue between nodes. Queueing theory in MoD systems deals with randomly arriving vehicles that travel on roads with a finite maximum capacity. When designing policies for MoD systems, we specify how vehicles move from one queue to another. When road congestion is not considered, the road network is modeled as an abstract queueing network with infinite-server road queues. This approach is intuitive and convenient for reflecting quality of service, including the availability of vehicles and the waiting times of both passengers and charging vehicles.
In grid-based models, the study area is divided into hexagonal grids, and each grid can serve as a trip origin or destination. The order-serving action picks up an available order from the platform and transports the passenger from the current location to the destination grid. Rebalancing and charging decisions are modeled as orders and assigned to EVs in the form of dispatches. The repositioning action moves a vehicle to an adjacent grid or lets it wander in the current grid. The state of an available EV (one not in service or charging) consists of the current time step, location, and battery charge. The grid-based approach is appropriate for integration with data-driven or reinforcement learning methods, since the representation of actions in this mode is more straightforward.
Therefore, we suggest selecting the modeling approach based on the objective of the study. If detailed road traffic analysis is desired, a graph-based approach is a good choice; in contrast, a grid-based approach is preferable when fast fleet management decisions are the primary research objective.
III. SOLUTION METHODS
We categorize the solution methods for shared MoD system operation problems into three groups: mathematical programming, reinforcement learning, and hybrid approaches, as shown in Fig. 3.
A. MATHEMATICAL PROGRAMMING APPROACHES
We list the mathematical programming approaches by model formulation. Heuristic algorithms are a common way to solve dynamic traffic problems, especially when the scale of the problem is large. Model predictive control (also known as receding horizon control) is a control technique in which an open-loop optimization problem is solved at each time step to yield a sequence of control actions up to a fixed horizon, and only the first control action is executed [78].
1) NETWORK FLOW FORMULATION
Dynamic problems are represented via a time-expanded network whose nodes carry location, time, and charge attributes. A node (i, t, c) indicates that a vehicle is at the physical node i at time t with charge level c. Accordingly, an edge between n 1 = (i, t 1 , c 1 ) and n 2 = (j, t 2 , c 2 ) exists if and only if j can be reached from i within the time period t 2 − t 1 , with charge decreasing from c 1 to c 2 . The optimization problem is formulated as a linear program and can be solved by an off-the-shelf solver even for large-scale instances [8]. A minimal construction of such a time-expanded graph is sketched below.
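The following sketch builds a tiny time-expanded graph of the kind just described. The road graph, travel time, energy consumption per link, horizon and charge levels are all toy assumptions for illustration; real formulations would attach costs to edges and solve a flow linear program over this graph.

```python
# Minimal sketch of a time-expanded network: a node is (location i, time t,
# charge c); a driving edge advances time and reduces charge, a charging edge
# advances time and increases charge. All parameters below are assumptions.
import networkx as nx

road = {0: [1], 1: [0, 2], 2: [1]}   # hypothetical physical road adjacency
travel_time = 1                       # one time step per road link (assumption)
energy_per_link = 1                   # one charge unit per link (assumption)
T, C = 4, 3                           # horizon length and maximum charge level

G = nx.DiGraph()
for i in road:
    for t in range(T):
        for c in range(C + 1):
            G.add_node((i, t, c))

for i, nbrs in road.items():
    for j in nbrs:
        for t in range(T - travel_time):
            for c in range(energy_per_link, C + 1):
                # driving edge: move i -> j, time advances, charge decreases
                G.add_edge((i, t, c), (j, t + travel_time, c - energy_per_link))
for i in road:
    for t in range(T - 1):
        for c in range(C):
            # charging edge: stay at i, time advances, charge increases
            G.add_edge((i, t, c), (i, t + 1, c + 1))

# A vehicle trajectory is a path in the expanded graph, for example:
print(nx.shortest_path(G, (0, 0, C), (2, 2, C - 2)))
```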
Heuristic algorithms are deployed in [53] and [58], yielding near-optimal solutions in polynomial time. [36] uses the Frank-Wolfe algorithm to solve the routing problem after reformulation. A congestion-aware routing scheme captures road-utilization-dependent travel times via a piecewise affine approximation of the Bureau of Public Roads (BPR) model [42]. For online realization of the problem, real-time MPC algorithms are used in [33], [34], [37], and [39].
2) VEHICLE CENTRIC FORMULATION
Small-scale problems can be solved by a solver directly; [64] uses a solver for a mixed-integer quadratically constrained program. The alternating direction method of multipliers (ADMM) decomposes the pickup, delivery, and rebalancing problem with time windows (PDRPTW) into per-vehicle routing problems. Binary variables are introduced to indicate whether a vehicle traverses a road link when optimizing routes, which yields an NP-hard program as the number of vehicles increases. To tackle the resulting scalability issues, heuristic algorithms are deployed; local neighborhood search is employed in [40] to find routes.
Real-time implementation of optimization problems is usually combined with model predictive control or receding-horizon algorithms [7], [30], [31], [54], [57], [78]. [30] and [31] determine real-time, large-scale dial-a-ride dispatching over a rolling horizon, relying on a column generation algorithm and a backbone algorithm, respectively. Model predictive control (MPC) run in parallel at different time scales is implemented in [54] and [57]. Cascaded model predictive control is used in [54], where the problem is formulated as a mixed-integer linear program. The first MPC scheme, called the energy layer, abstracts the vehicle fleet as an aggregate storage system for the sake of model scalability; it optimizes fleet charging and vehicle-to-grid services to minimize electricity cost over a long time scale (hours). The second MPC scheme, called the transport layer, optimizes short-term vehicle routing and relocation decisions to minimize customers' waiting times while accounting for the charging constraints derived from the energy layer. A generic receding-horizon skeleton is sketched below.
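The following skeleton illustrates the receding-horizon pattern itself: plan over a horizon, execute only the first action, then re-plan. The one-dimensional dynamics, target and greedy inner "optimizer" are toy stand-ins, not the cited papers' fleet models.

```python
# Minimal receding-horizon (MPC) skeleton: solve an open-loop plan over H
# steps, apply only the first action, roll the horizon forward, and repeat.
H = 5
target = 10.0          # e.g. a desired number of charged idle vehicles (assumption)

def plan(state, horizon):
    """Toy open-loop 'optimizer': move one bounded unit per step toward target."""
    actions, s = [], state
    for _ in range(horizon):
        a = max(min(target - s, 1.0), -1.0)   # bounded control action
        actions.append(a)
        s += a                                 # toy linear dynamics s' = s + a
    return actions

state = 3.0
for t in range(8):
    actions = plan(state, H)   # open-loop plan over the horizon
    state += actions[0]        # execute only the first action, then re-plan
    print(f"t={t} state={state:.1f}")
```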
To tackle scalability and realize online implementation together, [7] designs an effective distributed heuristic based on model predictive control and a genetic algorithm for the integer linear program.
3) QUEUE THEORY-BASED FORMULATION
[47] reformulates the capacitated routing and rebalancing problem as a linear program. A heuristic algorithm based on Lagrangian decomposition is proposed in [50] to address the challenge caused by the growing number of variables.
For real-time operation, an online minimum drift-plus-penalty (MDPP) framework is deployed in [63] to obtain the real-time dispatching strategy. A real-time closed-loop rebalancing policy for drivers is formulated as an integer linear program in [49], which reduces to a linear program thanks to the total unimodularity of its two subproblems, rebalancing and assignment.
4) OTHERS
For research in which the transportation network is modeled as region-based, [62] studies a heuristic primal-dual method to optimize online charging scheduling, and [45] employs receding horizon control to optimize the idle mileage induced by rebalancing vehicles across regions toward current and predicted future requests.
For research in which the transportation network is modeled with the cell transmission model, a traffic assignment simulator and a heuristic approach are implemented to solve dynamic ridesharing [42]. A tabu search heuristic is deployed in [43] to solve the dynamic traffic assignment problem, formulated as a mixed-integer linear program.
When solving large-scale problems, distributed implementation provides an alternative approach. There are two general types of distributed algorithms: gradient-based and dual-variable-based. In the former, a gradient-related step is taken and followed by averaging with neighbors. In the latter, at each step and for a fixed dual variable, the primal variables are solved to minimize a Lagrangian-related function, and the dual variables are then updated accordingly. A well-known method of this kind is the Alternating Direction Method of Multipliers (ADMM), which decomposes the original problem into two subproblems, solves them sequentially, and updates the dual variables associated with a coupling constraint at each iteration; as noted above, ADMM decomposes the PDRPTW problem into per-vehicle routing problems [2]. A minimal numerical sketch of the ADMM iteration follows.
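The sketch below runs scaled-form ADMM on a deliberately tiny splitting, minimize f(x) + g(z) subject to x = z, with toy quadratics f and g standing in for the per-vehicle routing subproblems; only the update structure (x-step, z-step, dual step) mirrors the method described above.

```python
# Scaled-form ADMM on: minimize (x-a)^2 + (z-b)^2  subject to  x = z.
# Both subproblem updates have closed forms here; both iterates converge
# to the consensus optimum (a+b)/2.
rho, a, b = 1.0, 4.0, 10.0      # penalty parameter and toy problem data
x = z = u = 0.0                  # primal variables and scaled dual variable
for k in range(50):
    x = (2 * a + rho * (z - u)) / (2 + rho)   # argmin_x (x-a)^2 + rho/2*(x-z+u)^2
    z = (2 * b + rho * (x + u)) / (2 + rho)   # argmin_z (z-b)^2 + rho/2*(x-z+u)^2
    u = u + (x - z)                            # dual update on the constraint x = z
print(x, z)   # both approach (a+b)/2 = 7.0
```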
B. REINFORCEMENT LEARNING APPROACHES
Reinforcement learning methods have significant advantages in solving large-scale, real-time problems that would otherwise require complex and accurate models. We classify the works using reinforcement learning into three categories based on how the optimal policy is obtained; the algorithm schemes of the three methods are shown in Fig. 4.
1) VALUE-BASED
Value-based approaches deploy a deep neural network to estimate the value function of an action or a state and implicitly derive a deterministic policy from that value function: actions are chosen as the best action in the current state. The temporal difference (TD) error specifies how different the new value estimate is from the old prediction. The Deep Q-Network (DQN) is the most typical and widely used algorithm. DQN-based algorithms are used in [9], [26], [27], [28], [52], [79], [80], [81], [82], [83], [84], [85], and [86] to solve fleet operation and charging management problems. [52] constructs the vehicle dispatching and rebalancing problem as a semi-MDP model; the distribution of orders is estimated using a cerebellar value network (CVNet), and the map is divided into hexagonal grids to improve the efficiency and scalability of the solution. DQN is used for policy learning with the aim of maximizing drivers' revenue while minimizing the average pickup distance over all orders. Order dispatching, rebalancing, and charging strategies are formulated as a partially observed Markov decision process in [9]; a binary linear program is embedded in the reinforcement learning process to select the globally optimal action, making it possible to form an online scheduling strategy suitable for a large-scale fleet that maximizes overall revenue. The two-layer dynamic programming problem is reduced to a single layer in [70], where DQN realizes dynamic mileage pricing of the order service to maximize system income. A minimal sketch of the TD update at the core of these methods is given below.
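This sketch shows only the temporal-difference target that value-based methods bootstrap from; a tabular Q-array stands in for the deep network, and the states, actions and reward are hypothetical, not a dispatching environment.

```python
# Minimal Q-learning / DQN-style TD update: move Q(s,a) toward
# r + gamma * max_a' Q(s',a'), the bootstrapped one-step return.
import numpy as np

n_states, n_actions, gamma, lr = 6, 3, 0.95, 0.1
Q = np.zeros((n_states, n_actions))   # tabular stand-in for the Q-network

def td_update(s, a, r, s_next):
    target = r + gamma * Q[s_next].max()   # bootstrap with best next action
    Q[s, a] += lr * (target - Q[s, a])     # step toward the TD target

rng = np.random.default_rng(0)
for _ in range(1000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)            # a real agent would be epsilon-greedy
    r = float(s == n_states - 1)           # toy reward: serving from the last zone
    td_update(s, a, r, rng.integers(n_states))
print(Q.round(2))
```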
2) POLICY-BASED
Policy-based approaches fit the policy function rather than the value function with a neural network, which converges better and is applicable to higher-dimensional action spaces; actions are sampled from the resulting probability distribution. Policy-based algorithms are deployed in [51], [69], [71], [87], and [88] to determine fleet operation and charging scheduling that maximize overall social welfare. [51] divides vehicles into an order-dispatching (OD) group for order serving and a fleet management (FM) group for rebalancing, yielding a novel framework that learns to collaborate in a hierarchical multi-agent setting for a ride-hailing platform. [88] constructs the fleet charging scheduling problem as a two-layer model, with one layer for transportation and the other for power; Deep Deterministic Policy Gradient (DDPG) is employed to set the electricity price and guide fleet charging decisions, achieving joint optimization of the transportation-power network. Based on the principle of network flow, an MDP model is constructed in [69] to achieve fleet management by maintaining a queue of waiting passengers, and the Proximal Policy Optimization (PPO) algorithm dynamically prices order revenue to maximize driver profits. A minimal policy-gradient sketch is shown below.
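The following is a REINFORCE-style sketch of the policy-gradient idea: update the policy parameters along the gradient of the expected return. The one-state, two-action bandit and the softmax policy are toy assumptions; PPO and DDPG add, respectively, clipped surrogate objectives and deterministic actor-critic machinery on top of this principle.

```python
# REINFORCE on a two-action bandit: theta are softmax logits; the update is
# reward * grad log pi(a), which shifts probability mass to the better action.
import numpy as np

theta = np.zeros(2)                  # logits of a softmax policy over 2 actions
lr, rng = 0.1, np.random.default_rng(0)
rewards = np.array([0.0, 1.0])       # toy: action 1 is always better

for _ in range(500):
    p = np.exp(theta) / np.exp(theta).sum()   # softmax action probabilities
    a = rng.choice(2, p=p)
    grad_logp = -p
    grad_logp[a] += 1.0                        # d/dtheta log pi(a) = e_a - p
    theta += lr * rewards[a] * grad_logp       # policy-gradient step
print(np.exp(theta) / np.exp(theta).sum())     # mass concentrates on action 1
```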
3) ACTOR-CRITIC
[68], [89], [90], and [91] apply actor-critic algorithms, which combine the first two approaches by using an estimated value function to criticize actions while updating the policy network, reaching the optimal policy faster. [91] proposes a multi-agent reinforcement learning (MARL) framework that connects two neural networks to improve the overall fleet pickup rate and overall revenue; extensive experiments show that the proposed approach is robust to different levels of system expansion dynamics. In [68], a novel reward function is designed to solve the dynamic service pricing problem on ride-hailing platforms; the proposed reward function helps the Soft Actor-Critic (SAC) model converge faster and earn higher income than methods that take revenue alone as the reward. [92] reformulates a mixed-integer programming model as a decentralized Markov decision process solved with centralized training and distributed execution: a unique actor network for each agent and a shared critic network address the scalability issues of large-scale smart grid systems.
C. HYBRID APPROACHES
To realize online performance while characterizing hard constraints, combinations of mathematical programming and learning-based methods are employed in [10], [93], and [94]. [93] decouples dispatching and rebalancing (neglecting routing) into two linear programs and links them through reinforcement learning: vehicle dispatching is obtained by solving the first linear program, while the optimal rebalancing vehicle distribution is computed via reinforcement learning (based on graph neural networks) and realized by solving the second linear program; an actor-critic algorithm maximizes driver profit. In [10], a Stackelberg equilibrium captures the responsive behavior of the MoD operator (order serving, repositioning, and charging), formulated as a multi-commodity network flow model, and a SAC-based multi-agent deep reinforcement learning algorithm solves the proposed equilibrium framework. [94] proposes a reinforcement learning algorithm with decentralized learning and centralized decision-making: the centralized decision-making process coordinates the individual EVs by formulating the fleet dispatching problem as a linear assignment problem that maximizes the fleet's action value function. A minimal sketch of this last pattern is given below.
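The sketch below illustrates the hybrid pattern of [94] at its simplest: learned per-(vehicle, request) values feed a centralized linear assignment that dispatches the fleet. The value matrix here is random, a hypothetical stand-in for critic outputs.

```python
# Dispatch as a linear assignment over learned action values: maximize the
# total value of one-to-one (vehicle, request) matches.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
values = rng.random((4, 4))    # hypothetical Q(vehicle v, request r) from a critic
rows, cols = linear_sum_assignment(values, maximize=True)
for v, r in zip(rows, cols):
    print(f"vehicle {v} -> request {r} (value {values[v, r]:.2f})")
```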
D. DISCUSSION
The solution methods above have their respective advantages and disadvantages, summarized in the following points.
1) OPTIMALITY
Generally speaking, the optimality of mathematical programming methods can be guaranteed if the problem is convex. In contrast, the optimality of reinforcement learning solutions cannot be theoretically guaranteed.
2) CONSTRAINTS
Reinforcement learning approaches can face difficulties when incorporating complex physical constraints, whereas mathematical programming methods allow an accurate representation of the physical operational constraints of the shared MoD system.
3) COMPUTATIONAL EFFICIENCY AND SCALABILITY
A major drawback of mathematical programming approaches is that solving large-scale problems is challenging, and this becomes worse when more detailed spatial-temporal models or real-time decision making are desired. The advantage of reinforcement learning is that it can adaptively learn a near-optimal solution using the representational capability of neural networks. For online implementation, mathematical programming methods become increasingly difficult to solve as the time slot becomes shorter.
Hybrid approaches therefore combine the advantages of both: online implementation and the characterization of physical constraints. However, they still require careful tuning to achieve ideal performance. A comparison of the three kinds of reinforcement learning approaches is presented in Table 1.
IV. RESEARCH OUTLOOK
The works discussed above have made remarkable progress in this research area. Nevertheless, several further research directions could be investigated to better account for the operational characteristics and future trends of shared MoD systems:
A. QUALITY OF SERVICE
To describe the operational characteristics of shared MoD system in a more detailed and realistic manner, the quality of service for passengers and drivers shall be accounted for in the modeling. Those indices include order-serving waiting time, charging waiting time, relocation distance and average trip time. Incorporating those factors into the models can further reduce the divergence between computational results and real-world results.
B. IMPACT ON POWER DISTRIBUTION SYSTEM
The impact of the shared MoD system on the power distribution system should be further investigated. With its unique flexibility, coordinated fleet management and charging scheduling decisions could turn the charging load into a movable and dispatchable demand-side resource, which is valuable to the flexible operation of the power distribution system and enhances its capability for voltage regulation, renewable energy integration, and congestion management.
C. NAVIGATION MECHANISM
The navigation mechanism for the shared MoD system should be coordinated with the operation of both the power system and the transportation system. Dynamic charging prices can serve as a signal to alter charging scheduling decisions, and spatial-temporal-aware transportation prices can further affect fleet management decisions. Combining these factors, the shared MoD system may be able to provide significant additional flexibility to improve the operational efficiency of the coupled power-transportation system.
D. COMPETITION AMONG VARIED ENTITIES
In real-world scenarios, multiple shared MoD fleets exist simultaneously in the same transportation system, so it is important to investigate the competition equilibrium among them. Furthermore, competition between shared MoD fleets and other travel options, such as private vehicles and public transit, should be further explored. Moreover, the interactions among the distribution system operator, the shared MoD system operator, and end users could also be investigated.
E. ENVIRONMENTAL IMPACT
The environmental impact of the shared MoD system should be further analyzed. In particular, the spatial-temporal flexibility the shared MoD system induces in both the power and transportation systems should be investigated to quantify its environmental impact in both sectors. Moreover, long-term savings, such as reducing the need for additional power and transportation infrastructure, should also be considered when calculating the environmental impact.
F. MODELING AND SOLUTION METHOD
As the scale of the shared MoD system keeps increasing, it becomes more and more important to efficiently model the fleet management and charging scheduling of a large-scale shared MoD fleet. Furthermore, these operational problems should be solved in an online manner. It is therefore important to further investigate computationally inexpensive modeling and solution methods for large-scale shared MoD fleets. Moreover, how to combine model-based and model-free (i.e., data-driven) approaches to trade off modeling accuracy against computational efficiency deserves further discussion.
V. CONCLUSION
In this paper, we provide a comprehensive review of shared MoD system research, categorized by modeling approach and solution method. The operational problems of the shared MoD system are classified into four types: 1) order dispatching; 2) order dispatching and rebalancing; 3) order dispatching, rebalancing, and charging; and 4) extended problems. The mathematical models include graph-based, queue-theory-based, grid-based, and other approaches such as the relatively rare cell transmission model. Among these, graph-based models represent the road network most clearly, queue-theory-based models are appropriate for measuring quality of service, and grid-based models are suitable for integration with data-driven or reinforcement learning methods. We therefore suggest selecting the modeling approach based on the objective of the study: a graph-based model performs better when detailed road traffic analysis is desired, while a grid-based model is a good option when fleet management decisions must be determined quickly.
Solution methods are divided into mathematical programming, reinforcement learning, and hybrid approaches. Mathematical programming approaches include linear and nonlinear programming, heuristic algorithms, and model predictive control; they can accurately characterize all the physical operational constraints of the shared MoD system, but scale poorly when detailed spatial-temporal constraints are considered. Reinforcement learning approaches include value-based, policy-based, and actor-critic algorithms; they can adaptively learn near-optimal solutions using neural networks. Hybrid approaches combine learning methods with mathematical programming to capture physical constraints while achieving online performance in large-scale implementations.
The appropriate solution method therefore depends on the problem type. If a large-scale problem requires a real-time solution, reinforcement learning performs better in terms of time and efficiency. When the problem requires exact constraint expressions and solution optimality, a mathematical programming approach is preferable, especially when the problem can be formulated as a convex program.
Damu-Safen pesticide exposure risk assessment, EC (fomesafen, 250 g/l)
Annually, the list of pesticides is replenished with new ones. One of the main criteria for their registration is a toxicological and hygienic assessment of their impact on the environment. In order to register the new soy herbicide Damu-Safen, EC (fomesafen, 250 g/l), it was necessary to assess its toxicological and hygienic impact on the environment and humans. Therefore, for the first time, we conducted studies of environmental objects under the influence of Damu-Safen, EC (fomesafen, 250 g/l) and a risk assessment of the active substance fomesafen and the pesticide Damu-Safen, EC for workers. According to the results of the assessment of working conditions for the tanker worker and the tractor operator, an acceptable risk was obtained that meets regulatory and hygienic requirements. Residual amounts of fomesafen not exceeding the normative levels were found in the studies of environmental objects. Consequently, the results of the risk assessment for the application of the pesticide Damu-Safen, EC (fomesafen, 250 g/l) and its impact on workers and environmental objects indicate that it can be applied under optimal environmental conditions and in compliance with the regulations for appliances and personal protective equipment.
Introduction
Currently, Kazakhstan is a country with a developed agricultural sector. Since Soviet times, cereals have prevailed among its crops, in particular wheat and barley. Along with the development of new technologies in the agricultural sector, the cultivation of legumes such as soybeans, lentils, and peas has become relevant. This raises the issue of the need to use safe pesticides for growing crops. Pesticides are anthropogenic pollutants of the environment and food worldwide, and there is a need to explore the dangers to public health and the environment when the rules for the safe handling of pesticides are violated. It is therefore necessary to conduct a risk assessment for the safe use of a pesticide, for working personnel and for the public. Toxicological and hygienic assessment of pesticides is one of the main criteria for the registration of pesticides in the territory of the member states of the Customs Union [1].
One of the main points of the criterion for toxicological and hygienic assessment of a pesticide is safe working conditions during its use, which requires a risk assessment for occupations whose work involves pesticides. Therefore, we assessed the impact of the pesticide Damu-Safen, EC (fomesafen, 250 g/l), registered for the first time in the Republic of Kazakhstan, on workers and on environmental objects (air of the working area, atmospheric air, the soil layer, and crop production).
The pesticide Damu-Safen, EC (fomesafen, 250 g/l) is a contact selective herbicide aimed at combating dicotyledonous weeds on crops of soybeans and beans in the post-emergence period.
The degradation of fomesafen in soils occurs mainly due to microbial activity, although little is known about the kinetic and metabolic behavior of this herbicide [2]. Fomesafen is also a selective herbicide for common leguminous plants.
Toxicological characteristics of the active ingredient fomesafen
Evaluation of acute oral toxicity: the median lethal dose (LD50) for rats was set at 1250 mg/kg.
Evaluation of acute dermal toxicity: acute dermal LD50 for rats is more than 1000 mg/kg.
Inhalation toxicity: acute inhalation LC50 (at 4 hours of exposure) for rats is 4.97 mg/m3.
Irritation of the skin and mucous membranes: the irritant effect of fomesafen 95% was studied on white rabbits. The substance was applied to trimmed, prepared skin on one side of the body and covered with gauze bandages; the other side of the torso served as a control area, to which distilled water was applied. The skin reaction was checked at 1 hour, 24 hours, 48 hours, and up to 14 days after removing the dressing. According to the results of an average assessment of the degree of irritation, by the intensity of erythema and edema, it was found that fomesafen irritates the skin of rabbits in the form of erythema at 1 hour, which then disappears over the next 24 hours, indicating weak skin irritation by fomesafen.
The study of eye irritation by fomesafen 95.0% was conducted on adult New Zealand White rabbits: 0.1 g of the test substance was instilled into the conjunctival sac of the right eye of each animal, while the left eye served as a control in assessing eye irritation. Eye irritation was evaluated at approximately 1, 24, 48, and 72 hours and at 4 and 7 days after application. Mortality, clinical signs, and eye irritation were observed throughout the study period. The highest average eye irritation value (the average score for the cornea, iris, and conjunctival sac) was 12.5 after application. On the basis of this result and the irritation grading criteria, the test substance fomesafen, under the experimental conditions of this study, is moderately irritating to the eyes.
According to its toxicological and hygienic characteristics, fomesafen belongs to hazard class 2 according to the WHO classification; the exposure effect is general toxicity.
Hygienic standards of residual amounts of fomesafen in environmental objects and food.
Goals
− to assess the content of the active ingredient fomesafen and the pesticide Damu-Safen, EC (fomesafen, 250 g/l) in the air of the working area and the surrounding atmosphere;
− to determine the content of the active substance fomesafen in water for household use (an irrigation canal);
− to study residual amounts of fomesafen in the soil layer after its application;
− to conduct a study determining pesticide residues in soybean leaves.
To study the working conditions of the workers and assess the risk of exposure to the pesticide fomesafen for workers and environmental objects, the subjects were instructed and consented to the experiment.
The experiments were conducted on experimental fields located on the territory of LLP "Kazakh Research Institute of Agriculture and Plant Growing" in Almalybak, Karasay district, Almaty region.
Materials
-air samples of the working area, collected according to SS 12.1.005-88 "General sanitary and hygienic requirements for working area air" [3].
-washings from the skin surface of the open parts of the body and from the working clothes of the tanker worker and tractor operator, carried out in accordance with Methodical Recommendations No. 3056-84 "Development of methods for determining harmful substances on the skin" [4], Guidelines 1.2.3017-12 "Assessment of the risk of pesticide exposure to workers" [5], and Guidelines 4.1.3220-14 "Hygienic and analytical control over contamination of the skin of people working with pesticides".
-soil samples, collected in accordance with Sanitary Rules and Regulations 4.01.001-97 "Unified rules for sampling agricultural products, foodstuffs and the environment for the determination of trace quantities of pesticides", Almaty, 1997 [6].
-household water samples.
Calculations and formulas
Df: actual skin exposure, mg/cm².
Dav (also written Da below): the average content of the substance on the skin (dermal exposure) determined during a particular study, mg/cm².
F: the daily rate of the treated area (ha) or the duration of the work shift (h).
C RES: the residual coefficient, which expresses the ratio of the amount of a substance remaining on the skin after a certain time to the amount initially applied; on average 0.25.
C REL: the coefficient of relative permeability of the skin of a human versus a rat or rabbit for a given substance (established experimentally), approximately equal to 2.
S: human skin area, on average 16,120 cm².
C S: the safety coefficient, determined by the hazard class for acute dermal exposure in accordance with the hygienic classification of pesticides. For substances of hazard classes 1-2 for acute skin toxicity, C S is from 20 to 10; for substances of classes 3-4, from 10 to 3. For substances with pronounced delayed or specific effects, including sensitization (hazard classes 1-2), C S can be taken at a level of 20 or more; for substances with carcinogenic properties (hazard class 2), C S is 50.
- Personal protective equipment.
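The guideline formula combining these coefficients is not reproduced in full in the text, so the arithmetic below is only an illustration of how the defined quantities scale a measured exposure; the specific combination Df = Da × (F/F1) × C RES is an assumption inferred from the definitions, not the official formula of Guidelines 1.2.3017-12, and the input value for Da is hypothetical.

```python
# Hedged illustration of the exposure scaling described above.
# ASSUMPTION: D_f = D_a * (F / F_1) * C_RES; D_a is a made-up input value.
D_a = 0.004          # hypothetical measured dermal exposure, mg/cm^2
F, F_1 = 2.0, 1.0    # work-shift duration vs. study duration, hours (per the text)
C_RES = 0.25         # residual coefficient (average value given above)

D_f = D_a * (F / F_1) * C_RES
print(f"actual skin exposure D_f = {D_f:.4f} mg/cm^2")

# Acceptance criterion stated later in the paper: total safety factor <= 1.
SC_sum = 0.00245     # value reported for the tanker worker
print("risk acceptable" if SC_sum <= 1 else "risk not acceptable")
```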
Methods for determining residual amounts of fomesafen
Determination of residual amounts of fomesafen in environmental objects was carried out by high-performance liquid chromatography using a Waters Breeze 2 chromatograph, following the HPLC method described in the cited article using SLE/LTP and HPLC/AD [7].
-Determination of residual amounts of fomesafen in the air of the working area was carried out according to Guidelines No. 6218-91 "Methodological guidelines for the measurement of chlorfluazuron concentrations in the air of the working area" [8], since this pesticide is close to fomesafen in its empirical and molecular formula.
-In soil, the determination of fomesafen was carried out according to the method "Methodological guidelines for the determination of residual amounts of fomesafen in soybean, soybean oil and soil by high-performance liquid chromatography", developed by specialists of the SPC (Scientific Practical Center) TAU.
Results
For a full risk assessment of Damu-Safen, EC (fomesafen, 250 g/l), we collected 72 air samples from the working area, 16 atmospheric air samples, 10 water samples, and 16 soil samples; we also collected 38 swabs from the tanker worker and the operator (Table 1).
Determination of fomesafen in the air of the working area
Air sampling in the working area was carried out directly during the treatment of the field with the preparation Damu-Safen, EC (fomesafen, 250 g/l), according to Guidelines No. 6218-91 "Guidelines for the measurement of chlorfluazuron concentrations in the air of the working zone" [8].
Samples were taken in the breathing zone of the tanker worker and the tractor driver-operator under production conditions. During the shift, at individual stages of the process, five air samples were taken from the working area at one point, and the average values at each stage were used.
The results of the determination of the pesticide content in the air of the working area are presented as arithmetic average exposures (Figures 1, 2).
Imean is the average content of the substance in the air of the working area among the samples taken during a single operation.
Determination of fomesafen in the atmospheric air
Atmospheric air samples were taken in five replications at 3 points; the average was taken at each sampling point (Table 2).
The concentration of fomesafen in atmospheric air was 0.00135 mg/m3; the permissible exposure level (PEL) in atmospheric air is 0.003 mg/m3.
Determination of residual amounts of fomesafen on the skin of working people
Swabs were taken at the end of work, with open and closed overalls and other means of individual skin protection. Washout was performed by washing a fixed area of the skin with a washing liquid (80% ethyl alcohol) using a standard-sized tissue cloth (specially prepared earlier) and tweezers (a 2-fold washing from top to bottom); the washout was then placed in a glass container with a lid.
Based on the value of Da, the actual skin exposure Df (mg/cm²) is calculated taking into account the work performed during the work shift. Here Da is the average content of the substance on the skin (dermal exposure) determined during a particular study, mg/cm²; F is the daily rate of the treated area (ha) or the duration of the work shift (h), which for the tanker worker is the duration of the work shift taking into account the number of refuelings per shift (if necessary, the duration of each of them); and F1 is the treated area (ha), the amount of seed treated (t), or the work time (hours) during the study. For the tanker worker, Df was determined taking into account the ratio of the 1-hour work time during the study to the 2-hour duration of the work shift with pesticides in the field (Figure 3). For the operator, Df was determined taking into account the ratio of the 2-hour work time during the study to the 6-hour work shift (Figure 4). (From Table 2: third point, 300 m, 0.000112 mg/m3; average, 0.00135 mg/m3.) To assess the risk, we determine the ratio of the actual inhalation and dermal exposures to the hygienic standards used as acceptable levels of inhalation and dermal exposure; the risk of inhalation exposure is determined by the value of the safety factor for inhalation intake of pesticides.
The content of residual amounts of fomesafen in water for domestic use
Next to the test field, at a distance of 150 meters, there is an irrigation canal for household use. Sampling was done with samplers. The concentration of fomesafen was 0.000032 mg/dm3; the MAC of fomesafen in the water of reservoirs is 0.0001 mg/dm3.
The residual content of fomesafen in the soil
A sampler was used to determine residual amounts of fomesafen in the soil. The content of fomesafen in the soil 30 minutes after spraying was 0.018 mg/kg, against an MAC of fomesafen in soil of 0.05 mg/kg.
Discussion
The risk of combined intake of pesticides is considered acceptable if SC SUM ≤ 1. In this study, SC SUM was 0.00245 for the tanker worker and 0.00145 for the operator (tractor driver). In the case of elevated concentrations of substances in the air of the working area (SC INH ≥ 1), the risk calculation must take into account the degree of respiratory protection, according to the technical characteristics of the type of respirator used or as established experimentally. The value of the total safety factor SC SUM for the tanker worker when using fomesafen as part of the pesticide Damu-Safen, EC (fomesafen, 250 g/l) amounted to 0.00245 ≤ 1, so the risk is acceptable.
The content of fomesafen in the soil 30 minutes after spraying was 0.018 mg/kg, against an MRL of fomesafen in soil of 0.05 mg/kg.
In the study of atmospheric air next to the spraying site, the concentration of fomesafen was 0.00135 mg/m3, against a PEL in atmospheric air of 0.003 mg/m3. The content of fomesafen in water for household purposes (the irrigation canal) was 0.000032 mg/dm3, against an MRL in reservoir water of 0.0001 mg/dm3.
Thus, it was concluded that the combined (inhalation and dermal) effects of the pesticide Damu-Safen, EC (fomesafen, 250 g/l) during application comply with hygienic standards.
Conclusion
According to the results of the assessment of working conditions, an acceptable risk was obtained for the pesticide Damu-Safen, EC (fomesafen, 250 g/l): 0.00245 for the tanker worker and 0.00145 for the tractor operator. This is evidence of the possibility of registering the pesticide for further use in the territory of the Republic of Kazakhstan.
The results of the risk assessment for the use of the pesticide Damu-Safen, EC (fomesafen, 250 g/l) and its impact on workers and environmental objects (air of the working area) indicate compliance with the hygienic requirements of the Customs Union.
The results of the risk assessment of exposure to the active ingredient fomesafen in Damu-Safen, EC (fomesafen, 250 g/l) demonstrate that it can be used in compliance with optimal environmental conditions, the rules of application, and personal protective equipment.
Cilia interactome with predicted protein–protein interactions reveals connections to Alzheimer’s disease, aging and other neuropsychiatric processes
Cilia are dynamic microtubule-based organelles present on the surface of many eukaryotic cell types and can be motile or non-motile primary cilia. Cilia defects underlie a growing list of human disorders, collectively called ciliopathies, with overlapping phenotypes such as developmental delays and cognitive and memory deficits. Consistent with this, cilia play an important role in brain development, particularly in neurogenesis and neuronal migration. These findings suggest that a deeper systems-level understanding of how ciliary proteins function together may provide new mechanistic insights into the molecular etiologies of nervous system defects. Towards this end, we performed a protein–protein interaction (PPI) network analysis of known intraflagellar transport, BBSome, transition zone, ciliary membrane and motile cilia proteins. Known PPIs of ciliary proteins were assembled from online databases. Novel PPIs were predicted for each ciliary protein using a computational method we developed, called the High-precision PPI Prediction (HiPPIP) model. The resulting cilia "interactome" consists of 165 ciliary proteins, 1,011 known PPIs, and 765 novel PPIs. The cilia interactome revealed interconnections between ciliary proteins and their relation to several pathways related to neuropsychiatric processes and to drug targets. Approximately 184 genes in the cilia interactome are targeted by 548 currently approved drugs, of which 103 are used to treat various diseases of nervous system origin. Taken together, the cilia interactome presented here provides novel insights into the relationship between ciliary protein dysfunction and neuropsychiatric disorders, for example the interconnections of Alzheimer's disease, aging and cilia genes. These results provide a framework for the rational design of new therapeutic agents for the treatment of ciliopathies and neuropsychiatric disorders.
We assembled a list of genes encoding proteins associated with primary and/or motile cilia, including IFT, BBSome, transition zone and ciliary membrane proteins, and proteins restricted to motile cilia. Known PPIs were collected from the Human Protein Reference Database (HPRD) 39 and the Biological General Repository for Interaction Datasets (BioGRID) 40. Gene-drug associations and ATC classifications were collected from DrugBank 41, while neuropsychiatric gene-disease associations were collected from the GWAS catalog (www.ebi.ac.uk/gwas/). Random gene sets used in shortest path comparisons were sampled from about twenty thousand human proteins listed in the Ensembl database (www.ensembl.org).
Novel PPIs were predicted using the HiPPIP model that we developed 42. Each ciliary protein (say C1) was paired with each of the other human genes (say G1, G2, …, Gn), and each pair was evaluated with the HiPPIP model. The predicted interactions of each of the cilia genes were extracted, which resulted in 620 newly discovered PPIs of cilia genes. The average shortest path distance was computed using the Networkx package in Python. Pathway associations were computed using the Ingenuity Pathway Analysis suite. GO term enrichment was carried out using BinGO 43: for each C1, the list of its known and predicted interacting partners (i.e. B1, B2, …, Bn) is given as input to BinGO, which extracts the GO terms of all these genes and finds which GO terms are statistically enriched in comparison with the background distribution of GO terms over all human proteins. All statistically significant terms are assigned as network-based enriched GO terms of C1. A minimal sketch of this pairing-and-scoring step is shown below.
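The HiPPIP model itself is not reproduced here, so `hippip_score` below is a hypothetical placeholder returning a pair score; the gene lists are illustrative subsets. Only the pairing loop, score cutoff and Networkx shortest-path computation mirror the workflow just described.

```python
# Pair each cilia gene with every other gene, keep high-scoring pairs as
# predicted PPIs, then compute the average shortest path in the network.
import itertools
import networkx as nx

cilia_genes = ["IFT140", "BBS1", "PKD1"]                 # illustrative subset
all_genes = cilia_genes + ["TELO2", "TRAP1", "NTHL1"]    # illustrative subset

def hippip_score(g1, g2):
    """Hypothetical stand-in for the HiPPIP model's pair score."""
    return 0.9 if {g1, g2} in ({"IFT140", "TELO2"}, {"IFT140", "TRAP1"}) else 0.1

predicted = [(c, g) for c in cilia_genes for g in all_genes
             if c != g and hippip_score(c, g) >= 0.5]

G = nx.Graph(predicted)
G.add_edge("BBS1", "PKD1")          # a 'known' PPI for the toy network
lengths = [nx.shortest_path_length(G, a, b)
           for a, b in itertools.combinations(G.nodes, 2)
           if nx.has_path(G, a, b)]
print(sum(lengths) / len(lengths))  # average shortest path over connected pairs
```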
Gene expression datasets in Gene Expression Omnibus were used to compute the overlap of the cilia interactome with genes differentially expressed in various neuropsychiatric disorders: major depressive disorder (GSE53987 44), schizophrenia (GSE17612 45), bipolar disorder (GSE12679 46), autism spectrum disorder (GSE18123 47), Alzheimer's disease (GSE29378 48 and GSE28146 49), Parkinson's disease (GSE28894) and nonsyndromic intellectual disability (GSE39326 50). Genes with fold change > 2 or < 0.5 were considered significantly overexpressed or underexpressed, respectively, at p value < 0.05. A gene with transcripts per million ≥ 2 was considered "expressed" when analyzing the overlap of the interactome with genes expressed in the amygdala, anterior cingulate cortex, caudate, cerebellum, frontal cortex, hippocampus, hypothalamus, nucleus accumbens, putamen, spinal cord and substantia nigra extracted from GTEx 51. Time-dependent gene expression variation in the hippocampal region was extracted from the BrainSpan Atlas containing RNA-Seq data from post-conceptional weeks to middle adulthood 52. 78 genes associated with Alzheimer's disease were extracted from DisGeNET 53 (with score > 0.2 to include only expert-curated disease-gene associations). Then, to construct the Alzheimer's disease interactome, whose overlap was to be checked with the cilia interactome, 4,742 known PPIs extracted from HPRD 54 and BioGRID 55 and 490 computationally predicted PPIs of these 78 genes were assembled. The biological validity of this interactome is supported by the fact that 676 of the 3,944 genes in the AD interactome are differentially expressed in CA1 hippocampal gray matter from patients with severe Alzheimer's disease versus healthy controls (GSE28146 49), of which 71 are novel interactors (p value = 1.138e−20). A minimal sketch of the differential-expression filter is shown below.
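The following applies the stated thresholds (fold change > 2 or < 0.5 at p < 0.05) to a tiny hypothetical table standing in for a processed GEO series; column names and values are assumptions.

```python
# Differential-expression filter with the thresholds used in the analysis.
import pandas as pd

df = pd.DataFrame({
    "gene": ["PKD1", "BBS1", "IFT20", "DRD2"],     # hypothetical example rows
    "fold_change": [2.6, 0.4, 1.2, 0.45],
    "p_value": [0.01, 0.03, 0.20, 0.04],
})
sig = df[(df.p_value < 0.05) & ((df.fold_change > 2) | (df.fold_change < 0.5))]
over = sig[sig.fold_change > 2].gene.tolist()      # significantly overexpressed
under = sig[sig.fold_change < 0.5].gene.tolist()   # significantly underexpressed
print(over, under)
```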
Results
We assembled a list of 165 genes encoding proteins known to be associated with primary and/or motile cilia, including IFT, BBS, TZ, and ciliary membrane proteins, as well as proteins restricted to motile cilia. Known PPIs of ciliary proteins were assembled from HPRD and BioGRID 40,56. Novel PPIs were predicted for each of the cilia genes using our High-precision Protein-Protein Interaction Prediction (HiPPIP) model 42. In this manner, a ciliary protein interactome was assembled comprising 165 ciliary proteins (red square nodes) with 1,011 known PPIs (blue edges) and 765 novel PPIs (red edges) that connect to 800 previously known interactors (light blue nodes) and 705 novel interactors (red nodes) (Fig. 1 and Table 1). We predicted 216 new interactions for 50 of the 56 cilia genes that had no known PPIs; for example, GPR83 has 12 novel PPIs, LRRC48 has 10, PKD1L1 has 10, and SPEF has 10. The numbers of known and novel PPIs of cilia genes are given in Supplementary File 1, and the lists of all genes and PPIs are given in Supplementary File 2. A minimal sketch of assembling such a graph is shown below.
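The sketch below shows one plausible way to assemble such an interactome graph, tagging each edge with its provenance (known from HPRD/BioGRID versus novel from HiPPIP); the gene pairs are hypothetical placeholders, not the actual interactome contents.

```python
# Build an interactome graph with a 'source' attribute on each edge.
import networkx as nx

known = [("GENE_A", "GENE_B"), ("GENE_A", "GENE_C")]   # hypothetical known PPIs
novel = [("GENE_A", "GENE_D"), ("GENE_E", "GENE_F")]   # hypothetical novel PPIs

G = nx.Graph()
G.add_edges_from(known, source="known")
G.add_edges_from(novel, source="novel")
n_novel = sum(1 for _, _, d in G.edges(data=True) if d["source"] == "novel")
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges,",
      n_novel, "novel")
```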
For each ciliary protein, we computed the enrichment of gene ontology (GO) terms among its interacting partners using BinGO (Biological Networks Gene Ontology tool) 43, to aid in the discovery of its function. This information is especially useful for ciliary proteins that have either no known or very few known GO biological process terms. For example, 11 genes have no known GO terms, and we predicted new GO terms for each of them, e.g. 27 novel GO terms for ARMC4, 11 for CCDC63, and 30 for DNAAF2. The enrichment statistic is sketched below.
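The enrichment test underlying tools such as BinGO is a hypergeometric comparison of a GO term's count among a protein's interacting partners against the background of all human proteins; the counts below are hypothetical.

```python
# Hypergeometric enrichment: P(X >= k) partners annotated with a GO term,
# given the background annotation frequency.
from scipy.stats import hypergeom

N = 20000   # background: all human proteins (approximate)
K = 300     # background proteins annotated with the GO term (assumption)
n = 25      # interacting partners of the ciliary protein (assumption)
k = 6       # partners annotated with the term (assumption)

p = hypergeom.sf(k - 1, N, K, n)   # survival function gives P(X >= k)
print(f"enrichment p value = {p:.2e}")
```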
We computed the pathway associations of genes in the interactome using the Ingenuity Pathway Analysis (IPA) suite (Ingenuity Systems, www.ingenuity.com). This showed a significant overlap of neuronal pathways with the cilia interactome (see selected pathways in Table 2). The complete list of all pathways, their p values, and the genes from the interactome associated with these pathways are given in Supplementary File 3. We also extracted information about drugs targeting the genes in the interactome. This analysis showed that several genes are targets of drugs belonging to the Anatomic category of "nervous system", highlighting the connection between cilia and the nervous system, as shown in Fig. 2 and Supplementary File 4.
Experimental validation of novel cilia PPIs in independent studies. Four of the novel PPIs that we predicted for cilia genes were independently recovered by other groups. TMEM237-SFXN5 and DYNLL2-C17orf47 were recovered by yeast two-hybrid experiments in the recent release of the human protein interactome map 57. We also predicted two PPIs of IFT140 that were discovered as part of the CPLANE interactome using affinity purification-mass spectrometry but were not deposited in BioGRID or HPRD: IFT140-TELO2 and IFT140-TRAP1 36. It is also worth noting that 8 novel interactors in the interactome appeared among the proteins isolated from the primary cilia of mouse kidney cells using a method called MudPIT (multidimensional protein identification technology) 33: ABCE1, CCDC47, CCT5, G3BP1, GBF1, RAB10, RAN and USP14. 94 genes in the cilia interactome, including 44 cilia genes, 36 known and 14 novel interactors, were also recovered as regulators of the ciliary sonic hedgehog pathway in a CRISPR genetic screen 58. The interactome was also significantly enriched with genes differentially expressed in bronchial biopsies of primary ciliary dyskinesia patients (p value = 2.64e−02) 59.
Known and novel PPIs of selected cilia genes (excerpt of Table 1):
Gene | Known PPIs | Novel PPIs | Novel interactors
PKD1 | 24 | 9 | NTHL1, NADSYN1, PDPK1, OR1F1, DDX58, NTN3, RPL3L, SOX17, PPL
PKD1L1 | 0 | 10 | C7orf69, ABCC3, C7orf65, ACAA2, C7orf57, MGAT2, ABCA13, RPL6, PLCB2, PSME3
PKD2 | 14 | 6 | KIF11, ADH1C, MTAP, ADH1A, PPP3CA, UCHL3
PKD2L1 | 3 | 6 | MYOF, HNRNPA2B1, LGI1, CAB39, GBF1, PRPS2
PKHD1 | 1 | 7 | ORM1, CSTF2T, ILK, ATXN2, …
Discussion
We developed the interactome of ciliary proteins, including IFT, BBS, TZ, ciliary membrane proteins and proteins in motile cilia. The interactome includes novel computationally predicted PPIs for multiple proteins, including proteins with few or no previously known PPIs.
Analysis of both individual novel PPIs and the cilia interactome as a whole has the potential to highlight connections to specific neurological disorders and to lead to biologically insightful and clinically translatable results. We interpreted the functions of individual novel PPIs using literature-based evidence and the top pathways obtained from IPA (see Supplementary File 5 for testable hypotheses on novel PPIs involved in neuropsychiatric disorders, primary ciliary dyskinesia, hydrocephalus, and in biological processes such as ciliogenesis and trafficking of membrane receptors in cilia). The following is a demonstrative example of a systems-level analysis.
Cilia, Alzheimer's disease and aging. Alzheimer's disease (AD) is a progressive neurodegenerative disease with an estimated prevalence of 10-30% in the population aged 65 years and older, characterized by memory loss (dementia), behavioral changes, and impaired cognition and language 61. Around two-thirds of dementia cases are attributed to AD 61. The hippocampus, a brain region critical to memory and learning, exhibits signs of neurodegeneration in the early stages of AD 62. It has been speculated that memory and learning deficits in AD may be associated with aging and reduced neurogenesis in the hippocampus [62][63][64]. It is interesting to note that primary cilia have been shown to mediate sonic hedgehog (Shh) signaling to regulate hippocampal neurogenesis 65,66. We therefore explored the interconnections of AD, aging and cilia in the PPI network (the 'interactome'), asking the following questions: Are genes associated with AD, aging and cilia closely connected in the interactome? Will such a network also include genes involved in Shh signaling and neurogenesis, and genes expressed in the hippocampus? What specific biological processes may underlie the connections of AD to aging, and will they interact with the Shh pathway?
Significant overlap was found between the cilia and AD interactomes (p value = 0.022). The AD interactome was highly significantly enriched in "human aging-related genes" (p value = 1.77e−37), compiled from the GenAge database 67. 51 aging genes co-occurred in the AD and cilia interactomes. The subnetwork of these 51 genes and their AD and cilia interactors is shown in Fig. 3. In this subnetwork, aging genes connected cilia genes, with or without Shh involvement, to AD genes (Fig. 3). The next question we asked was: do any of the 51 genes directly interact with a ciliary gene involved in the Shh pathway? 15 cilia genes in the network were also recovered as regulators of the Shh pathway in a CRISPR genetic screen: ARL13B, BBS1-2, BBS4-5, BBS7, CBY1, DYNLL1, IFT140, IFT20, IFT52, IFT81, PTCH1, STUB1 and TRAF3IP1 58. These 15 genes had direct interactions with 14 aging genes, 6 AD genes and 2 cilia genes. This included 13 novel predicted interactions connecting aging genes to cilia genes, including 4 Shh genes (in italics): BAK1-BBS1, CDKN2A-DNAI1, TRAP1-IFT140, PDPK1-PKD1, SOD1-DNAH3, CCNA2-DRD2, TERT-IFT57, HTT-IFT57, FOS-DYNLT3, EP300-MCHR1, SHC1-DNAH7, PRKCA-CDK3 and RICTOR-IFT20. The network was significantly enriched in the GO term "neurogenesis" (p value = 5.66e−12) and in genes expressed in the hippocampus (transcripts per million ≥ 2) (p value = 2.54e−09). The cilia genes DYNLT1 and PKD1 were associated with neurogenesis, and IFT20, IFT140, PTCH1 and BBS4 were Shh regulators also associated with neurogenesis. Reduced hippocampal size was noted in mutant mouse models of 5 cilia genes, namely BBS1, BBS2, BBS4, BBS7 and PDCD6IP (Mammalian Phenotype Ontology term: small hippocampus) [68][69][70]. We next identified the biological processes that may be specifically affected in AD in relation to its links with aging. 75 genes in the network were differentially expressed in the hippocampus of AD patients compared with non-AD subjects (GSE48350 71, GSE36980 72, GSE1297 73, GSE28146 49, GSE29378 48). We then examined the fold change in the normal expression of these 75 genes in the hippocampus at 40 years compared with 8 post-conceptional weeks. For this, we used the "developmental transcriptome" from the BrainSpan Atlas containing RNA-Seq data of up to 16 brain regions from post-conceptional weeks (the number of weeks elapsed between the first day of the last menstrual period and the day of delivery) to middle adulthood (up to 40 years) 52. The genes were grouped based on the specific direction in which their expression varied in AD versus aging (i.e. fold change in the same or opposite directions in AD versus non-AD hippocampal samples compared with expression at 40 years versus 8 post-conceptional weeks) (Fig. 4). 42 genes showed an expression change in the opposite direction in AD versus aging. Of these, 18 genes were underexpressed in AD but overexpressed in aging; they were enriched in the GO term "calcium-mediated signaling" (p value = 8.72e−09). It has been postulated that calcium signaling pathways involved in cognition may be remodeled by an activated amyloidogenic pathway in AD, resulting in elevated levels of calcium and a constant erasure of new memories through enhancement of mechanisms involved in long-term depression 74. It is also worth noting that Shh signaling requires calcium mobilization 75. The 18 genes included the cilia genes DYNLL1, DYNLT3, PKD1 and MCHR1, and the ciliary Shh regulator BBS7.
24 genes were overexpressed in AD but underexpressed in aging; they were enriched in 'circulatory system development' (p value = 3.04e−07). Loss of hippocampal blood vessel density accompanied by ultrastructural changes in the blood vessels has been observed in a senescence-accelerated rat model of AD 76 . It is interesting to note that circulatory system processes were found to be upregulated in early stages of AD-like pathology in this model, while they were found to be downregulated with age, similar to our observations 76 . It is also interesting to note that neovascularization requires Shh signaling 77 . The 24 genes included the cilia genes CCDC40, SPAG6, ZMYND10, DNALI1 and SPAG1; BBS2 and CBY1, which are ciliary Shh regulators; DYNLT1, a cilia gene involved in neurogenesis; and PTCH1, the Shh receptor, which is also involved in neurogenesis. 25 genes showed an expression change in the same direction (either under- or overexpression) in AD versus aging, including the cilia genes VPS4B, CCNA1, DYNLRB2, NPHP1 and DNAH7, and the ciliary Shh regulator BBS5; 'negative regulation of cell death' was enriched in this group (p value = 1.59e−09). Shh maintains neural stem cells in the hippocampus by inhibiting cell death 78 .
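The direction-of-change grouping described above can be expressed compactly. Below is a minimal sketch in Python, assuming per-gene log2 fold changes are already available for both comparisons; the gene names and numeric values are illustrative placeholders, not the study's actual data.

```python
import pandas as pd

# Hypothetical input: log2 fold changes per gene.
# 'ad'    = AD vs non-AD hippocampus
# 'aging' = 40 years vs 8 post-conceptional weeks (BrainSpan)
df = pd.DataFrame({
    "gene":  ["DYNLL1", "PKD1", "PTCH1", "BBS5"],
    "ad":    [-0.8, -1.2, 0.9, 0.5],      # illustrative values only
    "aging": [ 0.6,  1.1, -0.7, 0.4],
}).set_index("gene")

# The sign of the product separates same-direction from opposite-direction genes.
same_dir = df[(df["ad"] * df["aging"]) > 0]       # e.g. 'regulation of cell death' group
opposite = df[(df["ad"] * df["aging"]) < 0]
down_ad_up_aging = opposite[opposite["ad"] < 0]   # e.g. 'calcium-mediated signaling' group
up_ad_down_aging = opposite[opposite["ad"] > 0]   # e.g. 'circulatory system development' group

print(len(same_dir), len(down_ad_up_aging), len(up_ad_down_aging))
```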
In summary, our analysis demonstrates that aging and AD genes directly interact with ciliary Shh regulators in the interactome. This network is enriched in genes associated with neurogenesis and expressed in the hippocampus. Genes involved in calcium-mediated signaling and circulatory system development are differentially expressed in opposite directions in AD versus aging, whereas genes involved in regulation of cell death are differentially expressed in the same direction. Neuronal pathways were recovered among the top pathways of the cilia interactome 19 , suggesting that these novel cilia-interacting partners may have a role in neurotransmission. Dopamine signaling, eNOS signaling, and synaptic long-term potentiation pathways are also known to be associated with neuropsychiatric disorders such as schizophrenia 79,80 . The identification of the Huntington's disease (HD) pathway in the cilia interactome is also notable, given that the protein huntingtin (HTT) localizes to the centrosome and plays an important role in ciliogenesis. The HD mutant mouse model exhibits abnormal cilia motility and cerebrospinal fluid flow 23 . Recovery of Wnt signaling, thought to be involved in schizophrenia etiology, is also of interest 81,82 . Analysis of the known and novel PPIs and GO term associations identified a role for cilia in neuronal disease pathogenesis. While consistent with the known role of cilia in several key processes in the nervous system, such as neuronal signaling and development, these findings reveal novel connections between cilia and these functional modules. Defects in neuronal migration and differentiation are the underlying cause of abnormal neural circuitry in psychiatric disorders 12 . This is further supported by the reported linkage of neuropsychiatric risk genes to cilia 15,19 and the finding of neuropsychiatric phenotypes and brain abnormalities in ciliopathies 5,12 . Our interactome analysis shows that TCTN2, a cilia gene with a known role in neuronal development and migration 12 , has 3 novel interactors, and neuronal GO terms such as initiation of neural tube closure, midbrain morphogenesis and midbrain development are enriched among its interacting partners. The GO terms enriched for interacting partners of ARMC4 include sympathetic neuron projection guidance, axonogenesis, axon extension, and axon fasciculation. The dynein gene DNAAF2 has only one known but 4 predicted interactions. Two of its novel interactors, ATL1 and TRIM9, have been shown through GWAS to be associated with cognitive performance and psychosis, respectively. GO terms such as axonogenesis, neuron maturation and synaptic growth at the neuromuscular junction are enriched among its interacting partners. The ciliary membrane genes DRD1 and DRD2, which are implicated in neurotransmission and linked to mental illnesses such as schizophrenia 83 , were identified with 4 and 12 novel interactors, respectively; the associated GO terms were neuronal action potential and regulation of synaptic plasticity. We also observed 4 novel interactors for the cilia protein TMEM67, including two proteins associated with cilia assembly, LAPTM4B and NDUFAF6, with NDUFAF6 also known to be associated with Alzheimer's disease 84 . Both ATG7, a novel interactor of the ciliary protein PDCD6IP, and SPR, a novel interactor of GPR83, have been associated with Parkinson's disease 85,86 . GIT1, a novel interactor of B9D1, is associated with attention deficit hyperactivity disorder, and MME, a novel interactor of SPAG1, with Alzheimer's disease 87 .
On inspecting Mammalian Phenotype Ontology (MPO) terms (www.informatics.jax.org/), 42 novel interactors were found to be associated with various morphological or physiological aspects of the brain in mice. For example, the novel interactor ITSN1 was associated with decreased brain size and abnormal morphology of the corpus callosum, hippocampal fimbria, hippocampal fornix, brain white matter and anterior commissure. These findings support the value of these novel interactions and GO term associations in understanding the crucial role played by cilia biology in neuropsychiatric disorders.
Overlap of cilia and neuropsychiatric disorder interactomes.
To examine the connection between cilia and neuropsychiatric disorders, we computed the overlap between their interactomes. We considered 7 neuropsychiatric disorders (NPDs), namely Attention Deficit Hyperactivity Disorder (ADHD), Major Depressive Disorder (MDD), schizophrenia, bipolar disorder, autism spectrum disorder, Alzheimer's disease and Parkinson's disease. We extracted the genes associated with each disorder from the GWAS catalog (www.ebi.ac.uk/gwas/) and then assembled disorder-specific interactomes with known PPIs from HPRD and BioGRID. We then computed how closely connected the cilia genes are to NPD genes by counting how many genes or interactors were shared between the cilia interactome and each NPD interactome. This analysis showed the overlap to be statistically significant (Table 3). For example, the cilia interactome has an overlap of 88 genes with the ADHD interactome (p value = 1.2E−16), of which 17 are novel interactors of cilia. Similar comparisons with other NPDs also showed significant overlaps, as shown in Table 3.
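Overlap significance of this kind is typically assessed with a hypergeometric (one-tailed Fisher) test. The sketch below shows the computation in Python; the universe and set sizes are hypothetical placeholders, since the exact background gene counts are not restated here.

```python
from scipy.stats import hypergeom

def overlap_pvalue(n_universe: int, n_set_a: int, n_set_b: int, n_overlap: int) -> float:
    """P(overlap >= observed) when set B is drawn at random from the universe."""
    # sf(k - 1) gives P(X >= k) for the hypergeometric distribution
    return hypergeom.sf(n_overlap - 1, n_universe, n_set_a, n_set_b)

# Hypothetical sizes for illustration only (not the study's actual counts):
p = overlap_pvalue(n_universe=20000, n_set_a=1500, n_set_b=900, n_overlap=88)
print(f"p = {p:.2e}")
```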
Overlap of cilia interactome with genes differentially expressed in neuropsychiatric disorders. 965 genes in the cilia interactome were found to be expressed (transcripts per million ≥ 2) in several brain regions, including the amygdala, anterior cingulate cortex, caudate, cerebellum, frontal cortex, hippocampus, hypothalamus, nucleus accumbens, putamen, spinal cord and substantia nigra, based on GTEx RNA-Seq data 51 (p value = 3.93E−58). Novel interactors of cilia genes were highly statistically enriched among these genes expressed in the human brain (p value = 8.14E−09). We then computed the overlap of genes differentially expressed in neuropsychiatric disorders with the genes in the cilia interactome. We analyzed gene expression datasets of MDD (GSE53987) 44 , schizophrenia (GSE17612) 45 , bipolar disorder (GSE12679) 46 , autism spectrum disorder (GSE18123) 47 , Alzheimer's disease (GSE29378) 48 , Parkinson's disease (GSE28894) and non-syndromic intellectual disability (GSE39326) 50 . The analysis showed the overlap to be statistically significant (Table 3). For example, the cilia interactome has an overlap of 106 genes with genes differentially expressed in the Alzheimer's disease dataset (p value = 4.7E−05), of which 46 are novel interactors of cilia.
Cilia and nervous system drug targets. Given the strong connection between the cilia interactome and neuronal pathways, we tested the possibility of repurposing drugs targeting proteins in the cilia interactome for treating neurological disorders. Identifying new uses for existing drugs shortens the time of drug discovery and approval 88 . For example, the drug amantadine, used to treat influenza infection, was successfully repurposed to treat dyskinesia and Parkinson's disease 88 . This analysis identified 548 drugs targeting 184 genes in the cilia interactome. These fall into 3 major Anatomical Therapeutic Chemical (ATC) classification system categories: the nervous system with 99 drugs, the respiratory system with 102 drugs, and the cardiovascular system with 98 drugs (Fig. 2, Supplementary File 4). This finding points to therapeutics targeting cilia interactome proteins as a potential novel strategy for treating neurological disorders.
Overall, 76 nervous system drugs targeted 7 novel interactors: HRH1, SLC6A2, CHRNA9, NQO2, ORM1, CACNA1I and CACNA1G. 57 drugs targeting 22 genes in the interactome are used in the treatment of at least one among the following neurological disorders-Parkinson's disease, Alzheimer's disease, attention deficit hyperactivity disorder (ADHD), major depressive disorder (MDD), autism spectrum disorder, schizophrenia and bipolar disorder-out of which 35 drugs target 6 novel interactors, namely CACNA1G, CACNA1I, CHRNA9, HRH1, SLC6A2 and ORM1. 10 out of these 57 drugs targeted cilia genes as well as known and novel interactors of cilia genes: asenapine, chlorpromazine, clozapine, loxapine and paliperidone are schizophrenia drugs, olanzapine is used in the treatment of Alzheimer's disease and schizophrenia, amphetamine in ADHD, imipramine in ADHD and MDD, mirtazapine in MDD and nortriptyline in schizophrenia, ADHD, MDD and bipolar disorder.
Among other novel interactors targeted by nervous system drugs is SLC6A2, which is involved in neurotransmission and is associated with ADHD 89,90 . SLC6A2 interacts with RPGRIP1L, a ciliary protein whose mutations cause Joubert syndrome and MKS and which has been associated with bipolar disorder 91,92 . The novel interactors CACNA1I and CACNA1G targeted by nervous system drugs are calcium channels known to be associated with Alzheimer's disease and schizophrenia, respectively 93,94 . These novel interactors that are drug targets may have a significant impact on the nervous system and the pathogenesis of neurological disorders.
In an independent study, we proposed that the drug acetazolamide, which targets the genes CA2 and CA4 (having known interactions with the cilia genes DYNLL1 and CDK3, respectively), may be repurposed for schizophrenia, based on the negative correlation of drug-induced versus disease-associated gene expression profiles and other biological evidence 95 . Acetazolamide is currently under consideration for clinical trial funding. Several cancer drugs with reported effects on ciliogenesis target known and novel interactors in the cilia interactome. Vinblastine, targeting JUN, a known interactor of BBS7 and TSG101, and TUBB, a known interactor of NPHP1 and DYNLL1, inhibits cilia regeneration in partially deciliated Tetrahymena (a unicellular ciliate) 96 . Valproic acid, targeting HDAC9, a known interactor of PKD1, restores ciliogenesis in pancreatic ductal adenocarcinoma cells 97 . Gefitinib, targeting EGFR, a known interactor of PDCD6IP, inhibits the smoking-induced loss of ciliated cells in the airway 98 . Gefitinib also increases the percentage of ciliated cells in human pancreatic cancer cell lines 99 . Geldanamycin, targeting HSP90AB1, a novel interactor of CETN3, induces lengthening of cilia in 3T3-L1, a fibroblast cell line 100 .
Conclusion
We identified novel PPIs of cilia proteins and their associated pathways, their enriched Gene Ontology term associations, and drugs that target the interactors. This cilia interactome analysis reveals a link between cilia function, neuronal function and neurological disorders. We also demonstrated the interconnections of Alzheimer's disease, cilia and aging genes. The predicted interactions will have to be validated at the level of network perturbations in the disease state by comparing neuropsychiatric patients with healthy controls. However, one has to be aware of a few caveats while studying the role of ciliary genes in neuropsychiatric disorders (NPDs). Association of a ciliary gene with an NPD can be unequivocally ascertained only if this association is discovered within the ciliary compartment in the context of the particular NPD, i.e. a mechanistic link between ciliary function and the disorder has to be demonstrated. It may not be a true association if a ciliary gene was shown to be associated with an NPD in a cellular context not connected with cilia; a protein may perform its function at different subcellular locations. Mapping the interactome of cilia genes will be useful in carrying out network-based systems biology studies, which will help elucidate the contribution of these novel PPIs to nervous system disease pathology as well as to develop novel therapeutics for these disorders.
Data availability
We will make the cilia interactome publicly available on our web application Wiki-Pi 101 . Novel PPIs will be highlighted in yellow on the website. The numbers of novel and known PPIs of the cilia genes are given in Supplementary File 1. The interactome network diagram shown in Fig. 1 is also being made available in PDF format and in Cytoscape file format as Supplementary File 6 and Supplementary File 7, respectively. The PDF file is suitable for high-resolution printing and for electronically searching for specific genes, while the Cytoscape file allows further processing and data analysis. Wiki-Pi allows users to search for interactions by specifying biomedical associations of one or both proteins involved. Thus, queries can be customized to include/exclude gene symbol, gene name, GO annotations, diseases, drugs, and/or pathways for either gene involved in an interaction. For example, researchers can search for interactions by giving at least one cilia gene and a pathway of interest, say "IFT20 interactions where the interactor is involved in immunity"; this query would match 5 PPIs out of a total of 19 PPIs of IFT20. Another example is the search "find interactions where one protein's annotation contains the word ciliary and the other protein's annotation contains the word neuronal". The search returns 353 PPIs, out of which 13 are novel PPIs.
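For readers working with a flat export of the interactome rather than the Wiki-Pi web interface, the same kind of annotation query can be approximated offline. The sketch below is a hypothetical Python example; the file name and column layout are assumptions, not the actual Wiki-Pi export format.

```python
import pandas as pd

# Assumed flat export: one row per PPI, with concatenated annotation text
# per partner. The column names used here are hypothetical.
ppis = pd.read_csv("cilia_interactome.csv")  # gene_a, gene_b, ann_a, ann_b, novel

def contains(col: pd.Series, word: str) -> pd.Series:
    return col.str.contains(word, case=False, na=False)

# "one protein's annotation contains 'ciliary' and the other's contains 'neuronal'"
mask = (contains(ppis["ann_a"], "ciliary") & contains(ppis["ann_b"], "neuronal")) | \
       (contains(ppis["ann_a"], "neuronal") & contains(ppis["ann_b"], "ciliary"))

hits = ppis[mask]
print(len(hits), "PPIs match;", int(hits["novel"].sum()), "of them are novel")
```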
MEMS Based Deep 1D Photonic Crystal
Introduction
Since Bragg layers, also referred to as 1D photonic crystals, lie at the core of many optical devices, this chapter is devoted to the theory underlying the design of multilayered structures [Macleod 2001]. The corresponding analytical model is explained in detail in section 2, followed in the subsequent sections by various design examples for the sake of illustration.
Of special interest are the Silicon-Air Bragg mirrors obtained by DRIE micromachining. They are considered an important building block leading to a wide variety of applications. First, we elaborate on the use of this building block in resonant cavities and in interferometers (section 3). Then, we apply the multilayered stack theory to a case study of a special structure, the mode selector (section 3.7). Finally, we conclude this work by highlighting an advanced architecture of 1D photonic crystals based on curved Bragg mirrors.
Theory and modeling of Bragg reflectors
Under specific conditions, a stack of multiple layers gives rise to nearly perfect optical reflectance, approaching 100 %, compared with the reflectance from a single interface. This is the main characteristic that drives interest in such structures, called Bragg reflectors or Bragg mirrors. This phenomenon of enhanced reflectivity may be explained by the fact that the presence of two (or more) interfaces means that a number of light beams are produced by successive reflections, which may interfere constructively (or destructively, when considering anti-reflective surfaces); the properties of the multilayered film are then determined by the summation of these beams. This is the case in thin film assemblies. In thick assemblies, however, the latter phenomenon does not take place. Before going into the analytical details, we differentiate between thin and thick films. We say that a film is thin when interference effects can be detected in the reflected or transmitted light, that is, when the path difference between the beams is less than the coherence length of the light, and thick when the path difference is greater than the coherence length. Note that no interference can be observed when light absorption dominates within the film, even in the case of thin films. The same film can appear thin or thick depending entirely on the illumination conditions. The thick case can be shown to be identical to the thin case integrated over a sufficiently wide wavelength range or a sufficiently large range of angles of incidence. Normally, we find that the films on the substrates can be treated as thin, while the substrates supporting the films can be considered thick.
In the upcoming treatment, we derive a generalized analytical model applicable to an absorbing thin film assembly. The obtained result applies equally well to non-absorbing films.
Let us consider the arrangement shown in Fig. 1, where we denote positive-going waves by the symbol + and negative-going waves by the symbol −. Applying the boundary conditions on the electromagnetic field components at interface B (chosen as the origin of the z-axis), continuity of the tangential components of the electric field gives ($E_b$ being the tangential component of the resultant electric field):

$E_b = E_b^{+} + E_b^{-}$   (1)

Continuity of the tangential components of the magnetic field gives ($H_b$ being the tangential component of the resultant magnetic field and $\eta$ the optical admittance of the medium):

$H_b = \eta E_b^{+} - \eta E_b^{-}$   (2)

The negative sign in (2) comes from the convention used for the field propagation direction, such that the right-hand rule relating E, H and K (the wave vector, along the propagation direction) is always satisfied. In writing equations (1) and (2), we assume that common phase factors have been omitted, and that the substrate is thick enough that no field is reflected back from it.
Then the admittance of the thin film assembly is

$Y = C/B$, where $\begin{pmatrix} B \\ C \end{pmatrix} = [M]\begin{pmatrix} 1 \\ \eta_{sub} \end{pmatrix}$

with $[M]$ the characteristic matrix of the assembly and $\eta_{sub}$ the substrate admittance. The amplitude reflection coefficient and the reflectance are then given by

$\rho = \dfrac{\eta_0 - Y}{\eta_0 + Y}, \qquad R = |\rho|^2$

where $\eta_0$ is the admittance of the incident medium. In the case of an absorbing medium, $N_r$ is complex and, in general, can be expressed as

$N_r = n - ik$
Literature survey
Many groups have worked on the realisation of Silicon-Air Bragg reflectors as basic building blocks in Fabry-Perot (FP) cavities as well as in Michelson interferometers. In Fabry-Perot cavities, high-reflectance Bragg mirrors are used to achieve a high quality factor Q at the corresponding resonant wavelengths. The use of silicon restricts the wavelength range to the infra-red region. At the same time, light coupling using optical fibers is facilitated by the microfabrication of U-grooves that support the fibers with pre-alignment capability. Among the groups working on this topic, we can cite [Lipson & Yeatman 2007] [Saadany et al. 2006]; the best recorded Q was 1291 for an FP structure working as a notch filter. More recently, the performance was improved using Bragg mirrors of cylindrical shape combined with a fiber rod lens, leading to Q = 8818 on quite large cavities exceeding L = 250 µm [Malak et al. APL 2011], a previously unreached value for the figure of merit Q·L, which is of primary importance for cavity-enhancement applications. Table 1 summarizes the specifications of the different designs discussed above.
Fabrication technology for Si-Air Bragg reflectors (for MEMS and for fixed structures)
In this section, the basic steps of the fabrication process for MEMS structures involving Bragg layers are highlighted. Many techniques can be used to produce vertical structures on a silicon substrate, as mentioned in [Lipson & Yeatman 2005], [Yun et al. 2006] and [Song et al 2007]. They are based on either dry etching or wet etching of silicon using KOH. The process described here and shown in Fig. 3 pertains to the (optional) integration of MEMS structures together with the Bragg mirrors using dry etching.
Table 1. Summary of the specifications for state-of-the-art FP cavities

Starting from a raw SOI wafer, we proceed by thermally oxidizing the whole wafer. In the next step, photoresist (PR) is spread over the entire wafer, where it acts as a mask for photolithography. The PR is then patterned by UV exposure through the DRIE layout mask. Since the PR is a positive type, the areas exposed to UV become soluble and are removed in the development step, while the non-exposed areas harden and remain. The hardened PR then acts as a protection mask for the originally oxidized silicon, which is patterned using either Reactive Ion Etching (RIE) or Buffered HydroFluoric acid (BHF). The role of the PR ends here, and it is completely removed from the wafer.
The fabrication process continues with metal deposition over the whole wafer. The metal is patterned by photolithography using the frontside metal layout mask and then etched. In the next step, metal is deposited on the backside of the wafer, where it is patterned by the backside layout mask and then etched. We then turn again to the front side to perform Deep Reactive Ion Etching (DRIE) of the silicon structure layer [Marty et al. 2005]. At this level, both the oxide and the aluminum serve as mask materials for silicon etching by DRIE.
Processing the backside again, DRIE is performed on the backside; in this case, only the aluminum serves as a mask material for silicon etching by DRIE. The process ends by releasing the MEMS structure, in which step the insulating oxide is removed by vapor HF.
For the fixed structures involving Bragg mirrors presented in this research work, the process differs from the one detailed above. In the next paragraph, we therefore highlight the fabrication process used for the realization of the fixed structures, as shown in Fig. 4.
Starting with an ordinary silicon wafer, a thermal oxidation process is carried out on both sides of the wafer to achieve an oxide thickness of 1.7 µm. Next, PR, used as a mask for photolithography, is deposited over the entire wafer. This step is followed by the photolithography of the DRIE mask for the front side, and the PR is patterned accordingly. The following step is plasma etching of the oxide. This photolithography ends with PR removal. Then, we start processing the back side by depositing aluminium. Next, we pattern the aluminium mask by photolithography using the back side layout mask. Then, we proceed with DRIE etching over 300 µm for the back side, and the process ends with DRIE etching of the front side over 100 µm. Note that all steps performed on the backside are optional, depending on the nature of the target device.
Modeling and simulation of planar Bragg mirror reflectors
Based on the analytical model presented in section 2, if we have a single layer whose optical thickness is an odd number of quarter wavelengths, the characteristic matrix of the layer [M] becomes

$[M] = \pm\begin{pmatrix} 0 & i/\eta \\ i\eta & 0 \end{pmatrix}$   (41)

the sign depending on the odd multiple. So, if we stack a combination of several layers, alternately of high refractive index, denoted H, and low refractive index, denoted L, whose thicknesses are odd numbers of λ/4 (where λ is the wavelength in the corresponding medium), we can construct a high-reflectance mirror, named a Bragg mirror. In the particular case where we stack a combination of five quarter-wave layers that are all different, mathematical manipulation of the equivalent characteristic matrix yields an equivalent admittance for the assembly

$Y = \dfrac{\eta_1^2\,\eta_3^2\,\eta_5^2}{\eta_2^2\,\eta_4^2\,\eta_{sub}}$

where the definitions presented previously are kept unchanged.
For m = 0, and taking the same indices η_H and η_L for all high and low layers, this reduces to the classic quarter-wave stack result. In general, for p pairs of HL layers (terminated by a final H layer) with the same odd multiple m and centre wavelength λ0, we can write

$Y = \left(\dfrac{\eta_H}{\eta_L}\right)^{2p}\dfrac{\eta_H^2}{\eta_{sub}}, \qquad R = \left(\dfrac{\eta_0 - Y}{\eta_0 + Y}\right)^2$   (44)

Based on these derivations, we built a MATLAB code to design the Bragg mirrors. In this comprehensive study, we focus mainly on the impact of the number of Bragg layers, the layer thickness and the technological errors on the reflectance and transmittance of the Bragg mirror. In all the upcoming results, we consider absorption-free layers with a silicon refractive index n_Si = 3.478 and an air refractive index n_air = 1. For the simulation results shown in Fig. 5, we choose a silicon thickness of 3.67 µm (33λ_Si/4) and an air thickness of 3.49 µm (9λ_air/4), because they are relatively easy to obtain with the available fabrication technology (compared with the single quarter wavelengths λ_Si/4 = 0.111 µm and λ_air/4 = 0.388 µm at the communication wavelength λ = 1550 nm). We notice that the reflectance in the mid-band increases as the number of layers increases, in accordance with relation (44). In fact, the reflectance increases from 71.8 % (single Si layer) up to 99.98 % (4 Si layers) as the number of HL pairs increases from one to four. Also, the mirror response becomes sharper, and its bandwidth (BW) decreases as the number of layers increases. In the case of a single layer, the BW is about 65 nm, and it goes down to 58 nm as the number of HL pairs increases to 4.
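The calculation behind these curves can be reproduced with a few lines of code. Below is a minimal transfer-matrix sketch in Python rather than MATLAB, assuming normal incidence, lossless layers and air as both incident and exit medium; it illustrates the characteristic-matrix method of section 2 and is not the authors' original code.

```python
import numpy as np

N_SI, N_AIR = 3.478, 1.0            # lossless refractive indices, as in the text
D_SI, D_AIR = 3.67e-6, 3.49e-6      # ~33*lambda_Si/4 and ~9*lambda_air/4 at 1550 nm

def char_matrix(n, d, lam):
    """Characteristic matrix of one dielectric layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam                     # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(lam, n_si_layers):
    """R of a free-standing Si/air stack with n_si_layers silicon walls."""
    M = np.eye(2, dtype=complex)
    for i in range(2 * n_si_layers - 1):                # Si, air, Si, ..., Si
        n, d = (N_SI, D_SI) if i % 2 == 0 else (N_AIR, D_AIR)
        M = M @ char_matrix(n, d, lam)
    B, C = M @ np.array([1.0, N_AIR])                   # exit medium: air
    rho = (N_AIR * B - C) / (N_AIR * B + C)             # incident medium: air
    return abs(rho) ** 2

for walls in (1, 2, 3, 4):
    print(f"{walls} Si layer(s): R = {100 * reflectance(1.55e-6, walls):.2f} %")
```

Sweeping the wavelength over 1500-1600 nm reproduces the bandwidth behaviour discussed above; the mid-band reflectances come out close to the quoted values, with small differences arising from the rounded layer thicknesses.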
In the next simulation, we study the impact of the silicon layer thickness on the mirror bandwidth. For this purpose, we consider 4 HL pairs with a fixed air layer thickness of 3.49 µm (9λ_air/4), while the thickness of the silicon layers is increased from λ_Si/4 up to 25λ_Si/4 in steps of 2λ_Si. The simulation results, depicted in Fig. 6, show that the mirror BW decreases as the thickness of the silicon layers increases. For a silicon thickness of λ_Si/4, the 3 dB BW = 238 nm, and it decreases to 73 nm at a thickness of 25λ_Si/4. If, on the other hand, we fix the thickness of the silicon layers to 33λ_Si/4 for the same 4 HL pairs and increase the thickness of the air layers from λ_air/4 up to 13λ_air/4 in steps of λ_air, a similar effect is noticed but on a smaller BW scale, since the BW decreases from 65 nm at an L thickness of λ_air/4 to 55 nm at an L thickness of 13λ_air/4. The corresponding results are shown in Fig. 7. Comparing the two results, we can say that the H-layer thickness has a more pronounced effect on the bandwidth than the L-layer thickness. Good control of the H thickness can give rise to Bragg mirrors with large BW.
Another point of interest for the Bragg mirror is the technological error. The critical dimension, defined as the minimum feature size on the technology mask, cannot be maintained exactly as drawn in the original design, and this translates into altered layer thicknesses on the fabricated device. In fact, the thickness of the silicon layer may vary (increase or decrease) and the air layer follows the opposite trend (decreases or increases). The device performance then degrades. This issue is obvious in Fig. 8, where various error values are introduced into the original mirror design. We notice that the overall response shifts toward shorter wavelengths as the error decreases from 100 nm to -100 nm in steps of 50 nm. Comparing the obtained responses to the error-free design, we see that the mirror reflectance might drop from 99.98 % ideally to 0.6 % for an introduced error of ±100 nm, which means that the multilayered designs are not tolerant to fabrication errors exceeding 50 nm.
Modeling and simulation of FP cavities based on Bragg mirrors
If, instead of the stack forming a high-reflectance mirror, we introduce a gap layer whose thickness is an integer number of half wavelengths, the characteristic matrix [M] of this layer becomes the (signed) unity matrix

$[M] = (-1)^m\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

Thus, we can easily obtain a Fabry-Perot (FP) resonator if we combine two quarter-wave stacks acting as high-reflectance mirrors, separated by a gap layer a half wavelength thick.
In the next part, we illustrate, with the help of MATLAB simulations, the properties of such FP resonators, studying the impact of several parameters on the resonator spectral response. Parameters of particular interest in this comprehensive study are: the mirror reflectance, controlled by the number of Bragg layers per mirror; the impact of technological errors; and the cavity gap length. In what follows, unless otherwise stated, we consider silicon Bragg layers of thickness 3.67 µm (33λ_Si/4), air Bragg layers of thickness 3.49 µm (9λ_air/4) and a gap layer of width 10.075 µm (13λ/2 at λ = 1550 nm). The silicon refractive index n_Si is taken as 3.478, and all the layers are considered absorption-free.
We start our study by gradually increasing the number of Bragg layers. As shown in Fig. 9, we found that the FWHM of the resonator decreases from 7.6 nm for a single Si layer/mirror, to 0.56 nm for a double Si layer/mirror, to 0.046 nm for 3 Si layers/mirror, and finally to 0.004 nm for 4 Si layers/mirror. This is due to the increase in the mirror reflectance, which goes from 71.8 % for a single layer to 99.98 % for 4 Si layers. Also, the contrast improves, with the minimum level going from -10 dB down to -70 dB, and the resonator sharpness improves as well.
Now, if we consider the case of 4 Si layers/mirror with introduced errors (ε) therein, we obtain the curves shown in Fig. 10. We notice that the central wavelength λ0 shifts from 1550 nm by ±8.5 nm for ε = ±50 nm. For ε = ±100 nm, λ0 shifts by 18.15 nm. In addition, the FWHM of the peak increases from 0.004 nm for the error-free case to 0.007 nm for ε = ±50 nm, and it reaches 0.029 nm for ε = ±100 nm. This might be explained by reference to the previous simulations carried out on Bragg mirrors with introduced errors. As mentioned earlier, the overall response of the mirror shifts right (left) as the error increases (decreases), and this is the reason underlying the shift in the resonance wavelength. In addition, the maximum reflectance of the mirror decreases from 99.98 % (in the error-free case) to 99.97 % (for ±50 nm error) to 99.93 % (for ±100 nm); that is why the FWHM increases.
By scanning over the wavelength for the cases of ε = -50 nm and ε = -100 nm, we notice that other resonance peaks, with larger FWHM and reduced contrast, appear in the spectral response of the cavity. This result is unexpected, and it does not agree with the designed FSR of the error-free cavity. In fact, the designed cavity gap length of 10.075 µm corresponds to a quasi-FSR of 119.2 nm and a resonance wavelength of 1550 nm.
This issue can be explained by examining the reflection response of the Bragg mirrors with introduced errors, shown in Fig. 11: we find that they are shifted compared with the error-free design. Moreover, they exhibit a non-negligible reflectance between 1575 nm and 1600 nm, so the design still performs as a resonator in this range.
Analyzing the simulation results, we arrive at a new definition for the cavity length, termed the effective length L_eff. This new parameter suggests that the effective reflecting interfaces of the resonator lie inside the Bragg reflectors, and not at the inner interfaces as conventionally thought, which gives rise to unexpected resonances within the quasi-FSR. Making inverse calculations for the simulation results shown in Fig. 12, we find that for ε = -50 nm, the FSR = 52.15 nm, corresponding to L_eff = 23 µm, and for ε = -100 nm, the FSR = 47.7 nm, corresponding to L_eff = 25.18 µm.
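As a quick check on these inverse calculations (a minimal sketch assuming an air-filled cavity, n = 1, and neglecting mirror dispersion), the effective length follows directly from the standard FSR relation:

$$\mathrm{FSR} = \frac{\lambda_0^2}{2\,n\,L_{\mathrm{eff}}} \quad\Longrightarrow\quad L_{\mathrm{eff}} = \frac{\lambda_0^2}{2\,\mathrm{FSR}}$$

$$L_{\mathrm{eff}} = \frac{(1550\ \mathrm{nm})^2}{2 \times 52.15\ \mathrm{nm}} \approx 23.0\ \mu\mathrm{m}, \qquad \frac{(1550\ \mathrm{nm})^2}{2 \times 47.7\ \mathrm{nm}} \approx 25.2\ \mu\mathrm{m}$$

The same relation applied to the quasi-FSR of 119.2 nm returns a length of about 10.08 µm, consistent with the geometric gap of the error-free design.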
Multilayered Si-Air structures for anti-reflection purposes
Antireflection surfaces (usually obtained through additional material coatings) can also be obtained from silicon micromachined Bragg structures. They can range from a simple single layer, having virtually zero reflectance at just one wavelength, to a multilayer system of more than a dozen layers, having ideally zero reflectance over a range of several decades. The type used in any particular application will depend on a variety of factors, including the substrate material, the wavelength region, the required performance and, of course, the cost. There is no systematic approach to the design of antireflection coatings. Trial and error assisted by approximate techniques and by accurate computer calculation is frequently employed. Very promising designs can be further improved by computer refinement. Several different approaches can be used in designing AR coatings. In this section, we limit our discussion to the single-layer design only. Complicated analytical formulas can be derived for the case of multilayer coatings, but they lie outside the scope of this work and will not be presented.
The vast majority of antireflection coatings are required for matching an optical element into air. The simplest form of antireflection coating is a single layer. Consider Fig. 13. Since two interfaces are involved, we have two reflected rays, each representing the amplitude reflection coefficient at an interface. If the incident medium is air then, provided the index of the film is lower than the index of the substrate, the reflection coefficient at each interface will be negative, denoting a phase change of 180°. The resultant minimum is at the wavelength for which the phase thickness of the layer is 90°, that is, a quarter-wave optical thickness, when the two rays are completely opposed. Complete cancellation at this wavelength, that is, zero reflectance, will occur if the two rays are of equal amplitude. This condition, in the notation of Fig. 13, is

$y_1 = \sqrt{y_0\, y_m}$

The condition for a perfect single-layer antireflection coating is, therefore, a quarter-wave optical thickness of material with optical admittance equal to the square root of the product of the admittances of substrate and medium. It is seldom possible to find a material with exactly the optical admittance that is required. If there is a small error ε in y_1 such that

$y_1 = \sqrt{y_0\, y_m}\,(1 + \varepsilon)$

then the residual reflectance is $R \approx \varepsilon^2$, provided that ε is small. A 10 % error in y_1, therefore, leads to a residual reflectance of 1 %.
Zinc sulphide has an index of around 2.2 at 2 µm. It has sufficient transparency for use as a quarter-wave antireflection coating over the range 0.4-25 µm. Germanium, silicon, gallium arsenide, indium arsenide and indium antimonide can all be treated satisfactorily with a single layer of zinc sulphide. There is thus no room for manoeuvre in the design of a single-layer coating.
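As an illustrative calculation (assuming lossless indices at normal incidence, with the admittance y numerically equal to the refractive index), a quarter-wave zinc sulphide layer (y1 ≈ 2.2) on silicon (ym ≈ 3.478) in air (y0 = 1) gives

$$R = \left(\frac{y_0 - y_1^2/y_m}{y_0 + y_1^2/y_m}\right)^2 = \left(\frac{1 - 4.84/3.478}{1 + 4.84/3.478}\right)^2 \approx 2.7\ \%$$

compared with about 30.6 % for bare silicon, even though y1 ≈ 2.2 is noticeably above the ideal value $\sqrt{y_0 y_m} \approx 1.86$.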
In practice, the refractive index is not a parameter that can be varied at will. Materials suitable for use as thin films are limited in number, and the designer has to use what is available. A better approach, therefore, is to use more layers, specifying obtainable refractive indices for all layers at the start, and to achieve zero reflectance by varying the thicknesses. There is also the limitation that the single-layer coating can give zero reflectance at one wavelength only, and low reflectance over a narrow region. A wider region of high performance demands additional layers.
Tilted FP cavity as a notch filter
In this part, we focus on another interesting application of devices based on Bragg structures. In particular, we study an FP cavity based on multilayered mirrors, but under oblique incidence. The device design differs from the normal-incidence case, since the rays propagate obliquely in the layers, and the optical thicknesses of both the silicon and the air layers must be calculated differently. In this case, we must ensure that the phase thickness δ = mπ/2 (m odd) to obtain the same matrix as in equation (41); we then solve the problem inversely to get the corresponding thicknesses H(L) = d_Si(Air):

$d_{Si(Air)} = \dfrac{m\,\lambda}{4\, n_{Si(Air)} \cos\theta_{Si(Air)}}$   (52)

where θ is the refraction angle in the corresponding layer. Using equation (52), we consider H = d_Si = 3.76 µm using the odd multiple m = 33, and L = d_Air = 3.84 µm using the odd multiple m = 7. In the upcoming simulations, we take the thicknesses of the HL layers as mentioned above. For the gap thickness G under oblique incidence, we have to satisfy the condition δ = mπ. Following the same analytical treatment as before, we get

$G = \dfrac{m\,\lambda}{2\, n_{air} \cos\theta_0}$

So, we consider G = 14.25 µm using the multiple m = 13.
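These thicknesses are easy to verify numerically. The following is a minimal sketch in Python, assuming a 45° angle of incidence in air and the lossless indices used throughout this chapter:

```python
import numpy as np

LAMBDA = 1550e-9                  # design wavelength (m)
N_AIR, N_SI = 1.0, 3.478
THETA0 = np.deg2rad(45)           # angle of incidence in air

def refraction_angle(n_in, n_out, theta_in):
    """Snell's law: propagation angle inside a layer of index n_out."""
    return np.arcsin(n_in * np.sin(theta_in) / n_out)

def layer_thickness(n, theta, m, quarter=True):
    """Thickness giving phase delta = m*pi/2 (quarter) or m*pi (half-wave gap)."""
    denom = 4 if quarter else 2
    return m * LAMBDA / (denom * n * np.cos(theta))

theta_si = refraction_angle(N_AIR, N_SI, THETA0)
d_si  = layer_thickness(N_SI, theta_si, 33)                 # -> ~3.76 um
d_air = layer_thickness(N_AIR, THETA0, 7)                   # -> ~3.84 um
g     = layer_thickness(N_AIR, THETA0, 13, quarter=False)   # -> ~14.25 um
print(f"d_Si = {d_si*1e6:.2f} um, d_air = {d_air*1e6:.2f} um, G = {g*1e6:.2f} um")
```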
The studied architecture consists of two stacks of tilted Bragg mirrors separated by an air gap layer. While the FP configuration with normal incidence works only in transmission, the tilted architecture, shown in Fig. 14, allows working either in transmission or in reflection.
In the tilted configuration, the FP behaves as a notch filter, suitable for dropping a particular wavelength. This is due to the 45° tilt angle of the cavity with respect to the incident light. Simulating a structure based on the parameters stated above, we obtain the results shown in Fig. 15 and Fig. 16. As expected, the FWHM of the filter decreases as the number of silicon layers per mirror increases, since this translates into higher reflectance. This device might have good potential in WDM systems, where it can be used as an add-drop multiplexer. It might also be of interest for applications involving tunable lasers, as will be detailed in the next section of this chapter. Simulation results show that the FWHM decreases from 4.5 nm for the single-silicon-layer design to 0.18 nm for the double-layer design, and further to 0.008 nm for the triple-layer design. Now, if we consider a tilted FP cavity with mirrors of HLH configuration but with different angles of incidence, we obtain a spectral response with a shift in the resonance wavelength, as illustrated in Fig. 17. Varying the angle of incidence by 0.5° around 45° results in a 9 nm shift of the resonance wavelength. Proper design of a rotational actuator integrated with the tilted cavity therefore suggests the use of the whole package as a MEMS tunable filter. The next section highlights the potential of the tilted FP cavity in a tunable laser source module.
A last point to mention about the tilted FP cavity is the sensitivity of the design to fabrication errors. Considering an HLH combination for both mirrors, and introducing errors from 100 nm down to -100 nm in steps of 50 nm, we notice from Fig. 18 that the resonance wavelength shifts by about ±7 nm for an error of ±50 nm. Also, the FWHM increases from 0.18 nm for the error-free design to 0.25 nm for an introduced error of 50 nm, reaching 0.55 nm for an introduced error of 100 nm. Thus, the structure is not very tolerant to fabrication errors, and the filter must be designed, fabricated and tested carefully before integration into optical systems.
Tilted FP cavity as a mode selector
By completing the architecture surrounding the tunable tilted FP cavity with an active laser cavity and an external mirror, we obtain a compact tunable laser, tuned by changing the angle of incidence on the tilted FP cavity. As mentioned above, the tuning might be achieved by rotating the tilted FP. Tilted FP cavities are of special interest, since they reject undesirable wavelengths off the optical axis. Therefore, they appear as interesting candidates for mode selection in external cavity tunable lasers. Indeed, as these types of lasers exhibit competition between several longitudinal modes, there is a need for a mode selection mechanism in order to obtain single-mode operation and avoid mode hopping during tuning. The main interest in using a tilted FP etalon rather than an FP cavity at normal incidence is to avoid parasitic reflections due to additional FP cavities that appear when adding the mode selector. Fig. 19a illustrates the principle of the mode selector based on a 45° tilted FP cavity. The corresponding simulated transmission is shown as well, which confirms the operation principle. It is worth mentioning that the performed simulation is very basic, since it does not take losses into account; in particular, plane waves are considered here rather than Gaussian beams. Figs. 19b and 19c illustrate the tuning behavior. Tuning is achieved either by rotating the cavity further or by controlling its gap g, as shown in Figs. 20a and 20b. A tuning range of 30 nm is obtained for a gap tuning of 150 nm. An increase in the separation distance L does not affect the peak position, as shown in Fig. 20c.
Advanced FP architecture
In this last section, we present two advanced architectures of FP cavities based on cylindrical 1D photonic crystals vertically etched in silicon. The first architecture is based on cylindrical Bragg mirrors that focus the light beam along one transverse plane. An SEM photo of a device based on a single silicon layer is presented in Fig. 21. The measured characteristics shown in Fig. 22 pertain to three different spacings between the injection fiber and the input mirror. Numerical modeling confirms the measurements and reveals that the device exhibits selective excitation of the transverse mode TEM20. For more details, the interested reader may refer to [Malak et al. Transducers 2011] [Malak et al. JMEMS 2011]. The second architecture aims to focus the light beam in both transverse planes, so as to also reduce the losses introduced by Gaussian beam expansion. For this purpose, the cylindrical Bragg mirror is combined with a fiber rod lens that focuses the light beam in the other transverse plane. Since this second architecture is not common, a stability model has been devised to enable the design of a stable resonator [Malak et al. JMST 2011]. A photo of the realized device and the corresponding response are shown in Fig. 23. This architecture provides a high quality factor (~9000) for a Bragg mirror based on four silicon layers. It has strong potential for spectroscopic applications.
Concluding remarks
1D photonic crystal structures have long attracted strong interest due to the application domains they touch. As outlined in this chapter, they constitute a basic building block in many devices, such as FP resonators and multilayered coatings. Their attractiveness comes from their easy design and modeling based on the multilayered stack theory, and from the affordable fabrication process, thanks to advances in fabrication, in particular the advance in the DRIE process, which has helped produce vertical Bragg mirrors on silicon. In this context, this chapter focused on specific issues concerning 1D photonic crystals: design and modeling.
Ulos as Batak Cultural Wisdom Towards World Heritage
Indonesia is a country rich in cultural heritage, customs, traditions, arts and local wisdom. As a country rich from Sabang to Merauke, Indonesia is a paradise for culture lovers and observers. Every year millions of tourists, both foreign and domestic, come to enjoy the natural beauty and unique cultural charm. The Indonesian nation is famous for being a pluralistic, heterogeneous nation. Its religious systems and sociocultural values share the same roots, but their implementation is determined and influenced by the cognition, perception and social environment of each ethnic group (Napitu et al, 2020). Our nation has various ethnic groups, cultures, religions and customs (traditions). According to Sitompul et al (2020), regions in Indonesia are mostly dominated by the Javanese and Batak tribes. All of this is reflected in the daily life of the Indonesian people, one example being a product of Batak culture, namely Ulos. Cultural heritage is divided into two categories: (1) intangible cultural heritage and (2) tangible cultural heritage. According to the PDSP of the Ministry of Education and Culture, intangible cultural heritage comprises all practices, representations, expressions, knowledge and skills, as well as the tools, objects, artifacts and cultural spaces recognized by a community or group. Examples of intangible cultural heritage are performing arts, traditional crafts, traditions and oral expressions, community customs, rituals, celebrations and knowledge. Meanwhile, tangible cultural heritage is heritage that can be sensed with the eyes and hands, for example various artifacts or sites such as temples, monuments and others.
Currently, 3 intangible cultural heritages from Indonesia have been registered as world heritage and recognized by UNESCO: Batik, Keris and Wayang. Batik is not just the process of making a pattern on a piece of white cloth, drawing it with wax and dipping it in dye until it is finished; there is a lot of knowledge intertwined in it, including its history, its spread throughout the archipelago, and the meanings of its many kinds of motifs. The same is the case with ulos, which has a sacred value in every Batak tribal procession. Ulos cloth embodies thought and high-quality artistry in its manufacturing process, because it is an ancestral heritage. According to J. Keuning in Saragih et al (2019), the Batak tribe is one in the principles of its civilization, but varies in its manifestation in material and spiritual life.
The sacred value of ulos is a picture of the inner world of the Batak people. Therefore, not all ulos can be used in everyday life. Ulos is part of traditional traditions and ceremonies, a symbol of an event, and a representation of the wearer's individual and social status. In the past, weaving ulos could not be done carelessly; during the process of making, or weaving, ulos there were restrictions (Sihombing, 2013). In the current era, the making of ulos takes two forms, namely manual weaving and machine weaving. Machines are used for larger and more efficient production of ulos, because the time required is shorter than for manual ulos woven by a partonun.
On October 17, 2014, ulos was designated as "Indonesia's intangible cultural heritage", as stipulated by the Minister of Education and Culture of Indonesia (Mohammad Nuh). Now every October 17th is celebrated as National Ulos Day; this is an important step in recognizing ancestral cultural heritage, as was done for Batik. After ulos was designated as an intangible cultural heritage of Indonesia, the next point of intersection concerns the Advancement of Indonesian Culture.
The procedures involved in submitting world heritage applications, which I have quoted from the page kebudayaan.kemendikbud.go.id, are: (1) recording, (2) determination, (3) efforts of communities and related institutions (government and non-government), (4) selection by the Ministry of Education and Culture with a special team, (5) selection to become a UNESCO nomination from Indonesia, (6) trials in Indonesia and file preparation, (7) submission of the proposal, and (8) completing the forms.
So far, the journey towards world heritage status in the category of intangible cultural heritage is still being pursued. There is still time each year to continue completing the requirements, and joint efforts are being made. Ongoing activities include research to enrich literacy about ulos, support for various ulos festivals, scientific meetings on the study of ulos, the virtual ulos fashion show held during this pandemic, and community empowerment for the welfare of ulos makers and suppliers.
If a cultural heritage becomes a world heritage with formal recognition, various forms of assistance emerge within the framework of its preservation. UNESCO is not only obliged to provide financial assistance, but also to monitor, protect and ensure that a culture will not end in extinction. Every culture that has historical value and extraordinary universal value has the right to receive the title and recognition of world heritage. That is what various parties are trying to achieve for ulos.
The method used in writing this paper is the descriptive method. The descriptive method is a research method aimed at describing existing phenomena (Sudjana, 2008: 317). The data collection technique was carried out through literature study. Data analysis includes four components, namely data collection, data reduction, data display, and data verification or conclusion drawing. The authors' reason for this study is to describe ulos as a Batak cultural identity, which has been established as an intangible cultural heritage of Indonesia and is on the path towards world heritage status, as well as to increase literacy about ulos.
Symbolic Interactionism Theory
In this theory, George Herbert Mead argues that meaning emerges from interaction between humans, both verbal and non-verbal. In Mead's description, interactions take place not only through movements but also through symbols whose meanings need to be understood. The three main concepts according to Mead which are integrated in this theory are society, self and mind. The essence of symbolic interaction, in Mulyana (2001: 68), is an activity that is characteristic of humans, namely communication or the exchange of symbols that are given meaning.
Ulos has many varieties, and each functions as a tool of symbolic interaction that has been agreed upon and understood by the Batak community. The meaning is not limited to the ulos alone: who gives the ulos and who receives it is itself an interaction with meaning and purpose. Pardosi (2008: 107) explains that the symbolic meaning of ulos generally consists of 3 parts: its thickness provides warmth of body and spirit for those who receive it; sitorop rambu (the many fringes at the end of the ulos) means obtaining many sons and daughters for those who receive it; and ganjang (long) means that the person who receives it will have a long life.
Ulos as a Batak Cultural Identity
National identity can use various symbols, such as language symbols and other cultural symbols. Symbol comes from the Greek "symballein", which means to throw (an object, an action) together, or "symbolos", which means a sign or feature that tells someone something. A symbol is a form that marks something other than the embodiment of the symbolic form itself. "A symbol is a sign which refers to the object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the symbol to be interpreted as referring to that object" (Putri, 2010: 5). In this case, ulos is a symbol used by the Batak community in conveying prayers and as a symbol of affection for the recipient of the ulos.
Society and culture give birth to a cultural identity of the community itself, a cultural identity which later becomes the identity of the nation. As Tilaar (2007: 37) writes, national identity is a comprehensive picture of a nation, including the Indonesian nation. The whole of the social values recognized by agreement among the Indonesian people is called the identity of the Indonesian nation. The Batak community has an inseparable cultural identity, namely ulos, which has finally been recognized as part of the identity of the Indonesian nation. This can be seen in the designation of ulos as an intangible heritage of Indonesia on October 17, 2014, stipulated by the Indonesian Minister of Education and Culture Decree Number 270/P/2014, dated October 8, 2014.
In its socio-historical context, ulos has long been part of the life of the Batak people. "Ulos is a piece of Batak woven cloth with a certain pattern and size where the ends hang long. This cloth originally served to protect the body and was always woven by women using cotton" (Niessen, 1993: 51). In the original language, ulos means cloth, because in the beginning ulos was used as a wrapper or body warmer. In its development, ulos came to be used as part of the implementation of traditional ceremonies. This sacred object is a symbol of blessing, affection and unity, as in the writing of Niessen (2009: 63), which reads "Ijuk pangihot ni hodong, ulos pangihot ni holong", meaning that if palm fiber is the binder of the midrib to the stem, ulos is the binder of affection between fellow human beings.
According to the beliefs of the Batak ancestors, there are three sources that provide heat (warmth) to humans: the sun, fire and ulos (Marpaung, 2015). The sun rises and sets by itself all the time. A fire can be lit at any time, but it is not practical for warming the body; for example, the fire must be tended at all times, so that sleep is disturbed. This is not the case with ulos, which is very practical to use. Of the three sources of warmth, ulos is considered the most comfortable and the most familiar in everyday life. In ancient times, the ancestors of the Batak tribe were mountain people (their historical designation). Inhabiting the highlands meant that they had to be prepared to fight against the chill of the weather. This is where the history of ulos begins.
In the beginning, ulos were made only for a family's own needs, so that almost every family could weave ulos. With the materials available nearby, namely cotton or hemp thread, ulos is woven with a very simple tool that is moved with both hands and feet. The process of making ulos does not involve a special ceremony, but because of its sacred use, the way it is made is tied to a predetermined procedure. Producing a sheet of ulos can take weeks or even months, depending on the difficulty of the ulos to be woven. Weaving work requires patience, perseverance, a sense of art, and even a sense of devotion (Siregar, 2017: 2). First, the cotton is spun into thread and the thread is wound into rolls. The next step is weaving, called martonun in the local language, in which the threads are worked on a wooden loom. Parts of the loom include the hasoli, a roll on a stick about 30 cm long, and the turak, a tool used to insert threads through the gaps between the weaving threads; the turak is made of small bamboo, like a flute, with a hasoli as filling.
The hatudungan is a tool for loosening the weave so that the turak can be inserted; the baliga is a tool made from palm tree trunks, used to tighten the threads that have been inserted by pressing several times. The pamunggung is a tool in the form of an arrow, with a rope on the right and left to pull when weaving. The parts of the loom form an integral whole that cannot be separated during the weaving process.
A sheet of ulos requires thousands of threads of different colors, each of which has been wound onto a hasoli. The hasoli then go into the turak, and the turak passes in and out between the threads that have been stretched to form the ulos. The process of working the ulos continues until the stretched threads gradually turn into a piece of cloth. During the weaving period, the weaver's body is tied to the weaving equipment, so that she cannot move freely. Usually the loom is untied when the weaver wants to take a break or do other work. The weaver's persistence determines whether or not an ulos is completed. Mangulosi is a traditional activity that is very important for the Batak people. Quoting Agustina (2016), in every activity such as wedding ceremonies, births and mourning, ulos is always part of the tradition. The use of ulos in traditional activities has not changed, as with the ulos ragi hotang. The ulos ragi hotang is usually used in a traditional party, given to a newly married couple in the hope that they will bond (Niessen, 1993: 102).
Apart from customary use, ulos is also used in this era of modernization. Ulos is an attraction for fashion designers, who use it as the main material in fashion shows. In this context, ulos is also made on machine looms, allowing ulos to be mass produced by machine and printed with textile dyes. This is one of the steps taken to preserve the typical Batak cloth. Beyond fashion, in a pandemic like the current one, masks with ulos motifs have emerged, allowing people to remain fashionable while staying cultured. Ulos has become part of Indonesian culture; the times and people's concern have made it known to the wider community and even worldwide (Mulyadi, 2016).
From the explanations above, the writer draws two main interpretations: 1) ulos as cloth used for daily clothing, which serves only the purpose of preservation and has no important role in traditional ceremonies; and 2) ulos as traditional cloth (ulos adat) for the official activities of the Batak community and for Batak traditional ceremonies, where each ulos also carries its own meaning.
Ulos as an Intangible Cultural Heritage of Indonesia
Cultural heritage, according to the UNESCO definition presented in the Draft Medium Term Plan 1990-1995, is: "… the entire corpus of material signs - either artistic or symbolic - handed on by the past to each culture and, therefore, to the whole of humankind. As a constituent part of the affirmation and enrichment of cultural identities, as a legacy belonging to all humankind, the cultural heritage gives each particular place its recognizable features and is the storehouse of human experience. The preservation and the presentation of the cultural heritage are therefore a corner-stone of any cultural policy." This can be interpreted to mean that cultural heritage, as a marker of culture as a whole, in the form of both works of art and symbols, is material contained in culture that is transferred by past generations to the next. It is the main element that enriches and shows the bond between the identity of a generation and the previous one, and it is a legacy for all humanity. Cultural heritage gives a marker of identity to every place and space, and is a repository that stores information about human experience.
Intangible cultural heritage is non-material, encompassing concepts and technology, and by its nature it can fade and disappear over time, as with language, music, dance, ceremonies, and various other structured behaviors (Edi Sedyawati, in the introduction to the Seminar on Intangible Cultural Heritage, 2002). Recording and designating cultural works is important, because intangible cultural heritage contributes to social cohesion, fostering a sense of identity and responsibility that helps individuals feel part of one or more communities and of society at large.
This intangible cultural heritage is passed down from generation to generation and is continuously recreated by communities and groups in response to their environment, their interactions with nature, and their history; it provides a sustained sense of identity and promotes respect for cultural diversity and human creativity (Kemendikbud, 2018: 17).
Through Binsar Simanullang, the Directorate of Cultural Heritage and Diplomacy (Kemendikbud) explained in 2019 that Indonesia's intangible cultural heritage is our identity, our national identity. Since the designation of ulos as an intangible cultural heritage of Indonesia, its use has increased along with its presence in the public sphere. Stakeholders have begun to pay attention to the ulos cloth and its manufacturing process, and discussions are often held to understand ulos from the standpoint of its history, its philosophy of meaning, and so on. Not to be missed is the celebration and reflection held every October 17 to commemorate ulos and make it even more popular (Marbun: Ulos Online National Seminar towards World Heritage).
As an intangible cultural heritage of Indonesia, everyone engaged in the creative industry, especially with themes of ancestral cultural heritage such as ulos, must first know the philosophy, history, and cultural values of their ancestors, because creative industry activities require 'advancement' of, for example, motifs, techniques, philosophies, and materials. The creative industry must also be used to introduce the culture, not merely as an economic opportunity.
a. One cloth can tell many things, because behind the cloth and its motifs lies a rich culture and philosophy. It is in this context that the preservation of cultural heritage such as ulos is important. For example, in Batak traditional philosophy the Maratur star motif serves as an intermediary for joyful greetings or happy news given to people who receive blessings or sustenance.
b. Studying ulos motifs is certainly very interesting; other cultures also recognize motifs, and through motifs we can see the minds of the people who sustain a culture. Through motifs, too, we can make comparisons with motifs that have developed among other communities, both in the archipelago and in other parts of the world. For example, one of the ulos Ragi Sapot motifs has similarities, in both form and function, with weaving motifs of the Kajang people. Such similarity naturally raises the question of whether this reflects a cultural unity or whether other reasons can explain the similarity of motif and function. This is one of the interesting aspects of studying ulos motifs.
Ulos Towards a World Heritage
Cultural issues are very sensitive because they concern the identity and character of a country in the eyes of other countries, especially in international relations. Indonesia has experienced several disputes with other countries over cultural claims. Lusianti (2012: 2) notes that the widespread issue of cultural claims has prompted the government to begin making an inventory of the country's existing cultural wealth in order to safeguard it.
UNESCO, as the United Nations organization specifically engaged in education, social affairs, and culture, has put in place a number of international legal instruments, both binding and non-binding, for preserving cultural heritage. The scope of UNESCO's international law covers both material (tangible) and immaterial (intangible) objects. UNESCO member countries are obliged to identify the cultural elements they intend to propose as world cultural heritage.
The role of UNESCO is to check, observe, and assess, as well as to ensure that all the established criteria can be implemented. Rani (2015) describes the role of UNESCO in preserving world culture as follows:
a. forming conventions that give rise to a commitment to protect world culture;
b. establishing the rules that govern world cultural heritage;
c. serving as a space for member countries to discuss and hold dialogue specifically about culture;
d. producing a committee that sets classification and assessment criteria and conducts assessments;
e. defining and recognizing a culture as a world cultural heritage;
f. providing protection, supervision, and preservation of world cultural heritage;
g. guaranteeing the rights attached to world cultural heritage;
h. ensuring that a world cultural heritage continues to receive assistance for its preservation;
i. ensuring that a world cultural heritage does not suffer extinction or destruction;
j. ensuring that a cultural heritage continues to receive financial support, whether from UNESCO or from the international community;
k. ensuring that a world cultural heritage is beneficial for current and future generations.
The flow of assessment and the criteria by which UNESCO determines whether a heritage or culture can be recognized as world heritage include the following:
a. The state submits a heritage, culture, site, or similar item to UNESCO through a predetermined procedure.
b. UNESCO classifies the heritage or culture: whether it is tangible or intangible, and whether it is cultural heritage or natural heritage.
c. If tangible, it must have clear boundaries, a definite form, and value. UNESCO also considers whether the object is man-made or, in the absence of any human intervention, a purely natural product.
d. If intangible (for example, a system), it must have values that can be assessed, be they cultural, religious, spiritual, artistic, or other values.
e. The main values UNESCO looks for are extraordinary universal values, known as outstanding universal value.
f. UNESCO examines the historical, cultural, social, religious, and other aspects; the more aspects it encompasses, the greater the chance it will become a world cultural heritage.
g. UNESCO considers the benefits and impacts for society and future generations.
h. UNESCO assesses the threats, whether direct or indirect, facing the heritage or culture.
i. UNESCO evaluates the application file prepared by the submitting country; the assessment is not only academic or in the interest of science, but also rational.
Of the nine points above, points f and g make clear that ulos, viewed from its historical and cultural aspects as well as its benefits and impacts for the next generation, would already fulfil the requirements to become a world heritage. The next generation needs to know that ulos is a picture of the inner world of the Batak people.
Initially, world cultural heritage centered only on buildings, monuments, and other tangible objects left by human ancestors. This began to shift with the recognition that not all cultural heritage is tangible: in the 1990s the concept of cultural heritage changed with the emergence of intangible cultural heritage.
Basis and Obligations of Establishing World Heritage
The Indonesian government has ratified the UNESCO 2003 Convention for the Safeguarding of the Intangible Cultural Heritage through Presidential Regulation Number 78 of 2007 concerning Ratification of the Convention on Intangible Cultural Heritage. As a result of ratification, Indonesia is obliged to:
a. periodically report the progress of the preservation of intangible cultural heritage to UNESCO;
b. preserve cultural heritage in accordance with the provisions specified in the convention; and
c. regularly propose new cultural heritage for recognition by UNESCO.
The Indonesian government has also ratified the 2005 Convention on the Protection and Promotion of the Diversity of Cultural Expressions through Presidential Regulation Number 78 of 2011 concerning the Protection and Promotion of the Diversity of Cultural Expressions. The impact of this ratification is as follows:
a. The convention guarantees that artists, cultural professionals, practitioners, and the general public can create, produce, distribute, and enjoy a variety of cultural goods, services, and activities.
b. The convention recognizes the right of states to take measures to protect and promote the diversity of cultural expressions, and establishes obligations at both the domestic and international levels.
c. The Indonesian government is obliged to regularly propose new cultural heritage for recognition by UNESCO.
d. The Indonesian government is also obliged to prepare a strategy to preserve the cultural heritage already established.
Figure 5. The process of Submitting Intangible Cultural Heritage to UNESCO
Currently, ulos is at stage 5, the submission of the proposal to UNESCO, under the auspices and responsibility of BPNB Aceh (the Cultural Value Conservation Center), whose task is the preservation (protection, development, and utilization) of tradition, belief, art, film, and history in its working area. The development of cultural values is, of course, tied to cultural advancement. Law No. 5 of 2017 on Cultural Advancement defines cultural advancement as an effort to increase cultural resilience and the contribution of Indonesian culture within world civilization through the protection, development, utilization, and fostering of culture. Of the ten objects of cultural advancement, ulos touches at least five: traditional technology, art, oral traditions, customs, and rituals.
BPNB Aceh is working with related parties to continue developing and preserving ulos. So far it has:
a. conducted research to enrich the literature on ulos;
b. supported various ulos festivals;
c. held various scientific meetings on the study of ulos;
d. built networks with communities that preserve ulos; and
e. held a virtual ulos fashion show.
The follow-up plan for the next five years, conveyed by Mrs. Irini Dewi Wanti (head of BPNB Aceh) at the Ulos National Seminar towards World Heritage on October 17, 2020, includes festivals and seminars for the ulos commemoration (2020), an ulos festival in the Lake Toba area (2021), an ulos encyclopedia (2022), a film about ulos (2023), and facilitation of an ulos conservation network (2024).
Various events are held every year, among them Ulos Fest 2019, which was attended by the chairman of the Indonesian People's Consultative Assembly, Bambang Soesatyo (12 November 2019), who expressed his support for realizing ulos as a world heritage. The series of events at Ulos Fest 2019 included seminars, focus group discussions, workshops, exhibitions, weaving demonstrations, bazaars, and fashion shows accompanied by manortor performances. Besides Bamsoet, the Governor of North Sumatra, Edy Rahmayadi, gave his appreciation at this event, supporting the effort to make ulos a world heritage and the need to establish an ulos museum.
One of ulos' achievements on the world stage is the Ulos Harungguan, which won an award in 2018 from the World Crafts Council, an organization affiliated with UNESCO. In addition, Ulos Harungguan served as a souvenir at the annual IMF-World Bank meetings in Washington DC and Bali. What distinguishes Ulos Harungguan from other ulos cloths is that no motif is repeated in the manufacturing process, and in the past it was worn only by kings and prominent figures. This is what gives Ulos Harungguan a higher value than other types of ulos.
Figure 6. Ulos Harungguan
These facts show that ulos has great potential and can become a promising industry without forgetting its cultural and historical values. It is unfortunate, however, that the ulos weavers are able to produce but do not know how to market their work. One person who cares about ulos is Torang Sitorus (an international fashion designer and ulos collector), who mentors the partonun (weavers). The hope is that once ulos becomes a world heritage, it will receive more attention. From the traditional side, ulos retains its historical value and remains part of traditional events; at the same time, to preserve the craft and bring prosperity to the partonun in the ulos industry, other motifs can serve as creative variations and appear on the world stage.
IV. Conclusion
Indonesia, as a country rich in cultural heritage and local wisdom, is an attractive place for culture lovers and observers to enjoy diverse and unique natural beauty and cultural charm. Cultural heritage consists of two categories: intangible cultural heritage and material (tangible) cultural heritage. One of Indonesia's intangible cultural heritages is ulos, the cultural identity of the Batak people. Ulos was originally a cloth to warm the body, but over time it became a bearer of thought and of high artistic quality, used in traditional Batak events, be they births, marriages, or deaths. On October 17, 2014, ulos was declared an intangible cultural heritage of Indonesia by the Minister of Education and Culture. This step paved the way for further recognition, namely becoming a world heritage. Periodically, each country proposes cultural heritage for registration with UNESCO, and ulos has received support from many parties to be proposed, so that it remains preserved and brings prosperity to the partonun community. Along with the times, the use of ulos has divided into two: 1) ulos as cloth used for everyday clothing, which serves only the purpose of preservation and has no important role in traditional ceremonies; and 2) ulos as traditional cloth (ulos adat) for the official activities of the Batak community and for Batak traditional ceremonies, where each ulos also carries its own meaning.
|
2021-06-22T17:55:53.254Z
|
2021-04-22T00:00:00.000
|
{
"year": 2021,
"sha1": "811f8716d73982366f19905b78e8fafcf24d15a8",
"oa_license": "CCBYSA",
"oa_url": "https://bircu-journal.com/index.php/birle/article/download/1865/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a04cbf79a161bf453a5046cfec5a61fa4a10b731",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
}
|
15349547
|
pes2o/s2orc
|
v3-fos-license
|
Variation in Incidence and Severity of Injuries among Crown-of-thorns Starfish (Acanthaster cf. solaris) on Australia's Great Barrier Reef
Despite the presence of numerous sharp poisonous spines, adult crown-of-thorns starfish (CoTS) are vulnerable to predation, though the importance and rates of predation are generally unknown. This study explores variation in the incidence and severity of injuries for Acanthaster cf. solaris from Australia's Great Barrier Reef. The major cause of such injuries is presumed to be sub-lethal predation such that the incidence of injuries may provide a proxy for overall predation and mortality rates. A total of 3846 Acanthaster cf. solaris were sampled across 19 reefs, of which 1955 (50.83%) were injured. Both the incidence and severity of injuries decreased with increasing body size. For small CoTS (<125 mm total diameter) >60% of individuals had injuries, and a mean 20.7% of arms (±2.9 SE) were affected. By comparison, <30% of large (>450 mm total diameter) CoTS had injuries, and, among those, only 8.3% of arms (±1.7 SE) were injured. The incidence of injuries varied greatly among reefs but was unaffected by the regulations of local fisheries.
Introduction
Predation has long been considered a key process in population regulation [1], contributing to long-term stability in the abundance of prey species. By inference, organisms that exhibit rapid and pronounced increases in abundance (often termed outbreaks; [2]) are considered to be released (or otherwise free) from predatory regulation (e.g., [3,4]). Predation may nonetheless be an important and often major cause of mortality but simply have no discernable influence on prey abundance [5]. Population outbreaks of prey species may occur when predators fail to react to increases in prey densities [6]. However, population outbreaks may also arise completely independently of predation, due to intrinsic and extrinsic processes. Most notably, outbreaks may result from steep changes in rates of population replenishment, especially where organisms have exceptional reproductive potential but generally low fertilisation rates and reproductive success [7]. Moreover, the abundance and reproductive success of outbreaking species is often influenced by marked changes in environmental conditions and resources (e.g., food availability) [8].
Crown-of-thorns starfish (CoTS; Acanthaster spp.) have gained considerable notoriety due to their propensity for population outbreaks, as well as their corresponding impacts on local assemblages of prey corals [9]. Very few marine organisms show changes in abundance of the magnitude or rate shown by crown-of-thorns starfish. In the extreme, 10-fold increases in localised (within reef) densities of CoTS have been documented within one year (e.g., [10]). In Moorea (French Polynesia), CoTS densities ranged from 11,500 individuals per km² up to 151,650 individuals per km² around the circumference of the reef and were both spatially and temporally variable [10]. During the course of this outbreak, coral cover declined by up to 93%, in approximate accordance with the cumulative number of CoTS recorded at each location [10].
One of the foremost hypotheses put forward to account for outbreaks of Acanthaster spp. is the predator removal hypothesis, initially proposed by Endean [11]. Endean [11] noted that shell collectors had removed ~10,000 giant tritons (Charonia tritonis), leading to significant declines in their abundance in the lead-up to the first major outbreak of Acanthaster sp. recorded on Australia's Great Barrier Reef. At that time, C. tritonis was also regarded as one of the only effective predators of CoTS [11].
There are now a large number of coral reef organisms known to prey upon CoTS during different stages of their life cycle [12], including fish and other invertebrates. These may be important in regulating the abundance of CoTS, if not in actually preventing outbreaks. Coral reef fishes that prey on CoTS are receiving particular attention [13][14][15], given that localised levels of fishing seem to correspond with inter-reef variation in the severity [15] or incidence [14] of CoTS outbreaks. The most intuitive explanation for these patterns is that the overexploitation of particular fishes relaxes the top-down control necessary to regulate populations of Acanthaster spp. [11], thereby leading to population outbreaks. However, these studies have not identified the specific target species that prey on CoTS, nor have they explicitly compared densities of CoTS predators or quantified rates of predation of CoTS along gradients of fishing pressure. Trophic cascades induced by fisheries, resulting in fewer invertebrates preying on juvenile starfish, may be another mechanism releasing predation pressure on CoTS [14].
One of the main limitations to testing the predator removal hypothesis is the inherent difficulty of quantifying predation rates on Acanthaster spp. in the field. This is particularly difficult for small and juvenile Acanthaster spp. due to their cryptic nature [16]. One possible proxy for measuring variation (spatial, temporal, taxonomic, and ontogenetic) in the susceptibility to predation among Acanthaster spp. is the incidence of recent injuries. These are most apparent as missing or regenerating arms (Figure 1), which are often attributed to sub-lethal or partial predation [16,17]. Although sub-lethal predation is also generally considered a good proxy for mortality due to predation, or overall predation pressure [16,17], this has not been explicitly tested, and predators causing injuries may not cause outright mortality of CoTS. Relatively few predators are known to consume adult CoTS in their entirety (but see [18]), while CoTS survive and can escape from predators (e.g., fishes) that only remove a portion of the body mass [19]. In previous studies, up to 67% of CoTS in some locations exhibited recent or sustained injuries [16], and high incidences of injuries appear to be generally reflective of a higher intensity of predation [16,17]. In the Philippines, for example, Rivera-Posada et al. [16] showed that the incidence of injuries was higher inside rather than outside of marine protected areas (MPAs) where fishing is prohibited, which would be consistent with a higher abundance of potential predators. The incidence of injuries also tends to decrease with increasing body size of Acanthaster sp. [20], as well as for several other species of starfishes [21,22], which probably reflects their increased susceptibility to predation when small [22]. Even if the predators that cause a high incidence of injuries among Acanthaster spp. do not kill these starfish outright, they may nonetheless have important effects on the behaviour and fitness of starfish, thereby contributing to population regulation [23].
The purpose of this study was to test for variation in the incidence and severity of injuries among crown-of-thorns starfish (Acanthaster cf. solaris) from Australia's Great Barrier Reef (GBR). More specifically, we wanted to test whether the incidence of injuries is higher inside versus outside MPAs, where fishing is prohibited, as would be expected if fisheries' target species impose significant predatory regulation on CoTS on the GBR and the abundance of these key predators varies significantly in accordance with spatial management zones [14]. We also tested for size-based variation in the incidence of injuries among CoTS, ranging in size from 60 mm total diameter (TD) to 510 mm TD. These injuries are presumably caused mostly by sub-lethal predation [17] (but see [22]).
Figure 1. Small and regenerating arm of Acanthaster cf. solaris (as indicated by arrow), which is indicative of past injury, presumably caused by sub-lethal predation (see also [16]).
Materials and Methods
A total of 3846 crown-of-thorns starfish (Acanthaster cf. solaris) were collected between October 2012 and May 2015 along the Great Barrier Reef (GBR). Sampling was conducted at 19 reefs, spanning 1150 km of the GBR (Figure 2). All reefs, except Centipede Reef, Davies Reef, Michaelmas Cay, and Sweetlip Reef, were considered to have an active CoTS outbreak at the time of sampling. All starfish were collected while snorkelling or SCUBA diving, using large purpose-built tongs to carefully extract starfish from among the reef matrix. Starfish were kept alive in 500 L tanks connected to high flow-through sea-water systems on live-aboard boats or at the research station on Lizard Island for a maximum of 20 h before they were processed and disposed of. During processing, the starfish were removed from the water and placed on a flat surface for 30-90 s before measuring the total diameter across opposite arms that were ostensibly undamaged. The severity of injury was then assessed by counting the number of missing or damaged arms, which was expressed as a percentage of the total number of arms (also referred to as "severity"). Missing arms were apparent where the ambulacral groove terminated at the edge of the oral disk. All arms that were less than 75% of the length of the adjacent arms were considered to have been damaged. Recent injuries, apparent due to fresh tears in the surface integument, were ignored, as they likely occurred during collection. We also determined the sex of each individual starfish (where possible) based on visual inspection of the gonads, which were exposed following the removal of a few arms using a paint scraper [9]. Starfish that were either immature or spent (virtually no gonad tissue left after spawning) could not be sexed, resulting in 1078 and 1475 individuals identified as females and males, respectively.
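To make the arm-scoring rule concrete, the following R sketch implements one plausible reading of it. The 75% threshold comes from the text, but the use of the mean of the two neighbouring arms as the reference, the helper name, and the example arm lengths are illustrative assumptions, not the authors' actual procedure.

# Toy implementation of the injury-scoring rule described above: an arm is
# scored as damaged when it is shorter than 75% of the mean of its two
# neighbours; severity is the percentage of arms scored as damaged.
score_severity <- function(arm_lengths, threshold = 0.75) {
  k <- length(arm_lengths)
  # Circular shifts give each arm's left and right neighbours
  neighbours <- (arm_lengths[c(2:k, 1)] + arm_lengths[c(k, 1:(k - 1))]) / 2
  damaged <- arm_lengths < threshold * neighbours
  100 * sum(damaged) / k
}

score_severity(c(120, 118, 122, 60, 119, 121, 117))   # one stunted arm -> ~14.3%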
Data Analyses
The probability of an individual CoTS being injured (injury incidence) was analysed using binary logistic mixed models (logit link), with "zone" (no fishing, restricted fishing, open to fishing) and "size" (total diameter: mm) as possible influential factors. The influence of zone and size on the severity of injuries received during predation events was also investigated using a subset of the data (n = 1797), which only included individuals that had experienced arm damage (injury = 1), and was modeled as the proportion of injured arms (in relation to the total number of arms) using binomial mixed models (logit link). The potential influence of adult gender (male or female) on CoTS injuries was also investigated using the subset of starfish for which sex could be established (n = 2553 for influence on injury incidence and n = 1263 for influence on injury severity) using generalized mixed models (logit link: binary logistic and binomial, respectively). For all models, "reef" was included as a random effect (19 levels), while "observer" was included as a potential fixed effect (3 levels) to account for possible artifacts of the sampling design and different data collectors. Full models, with all the potential relevant interactions, were fit first, and then model selection procedures were applied, comparing models using likelihood ratio tests [24,25]. The model assumptions were checked graphically, investigating residuals and random effects before interpretation of the final model. The confidence intervals around the coefficient estimates of the final models were generated by parametric bootstrapping. All statistical analyses were conducted using the R statistical software program (R Core Team 2016) and the lme4 package [26].
Incidence of Injuries
In all, 1955 out of 3846 (50.83%) CoTS collected from the GBR exhibited evidence of recent or sustained injuries, based on the number of arms that were missing or evidently shorter and thinner and generally covered in shorter and finer spines (Figure 1). The incidence of injury varied considerably between reefs, ranging from 20% (1 starfish injured out of 5) at Michaelmas Cay near Cairns up to 83% (74 starfish injured out of 89) at Elford Reef, also located very close to Cairns (Figure 2). The mean severity of injuries (calculated based on the percentage of arms affected) also varied among reefs, ranging from 6.7% at Centipede Reef, where only one injured starfish was collected, up to 21.8% (±2.0% SE) at Bramble Reef. Interestingly, there was a positive linear correlation (R² = 0.783, p < 0.001) between the incidence and the severity of injuries at the scale of individual reefs (Figure 3).
Despite marked inter-reef differences in the incidence and severity of injuries, there was no obvious effect of the regulations of local fisheries (Tables 1 and 2). When averaged across all reefs in each of the three distinct management zones, the mean incidence of injury was non-significantly higher for yellow zones (53.78% ± 12.66% SE), followed by blue zones (50.15% ± 5.92% SE), and lowest in green zones (46.05% ± 7.10% SE). However, these differences are negligible compared to the variation observed among reefs within each group, as evident from the large standard errors (Figure 4a). There was also no influence of fishing regulation (zone) in any of the models; likelihood ratio tests with and without zone as a fixed effect can be seen in Table 1. The frequency distributions of the number of arms missing were also very similar across the three management zones, with a single missing arm affecting the majority of injured individuals (65%-70%) (Figure 4b-d).
* Influence of factors checked with likelihood-ratio tests with and without the variable as a fixed effect; ** All models had "reef" as a random effect (1|Reef), and the inclusion of a random slope (e.g., Size|Reef) did not improve model fit.
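The reef-level incidence-severity relationship reported above can be illustrated with the simulated data frame from the modeling sketch in the Data Analyses section. Column names and values remain assumptions, so the resulting R² will not match the published 0.783.

# Reef-level incidence (share of injured starfish) and mean severity among
# injured individuals, using the simulated 'cots' data frame assumed above
inc <- aggregate(injured ~ reef, data = cots, FUN = mean)
sev <- aggregate(I(100 * n_injured / n_arms) ~ reef,
                 data = subset(cots, injured == 1), FUN = mean)
names(sev)[2] <- "severity"
reef_summary  <- merge(inc, sev, by = "reef")

# Linear relationship between incidence and severity across reefs
fit <- lm(severity ~ injured, data = reef_summary)
summary(fit)$r.squared   # the paper reports R^2 = 0.783 for the field data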
Table 2. Model coefficients for the best-fitting models describing the relationship of injury incidence and severity with size, whilst accounting for variability between reefs in all models and for observer bias in the models of incidence (Figures 5 and 6).
Size-Based Variation
The incidence of injuries showed a negative relationship with increasing size of CoTS (z = −4.836, p < 0.001) (Figure 5a). The probability of a small starfish (60 mm) exhibiting any level of injury was 0.70 (95% CI = [0.57, 0.79] by parametric bootstrap [PB]), decreasing to 0.25 (95% PB CI = [0.15, 0.38]) for the largest individual (510 mm) when observed by the median observer. Although injury incidence was affected by the observer (Tables 1 and 2), observer identity did not influence the shape of the relationship (likelihood-ratio tests with and without the observer influencing the slope: Chi = 1.0, df = 5, p = 0.959) (Figure 5a). The severity of injuries showed a similar decline with increasing size (z = −7.069, p < 0.001) (Figure 5b). Small starfish (60 mm) generally exhibited injuries to 19% of their arms [16%, 21% CI], decreasing to 9% [8%, 11% CI] in large starfish (~470 mm). Neither the observer nor the reef had any effect on the relationship between the severity of the injuries and the size of the CoTS (Figure 5b).
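For intuition, the size effect reported above can be expressed on the logit scale. The short R sketch below back-calculates illustrative coefficients from the two probabilities quoted in the text; these are not the published model estimates.

# Coefficients derived from the quoted probabilities (0.70 at 60 mm,
# 0.25 at 510 mm), not taken from the fitted model output
b_size <- (qlogis(0.25) - qlogis(0.70)) / (510 - 60)   # ~ -0.0043 per mm
b0     <- qlogis(0.70) - b_size * 60                   # intercept ~ 1.11

sizes <- c(60, 125, 450, 510)
round(plogis(b0 + b_size * sizes), 2)   # ~0.70, 0.64, 0.30, 0.25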
Discussion
Outbreaks of Acanthaster spp. are a major contributor to coral loss and reef degradation throughout the Indo-Pacific region (e.g., [10,27]) and a major concern for coral reef management. Understanding the possible role of predation in regulating populations of Acanthaster spp. is fundamental to establishing whether increased regulation of fisheries will mitigate or prevent ongoing outbreaks [12]. Although the overall incidence of injury was high (50%) amongst the CoTS sampled along much of the length of the GBR, there was no difference in the incidence or severity of injuries for CoTS from reefs where fishing is prohibited (green zones) versus reefs where fishing activities are permitted (yellow and blue zones). However, there was strong variation in the incidence of injuries among reefs, ranging from 20% to 83%. There was also a strong negative correlation between body size (CoTS diameter) and arm damage, with individuals ≤125 mm in diameter being twice as likely to be injured as individuals ≥400 mm. The severity of injuries (the proportion of injured arms over the total number of arms) showed a very similar reduction with increasing body size. There was only a very weak gender trend, with females being slightly more likely to suffer increased arm damage (severity) than males.
Whether predators are able to regulate CoTS abundances remains the topic of ongoing debate. Across the GBR, the occurrence of CoTS outbreaks is lower on reefs where fishing is prohibited compared to reefs where fishing is permitted [14,28], which suggests that the higher abundance of large target species results in greater predation of CoTS. In the current study, fishing restrictions had no effect on the incidence of injuries, which are presumably caused mostly by sub-lethal predation [17] (but see [22]). However, even if injuries are caused mainly by predation, overall mortality and predation rates may still vary with the management of fisheries. The sub-lethal predation rate is generally considered to be a proxy for overall predation pressure [16,17], but this has not been explicitly tested. It may be that the fishes that cause injuries (loss of individual arms) are altogether different from those that kill CoTS outright, such that direct tests of the predator-removal hypothesis still require measurements of actual mortality rates alongside explicit consideration of the abundance and composition of potential predators, irrespective of zoning. Except for some lethrinid species, most other known CoTS predators (e.g., the starry pufferfish Arothron stellatus) [12] are not generally targeted by recreational and commercial finfish fisheries on the GBR, which mainly target large piscivorous fishes such as coral trout [29].
Irrespective of whether injuries are a valid proxy for overall predation and predation rates, higher incidence and severity of sub-lethal predation will have a significant impact on the individual fitness and population dynamics of CoTS [23,30,31]. CoTS, like many other echinoderms, have the capacity to regenerate parts of their central disc and missing arms [19,22]. However, regeneration comes at an energetic cost and affects the fitness of individuals [31,32]. Injuries and regeneration can reduce feeding and growth, delay maturation, or compromise reproductive output. Given that each arm of a CoTS contains gonads, fewer arms directly reduce the reproductive capacity of a female. In the sea star Heliaster helianthus, the energetic content of the pyloric caeca and gonads showed a 5- to 7-fold decrease following autotomy [31]. Similarly, regenerating tails in lizards were found to affect clutch and egg size [33], and maternal effects have been observed in CoTS, with reduced egg size resulting in lower survivorship of developing larvae [34]. As a result, even relatively minor injuries may have considerable negative impacts on the reproductive success of individual females.
The overall incidence of injury recorded in this study (50%) is at the higher end of rates reported previously from the GBR (33% [35]; 40% [17]; 50% [36]). However, our study clearly showed that the incidence of injury varies among reefs on the GBR (20%-83%), and the range of variation recorded in this study is consistent with differences in the estimates of the incidence of injuries from previous studies, all of which considered only a single site or few sites. The highest incidence of injury recorded on some reefs (81.3%) exceeded the highest reported incidence of damaged arms (67%) recorded in the Philippines [16]. Although the observer had an effect on the incidence of arm damage, substantial variation between reefs was still clearly observed irrespective of who counted the number of short or missing arms. The high rates on some reefs may represent a regional effect (this could not be tested due to covariation with the observer in some cases). The Cairns sector (Reefs 8-12, Figure 2) had above-average rates (59%-83%), whereas the Cooktown sector (Reefs 5-7, Figure 2) had below-average incidences of arm damage (32%-39%). The high injury rates (50%-72%) on reefs with outbreaks in the Townsville sector (Reefs 13-17, Figure 2) are likely to reflect the on average smaller individuals collected on these reefs. The variation in injury rates between reefs may also be due to differences in the local abundance of predators. Future studies should therefore explicitly test whether there is a relationship between the incidence of arm damage and the abundance of predators.
Vulnerability to predation is known to vary with ontogeny in many organisms [37,38]. Younger and smaller individuals are often subject to higher predation rates than older and larger conspecifics [39][40][41]. Large CoTS exhibit an impressive defense against predators through the presence of large, very sharp, and poisonous spines, which may limit the number of species able to prey on large adult CoTS. It is therefore not surprising to see a 50% reduction in the incidence and severity (measured as the number of injured arms over the total number of arms) of arm damage with increasing body size (a relationship that was not affected by the observers). Size has previously been identified as an important factor in predation rates in other echinoderms and in CoTS, but the patterns are not always consistent. Sub-lethal injuries generally decline with increasing body size in echinoderms [21], asteroids [22], sea urchins [42,43], and CoTS [16,17]. McCallum et al. [17] found a similar but weak linear relationship in CoTS, whereas a hump-shaped relationship was observed in the Philippines over a similar size range as the present study [16]. Although the discrepancy between the studies could be due to sample size (both studies had relatively low numbers of individuals ≤10 cm), it may also be due to differences in local predation pressure. A reduction in arm damage in individuals ≤10 cm is possible and could be explained by behavioural changes or by changes in the ratio between lethal and sub-lethal predation rates [16]. Increased sampling of these smaller size classes, including very small juvenile CoTS [44], should clarify the effect of size on sub-lethal predation rates in these early life stages. Nevertheless, the high incidence of predation in smaller CoTS (≤20 cm) across both studies suggests that substantial predation is likely to occur at night, given the nocturnal feeding habits and cryptic nature of small CoTS during the day.
There were indications that gender may have a weak effect on predation severity. Although not statistically significant, female CoTS exhibited slightly higher levels of injury compared to males, which may be explained by the relatively higher energy content of oocytes compared to spermaries. Similarly, egg-bearers in other marine organisms have been found to be more susceptible to predation, with the higher nutritional value of oocytes proposed as a mechanism [38,45]. Interestingly, there was no difference in injury incidence between males and females, suggesting that predators do not discriminate between the sexes but may feed more intensively on females.
Conclusions
Predation has the potential to play a significant role in regulating populations of CoTS, though the effect is most likely to dampen fluctuations in the local abundance of CoTS rather than to prevent outbreaks per se. Ormond et al. [13] suggested that predation by fish on CoTS could maintain CoTS at densities low enough to avoid outbreaks. Our study showed that at least sub-lethal predation on CoTS is common, with 50% of the 3846 individuals studied showing evidence of predation events, and that size plays a major role in the frequency and severity of predation events. Although the predator removal hypothesis remains controversial, with studies showing variable results, predation of CoTS is a common event that warrants further investigation. It is critical to determine predation rates in small juveniles, identify all possible predators, and assess the effects of sub-lethal predation on growth, fitness, and reproductive output to better inform population models. There is no stopping the current outbreak on the GBR, but attempting to prevent future outbreaks, in light of the increasing threats to coral reefs, is possibly our easiest chance to improve the outlook for coral reefs.
Figure 1. Small and regenerating arm of Acanthaster cf. solaris (as indicated by arrow), which is indicative of past injury, presumably caused by sub-lethal predation (see also [16]).
Figure 2. Map indicating sampled reefs along the Great Barrier Reef (GBR). The red box on the map of Australia shows the extent of the main map, whilst the reefs sampled in the Swains Region on the south-eastern end of the GBR are magnified in the bottom right rectangle. The colour of sampled reefs designates fishing regulations: green = no fishing (marine reserve), yellow = restricted fishing allowed, blue = open to fishing. The bar graph shows injury incidence per reef (top paired bar with solid fill and black values) and mean ± SE injury severity per reef (lower paired bar with semi-transparent fill and red values). Numbers in the white boxes within the bars represent the total number of individuals collected per reef. Injury severity values were calculated based on injured individuals only.
Figure 4. (a) Incidence of injury (percentage of individuals with damaged arms) averaged across the reefs in each of three different management zones (circle colour designates fishing regulations; see (b-d)); and (b-d) frequency distributions of short or missing arms for each zone ((b) blue = open to fishing; (c) yellow = restricted fishing; (d) green = no fishing).
Figure 5. Relationship (with 95% confidence intervals (CI) generated by parametric bootstrap) between (a) CoTS maximum diameter and the probability of an individual having experienced arm damage (injury incidence; binary data), as measured by the median observer on an average reef; and (b) CoTS maximum diameter and the severity of an individual's injury on an average reef (note the y-axis has been truncated to aid visualization of the relationship). Black horizontal lines represent the mean probability of individuals in 25 mm size classes (<125, 125-150, 150-175, …, >450 mm) superimposed over the relationship.
Figure 6. Relationships (with 95% confidence intervals (CI) generated by parametric bootstrap) between female (blue) and male (orange) CoTS maximum diameter and injury (a) incidence (arm damage as measured by the median observer on an average reef; binary data) and (b) severity on an average reef. Horizontal lines represent the mean probability of female (blue) and male (orange) individuals in 25 mm size classes (<125, 125-150, 150-175, …, >450 mm) superimposed over the relationship. Note that the y-axis has been truncated to aid visualization of the relationship for severity.
Table 1. Overview of the model selection process, starting with the full model including all relevant interactions, compared using the Akaike Information Criterion (AIC) and likelihood-ratio tests.
|
2017-03-31T08:35:36.427Z
|
2017-02-21T00:00:00.000
|
{
"year": 2017,
"sha1": "227ee575458e7868b6e15955598ea8e0fb7a9c22",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-2818/9/1/12/pdf?version=1487820348",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "227ee575458e7868b6e15955598ea8e0fb7a9c22",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
}
|
266764242
|
pes2o/s2orc
|
v3-fos-license
|
Modulation of mitochondrial activity by sugarcane (Saccharum officinarum L.) top extract and its bioactive polyphenols: a comprehensive transcriptomics analysis in C2C12 myotubes and HepG2 hepatocytes
Age-related mitochondrial dysfunction leads to defects in cellular energy metabolism and oxidative stress defense systems, which can contribute to tissue damage and disease development. Among the key regulators responsible for mitochondrial quality control, peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) is an important target for mitochondrial dysfunction. We have previously reported that bioactive polyphenols extracted from sugarcane top (ST) ethanol extract (STEE) could activate neuronal energy metabolism and increase astrocyte PGC-1α transcript levels. However, their potential impact on the mitochondria activity in muscle and liver cells has not yet been investigated. To address this gap, our current study examined the effects of STEE and its polyphenols on cultured myotubes and hepatocytes in vitro. Rhodamine 123 assay revealed that the treatment with STEE and its polyphenols resulted in an increase in mitochondrial membrane potential in C2C12 myotubes. Furthermore, a comprehensive examination of gene expression patterns through transcriptome-wide microarray analysis indicated that STEE altered gene expressions related to mitochondrial functions, fatty acid metabolism, inflammatory cytokines, mitogen-activated protein kinase (MAPK) signaling, and cAMP signaling in both C2C12 myotubes and HepG2 hepatocytes. Additionally, protein–protein interaction analysis identified the PGC-1α interactive-transcription factors-targeted regulatory network of the genes regulated by STEE, and the quantitative polymerase chain reaction results confirmed that STEE and its polyphenols upregulated the transcript levels of PGC-1α in both C2C12 and HepG2 cells. These findings collectively suggest the potential beneficial effects of STEE on muscle and liver tissues and offer novel insights into the potential nutraceutical applications of this material. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1007/s13659-023-00423-x.
1 Introduction
Mitochondria, often referred to as the cellular "powerhouses" responsible for ATP production, play a crucial role in maintaining cellular function and appropriate responses to stress. In non-mitotic cells such as cardiomyocytes and skeletal muscle cells, the aging process incites mitochondrial dysfunction, which subsequently triggers the activation of the DNA damage response and the innate immune response, thereby leading to chronic inflammation [1]. The accumulation of mitochondrial DNA (mtDNA) mutations and alterations in mitochondrial quality control mechanisms, such as the balance between mitochondrial biosynthesis and mitophagy, as well as the balance between mitochondrial fusion and fission, are considered contributing factors to age-related mitochondrial dysfunction [2][3][4].
We have previously reported the identification of the four major polyphenolic components in sugarcane top (ST) ethanolic extract (STEE), namely 3-O-caffeoylquinic acid (3CQA), 5-O-caffeoylquinic acid (5CQA), 3-O-feruloylquinic acid (3FQA), and isoorientin (ISO, chemically luteolin-6-C-glucoside). We also demonstrated that STEE could ameliorate spatial learning and memory deficits in senescence-accelerated model mice by promoting energy metabolism and neurogenesis in vivo [13]. Furthermore, we extended our investigation to reveal that the coordinated action of 3CQA, 5CQA, and ISO within STEE elevated PGC-1α transcription levels in immature astrocytes, promoting their branching morphology in vitro [14]. These findings encourage us to investigate the effects of STEE and its bioactive polyphenols on metabolically active muscle and liver functions. It is well documented that mitochondrial dysfunction and the associated rise in oxidative stress contribute significantly to the decline in muscle mass and function, as well as to the onset of liver inflammation and fibrosis [15][16][17][18]. Previous studies have indicated that CQA stimulates glucose transport in skeletal muscle in vivo and enhances the glycolytic and electron transport systems in HepG2 hepatocytes in vitro [19,20]. ISO has also been reported to protect against oxidative damage in both C2C12 myotubes and HepG2 hepatocytes in vitro by regulating genes related to mitochondrial function [21][22][23]. However, the synergistic functional effects of these compounds on the mitochondrial functions of myotubes and hepatocytes have not yet been investigated.
We therefore aimed herein to characterize the alterations induced by STEE and its polyphenols in cultured myotubes and hepatocytes in vitro, with a specific emphasis on metabolic processes and mitochondrial function, in order to uncover their unexplored functional attributes. We conducted comprehensive transcriptome-wide analyses using microarray technology, which provided valuable insights into the biological and molecular changes induced by STEE and its polyphenols.
Results

Six-hour exposure to STEE or mixed-compound treatment induced a significant increase in the mitochondrial membrane potential of C2C12 myotubes
First, we tested the effect of STEE on cell viability using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. In differentiated C2C12 myotubes, treatment with STEE for 24 or 48 h did not yield any significant changes in absorbance across various concentrations. Similarly, in HepG2 hepatocytes, exposure to STEE for either 24 or 48 h exhibited no significant changes in absorbance at any concentration (Additional file 1: Fig. S1). These results suggest that STEE does not affect the viability of either C2C12 myotubes or HepG2 hepatocytes in vitro within the concentration range used in this study.
Considering the results, we opted for a concentration of 50 µg/mL of STEE for the subsequent Rhodamine 123 (Rh123) assay. Also, we treated the cells with polyphenols of STEE either individually or in combination. The concentrations were calibrated to match the amounts contained within the STEE (for example, 3CQA = 0.50 µM, 5CQA = 0.70 µM, 3FQA = 0.85 µM, and ISO = 0.48 µM were equivalent to 50 µg/mL) as previously described [13,14]. The structures of the compounds are shown in Fig. 1 and the combinations of the compounds are explained in Table 1.
Treatment of C2C12 myotubes with STEE for 6 h significantly increased Rh123 intensity (Fig. 2A). Also, treatments designated as No. 3, 8, and 10 significantly increased Rh123 intensity in C2C12 myotubes (approximately 1.24-fold for each) at 6 h. Furthermore, treatment No. 5 over a 6 h period displayed a trend of increased fluorescence intensity (p = 0.095) (Fig. 2A). No significant changes in Rh123 intensity were observed in C2C12 myotubes following a 24 h treatment with any of the samples (Fig. 2A). These results suggest that the extract increased the mitochondrial membrane potential (MMP) of C2C12 myotubes during a 6 h treatment period and that this effect could be attributed to the synergistic action of 3CQA, 5CQA, and ISO present in the extract.
In the case of HepG2 cells, no significant changes in Rh123 intensity were observed across any of the samples or treatment durations (Fig. 2B). Nevertheless, although not statistically significant, there was an approximately 1.25-fold increase in fluorescence intensity following 24 h treatment with STEE, No. 7, and No. 8. There appeared to be a trend toward greater fluctuations in fluorescence intensity after 24 h of treatment compared to the 6 h timeframe, suggesting that the effects of STEE and its polyphenols may emerge more slowly in HepG2 cells.
Transcriptomic profiling of STEE-treated myotubes and hepatocytes by microarray
To gain mechanistic insight into the effects of STEE, we performed transcriptomic analysis of the C2C12 myotubes and HepG2 hepatocytes by microarray. The RNA samples extracted from nontreated control cells were compared with the RNA samples extracted from C2C12 myotubes subjected to two different concentrations of STEE for 6 h, 30 µg/mL and 50 µg/mL (referred to as STEE30-M and STEE50-M, respectively). Likewise, the RNAs from nontreated HepG2 cells were compared with the RNAs obtained from cells exposed to two distinct concentrations of STEE for 24 h, namely 15 µg/mL and 30 µg/mL (referred to as STEE15-H and STEE30-H, respectively).
In comparison to the control group, STEE30-M exhibited 954 differentially expressed genes (DEGs), with 489 being upregulated and 465 being downregulated (Fig. 3A). In the case of STEE50-M, there were 1326 DEGs, consisting of 579 upregulated and 747 downregulated genes (Fig. 3A). Similarly, in the case of STEE15-H, 1559 DEGs were detected, including 939 upregulated and 620 downregulated genes, in comparison to the control group (Fig. 3B). For STEE30-H, there were 1383 DEGs, with 838 genes upregulated and 545 genes downregulated (Fig. 3B). Compared to the control group, a greater number of genes exhibited downregulation than upregulation in STEE50-M. Furthermore, when comparing the STEE15-H group with the control, there were more DEGs than when comparing the STEE30-H group with the control. The distributions of fold change (FC) of the DEGs are shown in the butterfly charts (Fig. 3C for the C2C12 group; Fig. 3D for the HepG2 group).
Gene ontology analysis revealed that STEE-induced transcriptomic changes were associated with a wide range of biological events in C2C12 myotubes and HepG2 hepatocytes
To further investigate the potential regulatory effects of STEE on C2C12 myotubes and HepG2 hepatocytes, we performed gene ontology (GO) analysis. The analysis statistically tests whether the proportion of GO terms for a list of specific DEGs is significantly higher than the proportion of the terms for the population (enrichment). This allows us to detect characteristic terms for a group of DEGs, thereby supporting the capture of biological phenomena. A GO term is represented by three sub-ontologies: biological process (BP), cellular component (CC), and molecular function (MF). As a result of GO analysis for the DEGs in the STEE-treated C2C12 group, regulation of membrane potential (GO:0042391), inflammatory response (GO:0006954), second-messenger-mediated signaling (GO:0019932), long-chain fatty acid metabolic process (GO:0001676), fatty acid metabolic process (GO:0006631), and cellular lipid catabolic process (GO:0044242) were enriched as GOBP terms over-represented by the DEGs both in STEE30-M and STEE50-M (Fig. 4A). In addition, of the GOBPs, regulation of protein kinase activity (GO:0045859) and regulation of lipid transport (GO:0032368) were enriched by the DEGs in STEE30-M, and cellular response to cytokine stimulus (GO:0071345), cytokine-mediated signaling pathway (GO:0019221), muscle system process (GO:0003012), and negative regulation of immune system process (GO:0002683) were enriched by the DEGs in STEE50-M (Fig. 4A). Enrichment of GOBP terms for cytokine was unique to the DEGs in STEE50-M.
Of the GOCCs, receptor complex (GO:0043235) was an enriched term over-represented by the DEGs both in STEE30-M and STEE50-M (Fig. 4B). Also, transmembrane transporter complex (GO:1902495) and respiratory chain complex IV (GO:0045277) were uniquely enriched terms over-represented by the DEGs in STEE50-M (Fig. 4B).
Fig. 2 Fluorescence intensity of Rh123 in the cells. A After C2C12 myotubes were treated with STEE or its major constituents (3CQA, 5CQA, 3FQA, or ISO) for 6 or 24 h, the cells were stained with Rh123. One-way ANOVA followed by Dunnett's post hoc test was performed to assess statistical significance: *p < 0.05. B After HepG2 hepatocytes were treated with STEE or its major constituents (3CQA, 5CQA, 3FQA, or ISO) for 6 or 24 h, the cells were stained with Rh123. Comparisons were performed using the Kruskal−Wallis test followed by Dunn's post hoc test. Results are expressed as relative percentages compared with the control (mean ± SEM, n = 3)

Of the GOMFs, long-chain fatty acid transporter activity (GO:0005324), nuclear receptor activity (GO:0004879), and complement receptor activity (GO:0004875) were uniquely enriched terms over-represented by the DEGs in STEE50-M (Fig. 4C). Enrichment of GOMF terms for fatty acid, nuclear receptor, and complement was unique to the DEGs in STEE50-M. A full list of enriched GO terms shown in Fig. 4A–C is given in Additional file 2: Table S1.

As a result of GO analysis for the DEGs in the STEE-treated HepG2 group, regulation of Ras protein signal transduction (GO:0046578), adenylate cyclase-modulating G protein-coupled receptor signaling pathway (GO:0007188), regulation of protein kinase activity (GO:0045859), and positive regulation of protein kinase activity (GO:0045860) were enriched as GOBP terms over-represented by the DEGs both in STEE15-H and STEE30-H (Fig. 4D). In addition, of the GOBPs, regulation of MAPK cascade (GO:0043408), regulation of cell growth (GO:0001588), and second-messenger-mediated signaling (GO:0019932) were enriched by the DEGs in STEE15-H, and positive regulation of cellular component biogenesis (GO:0044089), positive regulation of MAP kinase activity (GO:0043406), positive regulation of DNA-binding transcription factor activity (GO:0051091), and fibroblast growth factor receptor signaling pathway (GO:0008543) were enriched by the DEGs in STEE30-H (Fig. 4D). Enrichment of GOBP terms for fibroblast growth factor was unique to the DEGs in STEE30-H.
Of the GOCCs, transmembrane transporter complex (GO:1902495) was an enriched term over-represented by the DEGs both in STEE15-H and STEE30-H (Fig. 4E). Also, receptor complex (GO:0043235) was a uniquely enriched term over-represented by the DEGs in STEE15-H, and guanyl-nucleotide exchange factor complex (GO:0032045) was a uniquely enriched term over-represented by the DEGs in STEE30-H (Fig. 4E).
A full list of enriched GO terms shown in Fig. 4D–F is given in Additional file 2: Table S2.
The dimensionality reduction approach revealed that STEE regulated biological pathways related to lipid metabolism, protein kinase signaling, and cytokine signaling
Next, we applied a dimensionality reduction technique to capture latent properties of the large number of DEGs. The pathways clustered from the BioPlanet_2019 gene set library were then visualized in two dimensions using Uniform Manifold Approximation and Projection (UMAP), facilitating the classification of DEG sets involved in the biological pathways.
In the analysis for the DEGs in STEE-treated C2C12 groups, Inflammasomes (Cluster 1, orange), Cytochrome P450 metabolism of endogenous sterols (Cluster 3, red), Signaling by interleukins (Cluster 7, gray), and Acyl chain remodeling of diacylglycerol (Cluster 9, light blue) were detected as the related pathways over-represented by the sets of DEGs in STEE30-M (Fig. 5A). Also, Cytokines and inflammatory response (Cluster 1), Cytokine-cytokine receptor interaction (Cluster 1), Visceral fat deposits and the metabolic syndrome (Cluster 1), Telomere extension by telomerase (Cluster 2, green), Cytochrome P450 metabolism of endogenous sterols, PPAR signaling pathway (Cluster 3), Nuclear receptor transcription pathway (Cluster 9), and Nuclear receptors (Cluster 9) were detected as the related pathways over-represented by the sets of DEGs in STEE50-M (Fig. 5A). Cumulatively, these results suggested that STEE may impact pathways related to cytokines, lipid metabolism, and nuclear receptors in C2C12 myotubes.
Protein-protein interaction (PPI) networking of DEGs regulated by STEE
Given the indication of the analyses suggesting the modulation of biological events such as mitochondrial activity, fatty acid (FA) metabolism, inflammatory response, and signal cascades of MAP kinase or cAMP by STEE, we further investigated each DEG and classified them based on their functions using MSigDB and GeneCards. We chose to look at genes that were differentially expressed (satisfying the thresholds) in the samples treated with the higher concentrations of the extract (STEE50-M and STEE30-H) compared to the controls. The heatmaps show the relative intensity of the genes (average of duplicates) regulated in the C2C12 groups (Fig. 6A) and HepG2 groups (Fig. 6C). We found that some mitochondrial activity-related transcripts were differentially regulated by the STEE treatment. Among the genes related to mitochondrial respiration, Cox7b2 and Cox6a2 were significantly upregulated in STEE50-M, and Cox7a1 and Cox6b2 were significantly downregulated in STEE50-M. Respiratory electron transport-related NDUFA4L2 showed significant upregulation in STEE30-H and showed 1.33-fold upregulation satisfying p < 0.05 in STEE15-H. Two mitochondrial transcription and translation-related genes, Mtrf1 and Mterf4, the TCA cycle-related gene Sucla2, and the telomerase gene Tert were significantly upregulated in STEE50-M. Mterf4 was significantly downregulated also in STEE30-M. Of the two fusion- and fission-related genes, Mfn1 and Mtfr2, Mtfr2 was significantly downregulated both in STEE30-M and STEE50-M, and Mfn1 showed significant downregulation in STEE50-M and also showed a 1.24-fold downregulation tendency in STEE30-M (p = 0.0514). Furthermore, Atp1b2, an ion-transporting ATPase that plays a role in ATP generation by oxidative phosphorylation (OxPhos) [24–26], was significantly upregulated in STEE50-M. Another ATPase gene, Atp1b4, was significantly downregulated in STEE50-M. In STEE30-H, the ion-transporting ATPase genes ATP1A3 and ATP6V1C2 were significantly upregulated. These two genes showed 1.38- and 1.42-fold upregulation satisfying p < 0.05 in STEE15-H, respectively.
Given these results, sub-networks of the upregulated genes related to MAPK signaling in STEE30-H were constructed by minimum-order generic PPI and sorted by the 'MAPK signaling pathway' of the KEGG database [28]. We could extract a module consisting of 63 nodes represented by upregulated genes. Of the genes shown in the heatmap, RPS6KA1, CRKL, PIK3CG, and FGF12 were present in the module. In addition to these, the module was made up of many genes and proteins (Fig. 6D). Also, sub-networks of the upregulated genes related to cAMP signaling in STEE30-H were constructed by minimum-order generic PPI and sorted by the 'cAMP signaling pathway' of the KEGG database. We could extract a module consisting of 35 nodes represented by upregulated genes. Of the genes shown in the heatmap, PRKACG and CREB3L1 were present in the module. The module was made up of many genes and proteins, and it included nodes of MAPK-related genes such as RPS6KA1 and CRKL (Fig. 6D). These analyses indicated that the activated signaling pathways are constituted through the interaction of many genes, and even through interactions between different pathways.
The list of DEGs presented in the heatmap is given in Additional file 2: Tables S3 and S4, together with the characteristics of each transcript.
PPI analysis and qPCR approach suggested STEE-induced transcriptional regulation by PGC-1α
Given the microarray data analysis indicating transcriptional regulation of nuclear receptor-related pathways (especially peroxisome proliferator-activated receptor gamma; PPARγ) by STEE, we attempted to construct PPI networks to analyze the interactions between the DEGs and the transcription factors (TFs), which could be regulated by STEE.
To detect DEG–TF interactions in the comparison between STEE50-M and Ctrl-M, initial sub-networks were constructed based on the first-order generic PPI, and then the network regulated by the specific TFs was extracted as a module using the TRRUST database [29]. We could extract a module consisting of 28 nodes having regulatory interactions with the TFs 'Pparg' and 'Ppargc1a' (Fig. 7A). Of these nodes, five were represented by downregulated genes, including Il12rb1, and five by upregulated genes, including Tnf (the other 18 nodes are proteins that had a high interaction with the up- and downregulated DEGs).
Next, to detect DEG–TF interactions in the comparison between STEE30-H and Ctrl-H, initial sub-networks were constructed based on the zero-order generic PPI, and then the network regulated by the specific TFs was extracted as a module using the ENCODE database [30]. We could extract a module consisting of 19 nodes represented by downregulated genes and 22 nodes represented by upregulated genes, which have regulatory interactions with the TFs 'PPARG' and 'CREB1' (Fig. 7B). These nodes included ESR1 (up) and FOXO1 (down), which have a tight relationship with PGC-1α [31–33]. Also, the upregulated RPS6KA1, CRKL, and PIK3CG associated with the MAPK cascade were included among the module's nodes.
Microarray data analysis suggested that STEE has some effect on the activity of TFs, particularly those involved with PPARγ. We checked the dataset focusing on PPARγ-related TFs and found that Ppargc1a levels showed a 1.23-fold increase (p = 0.075) in STEE50-M compared with control (Fig. 7C), and PPARGC1A levels showed a 1.2-fold increase (p = 0.004) in STEE30-H compared with control (Fig. 7D), although outside the thresholds set in the analysis. We therefore attempted to analyze the PGC-1α transcript levels by running PCR cycles with an increased number of biological replicates in each group.
In qPCR, two additional replicates were added to the two replicates used for the microarrays (n = 4 replicates for each). In addition, RNA samples from each cell type treated with the mixture of STEE's four polyphenols (All mixed) were added to this analysis: for the treatment of C2C12, 50 µg/mL STEE equivalents were applied for 6 h, and for the treatment of HepG2, 30 µg/mL STEE equivalents were applied for 24 h. As a result, the groups treated with the higher-concentration STEE (50 µg/mL in C2C12; 30 µg/mL in HepG2) and All mixed showed a statistically significant increase in PGC-1α expression both in C2C12 (approximately 1.4-fold; p < 0.05, respectively, Fig. 7C) and in HepG2 (approximately 1.45-fold; p < 0.01, respectively, Fig. 7D) compared to controls, indicating that running the PCR cycles with four biological replicates allowed us to confirm a significant upregulation of PGC-1α not only by STEE but also by its polyphenols.
Cumulatively, these results indicated that STEE and its polyphenols may induce physiological activation related to the transcriptional activation of PGC-1α.
Discussion
In the present study, we have demonstrated that STEE and its polyphenols could enhance mitochondrial activity in cultured myotubes and hepatocytes in vitro. Further, microarray-based omics analysis provides compelling evidence indicating that STEE could modulate an array of biological processes, physiological responses, and molecular pathways. Additionally, qPCR data validated that STEE and its polyphenols have the potential to bolster the activity of the pivotal mitochondrial master regulator, PGC-1α, within an in vitro context.
In a previous study, we documented the ability of STEE to facilitate astrocyte morphogenesis [14]. Notably, the mitochondrial activity, the cAMP pathway, and PGC-1α, which have been indicated as potential targets activated by STEE in the current research, have also been recognized as significant elements in the mechanisms underlying astrocyte stellation [34–36]. This suggests that these factors likely assume a central role in the regulatory effects elicited by STEE and its polyphenolic constituents. The findings of this study are summarized in Fig. 8.
The present study suggests activation of second messenger-mediated cascades by STEE, particularly in HepG2 cells: adenylyl cyclase (AC) activity, AC-activating receptor signaling, guanyl-nucleotide exchange factor activity, and cyclic nucleotide signaling as a second messenger were characteristic of the bio-phenomena enriched by the DEGs in STEE-treated HepG2 cells. cAMP is a ubiquitous second messenger and is generated from ATP via the action of AC [37]. cAMP activates its downstream effectors such as protein kinase A (PKA) or exchange protein activated by cAMP (EPAC) [38]. Upregulation of the ion-transporting ATPase genes ATP1A3 and ATP6V1C2 and of the cAMP downstream effector-related genes PRKACG, RAPGEF3 (EPAC-encoding), and PPP1R1A (Inhibitor-1; I-1 encoding [39]) strongly suggests that STEE exposure activated ATP generation and the downstream cAMP pathway in HepG2 cells. Also, PKA in the matrix induces phosphorylation of mitochondrial substrates including complex IV (cytochrome c oxidase; COX) [40–42]. Upregulation of the ubiquinone subcomplex gene NDUFA4L2 indicates that STEE may have a regulatory effect on the mitochondrial electron transport chain by modulating cAMP signaling [40,42–44]. Previously reported data showing that soluble AC can be localized in the mitochondrial matrix reinforce the hypothesis of activation of the intramitochondrial second messenger cascade by STEE [44].
The TFs associated with nuclear receptors were suggested to be promising elements related to STEE's regulatory effects. Our findings confirmed the increase of PGC-1α expression by STEE. PGC-1α is a transcriptional coactivator that interacts with PPARγ and is the master regulator of mitochondrial biogenesis, playing a key role in metabolic homeostasis [5–7]. In energy metabolism, PGC-1α and its downstream effectors activate the mitochondrial complexes and OxPhos [45,46]. The PPI analysis in this study yielded a module targeted by the cAMP response element-binding protein (CREB), which is one of several PGC-1α upstream regulators. Also, CREB can be a downstream effector of PKA [9,47]. Based on this background and the present data, STEE and its polyphenols could contribute to the cAMP-mediated activation of PGC-1α. Also, increased OxPhos activity could reduce mitochondrial reactive oxygen species (ROS) generation [48,49], and mitochondrial ROS levels could affect its fusion and fission dynamics [50]. The balance between mitochondrial fusion and fission is critical to its quality control [2,4]. Data in this study showed decreased levels of the fusion- and fission-related genes Mfn1 and Mtfr2 [51], suggesting that STEE contributes more to activating mitochondrial biogenesis than to mitochondrial dynamics. Furthermore, decreased PGC-1α is linked to cellular senescence with telomere shortening and DNA damage, and the upregulation of the telomerase reverse transcriptase (TERT)-encoding Tert suggests that STEE and its polyphenols may have a PGC-1α-mediated anti-cellular senescence effect [52,53].
The transcriptomic analysis of this study suggested that FA metabolism was also targeted by STEE's bioactivity. FAs are generally a mitochondrial energy source. Intracellular FAs are converted into fatty acyl-CoA by the acyl-CoA synthetase activity of fatty acid transport proteins (FATPs), which are a family of transmembrane transport proteins [54,55]. Fatty acyl-CoA that passes through the mitochondrial outer membrane and is transported to the matrix is converted to acetyl-CoA by β-oxidation through enzymatic reactions. Excessive cytoplasmic FAs due to imbalances in energy demand increase oxidative stress and disrupt mitochondrial respiration [56,57]. Upregulated FATP-encoding SLC27A genes suggest active FA transport in STEE-treated HepG2 cells [58]. However, data in this study showed downregulated FABP genes, which encode another type of FA transport protein [55,59], in HepG2 cells, and upregulated microsomal ω-oxidation cytochrome P450 Cyp4a genes, which can generate ROS through their catalytic cycle [60,61], in STEE-treated C2C12 myotubes. These observations suggest that STEE may affect fatty acid metabolism via minor pathways.

Fig. 8 The predicted diagram of mitochondria and its related pathway modulation by STEE and its polyphenols. STEE and its polyphenols may stimulate mitochondrial activity, the cAMP pathway, or transcription factor activity, especially PGC-1α. This activation could trigger the activation of other pathways such as fatty acid metabolism, inflammatory responses, and MAPK signaling
An environment of permanent oxidative stress could induce chronic inflammatory states [62]. Inflammation is the protective response to biological stimuli, and the signaling of cytokines, small soluble peptides, fundamentally affects the induction and progression of inflammation. Transcription of cytokines is stimulated by cellular pathways including c-Jun N-terminal kinase (JNK) and p38 MAPK, which can be activated by oxidative stress [63]. Skeletal muscles are a primary site affected by age-related inflammation, and contractile dysfunction due to TNFα, a major endocrine stimulus, and imbalanced ROS production causes a decrease of muscle mass, strength, and quality [64–66]. Also, complement proteins recruited by the immune system have been reported to play a key role in the pathogenesis of autoimmune muscle disorders such as inflammatory myopathies [67,68]. The classical pathway components including C1 and C2 have been reported to be biosynthesized in myoblast cell lines [69]. Our microarray data suggest that STEE could act in an inhibitory manner on these pathways in in vitro myotubes.
Interleukins (e.g., IL-6) are also key cytokines that mediate chronic inflammation and subsequent muscle atrophy [66]. In the microarray data of myotubes, Il2, Il3, and Il12 were detected as DEGs encoding class I cytokine receptor family-binding molecules, and Il19 and Il22 were detected as DEGs encoding class II cytokine receptor family-binding molecules. Anti-inflammatory interleukins have been suggested to be elevated in compensation for an increase in pro-inflammatory ones, indicating that not only pro-inflammatory but also anti-inflammatory interleukin expression may be involved in the progression of myositis [70]. Interestingly, expression of the pro-inflammatory cytokine IL-1β gene was significantly upregulated in both types of STEE-treated cells. Previous studies reported that p38 MAPK can be phosphorylated by IL-1R-mediated signaling and that PGC-1α stimulated by ROS shows an anti-oxidative stress effect through a negative feedback loop [71,72]. In addition, previous studies demonstrated that increases in PGC-1α protein levels occur in parallel with an increase in the p-p38/p38 ratio in C2C12 cells treated with a Gynostemma pentaphyllum plant extract and in HepG2 cells treated with Rosa roxburghii Tratt seed oil [73,74]. These reports and our previous findings [14] suggest that STEE and its polyphenols may induce an increase in PGC-1α protein levels along with an increase in the p-p38/p38 ratio. These findings encourage further exploration of the cytokine defense system alterations induced by STEE and its polyphenols and of the status of related signaling molecules [14].
Although the results of the Rh123 assays reported in this and our previous study suggested that 3CQA, 5CQA, and ISO may be responsible for mitochondrial stimulation [14], other studies have reported that 3FQA shows a strong relation to antioxidant-related proteins [75,76], suggesting that 3FQA may contribute to defense against oxidative stress rather than to direct stimulation of mitochondria or PGC-1α.
Several MAPK signaling-related terms were enriched by the DEGs, particularly in STEE-treated HepG2 cells. Among the MAPK cascade-related DEGs, the FGF genes showed upregulation by STEE. The FGF family comprises signaling molecules that stimulate various biological processes such as growth, differentiation, inflammation, and cellular senescence. The formation of a complex between FGF and its receptor FGFR phosphorylates the specific intracellular receptor domain and recruits other proteins like CRKL, which activates downstream pathways such as Ras/MAPK or phosphatidylinositol 3-kinase (PI3K)/Akt signaling [77,78]. The microarray data in this study suggest that STEE could regulate FGFR-mediated pathways in HepG2 cells, resulting in biological events such as cell growth. The data also showed upregulation of not only canonical FGF (FGF3 and FGF20) but also intracellular FGF (FGF12) transcript levels in STEE-treated cells, suggesting STEE could regulate channel activity [79].
The current study presents an integrated evaluation of transcriptional changes induced by STEE in both myotubes and hepatocytes, encompassing two different concentrations for each cell type. The findings suggest subtle differences in the activated signaling pathways between the two cell types, possibly attributable to inherent disparities in the nature of these cell lines originating from distinct animal species. For example, a previous study comparing gene expression profiles of mouse and human embryonic stem cells suggested that differences in cytokine expression between human and mouse stem cells were species-specific rather than differences in culture conditions [80]. Also, a prior study examining the varying susceptibility to statins, commonly employed for cardiovascular disease prevention, in C2C12 and HepG2 cells revealed distinctive responses. Specifically, statins reduced phosphorylation of Akt (protein kinase B) and mitochondrial respiration in C2C12 myotubes but did not impact Akt signaling in HepG2 cells [81]. Nonetheless, noteworthy changes induced by STEE were observed in both cell lines, with C2C12 displaying alterations in genes related to cAMP and MAPK, and HepG2 exhibiting changes in cytokine genes. Inflammation and molecular pathways such as cAMP and MAPK could be mutually regulated rather than independent, as indicated by the PPI analyses in this study. This suggests that regulatory mechanisms could be established through interactions among factors, including genes falling outside the specified cut-off values.
Nevertheless, the data predominantly pertain to transcript-level observations, prompting the need for evaluations at the protein or functional levels employing analytical techniques like flux analyzers. Furthermore, there is potential for an expanded inquiry aimed at elucidating with precision which specific compounds, including any synergistic effects arising from their combinations, target particular pathways and molecular mechanisms. It would also be valuable to validate the observed bioactivities within the context of stress conditions, such as oxidative stress. Finally, predicting the bioavailability and metabolism of polyphenols and their practical application in vivo remains challenging. Nonetheless, it is worth mentioning that prior clinical studies have documented the presence of unchanged CQAs and ISO in plasma following oral administration [82,83].
Conclusion
This study reported for the first time the regulation of mitochondrial activity in C2C12 myotubes and HepG2 hepatocytes following exposure to STEE and its polyphenols. An in-depth analysis of the microarray data has shed light on the multifaceted alterations in gene expression induced by STEE, implicating a wide array of biological processes such as mitochondrial function, FA metabolism, inflammatory cytokine responses, MAPK signaling, and cAMP signaling. Our findings further confirmed that STEE and its polyphenols stimulate the transcription of PGC-1α, a master regulator with pivotal roles in mitochondria. Taken together, these findings introduce STEE as a compelling candidate capable of imparting beneficial effects on both muscle and liver tissues. As we embark on future investigations, the impact of STEE and its polyphenols on muscle and liver functions in animal models will be further elucidated, potentially paving the way for their clinical applications.
Methods

Cells and cell culture
C2C12 mouse myoblasts and HepG2 human hepatocytes were obtained from American Type Culture Collection (ATCC, Manassas, VA, USA).
C2C12 myoblasts were cultured at 37 °C under 5% CO2 in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS; Gibco-Thermo Fisher, Grand Island, NY, USA) and 1% antibacterial penicillin/streptomycin (PS). To induce differentiation, the growth medium was replaced with a differentiation medium composed of DMEM supplemented with 2% horse serum (Gibco) and 1% penicillin/streptomycin when cells reached about 90% confluence. After differentiation for 6 days, C2C12 myotubes were treated with samples and then subjected to the experiments.
HepG2 cells were maintained in DMEM containing 10% FBS and 1% PS under 5% CO2 at 37 °C. After the cells reached about 90% confluence, they were treated with samples and then subjected to the experiments.
3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay
The MTT assay was performed as a viability assay. Mitochondrial reductase converts the water-soluble yellow MTT to the insoluble purple formazan, which allows cellular viability to be detected as changes in metabolic activity [84]. C2C12 myotubes or HepG2 cells cultured on collagen-coated 96-well plates were treated with the extract at a range from 5 to 100 µg/mL for 24 or 48 h. The extract was diluted in serum-free Opti-MEM (Gibco) before use. After removing the cultures, MTT solution (5 mg/mL) was added to each well for 3 h to let formazan crystals form, and then 10% SDS was added and incubated for 16 h in the dark to dissolve the crystals. The optical density (OD) was measured with a plate reader (Varioskan LUX, Thermo Fisher Scientific, Rockford, IL, USA) at 570 nm.
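Expressed as a fraction of the untreated control, the OD readings translate directly into relative viability. The following is a minimal sketch of that calculation only; the helper name and OD values are hypothetical and are not part of the published protocol.

```python
import numpy as np

def relative_viability(od_treated: np.ndarray, od_control: np.ndarray,
                       od_blank: float = 0.0) -> np.ndarray:
    """Background-corrected OD570 of treated wells as a percentage of the
    mean untreated control (hypothetical helper, not the authors' code)."""
    return 100.0 * (od_treated - od_blank) / (od_control.mean() - od_blank)

# Hypothetical triplicate OD570 readings for one STEE concentration
control = np.array([0.82, 0.79, 0.85])
treated = np.array([0.80, 0.81, 0.78])
print(relative_viability(treated, control))  # ~97% -> no apparent cytotoxicity
```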
Measurement of mitochondrial activity
Using Rh123, we evaluated intracellular mitochondrial activity. Rh123 is a green fluorescent dye that monitors the proton (H+) gradient in the mitochondrial intermembrane space, and the fluorescence intensity from intracellular Rh123 (i.e., MMP) is proportionate to mitochondrial activity [85,86].
C2C12 myotubes or HepG2 cells cultured on collagen-coated 96-well plates were treated with the extract (50 µg/mL) or its bioactive compounds (equivalent to the amounts contained in 50 µg/mL of the extract) for 6 or 24 h. The samples were diluted in serum-free Opti-MEM (Gibco) before use. After removing the cultures, the cells were incubated with Rh123 solution (10 µg/mL) for 20 min at 37 °C. After washing with PBS, the cells were lysed with 1% Triton-X solution for 30 min in the dark and then transferred into a black clear-bottom 96-well plate. The fluorescence intensity was measured with a plate reader (Varioskan LUX, Thermo Fisher Scientific) at λex/λem = 507 nm/529 nm.
RNA isolation
Total RNA was isolated from the cells using the RNeasy Plus Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Before experimentation, cells cultured on collagen-coated 6-well plates were treated with the extract or its compounds dissolved in serum-free Opti-MEM (Gibco) for 6 or 24 h. RNA concentration and quality were assessed by NanoDrop One (Thermo Fisher Scientific).
Microarray experiment
The microarray workflow was carried out using the GeneChip™ WT PLUS Reagent Kit and the GeneChip™ Hybridization, Wash and Stain Kit (Applied Biosystems-Thermo Fisher, Foster City, CA, USA), with the protocol provided by the manufacturer. Starting material of 100 ng RNA was reverse transcribed to synthesize single-stranded cDNA, and then the strands were fragmented and biotin-labeled. The fragmented, labeled strands were hybridized to probes on a Mouse or Human Clariom S Assay chip for 16 h at 45 °C in a GeneChip™ Hybridization Oven 645 (Affymetrix-Thermo Fisher, Santa Clara, CA, USA). The hybridized chip was washed and stained on the GeneChip™ Fluidics Station 450 and then scanned on the GeneChip™ Scanner 3000.
Microarray data analysis
Data processing was conducted using Transcriptome Analysis Console (TAC) version 4.0 and subjected to normalization employing the signal space transformation Robust Multiple Average (SST-RMA) algorithm. DEGs were identified by comparing two mRNA biological samples within each group, employing a significance threshold of p-value < 0.05 (determined through one-way between-subjects ANOVA). For C2C12 myotubes, DEGs were defined based on a log2-fold change (FC) cutoff greater than 1.2 or smaller than −1.2, while for HepG2 cells, DEGs were determined with a log2-FC cutoff exceeding 1.5 or falling below −1.5.
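As an illustration, DEG calling with the thresholds described above can be reproduced in a few lines of pandas; the file and column names below are hypothetical stand-ins for a TAC export and not the actual analysis code.

```python
import pandas as pd

def call_degs(table: pd.DataFrame, log2fc_cutoff: float,
              p_cutoff: float = 0.05) -> pd.DataFrame:
    """Select DEGs from a TAC-style export with hypothetical columns
    'gene', 'log2fc' and 'pvalue'; direction records up/down regulation."""
    degs = table[(table["pvalue"] < p_cutoff)
                 & (table["log2fc"].abs() > log2fc_cutoff)].copy()
    degs["direction"] = degs["log2fc"].map(lambda fc: "up" if fc > 0 else "down")
    return degs

# Cutoffs as stated above: 1.2 for C2C12 comparisons, 1.5 for HepG2
c2c12_degs = call_degs(pd.read_csv("STEE50-M_vs_Ctrl-M.csv"), log2fc_cutoff=1.2)
hepg2_degs = call_degs(pd.read_csv("STEE30-H_vs_Ctrl-H.csv"), log2fc_cutoff=1.5)
```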
GO terms over-represented by the DEGs were identified using the Metascape web tool (https://metascape.org/) [87]. The BioPlanet_2019 gene set library was used for clustering DEGs by biological pathways using the Enrichr online tool (https://maayanlab.cloud/Enrichr/) [88–90]. Term Frequency-Inverse Document Frequency (TF-IDF) values were calculated for each gene set, and the values were dimensionally reduced using the UMAP technique. The Leiden algorithm applied to the TF-IDF values identified the terms as clusters, and the plotted clusters were assigned colors. The Molecular Signatures Database (MSigDB) of the Gene Set Enrichment Analysis (GSEA) web tool (https://www.gsea-msigdb.org/gsea/index.jsp) and the GeneCards database (https://www.genecards.org/) were used to annotate and analyze the functions of the DEGs. PPI sub-networks were built from the DEGs based on the IMEx Interactome database [91]. The modules were extracted from the network using the Transcription Explorer command, which detects regulatory interactions between the target factors and the target genes. These processes were done with the NetworkAnalyst tool (https://www.networkanalyst.ca/NetworkAnalyst/home.xhtml) [92].
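A rough sketch of the TF-IDF/UMAP step is given below, under the assumption that each pathway gene set is treated as a "document" of gene symbols; the `gene_sets` dictionary is a toy stand-in for the BioPlanet_2019 library, and the subsequent Leiden clustering of the embedded points (e.g., on a k-nearest-neighbour graph) is omitted for brevity.

```python
import umap  # umap-learn
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical stand-in for the BioPlanet_2019 library: pathway -> member genes
gene_sets = {
    "PPAR signaling": ["PPARG", "CPT1A", "ACOX1", "FABP1"],
    "Inflammasomes": ["NLRP3", "CASP1", "IL1B", "PYCARD"],
    "Interleukin signaling": ["IL6", "IL1B", "JAK1", "STAT3"],
    "Nuclear receptors": ["PPARG", "ESR1", "NR3C1", "RXRA"],
    "cAMP signaling": ["PRKACG", "CREB1", "ADCY1", "RAPGEF3"],
    "MAPK cascade": ["MAPK1", "MAPK14", "RPS6KA1", "CRKL"],
}

# One "document" per pathway; TF-IDF down-weights genes shared by many pathways
docs = [" ".join(genes) for genes in gene_sets.values()]
tfidf = TfidfVectorizer().fit_transform(docs)  # sparse pathways-by-genes matrix

# 2D embedding; nearby points correspond to pathways with similar gene content
coords = umap.UMAP(metric="cosine", n_neighbors=3, random_state=0).fit_transform(tfidf)
print(dict(zip(gene_sets, coords.round(2))))
```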
Butterfly bar charts, code diagrams, and dot plots were drawn using the bioinformatics online tool (https://www.bioinformatics.com.cn/).
Microarray data were deposited at the Gene Expression Omnibus (GEO; accession no. GSE243411 for the C2C12 group dataset and GSE243412 for the HepG2 group dataset).
Real-time quantitative polymerase chain reaction (RT-qPCR)
cDNA synthesis was performed using SuperScript IV VILO Master Mix (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's protocol. The qPCR, with cycles of denaturation (15 s at 95 °C), primer annealing (1 min at 60 °C), and amplification with Taq DNA polymerase (1 min at 72 °C), was run on Applied Biosystems' 7500 RT-PCR System. The primers used were as follows: Ppargc1 (Mm01208835_m1), Gapdh (Mm99999915_g1), PPARGC1 (Hs00173304_m1), GAPDH (Hs02786624_g1). We chose Gapdh or GAPDH as the housekeeping control to normalize the cycle threshold (CT) values of the target transcript, calculated by the ΔΔCT method.
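For reference, the ΔΔCT normalization can be expressed in a few lines; the CT values below are hypothetical and serve only to illustrate the arithmetic behind a roughly 1.4-fold change.

```python
import numpy as np

def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCT method. ct_target/ct_ref: CT of the
    gene of interest and the housekeeping gene (e.g., Gapdh) in the treated
    sample; *_ctrl: the same in the untreated control."""
    dct_treated = np.asarray(ct_target) - np.asarray(ct_ref)
    dct_control = np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl)
    return 2.0 ** (-(dct_treated - dct_control.mean()))

# Hypothetical CT values for Ppargc1a vs Gapdh, n = 4 biological replicates
folds = ddct_fold_change([24.1, 24.0, 24.3, 24.2], [18.0, 18.1, 17.9, 18.0],
                         [24.6, 24.7, 24.5, 24.8], [18.0, 17.9, 18.1, 18.0])
print(folds.mean())  # ~1.4-fold upregulation relative to control
```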
Statistical analysis
All statistical analyses were performed using GraphPad Prism 8 (GraphPad, San Diego, CA, USA). Data were tested for normality by the Shapiro-Wilk test.
A one-way analysis of variance (ANOVA) followed by Dunnett's post hoc test was performed on normally distributed data to compare the experimental groups against a control group. The Kruskal−Wallis test followed by Dunn's post hoc test was performed on data that were not normally distributed.
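The same decision rule can be sketched in open-source tooling as follows; this is an illustration under stated assumptions, not the authors' Prism workflow. `scipy.stats.dunnett` requires SciPy ≥ 1.11, and Dunn's post hoc test (available in third-party packages such as scikit-posthocs) is omitted. The intensity values are hypothetical.

```python
from scipy import stats

def compare_to_control(groups: dict, control: str = "control", alpha: float = 0.05):
    """Shapiro-Wilk normality check on each group, then Dunnett's test
    against the control for normal data, or Kruskal-Wallis otherwise."""
    values = list(groups.values())
    if all(stats.shapiro(v).pvalue > alpha for v in values):
        treatments = [v for k, v in groups.items() if k != control]
        res = stats.dunnett(*treatments, control=groups[control])  # SciPy >= 1.11
        return "Dunnett", res.pvalue
    return "Kruskal-Wallis", stats.kruskal(*values).pvalue

# Hypothetical Rh123 intensities (% of control), n = 3 per group
data = {"control": [100, 98, 102], "STEE": [124, 120, 127], "No. 3": [122, 125, 121]}
print(compare_to_control(data))
```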
Fig. 1 Structures of the polyphenolic components of sugarcane top extract
Fig. 3 Microarray-identified gene expression profiles reflecting STEE treatment on C2C12 and HepG2. Volcano plots depict DEGs between A STEE (lower and higher concentrations)-treated C2C12 myotubes and nontreated control, and B STEE (lower and higher concentrations)-treated HepG2 hepatocytes and nontreated control. DEGs satisfying the criteria (p-value < 0.05, above the log2-transformed fold change thresholds) are shown as colored dots. C, D The distribution of DEGs by fold changes for each comparison is shown in the butterfly bar charts. E Venn diagrams showing overlapped and unique sets of DEGs between the groups. The blue circle denotes down- or up-regulated DEGs in STEE30-M compared to Ctrl-M, and the red circle denotes down- or up-regulated DEGs in STEE50-M compared to Ctrl-M. The code diagram displays the hallmark gene sets related to the 174 commonly overlapped DEGs. F The blue circle denotes down- or up-regulated DEGs in STEE15-H compared to Ctrl-H, and the red circle denotes down- or up-regulated DEGs in STEE30-H compared to Ctrl-H. The code diagram displays the hallmark gene sets related to the 462 commonly overlapped DEGs
Fig. 4 Enriched GO terms reflecting STEE-induced bio-phenomena in C2C12 and HepG2. GO analysis revealed enriched biological process (BP), cellular component (CC), and molecular function (MF) gene sets by the DEGs in STEE-treated C2C12 myotubes and HepG2 hepatocytes compared to their respective nontreated control groups. A–C Dot plots showing significantly enriched GO terms of BP, CC, and MF by the DEGs between STEE-treated C2C12 myotubes and control. D–F Dot plots showing significantly enriched GO terms of BP, CC, and MF by the DEGs between STEE-treated HepG2 cells and control. The size of the circle denotes the number of genes. The negative log10 of the p-value is represented by the color
Fig. 5 Pathways on the UMAP plots related to the transcriptomic modulation by STEE. Scatterplots showing similar pathway gene set clusters identified through the UMAP dimensionality reduction technique using the BioPlanet 2019 gene set library. A Pathway clusters enriched by the DEGs in STEE-treated C2C12 myotubes (both concentrations). B Pathway clusters enriched by the DEGs in STEE-treated HepG2 hepatocytes (both concentrations). The points (pathway terms) are gathered and color-coded by similarity or relevance. The size and the darkness of the circle denote the degree of enrichment
Fig. 7 STEE regulates transcription linked with transcription factor activity. A A module from PPI networks of DEGs between STEE50-M and Ctrl-M, which are targeted by the TFs 'Pparg' and 'Ppargc1a' (TRRUST database). B A module from PPI networks of DEGs between STEE30-H and Ctrl-H, which are targeted by the TFs 'PPARG' and 'CREB1' (ENCODE database). The red nodes denote up-regulated DEGs and the blue nodes denote down-regulated DEGs. Color shading of the nodes indicates the relative intensity. The gray nodes represent proteins. C, D PGC-1α mRNA levels are expressed as relative values in STEE- or polyphenol mixture-treated C2C12 myotubes compared to control and in STEE- or polyphenol mixture-treated HepG2 cells compared to control. Values assessed by qPCR are shown in white (n = 4, except for two outliers in HepG2), and relative values assessed by the microarray are shown in green (n = 2). For qPCR data, error bars depict mean ± SEM, and one-way ANOVA with Dunnett's post hoc test was performed to assess statistical significance: *p < 0.05, **p < 0.01. N.D. means no data
Table 1 Combinations of the compounds used for the treatment in this study
A collection of read depth profiles at structural variant breakpoints
SWaveform, a newly created open genome-wide resource for read depth signal in the vicinity of structural variant (SV) breakpoints, aims to boost development of computational tools and algorithms for discovery of genomic rearrangement events from sequencing data. SVs are a dominant force shaping genomes and substantially contributing to genetic diversity. Still, there are challenges in reliable and efficient genotyping of SVs from whole genome sequencing data, thus delaying translation into clinical applications and wasting valuable resources. SWaveform includes a database containing ~7 M read depth profiles at SV breakpoints extracted from 911 sequencing samples generated by the Human Genome Diversity Project, generalised patterns of the signal at breakpoints, an interface for navigation and download, as well as a toolbox for local deployment with users' data. The dataset can be of immense value to bioinformatics and engineering communities as it empowers smooth application of intelligent signal processing and machine learning techniques for discovery of genomic rearrangement events and thus opens the floodgates for development of innovative algorithms and software.
Background & Summary
Structural variants are genomic alterations that encompass at least 50 nucleotides 1 . The term refers to a variety of events which include deletions, duplications, insertions, inversions, translocations and more complex rearrangements usually associated with mobile genetic elements 2 . Furthermore, SVs that change the number of copies of a DNA sequence are often defined as "copy number variants" (i.e., CNVs). Typically, SVs are single events; however, in certain situations frequently occurring in cancer, they may pile up, resulting in large, complex, entangled combinations of alterations also known as chromosome shattering or chromothripsis 3,4 . Genome structural variation is a potent source of genetic diversity and may have a profound effect upon human health, as SVs are implicated in both germline and somatic disease ranging from developmental and neurological disorders to a wide spectrum of cancers 2,5-9 . SVs hold great potential as molecular biomarkers to guide precision medicine 10-13 .
Robust and reproducible structural variation discovery still poses significant computational and algorithmic challenges 14,15 . Although we are getting close to resolving structural variation in personal genomes with the accuracy required for translational research 5,16,17 , faultless detection of SVs in many cases (e.g., insertions, CNV gains) 18 remains notoriously difficult. Recent advances in technology, such as long-read sequencing, provide plenty of good reasons for cautious optimism on reaching a reasonable accuracy of SV discovery 19-23 . Nevertheless, the high cost and the low throughput of this strategy currently limit its general use. The short-read sequencing routinely used in a clinical setting and in nation-wide medical genetics initiatives makes the discovery, genotyping and characterisation of the variants difficult. SV discovery algorithms designed to process short sequencing fragments rely on uniformity and evenness of the sequencing coverage profile (i.e., the number of reads aligned to a genomic region or nucleotide), as well as read depth information, for accurate detection of structural variants 18 . However, as the sequencing coverage signal is discontinuous, heterogeneous, and irregular, often even erratic, existing SV detection tools still generate highly discordant results 24-26 .
Over the course of the past decade SV discovery algorithms have generally explored two major strategies for variant detection, namely they either exploit read depth variability or base their discovery strategy on analysis of discordant alignment features. At present no single computational algorithm can detect SVs of all types and sizes in a robust, reliable manner. Moreover, as a rule, an approach which combines calls generated by several detection methods is required to achieve satisfactory performance 24,27-31 .
In this context, approaches exploring properties of the depth of coverage (DOC) signal hold tremendous potential, especially as a) the relevant methodologies are applicable to data produced with both short- and long-read sequencing protocols, and b) the signal should be sufficient for discovery of the majority of SV classes regardless of their size and breakpoint location, as long as they distort it. The design of such tools calls for the development of open access resources that aggregate and integrate signal coverage profiles in the vicinity of SV breakpoints, which so far have not been available.
Here, for the first time we present a detailed catalogue of various waveforms and patterns observed in the sequence coverage signal associated with different types of SVs, as well as a toolkit for coverage data management and analytics. SWaveform provides easy access to approximately 7 M DOC signal profiles extracted from 911 human sequencing samples generated by the Human Genome Diversity Project (HGDP) 6,32 . A portable database architecture and the provided API facilitate easy and seamless on-premises deployment encompassing data processing routines on all levels, i.e., from raw aligned data to visual representation of coverage profiles (shown in Fig. 1a). We also propose a new binary format to manage sequence coverage data. Finally, as motif discovery has been successfully applied throughout a large range of domains such as medicine, finance, robotics and DNA analyses, we designed an algorithm for motif extraction from the coverage signal. Taken together, SWaveform will be instrumental for in-depth studies of signal properties with an extensive body of dedicated algorithms commonly used in the signal processing domain for feature extraction, pattern discovery and anomaly detection. In addition, a collection of signals and patterns could facilitate the development of strategies for filtration of SVs detected by various callers and meta-callers. Also, the SWaveform framework could be deployed locally to enable exploration of any sequencing data in a clinical or research context. Overall, the developed catalogue and accompanying toolkit form an indispensable resource that will facilitate development and honing of computational tools for discovery of specific genomic rearrangement events. It is expected that SWaveform will be of immense value to the machine learning and biomedical communities.
Methods
Data management. We used HGDP sequencing data 32 which includes 911 whole-genome sequenced human samples in CRAM format 33 with an average depth of coverage of about 30x. Aligned sequencing data was downloaded from the International Genome Sample Resource ftp site (http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/HGDP/data/). Structural variation data generated by the consortium contains annotated breakpoints for the following types of events: CNV gain and loss, insertions, inversions, deletions and duplications 6 , which amounts to ~15 M DOC profiles. The corresponding set of structural variants (SVs) encompassing 152,841 variants was obtained from the HGDP SV data repository (ftp://ngs.sanger.ac.uk/production/hgdp/hgdp_structural_variation/).
We developed a software suite to extract DOC profiles in the vicinity of SV breakpoints (Table 1). The default size of the region surrounding the breakpoint is imposed by the read length typically used in short-read sequencing experiments and comprises ±256 bp. Importantly, the parameter can be adjusted to accommodate long-read sequencing protocols or to mitigate the consequences of imprecise breakpoint detection. SVs shorter than the window size, but exceeding 20 bp in length, are labelled as "special" (i.e., spSV). Importantly, all SVs shorter than 20 bp are omitted. Furthermore, to speed up coverage data processing and optimise storage we introduced a simple lossless binary format for recording of coverage values (BCOV). The format was purposely developed to ensure fast and efficient programmatic access to the DOC data, which is encoded as follows. For each position on a chromosome a numeric value corresponding to the read coverage depth is stored in two bytes, saved in a binary file in sequential order. Thus, the maximal supported coverage value is bounded by 2^16. If the coverage exceeds the limit, the value is capped to the maximum of 65,535 reads. The size of an average BCOV file generated for the human genome amounts to about 5.5 Gb.
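The described encoding is simple enough to sketch in a few lines of NumPy. Byte order is not specified in the text, so little-endian is assumed here; this is an illustration of the format as described, not the project's reference implementation.

```python
import numpy as np

def write_bcov(depths: np.ndarray, path: str) -> None:
    """Serialise per-base read depth as consecutive unsigned 16-bit
    integers, one value per genomic position; values above 65,535
    (the two-byte ceiling) are clipped."""
    np.minimum(depths, 65535).astype("<u2").tofile(path)

def read_bcov(path: str, start: int, end: int) -> np.ndarray:
    """Random access to positions [start, end): seek to 2*start bytes
    and read (end - start) two-byte values."""
    with open(path, "rb") as fh:
        fh.seek(2 * start)
        return np.frombuffer(fh.read(2 * (end - start)), dtype="<u2")

depths = np.random.poisson(30, 10_000)  # stand-in ~30x coverage track
write_bcov(depths, "chr1.bcov")
assert (read_bcov("chr1.bcov", 5_000, 5_010) == depths[5_000:5_010]).all()
```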
The genomic data from the CRAM files was processed with the mosdepth tool 34 to extract a numeric value reflecting sequencing read coverage depth for each genome position and converted to BCOV. The mosdepth program is run with the default set of parameters to exclude reads characterised by a combination of bitwise FLAGs 1796. In essence, this results in the removal of the following read categories: segment unmapped, secondary alignment, not passing QC, PCR or optical duplicate. Next, the breakpoint coordinates of copy number variants and of the following SVs, namely deletions, insertions, inversions and duplications, were obtained from the corresponding VCF files. We further filtered VCF records to include only those variants distinguished with a PASS flag (i.e., Manta FT flag). Finally, the extracted profiles, breakpoint loci and sequencing sample metadata (7,314,329 entries in total) were stored in a relational database (SQLite) to facilitate data search, retrieval and visualization (see Data Records section and Fig. 1a,c).
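The excluded FLAG value decomposes into exactly the four read categories listed above, and once per-base depths are in BCOV, extracting a breakpoint-centred window reduces to a constant-time seek. The sketch below illustrates both points; the helper name is hypothetical and little-endian storage is assumed, as in the previous snippet.

```python
import numpy as np

# SAM flag 1796 = 4 | 256 | 512 | 1024: unmapped, secondary alignment,
# failing QC, and PCR/optical duplicate reads are excluded
EXCLUDE_FLAGS = 0x4 | 0x100 | 0x200 | 0x400
assert EXCLUDE_FLAGS == 1796

def breakpoint_profile(bcov_path: str, breakpoint: int, flank: int = 256) -> np.ndarray:
    """Extract the +/-flank bp DOC window around a breakpoint from a
    per-chromosome BCOV file via memory mapping (hypothetical helper)."""
    depths = np.memmap(bcov_path, dtype="<u2", mode="r")
    return np.asarray(depths[breakpoint - flank:breakpoint + flank])
```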
Motif discovery. A profound variability of waveforms associated with different classes of SVs has long impeded the reliability and reproducibility of the discovery algorithms. We, therefore, sought to identify repeated patterns found within DOC profiles (i.e., motifs) and characterise conserved structures in the signal.
Briefly, the procedure for motif discovery encompasses the following steps (Fig. 1b). First, the optimal number of representative clusters containing similar DOC signals, in terms of the shape of the associated waveforms within the annotated SV classes, is estimated. This step is run only once for every combination of SV type/breakpoint (i.e., left or right, if applicable). In the second step, the estimated number is used to cluster DOC profiles intrinsic to each of the aforementioned combinations. Next, to identify and rank motifs within each cluster, we use a K-nearest neighbour approach. Due to the large volume of data, the latter step is run repeatedly on bootstrap samples from the original data. Finally, the motif groups emerging from each of the bootstraps are iteratively merged to pinpoint the most predominant one for each of the clusters. The details of every step of the procedure are outlined in the paragraphs below.
Although structural variation has been in the spotlight of genomic research in the last decade, the multiformity and diversity of signal profiles attributed to specific types of SVs have never been properly characterised. Furthermore, as the structural variation data produced by the HGDP is not curated, it is highly likely that an a priori unknown number of false calls is present in the data set. To identify predominant waveforms characteristic of annotated SVs in the HGDP data, the coverage profiles attributed to specific classes of SVs were compressed, normalised and clustered with the dynamic time warping (dtw) distance 35 . Bootstrap runs (i.e., sampling with replacement) used to estimate the optimal number of clusters associated with each SV type have demonstrated that partitioning the data into more than two subsets is not justified (see Supplemental Figs. 1,2). In the case that the most representative cluster (i.e., containing more than 66% of DOC profiles) can be identified, the motif discovery is restricted to it. Alternatively, the motif discovery is performed in both clusters. The latter scenario is likely to encompass those instances where the performance of the SV discovery algorithms is questionable and, consequently, the detected breakpoints are ambiguous. This particularly applies to CNV gains as discussed below.
The motif discovery poses a significant computational challenge, as the total number of DOC profiles in the HGDP dataset amounts to ~7 M and the extracted profile length is 512 bp. We were, therefore, impelled to carry out the motif search in dataset chunks associated with each type of SV, genotype and corresponding breakpoint (i.e., left or right). The bootstrapping encompasses 360 subsets comprising 960 signal profiles for every SV/breakpoint/genotype combination. Thus, for every data subset, compressed DOC profiles were clustered with the K-Means algorithm (dtw distance) into two clusters (as justified in the above paragraph) to reveal predominant waveforms present in the data (see Fig. 1b). Due to combined imperfections of both read alignment and SV discovery algorithms, the DOC profiles in the vicinity of a breakpoint are highly variable in shape and form, meaning that the signal can be either stretched or shifted. To account for variability, we apply SAX (Symbolic Aggregate approXimation) transformation 37 to the signal using an alphabet of 24 symbols. Next, overlapping sliding window-based segmentation (32 data points) was applied to the SAX-transformed signal. Finally, to discover the most significant motifs from the profiles, the resulting segments are fed into the modified KNN_Search algorithm 38 which partitions them into similarity groups. Importantly, the KNN_Search algorithm was modified to facilitate efficient motif discovery (as discussed below in the Technical Validation section). The KNN_Search method yields a ranked list of similarity groups characteristic of a given cluster. The ranking reflects the group's prominence. Finally, the motifs generated as a result of the bootstrapping are iteratively merged (using SAX distance-based thresholding) and averaged to reveal the most predominant one for each cluster (Figs. 2,3). To get a full understanding of the computational approach adopted, please refer to the source code in the Code Availability section.
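A sketch of the SAX-and-segmentation step is given below; the per-point discretisation (i.e., no PAA averaging), the window step and the stand-in profile are assumptions made for illustration, as the exact parametrisation lives in the project's source code.

```python
import numpy as np
from scipy.stats import norm

def sax(signal: np.ndarray, alphabet_size: int = 24) -> np.ndarray:
    """Discretise a z-normalised signal into `alphabet_size` symbols using
    Gaussian breakpoints (standard SAX; PAA step omitted for simplicity)."""
    z = (signal - signal.mean()) / (signal.std() + 1e-9)
    cuts = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])  # 23 breakpoints
    return np.searchsorted(cuts, z)  # symbol indices 0..23

def windows(symbols: np.ndarray, size: int = 32, step: int = 1):
    """Overlapping sliding-window segmentation of the symbol sequence."""
    return [symbols[i:i + size] for i in range(0, len(symbols) - size + 1, step)]

profile = np.random.poisson(30, 512).astype(float)  # stand-in DOC profile
segments = windows(sax(profile))                    # inputs to KNN_Search
```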
The pattern in itself is an ample source of information on aberrations in the signal, which could arguably be used to draw valuable conclusions on the performance of the existing algorithms for SV discovery and on the waveforms characteristic of various types of SVs.
In particular, from our findings it follows that regardless of the genotype, the breakpoints corresponding to copy number gains are much harder to localise with precision, as the patterns associated with their SV breakpoints are blurred and exhibit a gradual increase in signal intensity as compared to the clear step-wise pattern observed in the case of duplications. Strangely, in structural variants annotated as CNV gains, irrespective of the beginning or end of the interval (i.e., left or right breakpoint) and genotype, two motifs with opposite trends in the coverage signal are observed (see Figs. 2,3). Moreover, each of these patterns is supported by relatively similar proportions of DOC profiles. Considering these observations, we may hypothesise that the segmentation-based approach to boundary determination and possibly varying signal amplitude at the variant start (or end) locus confound CNV discovery software and result in an ambiguous boundary attribute (e.g., left or right) of a variant. Besides that, the motif discovery did not produce any convincing result in the event of insertion, which may indicate that the distortions of the DOC signal in the vicinity of the breakpoint do not go beyond superficial alterations (Figs. 2,3). Interestingly, we have detected two motifs coupled to breakpoints related to both hetero- and homozygous inversions. In fact, these clusters describe signal behaviour at the inversion boundaries (Supplemental Fig. 4), although admittedly the motif is less pronounced for the left boundary of the homozygous inversion. The latter is likely to be a consequence of the relatively small size of the data, as the number of homozygous inversion profiles included in the analyses amounts to 11,065 entries. Leaving aside the genotype data triples the number of profiles and allows for generation of a distinct SAX model (see Supplemental Fig. 4).
An exploratory analysis of motifs generated with spSVs demonstrates that the method typically captures the signal around the breakpoint. As expected, the varying length of the variant downstream of the breakpoint clearly affects the ability to recapitulate the signal shape.
On the whole, for both homozygous and heterozygous variants the best motifs are detected for the following classes of events: duplications, deletions, CNV loss and, possibly, inversions. It is reasonable to suggest that this result reflects at least two factors, namely the precision with which the breakpoints of the respective variants are discovered, and the distinct manifestation of the related waveforms.
The resulting motifs in SAX format, stored in the SWaveform database, may prove useful for a) the development of novel, improved approaches for breakpoint detection and b) the visualisation of repeated patterns in the DOC signal.
Data Records
Data presented in this work can be accessed directly at the Zenodo repository 39-41 as an archive in ZIP format, which includes an SQLite dump, DOC signal profiles in BCOV format and the accompanying metadata in various formats. The database schema is presented in Fig. 1c and on the SWaveform website at swaveform.compbio.ru/description.
Technical Validation
In this study, various approaches were applied to validate the reliability, integrity and quality of the raw and transformed data, as well as of the data processing.
The HGDP provides high-quality data processed in accordance with SOPs, as described in Almarri et al. 6. The DOC values were extracted from the CRAM files and converted into the lossless BCOV format. The breakpoint coordinates of the structural variants characterised by the aforementioned consortium were extracted from the provided VCF files and filtered to retain only variants annotated with the PASS flag. The DOC profiles in the 512 bp neighbourhood centred on each filtered breakpoint were then extracted for samples carrying a homo- or heterozygous genotype of the variant.
Signal compression and clustering. In each bootstrap run the database was subsetted to select 100 random signal profiles associated with a specific type of SV. To speed up the clustering procedure, signal profiles were compressed using average pooling in windows 8 base pairs long. The compressed profiles were further normalised (i.e., scaled to zero mean and unit variance within a sequenced sample) and clustered with the K-Means algorithm (as implemented in the tslearn package 42) using two different randomly selected seeds (yielding cluster sets C0 and C1). Concurrently, the same group of signal profiles was clustered without compression using the same seed as in C0, resulting in cluster set C2. The percentage of profiles that retained their cluster association was then computed between cluster sets C0 and C1, as well as between C0 and C2.
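As a rough sketch of this consistency check (assuming tslearn's TimeSeriesKMeans; the agreement measure accounts for arbitrary label permutation between runs, a detail the text leaves implicit):

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

def pool(profiles, w=8):
    """Average pooling in non-overlapping windows of w base pairs."""
    n, length = profiles.shape
    return profiles[:, :length - length % w].reshape(n, -1, w).mean(axis=2)

def znorm(profiles):
    """Scale each profile to zero mean and unit variance."""
    mu = profiles.mean(axis=1, keepdims=True)
    sd = profiles.std(axis=1, keepdims=True) + 1e-12
    return (profiles - mu) / sd

def cluster(profiles, seed):
    km = TimeSeriesKMeans(n_clusters=2, metric="dtw", random_state=seed)
    return km.fit_predict(profiles)

def agreement(a, b):
    """Fraction of profiles keeping their cluster, up to label swapping."""
    return max(np.mean(a == b), np.mean(a != b))

profiles = np.random.rand(100, 512)        # stand-in for one bootstrap subset
x = znorm(pool(profiles))
c0, c1 = cluster(x, seed=0), cluster(x, seed=1)
c2 = cluster(znorm(profiles), seed=0)      # same seed, no compression (slow: full DTW)
print("seed effect:", agreement(c0, c1), "compression effect:", agreement(c0, c2))
```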
Clustering procedures using both compressed and uncompressed signal profiles were repeated 80 times to generate distributions reflecting clustering consistency. The resulting distributions of the percentage of profiles that retained their cluster association were compared using the one-sided two-sample Kolmogorov-Smirnov test, as shown in Supplemental Fig. 3. Our numerical experiments clearly demonstrate that signal compression has a much smaller effect on cluster consistency than seed selection, indicating that the impact of signal compression on the clustering results is minor.

Modified KNN_Search algorithm. The modification relies on a linearization concept: it is reasoned that when two subsequences are close in the (multidimensional) SAX space, these elements are also close in the 1D space onto which they are projected. The linearization is achieved by selecting a reference node and ordering all other data points according to their distances from it. Neighbourhood expansion is controlled through a threshold imposed on the distance between the reference point and a prospective group member. This allows for efficient neighbour grouping in 1D space and reduces the search space for the time-consuming SAX distance calculations. To further improve the computational efficiency of the method and scale down the SAX search space, we introduced a second reference node and, consequently, an additional 1D space (see Fig. 4). Furthermore, instead of a randomly chosen reference node, we opt to select two fixed, distinct reference points, namely SAX-transformed sine and cosine functions on a fixed interval. The SAX distance within a prospective neighbourhood is computed if, and only if, the two nodes are close in each of the one-dimensional spaces. Thus, the optimization is achieved by narrowing down the search space in which SAX distance estimation is performed.
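A minimal sketch of this two-reference pruning idea (toy symbol-wise distance and made-up thresholds, not the repository implementation): by the triangle inequality, |d(x, r) - d(y, r)| <= d(x, y) for any reference r, so a candidate pair can be discarded whenever its projections onto either reference axis lie too far apart, and the expensive full distance is computed only for the survivors.

```python
import numpy as np

def sym_dist(a, b):
    """Toy SAX-word distance: sum of absolute symbol differences."""
    return np.abs(a - b).sum()

def group_neighbours(words, r1, r2, eps):
    """Pairs of SAX words with full distance <= eps, pruned via two references."""
    d1 = np.array([sym_dist(w, r1) for w in words])  # projection onto axis 1
    d2 = np.array([sym_dist(w, r2) for w in words])  # projection onto axis 2
    order = np.argsort(d1)                           # 1D ordering along axis 1
    pairs = []
    for idx, i in enumerate(order):
        for j in order[idx + 1:]:
            if d1[j] - d1[i] > eps:       # sorted axis: every later j fails too
                break
            if abs(d2[i] - d2[j]) > eps:  # the second axis prunes further pairs
                continue
            if sym_dist(words[i], words[j]) <= eps:  # rare, expensive check
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
words = list(rng.integers(0, 24, size=(200, 8)))  # 200 random 8-symbol SAX words
r1 = np.zeros(8, dtype=int)                       # fixed reference word 1
r2 = np.full(8, 23)                               # fixed reference word 2
print(len(group_neighbours(words, r1, r2, eps=20)))
```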
Usage Notes
The SWaveform resource can be accessed through a graphical user interface (GUI) at swaveform.compbio.ru. The interface provides the ability to visualise profiles, search by genomic coordinates, and filter by the ethnic group labels provided by the HGDP, SV class, genotype and breakpoint type. The interface also provides a chromosome browser. The user-facing part of the interface (front-end) is implemented with React.js and D3.js, while the server part (back-end) is written in Python with the Flask framework. Examples of signal profile visualisation, both for individual samples and for averaged profiles corresponding to a particular type of structural variation, are shown in Fig. 5. In addition, an application programming interface (API) to the database was developed, allowing direct access to the data from user programs written in Python or PHP.
The predominant motifs associated with a given SV type/breakpoint combination are provided as SAX transformations, which enables scanning of DOC profiles encoded in BCOV format for possible anomalies and aberrations. This solution is implemented in C and Python and is available as part of the software suite accompanying the resource.
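Conceptually, such a scan slides a window along a coverage profile, SAX-encodes it and flags positions whose word is close to a stored motif. The sketch below is a self-contained toy version with a simple symbol-wise distance; the actual toolkit operates on BCOV files, and its distance function and thresholds may differ.

```python
import numpy as np
from scipy.stats import norm

BP = norm.ppf(np.linspace(0, 1, 25)[1:-1])  # 24-symbol SAX breakpoints

def sax_word(x, n_segments=8):
    x = (x - x.mean()) / (x.std() + 1e-12)
    return np.searchsorted(BP, x.reshape(n_segments, -1).mean(axis=1))

def scan_profile(profile, motif, window=32, step=4, thresh=10):
    """Offsets whose SAX word lies within `thresh` (symbol distance) of `motif`."""
    hits = []
    for s in range(0, len(profile) - window + 1, step):
        word = sax_word(profile[s:s + window], n_segments=len(motif))
        if np.abs(word - motif).sum() <= thresh:
            hits.append(s)
    return hits

# A step-shaped motif should fire near the simulated breakpoint at position 256.
motif = np.array([3, 3, 3, 3, 20, 20, 20, 20])
profile = np.concatenate([np.full(256, 30.0), np.full(256, 60.0)])
print(scan_profile(profile, motif))  # -> [240]
```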
To showcase the resource in action, we provide two Snakemake workflows (see the Code Availability section for details). The first encompasses all steps required to deploy the resource from a user's own data: it generates all the files necessary to set up a local database of DOC profiles and extracts coverage signals in the vicinity of breakpoints to build a set of predominant motifs associated with a given SV type/breakpoint combination. The second workflow is a prototype demonstrating a practical implementation of a simple pattern search in the data, to facilitate anomaly detection in the DOC signal. Both workflows use moderate-sized datasets available on Zenodo 40,41.
The database population workflow is shown in Fig. 1a.
Code availability
A software suite accompanying the resource is available at https://github.com/latur/SWaveform. The repository contains a) scripts for database and GUI deployment on the SQLite platform and b) a toolkit for DOC profile and SV data processing and management. The toolkit includes scripts for generating DOC profiles corresponding to breakpoint loci from alignment files (SAM, BAM or CRAM format) and annotated VCF files, as well as for DOC profile conversion into BCOV format. In addition, we provide tools for profile clustering and motif discovery, and a script for subsequent motif detection in DOC profiles.
Statistical analysis of fireballs: Seismic signature survey
Fireballs are infrequently recorded by seismic sensors on the ground. When they are recorded, they are usually reported as one-off events. This study is the first bulk seismic analysis of the largest single fireball data set, observed by the Desert Fireball Network (DFN) in Australia in the period 2014-2019. The DFN typically observes fireballs from cm-m scale impactors. We identified 25 fireballs in seismic time series data recorded by the Australian National Seismograph Network (ANSN). This corresponds to 1.8% of surveyed fireballs, in the kinetic energy range of 10^6 to 10^10 J. The peaks observed in the seismic time series data were consistent with calculated arrival times of the direct airwave or ground-coupled Rayleigh wave caused by shock waves generated by the fireball in the atmosphere (either due to fragmentation or the passage of the Mach cone). Our work suggests that identification of fireball events in seismic time series data depends both on the physical properties of a fireball (such as its energy and entry angle into the atmosphere) and on the sensitivity of the seismic instrument. This work suggests that fireballs are likely detectable within 200 km direct air distance between a fireball and a seismic station, for sensors used in the ANSN. If each DFN observatory had been accompanied by a seismic sensor of similar sensitivity, 50% of surveyed fireballs could have been detected. These statistics justify future consideration of expanding the DFN camera network into the seismic domain.
INTRODUCTION
When a meteoroid enters the atmosphere, it experiences aerodynamic drag and dynamic pressure. The atmosphere slows meteoroids down and in most cases they break up and vaporize (Ceplecha & Revelle, 2005). The break-up occurs when the dynamic pressure exceeds the meteoroid's compression strength (Cevolani, 1994; Stevanović et al., 2017). Shock waves can be generated in the atmosphere by (Figure 1):
• the hypersonic flight forming a Mach cone,
• a discrete fragmentation event during the meteoroid's trajectory,
• a catastrophic final airburst,
• physical impact on the ground (extremely rare).
The Mach angle within the Mach cone is expected to be negligibly small, because the impact speed is much larger than the speed of sound in air. Therefore, the shock waves generated during a hypersonic fireball entry are expected to propagate almost perpendicular to the trajectory (Figure 1a). Fragmentation of the meteoroid can also create shock waves; these propagate with no preferred direction and can thus be assumed to propagate omnidirectionally (Figure 1b). If the impactor, or parts of it, survives the atmospheric path and hits the ground (Figure 1c), seismic waves in the ground can be generated by the impact itself (Tancredi et al., 2009). The atmospheric shock waves can couple with the ground and form body and surface waves (Figure 1d) (Brown et al., 2003; Stevanović et al., 2017; Karakostas et al., 2018). The arrival times of the different seismic waves differ as they travel at different speeds through different media (ground or air), which allows for their classification. Airwaves generated by the Mach cone (Figure 1e) will arrive last, as they travel slowest (at the speed of sound) through the air directly between the fireball and the sensor on the ground.
For larger (bolide and cratering) events, a variety of seismic waves has been recorded. For example, the seismic signals caused by the 20-m diameter asteroid that exploded over Chelyabinsk, Russia in 2013 (estimated to have carried 10^15 J at airburst (Emel'yanenko et al., 2013)) were identified as P and S body waves, ground-coupled airwaves and Rayleigh waves (Tauzin et al., 2013). P and S seismic waves were also seen when the 13.5-m diameter crater formed near Carancas, Peru in 2007 (Le Pichon et al., 2008; Tancredi et al., 2009). The large Neuschwanstein meteorite (estimated to have had 10^12 J initial source energy) (Oberst et al., 2004) caused seismic activity through direct airwaves and ground-coupled Rayleigh waves at seismic stations within a few hundred km distance (Edwards et al., 2008).

Figure 1. Shock wave generation during a fireball event: (a) shock waves generated by the Mach cone travel almost perpendicular to the trajectory of the object and rapidly decay from a non-linear to a linear wavefront; (b) a fragmentation-induced airburst causes shock waves that travel omnidirectionally; (c) seismic waves originating from the impact itself; (d) Rayleigh waves formed by coupling between airwaves and the ground; and (e) an air disturbance directed at the seismic station (Brown et al., 2003; Revelle et al., 2004). Figure redrawn from Edwards et al. (2008).
These impact examples were all significantly larger than the fireballs observed daily by the Desert Fireball Network (DFN) in Australia. Fireballs detected by the DFN have energies in the range of 10^3 to 10^12 J at atmospheric entry (Devillepoix et al., 2019). Meteorite-dropping fireballs are at the upper end of the energy range observed by the DFN.
The DFN is the world's largest fireball camera network, located in the Australian outback and consisting of 52 observatories covering an area of 3 million km^2. It aims to detect fireballs, recover meteorites and calculate their orbits (Devillepoix et al., 2019, 2018). The observatories are optimised to image objects with a brightness between 0 and -15 magnitudes, which corresponds to sizes between 0.05 and 0.5 m (Devillepoix et al., 2019). In this work, we perform a bulk seismic analysis of the largest single data set of terrestrial fireballs, obtained by the DFN in the period from 2014 to 2019, by systematically searching for seismic signals occurring in the time window and proximity of fireball trajectories.
Unlike other studies that used data from images (Beech et al., 1995; Brown et al., 1994; Spurný et al., 2012), seismic stations (Brown et al., 2003; Devillepoix et al., 2020; Koten et al., 2019) and infrasound (El-Gabry et al., 2017) to calculate the orbits and energies of meteors, this is the first study that uses information about the trajectory and timing of fireballs from a large dataset to back-trace any impact-related seismic activity. We investigate the detection threshold of DFN-observed fireballs in seismic data recorded by the Australian National Seismograph Network (ANSN). We also report on the seismic properties of the fireballs caught by the seismic instruments. This information will be used for future instrument development for detecting fireballs in the seismic domain.
METHODOLOGY
We used the DFN database containing trajectories of 1410 fireball events that occurred above Australia over the last 6 years. The DFN trajectory data provide the absolute timing of fireball events, the start and end coordinates, and the height above ground of the observed bright flight. A Python-based program was written to calculate the distances between the entire fireball trajectory (bright flight path) and all ANSN seismic stations, and was applied to all 1410 DFN fireballs. The arrival times of the airwave are then calculated for both the longest and the shortest direct distances, using a speed of sound of 300 ± 60 m/s. We used this error margin to account for local temperature and wind dependencies (Le Pichon et al., 2008). The large time window also accounts for unknown coupling with the ground and low signal strength.
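A minimal sketch of such a calculation (spherical-Earth chord distances and an illustrative trajectory; the published program and the exact DFN data fields may differ):

```python
import numpy as np

R_EARTH = 6371.0  # mean Earth radius, km

def to_xyz(lat_deg, lon_deg, h_km):
    """Geographic coordinates to Cartesian, spherical-Earth approximation."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    r = R_EARTH + h_km
    return r * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

def airwave_window(traj_points, station, v=0.30, dv=0.06):
    """Airwave arrival-time window (s) at a station.

    traj_points: (lat, lon, height_km) samples along the bright flight;
    v, dv: speed of sound and its margin in km/s (300 +/- 60 m/s).
    """
    s = to_xyz(*station)
    d = np.array([np.linalg.norm(to_xyz(*p) - s) for p in traj_points])
    return d.min() / (v + dv), d.max() / (v - dv)  # earliest, latest arrival

# Toy example: a bright flight sampled at its endpoints and midpoint.
traj = [(-30.0, 135.0, 85.0), (-30.2, 135.3, 60.0), (-30.4, 135.6, 35.0)]
print(airwave_window(traj, station=(-31.0, 136.0, 0.0)))
```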
Seismic data were acquired from the ANSN, operated by Geoscience Australia (GA), via the public IRIS (Incorporated Research Institutions for Seismology) data service. The ANSN consists of a network of broadband seismometers across Australia and its offshore territories. Figure 2 shows the locations of the broadband seismometers (red triangles) and the coverage of the DFN observatories (blue circles).
The criteria that determine whether a signal in the time series data can be confidently classified as coming from a fireball event are:
1. The amplitude of the signal must be similar to or lower than previously confirmed seismic signals from fireballs or bolides, accounting for uncertainties related to the event's distance to a detector, yet above the background noise;
2. The seismic signal must lie within the calculated arrival times of the airwave (direct or ground-coupled Rayleigh wave; no P and S waves were identified in this survey);
3. There must be no earthquake activity in the database (Geoscience Australia, 2019) at about the same time;
4. There must be no clear anthropogenic noise (e.g., mine blasts, proximity to airport runways, etc.). We note that the DFN detects only nighttime fireballs, when anthropogenic noise is expected to be minimal.
The seismic time series data were obtained from the nearest seismic stations and checked for distinguishable signals in the time window of the arrival of the airwave and Rayleigh wave (Criterion 2). Time series data were interrogated for a time window starting 30 seconds prior to the start of a fireball event in the upper atmosphere and ending up to 28 minutes later, to account for the travel time of the airwave from the fireball to any seismic station within 400 km. The seismic data were downloaded from the IRIS database. The Python framework ObsPy (Beyreuther et al., 2010; Krischer et al., 2015) was used to manipulate and analyse the time series data, and the Python library Astropy (Astropy Collaboration et al., 2013, 2018) was used for coordinate transformations. The time series data were filtered using a Butterworth high-pass filter at a default frequency of 2 Hz; for most signals this filtering was the most satisfactory in cutting out ambient noise. In an attempt to distinguish between meteoroid fragmentation and the Mach cone passage, we used two approaches. First, we examined the fireball orientation with respect to the location of the seismic station: if the shortest distance to the seismic station is perpendicular to the bright flight trajectory and the arrival time of the airwaves fits, signals are classified as likely originating from the Mach cone; if the shortest distance is not perpendicular to the bright flight trajectory, any seismic signals can be assumed to come from a fragmentation along the trajectory. Considering that fragmentation has no preferred orientation, the events flagged as likely originating from the Mach cone could instead have originated from an airburst caused by fragmentation. However, we class them as Mach cone events because previous literature reports fragmentation to cause a lesser air disturbance than the Mach cone passage (Brown et al., 2003; Edwards et al., 2008). Second, we visually investigated DFN fireball images to identify the distinct presence of fragmentation. However, we were unable to make such a distinction unambiguously for all fireball events, probably because of camera sensor saturation and because DFN cameras use the de Bruijn shutter sequence to mark absolute timing, which interrupts the recording of the visual light curve (Table 3).
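For reference, a minimal sketch of this retrieval-and-filtering step using ObsPy's FDSN client (the event time and station/channel codes here are illustrative placeholders, not the exact queries used in the survey):

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")

# Hypothetical fireball start time; window spans -30 s to +28 min around it.
t0 = UTCDateTime("2017-06-30T13:00:00")
st = client.get_waveforms(network="AU", station="OOD", location="*",
                          channel="BH?", starttime=t0 - 30, endtime=t0 + 28 * 60)

st.detrend("demean")
st.filter("highpass", freq=2.0, corners=4)  # Butterworth high-pass at 2 Hz
st.plot()
```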
RESULTS
Compared to larger impact events, it was expected that the DFN-observed fireballs could cause only occasional weak seismic signals, predominantly coming from the atmospheric disturbance, and only at favourable positions and locations. Such an expectation was set by previous works (Brown et al., 2003; Edwards et al., 2008). Table 1 shows the fireball events with suspected seismic signals, including the start time of the bright flight observation. Seismic signals were found for 25 fireball events (Tables 1-3) out of 1410 surveyed, setting the detectability at 1.8% when using the public seismic data. From here on, we refer to specific events by their allocated ID letter, rather than by DFN event code name, as introduced in Table 1. Figure 3 shows the location of all DFN observatories (blue circles) and the seismic stations of the ANSN (red triangles) that identified these 25 events; it also shows the bright flight trajectories of the fireballs for which seismic signals are suspected (yellow lines). Table 2 shows the coordinates of the beginning (lat_b, long_b) and the end (lat_e, long_e) of the bright flight, the beginning (h_b) and end (h_e) heights, the trajectory slope, and the velocity (V), inferred mass (m) and fireball energy (KE) at atmospheric entry. The slope is defined as the angle between the beginning of the bright flight trajectory and the local horizontal. The recorded fireballs covered almost the entire range of possible impact angles (from 4° to 78°), with a mean value (±1σ) of 38°±19°. The mean h_b was 86±25 km and the mean h_e was 46±18 km. The impact speed at atmospheric entry was 25±13 km/s. The meteoroids had a very large mass range, from 1 g up to 180 kg estimated at atmospheric entry, corresponding to energies of 10^6 to 10^10 J. The peaks in the seismic time series data are consistent with the calculated arrival times of the airwave travelling perpendicular to the fireball trajectory and/or from an omnidirectional source (fragmentation or frontal pressure at the end of the trajectory). Based on the orientation of the fireball trajectory with respect to the location of the nearest seismic station, 13 events [A:M] could have originated from the Mach cone shock wave (Figure 1a) and 12 events [N:Y] were likely from an omnidirectional source (Figure 1b; Tables 1-3). Figure 4 shows one example of seismic time series data (top) and a spectrogram (bottom) of fireball event P (Tables 1-3), for which the signals of the airwave and the Rayleigh wave can be identified separately. Based on the seismic wave arrival time, the seismic source could be either a direct airwave (A) or a ground-coupled Rayleigh wave (R). In some cases the arrival windows for A and R are clearly separated, but in most cases these windows overlap, preventing us from confidently determining which source wave the signal came from (Table 3).

Table 3. Fireball events with suspected seismic signal data, including the shortest station-to-trajectory distance (d_min), peak values of the seismic acceleration on the vertical (BHZ), N-S (BHN) and E-W (BHE) seismic axes, estimated duration of the seismic signal (t), and peak frequency (ν) after applying the 2 Hz high-pass filter. Based on the arrival times, the seismic source can be a direct airwave (A) or a ground-coupled Rayleigh wave (R). The last column shows whether the optical image of the fireball displayed clear evidence of fragmentation processes. *Note that the NWAO station is non-aligned to the cardinal directions.

Table 3 lists the DFN fireball events for which we identified possible corresponding seismic signals, including the name of the seismic station at which the signal was detected, the shortest station-to-fireball distance (d_min), the peak values of the acceleration in the vertical (BHZ), N-S (BHN) and E-W (BHE) components seen in the time series data, the duration of the signal (t), the peak frequency (ν) and estimates of the seismic source. The seismic signals for all 25 fireballs are between 3 s and 55 s long, and the peak seismic frequencies reach up to 10 Hz, with an average of 3.8±1 Hz, in agreement with previous works (D'Auria et al., 2006; Edwards et al., 2008, 2007; Kanamori et al., 1992; Revelle, 1976). The shortest distance to the nearest seismic station is 112±40 km, ranging from 53 km to 215 km, although the surveyed area reached a maximum of 325 km distance. No surveyed fireball was detected by more than one seismic station. This is expected given the sparse distribution of ANSN stations and is roughly in agreement with previous works (Brown et al., 2003). Figure 5 shows the time series data for the 25 fireball events [A:Y] for which seismic signals were detected. For 18 of the 25 events, the highest peaks are in the vertical direction. We examined possible correlations between the direction of the highest amplitude peak in the time series data and the position of the seismic station relative to the fireball trajectory, and whether the fireball approaches the seismic station or not; however, we did not find any clear azimuthal dependence. On average, the amplitude of the highest peaks in the vertical direction was 5.5×10^-3 mm/s^2, compared with 2.7×10^-3 mm/s^2 in the N-S and 2.4×10^-3 mm/s^2 in the E-W directions. This suggests a slight preference for the vertical direction, consistent with the assumption that the seismic excitation came from the atmosphere. Figure 6 shows the highest peak in the vertical direction as a function of the shortest distance between the trajectory and the seismic station for all events with suspected seismic signals; the marker colours represent the slopes of the fireballs. Fireballs that occur very close to a seismic station have higher peak amplitudes in the vertical direction than fireballs further away. There is also an additional observational bias that can be attributed to favourable fireball orientations creating a Mach cone disturbance directed at a seismic station: Mach cone-related fireball detections are more likely to originate from shallower (lower) impact angles, which ensure longer trajectories in the atmosphere, than in the case of suspected fragmentation as a seismic source.
DISCUSSION
From the 1410 DFN fireball events surveyed, we identified seismic signals in the time series data corresponding to 25 events, i.e., 1.8%. Figure 6 shows a rough correlation between peak amplitude and distance to a seismic station. Beyond 215 km we do not detect any unambiguous seismic signals, and the furthest events are all steep-sloped. It is therefore reasonable to place a threshold at 215 km as an approximate limit for the seismic detection of fireballs. With this threshold, the number of DFN fireballs within range is reduced to 1101, increasing the detection success to 2.3%. DFN observatories are approximately 150 km apart, and 1236 fireball trajectories passed within 215 km of a DFN observatory. Had the DFN camera network been equipped with seismic instruments (of comparable sensitivity) at each observatory site, 86% of observed fireballs would have been within the 215 km distance threshold for detection in the seismic domain. The mean distance to a seismic station for fireballs detected using the ANSN was 112 km (Fig. 6), which corresponds to about 50% of all surveyed fireballs if each observatory site were equipped with a seismic station. It would then be possible to detect fireballs at multiple stations, with an average of four stations per fireball.
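A back-of-the-envelope check of the two detection rates quoted above, using the counts from the survey:

```python
surveyed = 1410       # DFN fireballs examined
detected = 25         # events with suspected seismic signals
within_215_km = 1101  # fireballs within 215 km of an ANSN station

print(f"overall detection rate: {detected / surveyed:.1%}")       # -> 1.8%
print(f"rate within 215 km:     {detected / within_215_km:.1%}")  # -> 2.3%
```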
The survey showed that some seismic stations are more sensitive to fireball events than others. The highest number of signal detections was at station Oodnadatta (OOD), which detected 7 suspected fireball events, followed by Buckleboo (BBOO) and Leigh Creek (LCRK) with 5 events each, and Innamincka (INKA) with 3 events. Five seismic stations (Forrest (FORT), Mundaring (MUN), Hallett (HTT), Mulgathing (MULG), Narrogin (NWAO)) detected only one event each. This could be due to individual instrument quality or background noise levels, which are influenced by the positioning setup and geographic location of the sensor; previous studies by Revelle et al. (2004) have also pointed this out. Another factor affecting detection might be the directionality between a seismic station and the bright flight trajectory: stations positioned perpendicular to the trajectory can detect the signal from the Mach cone, which has a higher amplitude and is therefore easier to recognize. A combination of these factors (the presence of noise, distance to the station, the directionality from the trajectory to the seismic station, weather conditions, soil properties and the characteristics of the impactor) is among the reasons we did not detect more than 2.3% of events within the 215 km threshold.
As well as identifying the 25 fireball events in the seismic time series data, we also investigated five of the largest events ever seen by the DFN; unfortunately, none passed the selection criteria. To date, two events detected by the DFN (DN150102_01, DN170630_01) have also been recognized by the US Government sensors (USG) and described in detail by Devillepoix et al. (2019). The closest stations to these two events for which data are available were 120 and 182 km away; these stations show noisy signals or a signal in only one component.
We also looked for seismic signals from fireballs that dropped a meteorite recovered from the field (Murrili, Sansom et al. (2020); Dingle Dell, Devillepoix et al. (2018); DN160822_03, Shober et al. (2019)). The closest stations to these events were 150 km; 93 and 169 km; and 191 km away, respectively, and show noisy seismic data and no signals.
CONCLUSIONS
Fireball events occur on a daily basis, yet are rarely reported as seismic events because their energy (at the top of the atmosphere) is often not sufficient to cause ground motion detectable by seismic stations. Unlike other studies that used data from images, seismic stations and infrasound to calculate the orbits and energies of meteors, this study uses information about the trajectory and timing of fireballs observed by the DFN to search for seismic signals. We report possible detections of 25 seismic signatures originating from 1410 surveyed fireballs observed by the DFN over a 6-year period. This was done by calculating the distance between the bright flight trajectory of each fireball and the Australian National Seismograph Network (ANSN) seismic stations, and searching for significant seismic signals that fit our selection criteria. The observed signals cannot be explained by any other geologic or anthropogenic origin. The signals are seconds-long in duration and have peak amplitudes in the following ranges:
• Vertical: 5×10^-4 mm/s^2 to 2×10^-2 mm/s^2
• N-S: 3×10^-4 mm/s^2 to 2×10^-2 mm/s^2
• E-W: 4×10^-4 mm/s^2 to 7×10^-3 mm/s^2
In total, 18 out of 25 signals showed the highest peak in the vertical component, and the signals showed peak frequencies of up to 10 Hz. Calculations of arrival times suggest the signals are due to direct airwaves or ground-coupled Rayleigh waves. The fireball directionality suggests that about half of the observed signals could have been caused by the Mach cone, while the other half originated from fragmentation of the impactor.
We propose an upper threshold for the seismic detectability of fireballs of approximately 215 km. If a seismometer of equal sensitivity were installed alongside each DFN observatory, it might have been possible to record 50% of all DFN fireballs.
ACKNOWLEDGEMENTS
TN is fully, and PAB partially, supported by the Australian Research Council on DP180100661. KM is fully supported by the Australian Research Council on DP180100661 and DE180100584. MW is supported by DP180100661 via a Discovery International Award. The DFN, EKS, PAB and HARD would like to thank the Australian Research Council for support as part of the Australian Discovery Project scheme.
Synergistic activity of antifungal drugs and lipopeptide AC7 against Candida albicans biofilm on silicone
The occurrence of Candida albicans device-associated infections is tightly correlated with the ability of this fungus to form biofilms. The presence of this three-dimensional structure protects cells from host defenses and significantly increases their resistance to antifungal agents. Lipopeptide biosurfactants are microbial products with interesting antibacterial, antifungal and anti-adhesive properties. The aim of the present study was to investigate a possible synergistic effect of the lipopeptide AC7BS in combination with amphotericin B or fluconazole against C. albicans planktonic cells, biofilm formation and 24 h-old biofilms on medical-grade silicone elastomer disks, in simulated physiological conditions. In co-incubation experiments, AC7BS alone was not effective. However, the combination of AC7BS with the antifungal compounds resulted in a synergistic increase in the efficacy of the drugs against planktonic cells and biofilm, leading to a reduction of MIC and SMIC50 values. In pre-coating conditions, amphotericin B alone and AC7BS alone significantly inhibited C. albicans biofilms. When the two molecules were tested in association, a synergistic effect was observed on different phases of biofilm formation and a lower SMIC50 was detected. The observed synergism could be related to the combination of the AC7BS anti-adhesive activity and the AMB antifungal effect, but also to the ability of the biosurfactant to affect membranes, thus facilitating AMB entry into the cells. These results suggest that AC7BS can be considered a potential inhibitor of C. albicans biofilm on medical insertional materials, and its use as a coating agent may potentiate the effect of antifungal compounds such as AMB when applied in combination.
Introduction
Biofilms are communities of microorganisms attached to biotic and abiotic surfaces, surrounded by an extracellular polymeric substance (EPS), and often involved in chronic infections and medical device contamination [1,2]. Host defense systems typically eliminate transient bacterial contamination; however, the presence of a biofilm may protect the microorganisms and significantly reduce their susceptibility to antimicrobial agents [3,4].
The dimorphic yeast Candida albicans is the fungal species most frequently isolated from medical devices such as catheters, heart valves and urinary devices [5,6]. When host immune functions are decreased or the competitive commensal flora is perturbed, C. albicans can be responsible for superficial or life-threatening systemic infections [7]. A study performed in the United States showed that C. albicans infections are the fourth most common hospital-acquired systemic infections, with a high mortality rate [8]. Risk factors for infection include neutropenia, mucosal damage and the use of broad-spectrum antimicrobials. Moreover, the application of central venous catheters represents a cause of systemic infections because of their direct contact with the bloodstream [9].
A major problem in the eradication of nosocomial C. albicans infections is the resistance of biofilms to established antifungal agents such as polyenes and azoles. This resistance is multifactorial: Ramage et al. [10] describe the mechanisms involved, which include the general physiological state of sessile cells, cell density, over-expression of drug targets, efflux pump-mediated resistance, the extracellular matrix, persister cells and tolerance to stress. In almost all cases, the presence of C. albicans biofilms requires implant removal, with a significant increase of morbidity, mortality and hospital costs [11]. As frequent replacement is uncomfortable, costly, time-consuming and may damage the patient's tissue, alternative approaches are highly desirable. Current strategies to prevent biofilm formation rely on medical devices coated with antimicrobials [1]. From this point of view, it could be useful to increase the efficacy of known antifungal drugs.
Biosurfactants, amphiphilic metabolites produced by a wide range of microorganisms, can represent a useful approach to counteract biofilms. In particular, lipopeptides exhibit interesting biological properties such as high surface activity and antimicrobial potential [12]. We previously demonstrated that coating silicone elastomer disks with a lipopeptide biosurfactant from Bacillus subtilis AC7 reduced C. albicans biofilm formation [13]. Other studies on lipopeptides demonstrated their efficacy against Escherichia coli CFT073 pre-formed biofilms in combination with antibiotics [14] or silver [15]. In particular, results indicated that the V9T14 lipopeptide alone was not able to remove pre-formed biofilms, but its association with antibiotics led to a synergistic increase in their efficacy, up to total eradication of the biofilm in some combinations [14]. In addition, the activity of silver was synergistically enhanced by the presence of the V9T14 lipopeptide, leading to a significant reduction of the amount of AgNO3 used and to an increase of its antimicrobial activity [15].
In the present study, the efficacy of the AC7 lipopeptide biosurfactant in association with two clinically used antifungal agents, amphotericin B and fluconazole, against C. albicans planktonic cells and biofilm formation was assessed on silicone elastomer, with the aim of identifying a synergistic combination of molecules with different origins and mechanisms of action to treat or prevent Candida biofilms.
Biosurfactant production
A loop of B. subtilis AC7, from an LB agar overnight culture, was inoculated into 20 ml of LB broth and incubated at 28 °C for 4 h at 140 rev min^-1. Two milliliters of the seed culture were inoculated into 500 ml of the same medium and incubated for 24 h under the growth conditions described above. AC7BS was extracted according to the method described by Rivardo et al. [16].
Medical-grade silicone elastomeric disks preparation
Two different sizes of medical-grade silicone elastomeric disks (SEDs) (TECNOEXTR S.r.l., Italy) were used: 5 mm in diameter and 1.5 mm in thickness for experiments in 96-well plates, and 10 mm in diameter and 1.5 mm in thickness for experiments in 24-well plates. Cleaning and sterilisation of the SEDs were carried out as described by Busscher et al. [17]. Briefly, disks were immersed in 200 ml of distilled water supplemented with 1.4% (v/v) RBS 50 solution (Sigma-Aldrich), sonicated for 5 min at 60 kHz using an Elma S30H (Elmasonic, VWR International) and rinsed twice in 1 l of MilliQ water. The disks were then submerged in 20 ml of methanol (99%) (Sigma-Aldrich), rinsed twice and autoclaved for 15 min at 121 °C.
Antifungal activity on planktonic cells
The antifungal activity of AMB (Sigma-Aldrich), FLC (Sigma-Aldrich) and AC7BS towards planktonic cells of C. albicans 40 was assessed according to EUCAST guidelines [18]. Briefly, 100 µl of AC7BS 2× (2 mg ml^-1 in phosphate-buffered saline, PBS), AMB 2× (0.25, 0.5, 1 µg ml^-1 in PBS) or FLC 2× (0.25, 0.5, 1 µg ml^-1 in PBS) were added to a 96-well plate (Bioster). When the joint activity of AC7BS and the antifungal drugs was evaluated, 50 µl of AC7BS 4× (4 mg ml^-1 in PBS) were mixed with 50 µl of AMB 4× (0.5, 1, 2 µg ml^-1 in PBS) or FLC 4× (0.5, 1, 2 µg ml^-1 in PBS). In control wells (no biosurfactant or antifungal drugs added), 100 µl of sterile PBS were used. A standardized C. albicans suspension at a concentration of 1-5 × 10^5 colony forming units (CFU) ml^-1 was prepared in sterile double-strength Roswell Park Memorial Institute (RPMI) 1640 medium (Sigma-Aldrich) buffered with 3-(N-morpholino)propanesulfonic acid (MOPS) buffer (Sigma-Aldrich) and supplemented with D-glucose (2% final concentration), pH 7.0. One hundred microliters of this suspension were added to test wells, to obtain final concentrations of 1 mg ml^-1 AC7BS, 0.125, 0.25, 0.5 µg ml^-1 AMB and 0.125, 0.25, 0.5 µg ml^-1 FLC, and to control wells. Corresponding blank wells (without planktonic cells) were also prepared. The plate was incubated at 37 °C for 24 h in static conditions. Finally, the OD450 was measured in each well using an Ultramark Microplate Imaging System (Bio-Rad). The data were normalized with respect to the value of the corresponding blank wells. The percentage of inhibition in each well, compared to control wells, was determined as:

% inhibition = [1 - (OD_treat / OD_ctrl)] × 100

where OD_treat is the optical density of treated samples and OD_ctrl is the optical density of controls. The minimal inhibitory concentration of AMB was defined as the lowest concentration leading to a growth inhibition ≥ 90% in comparison to the control (MIC90), while that of FLC was defined as the lowest concentration giving inhibition ≥ 50% (MIC50). Assays were carried out in triplicate and repeated on two different days.
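A small sketch of this normalization and MIC read-out (the OD values are those reported for the planktonic AMB assay in the Results below):

```python
def inhibition(od_treat, od_ctrl):
    """Percent inhibition relative to the untreated control."""
    return (1.0 - od_treat / od_ctrl) * 100.0

def mic(concs, ods, od_ctrl, cutoff):
    """Lowest concentration whose inhibition meets the cutoff (e.g. 90 for MIC90)."""
    for c, od in sorted(zip(concs, ods)):
        if inhibition(od, od_ctrl) >= cutoff:
            return c
    return None  # no tested concentration reached the cutoff

concs = [0.125, 0.25, 0.5]   # µg/ml AMB
ods = [0.859, 0.719, 0.022]  # blank-corrected OD450 readings
print(mic(concs, ods, od_ctrl=1.103, cutoff=90))  # -> 0.5
```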
Co-incubation
The susceptibility of C. albicans 40 sessile cells to AMB, FLC and AC7BS was assessed in 96-well plates as described by Nweze et al. [19], with some modifications. SEDs were pre-coated with 3 ml fetal bovine serum (FBS) (Sigma-Aldrich) in 12-well plates (eight disks per well) at 37 °C for 24 h at 140 rev min^-1 and then inoculated with 4 ml of a standardized fungal suspension containing 1 × 10^7 CFU ml^-1 in PBS.
To evaluate the susceptibility of C. albicans in the intermediate phase of biofilm formation, after 1.5 h at 37 °C (adhesion phase) the SEDs were transferred to a 96-well plate (Bioster) and incubated in 200 µl YNBD supplemented with AC7BS alone (final concentration 1 mg ml^-1), or with AMB (final concentrations 0.5, 1, 2 µg ml^-1) or FLC (final concentrations 64, 128, 256 µg ml^-1), alone or in combination with AC7BS. Control wells consisted of YNBD supplemented with an equal volume of the antifungal/AC7BS diluent (i.e., PBS). The plates were incubated at 37 °C for 24 h for biofilm growth.
Furthermore, the effect of AC7BS, AMB and FLC, alone or in combination, was also evaluated on 24 h-old biofilms (mature phase). In this case, C. albicans biofilms were grown for 24 h at 37 °C using the protocol described above. Subsequently, 200 µl YNBD supplemented with AC7BS (final concentration 1 mg ml^-1), AMB (final concentrations 2, 4, 8 µg ml^-1) or FLC (final concentrations 64, 128, 256 µg ml^-1), alone or in combination, were added and the plates incubated for an additional 24 h at 37 °C. Control wells consisted of YNBD supplemented with an equal volume of the antifungal/AC7BS diluent (i.e., PBS).
Pre-coating
C. albicans biofilms on AC7BS pre-coated SEDs were prepared as described by Ceresa et al. [13]. Briefly, SEDs were dipped into 1 ml of a 2 mg ml^-1 AC7BS solution, or into PBS only, and incubated at 37 °C for 24 h at 140 rpm. The SEDs were then placed in a new 24-well plate and 1 ml of a standardized C. albicans suspension at a concentration of 1 × 10^7 CFU ml^-1 in PBS + 10% FBS was added to each well (t = 0). After 1.5 h of incubation, the disks were transferred into 1 ml YNBD + 10% FBS and incubated at 37 °C for 24 h at 100 rpm.
The activity of AMB was evaluated at different times of biofilm formation on SEDs pre-coated or not with AC7BS. In particular, the antifungal drug was added either both to the standardized fungal suspension at t = 0 and to the growth medium at t = 1.5 h, to evaluate its efficacy on both the adhesion and the intermediate phase of biofilm formation, at concentrations of 0.125, 0.25, 0.5 µg ml^-1 (pre-coating type 1), or only to the growth medium at t = 1.5 h, to test its activity in the intermediate phase only, at concentrations of 0.5, 1, 2 µg ml^-1 (pre-coating type 2). Control wells (containing disks not treated with biosurfactant or AMB) and AC7BS-alone wells (containing AC7BS pre-coated disks) consisted of 1 ml YNBD + 10% FBS supplemented with an equal volume of the antifungal diluent (i.e., PBS).
Furthermore, the effect of AMB alone or in combination with AC7BS was also evaluated on 24 h-old biofilms (pre-coating type 3). Biofilms were formed for 24 h at 37 °C using the protocol described above. Afterwards, 1 ml YNBD + 10% FBS supplemented with AMB at final concentrations of 2, 4, 8 µg ml^-1 was added and the plate incubated for an additional 24 h at 37 °C. Control and AC7BS-alone wells consisted of YNBD + 10% FBS supplemented with an equal volume of the antifungal diluent (i.e., PBS).
Quantification of biofilm
Quantification of C. albicans 40 biofilms was performed with the {2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-5-[(phenylamino)carbonyl]-2H-tetrazolium hydroxide} (XTT, Sigma-Aldrich) colorimetric assay. A working solution was prepared by adding 12.5 µl XTT solution (1 mg ml^-1) and 1 µl of 1 mmol l^-1 menadione solution to 1 ml PBS. The biofilm growth medium was carefully removed by aspiration and replaced with the XTT working solution. The plates were covered with aluminum foil and incubated for 5 h at 37 °C. Blank wells containing disks without biofilm were also included. Afterwards, the supernatant was carefully transferred into a new plate and the absorbance was measured at 490 nm (OD490) using an Ultramark Microplate Imaging System (Bio-Rad). Each tested condition was assayed in triplicate and experiments were repeated at least twice on different days. The data were normalized with respect to the blank values, and the percentage of inhibition in each well, compared to control wells, was determined as indicated previously.
The sessile minimal inhibitory concentration (SMIC) of AMB and FLC was defined as the lowest concentration leading to a metabolic activity inhibition ≥ 50% compared to control wells (SMIC50).
Evaluation of synergism
Synergism is generally defined as an interaction between two or more molecules whose combined effect is greater than the combination of the effects of the individual compounds. Usually, the reference combination is taken to be an additive effect. However, when the effect is a percentage (in our case, a survival percentage), it combines multiplicatively rather than additively: the combined effect of two compounds, assuming they act independently, is the product of their survival fractions. Synergy, in this context, means that the survival percentage under the two compounds acting simultaneously is lower than the product of the survival fractions under each compound alone.
In this work, the effect of a treatment T (i.e., survival) is calculated as OD*_T = OD_T / OD_ctrl. Synergism is evaluated by comparing the multiplicative effect x1 = OD*(AC7) × OD*(antifungal) with the experimental effect x2 = OD*(AC7 & antifungal). If x1 > x2 in a statistically significant way, then synergism is present.
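This criterion can be checked directly from blank-corrected OD readings; a minimal sketch, with values taken from the planktonic AMB assay reported below (the ANOVA-based significance test described in the text is omitted for brevity):

```python
def survival(od_treatment, od_ctrl):
    """Fractional survival OD* relative to the untreated control."""
    return od_treatment / od_ctrl

od_ctrl = 1.103
od_ac7, od_amb, od_combo = 1.105, 0.719, 0.039  # AC7BS alone, 0.25 µg/ml AMB, combination

x1 = survival(od_ac7, od_ctrl) * survival(od_amb, od_ctrl)  # expected if independent
x2 = survival(od_combo, od_ctrl)                            # observed in combination
print(f"multiplicative: {x1:.3f}  experimental: {x2:.3f}  synergy: {x2 < x1}")
```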
Statistical analysis
Statistical analyses and graphs were produced with the statistical program R 3.1.2 (R Development Core Team, http://www.R-project.org). Two-way ANOVA was performed to investigate the effect of AC7BS and the antifungal agents, alone or in combination, on C. albicans planktonic or sessile cells in both co-incubation and pre-coating conditions. Results were considered statistically significant when P < 5 × 10^-2.
Joint activity of amphotericin B, fluconazole and lipopeptide biosurfactant AC7 on planktonic cells of Candida albicans 40
To assess a possible joint activity against C. albicans 40 planktonic cells, the two antifungal agents were tested in association with the lipopeptide biosurfactant AC7 (AC7BS) at two sub-MIC concentrations (0.125 µg ml^-1, 0.25 µg ml^-1) and at the MIC (0.5 µg ml^-1). These concentrations were chosen based on the previously calculated MIC90 for AMB and MIC50 for FLC (data not shown).
AC7BS was tested at the concentration of 1 mg ml^-1. The optical density at 450 nm (OD450) of planktonic cells in co-incubation with or without AC7BS is displayed as a function of the concentration of AMB or FLC (Figure 1). For AMB, the net OD450 value of the control (no BS or antifungal drug added) was 1.103 ± 0.007 and decreased to 0.859 ± 0.005, 0.719 ± 0.005 and 0.022 ± 0.001 when planktonic cells were treated with 0.125, 0.25 and 0.5 µg ml^-1 AMB, respectively. Cells co-incubated with AC7BS alone showed a net OD450 value of 1.105 ± 0.005 that decreased to 0.180 ± 0.001, 0.039 ± 0.008 and 0.018 ± 0.001 when the biosurfactant was associated with the three concentrations of AMB (Figure 1A).
For FLC, the net OD450 value of the control was 1.108 ± 0.001 and decreased to 1.107 ± 0.002, 0.702 ± 0.001 and 0.231 ± 0.002 when planktonic cells were treated with 0.125, 0.25 and 0.5 µg ml^-1 FLC, respectively. Cells co-incubated with AC7BS showed a net OD450 value of 1.108 ± 0.002 that decreased to 0.864 ± 0.002, 0.432 ± 0.003 and 0.064 ± 0.003 when the biosurfactant was associated with the three concentrations of FLC (Figure 1D).
According to the ANOVA analysis, the survival of C. albicans 40 planktonic cells was significantly dependent on the concentration of the antifungal agent (P < 1 × 10^-14) and on the type of treatment (P < 1 × 10^-5) for both AMB and FLC.
The percentages of inhibition of C. albicans 40 planktonic cells are reported in Table 1. With respect to controls, the growth of C. albicans 40 planktonic cells was significantly inhibited, up to 98.0% by 0.5 µg ml^-1 AMB and up to 79.1% by 0.5 µg ml^-1 FLC. AC7BS alone was ineffective, suggesting that the biosurfactant had no antifungal activity. The joint application of AC7BS and the antifungals significantly reduced Candida growth, up to 98.4% at 0.5 µg ml^-1 AMB and up to 94.3% at 0.5 µg ml^-1 FLC. Furthermore, in both cases the MIC value of each antifungal was decreased by the presence of AC7BS, from 0.5 µg ml^-1 to 0.25 µg ml^-1. Figure 2 shows the experimental and multiplicative effects of AC7BS and the antifungal drugs against C. albicans 40 planktonic cells, calculated as described in the Materials and Methods section. In particular, when the two sub-MIC concentrations of AMB (Figure 2A) or the three concentrations of FLC (Figure 2D) were associated with AC7BS, the experimental effect OD*(AC7 & antifungal) was less than the multiplicative effect OD*(AC7) × OD*(antifungal), indicating a strong synergistic association. The ANOVA analysis confirms the synergy (P < 1 × 10^-5) and its dependence on concentration (P < 1 × 10^-14).
Co-incubation
The two antifungal agents were tested in association with AC7BS to evaluate a possible joint activity against C. albicans 40 sessile cells at different phases of biofilm formation. AMB was applied at a sub-SMIC50 concentration (0.5 µg ml^-1), at the SMIC50 (1 µg ml^-1) and at 2 µg ml^-1 in the intermediate phase, and at 2 µg ml^-1 (sub-SMIC50), 4 µg ml^-1 (SMIC50) and 8 µg ml^-1 in the mature phase of biofilm formation. FLC was tested in both cases at three sub-SMIC50 concentrations (64, 128, 256 µg ml^-1). These concentrations were chosen based on the previously calculated SMIC50 values for AMB and FLC on the intermediate and mature phases of biofilm formation (data not shown). AC7BS was used at a concentration of 1 mg ml^-1. The metabolic activity (OD490) of sessile cells in co-incubation with or without AC7BS is shown as a function of the concentration of AMB or FLC (Figure 1).
For AMB, in the intermediate phase of biofilm formation, the OD490 value of the control was 0.198 ± 0.006 and decreased to 0.158 ± 0.001, 0.093 ± 0.005 and 0.048 ± 0.006 when sessile cells were treated with 0.5, 1 and 2 µg ml^-1 AMB, respectively. When cells were co-incubated with AC7BS, the value was 0.188 ± 0.003 and decreased to 0.077 ± 0.006, 0.014 ± 0.002 and 0.001 ± 0.001 when the two molecules were associated (Figure 1B). In the mature phase, the OD490 value of the control was 0.173 ± 0.004 and decreased to 0.108 ± 0.003, 0.079 ± 0.008 and 0.066 ± 0.004 when sessile cells were treated with 2, 4 and 8 µg ml^-1 AMB. In the presence of AC7BS, the OD490 value was 0.166 ± 0.005 and decreased to 0.087 ± 0.004, 0.057 ± 0.004 and 0.045 ± 0.006 when AC7BS was associated with AMB (Figure 1C).
For FLC, in the intermediate phase of biofilm formation, the OD490 value of the control was 0.201 ± 0.005 and decreased to a mean value of 0.155 ± 0.007 at the three tested concentrations. Sessile cells co-incubated with AC7BS showed a value of 0.190 ± 0.002 that decreased to a mean value of 0.108 ± 0.005 when the biosurfactant was associated with FLC (Figure 1E). In the mature phase, the control value was 0.171 ± 0.004 and decreased to a mean value of 0.160 ± 0.005 in the presence of FLC. Sessile cells co-incubated with AC7BS showed a value of 0.165 ± 0.007 that decreased to a mean value of 0.159 ± 0.003 when the biosurfactant was associated with FLC (Figure 1F).
In the intermediate phase, sessile cell survival was significantly dependent on the concentration of the antifungal agent (P < 1 × 10^-6) and on the type of treatment (P < 1 × 10^-6) for both AMB and FLC, whereas on 24 h-old biofilms it was significantly dependent on the concentration of the antifungal agent (P < 1 × 10^-3) and on the type of treatment (P = 1 × 10^-2) only in the case of AMB.
The percentages of inhibition of C. albicans 40 sessile cells are reported in Table 2. In the intermediate and mature phases, the metabolic activity of cells co-incubated with AC7BS alone was comparable to that of the control (no BS or antifungal drug added), indicating that the biosurfactant had no antifungal activity on biofilm. AMB significantly counteracted C. albicans 40 biofilm in the intermediate phase, by up to 75.7% at 2 µg ml^-1, and significantly reduced 24 h-old biofilm by up to 61.8% at 8 µg ml^-1. Regarding FLC, in the intermediate phase C. albicans 40 biofilm formation was significantly inhibited by about 23% at 64, 128 and 256 µg ml^-1, but no relevant activity on 24 h-old biofilm was detected. The simultaneous use of AC7BS and AMB significantly inhibited C. albicans 40 biofilm in the intermediate phase by up to 99.6% (at 2 µg ml^-1) and significantly reduced 24 h-old biofilms by up to 74.3% (at 8 µg ml^-1). The simultaneous use of AC7BS and FLC at all tested concentrations significantly inhibited C. albicans 40 in the intermediate phase of biofilm formation by about 46.6%, whereas no joint activity on 24 h-old biofilms was observed.
Furthermore, the SMIC50 values of AMB were decreased by the presence of AC7BS, from 1 µg ml^-1 to 0.5 µg ml^-1 in the intermediate phase and from 4 µg ml^-1 to 2 µg ml^-1 in the mature phase of biofilm formation.
The synergistic activity of AC7BS and the antifungal drugs against C. albicans 40 biofilm formation was also evaluated (Figure 2). In the intermediate phase, for both AMB and FLC, the occurrence of synergism is visualized by the higher position of the blue curve relative to the red curve (Figures 2B and 2E). In the mature phase (Figures 2C and 2F), synergism between AC7BS and the antifungal drugs was observed only in the case of AMB (Figure 2C). Globally, the synergistic association and its dependence on the concentration of the antifungal agents are significantly confirmed by the ANOVA analysis (P < 1 × 10^-2 and P < 1 × 10^-14, respectively).
Pre-coating
The antifungal effect of AMB, the anti-adhesive properties of AC7BS and the combination of the two activities (AMB & AC7BS) were evaluated against C. albicans 40 biofilm formation in pre-coating conditions. In particular, the activity of AMB (alone or in combination with AC7BS pre-coating) was evaluated in three different experimental settings: AMB added at time 0 to the fungal suspension and at time 1.5 h to the growth medium, at concentrations of 0.125, 0.25, 0.5 µg ml^-1 (pre-coating type 1); AMB added only at time 1.5 h to the growth medium, at concentrations of 0.5, 1, 2 µg ml^-1 (pre-coating type 2); and AMB added at time 24 h to the growth medium, at 2, 4, 8 µg ml^-1 (pre-coating type 3). These concentrations were again chosen based on the previously calculated MICs and SMICs for AMB on planktonic and sessile Candida cells.
In Figure 3, the metabolic activity (OD490) of C. albicans 40 during biofilm formation on silicone elastomeric disks pre-coated or not with AC7BS is shown as a function of the concentration of AMB. In pre-coating type 1, the OD490 value of the control was 0.102 ± 0.003 and decreased to 0.100 ± 0.007, 0.090 ± 0.008 and 0.070 ± 0.007 when cells were treated with 0.125, 0.25 and 0.5 µg ml^-1 AMB, respectively. When cells were treated with AC7BS, the value was 0.047 ± 0.010 and decreased to 0.039 ± 0.012, 0.029 ± 0.018 and 0.005 ± 0.003 when AC7BS was associated with the three concentrations of AMB (Figure 3A).
In pre-coating type 2, the OD490 value of the control was 0.109 ± 0.011 and decreased to 0.065 ± 0.006, 0.043 ± 0.009 and 0.022 ± 0.007 when cells were treated with 0.5, 1 and 2 µg ml^-1 AMB, respectively. When cells were treated with AC7BS, the value was 0.052 ± 0.011 and decreased to 0.02 ± 0.002 when AC7BS was associated with AMB at 2 µg ml^-1 (Figure 3B).
In pre-coating type 3, the OD490 value of the control was 0.105 ± 0.014 and decreased to 0.072 ± 0.011, 0.052 ± 0.007 and 0.042 ± 0.010 when cells were treated with 2, 4 and 8 µg ml^-1 AMB, respectively. When cells were treated with AC7BS, the value was 0.061 ± 0.009 and decreased to 0.025 ± 0.004, 0.010 ± 0.001 and 0.003 ± 0.002 when AC7BS was associated with the three concentrations of AMB (Figure 3C).
The complete set of percentages of inhibition in pre-coating assays is reported in Table 2.
In pre-coating type 1, with respect to controls, AC7BS significantly reduced C. albicans 40 sessile cells by 53.7%. AMB significantly killed cells, up to 31.4% at 0.5 µg ml^-1, whereas its simultaneous use with AC7BS reached up to 94.7% (at 0.5 µg ml^-1). In addition, in the presence of AC7BS, the SMIC50 value was achieved at 0.125 µg ml^-1.
In pre-coating type 2, with respect to controls, AC7BS reduced C. albicans 40 sessile cells of 52.4% whereas AMB up to 79.4% at 2 µg ml -1 . The use of the two molecules together significantly inhibited C. albicans 40 up to 98.2% at 2 µg ml -1 . In addition, in the presence of AC7BS, the SMIC 90 value of AMB was reached at 0.5 µg ml -1 .
In pre-coating type 3, with respect to controls, AC7BS significantly reduced 24 h-old biofilms of 41.7% whereas AMB of 60.2% at 8 µg ml -1 . The joint application of AC7BS and AMB significantly inhibited 24 h-old biofilms up to 97.0% at 8 µg ml -1 . Furthermore, a decrease of the SMIC 50 value, from 4 µg ml -1 to 2 µg ml -1 was achieved. The activity obtained by the association of AC7BS and AMB in the pre-coating assays was graphically expressed in Figure 4. ANOVA analysis confirms synergy (P < 1 × 10 -3 ) and its dependence on concentration (P < 1 × 10 -7 ).
Discussion
Candida species cause candidiasis, the most common opportunistic yeast infection, and Candida albicans, the most prevalent species, is responsible for approximately 50-90% of cases [20]. C. albicans pathogenesis is closely associated with its ability to grow as biofilms, structured cell communities embedded in an extracellular matrix that protect the microorganism from host defences and significantly reduce its susceptibility to antifungal agents [21]. Amphotericin B and fluconazole represent the antifungal agents of choice in the treatment of serious Candida infections [22]. The interaction of amphotericin B with ergosterol results in pore formation, surface adsorption and ergosterol extraction from plasma membranes, leading to membrane damage and rapid fungal cell death [23]. Fluconazole interferes with ergosterol synthesis, preventing the conversion of lanosterol to ergosterol by inhibiting the fungal cytochrome P450 enzyme 14α-demethylase [24]. Despite the availability of more effective antifungal agents active on both planktonic and sessile cells, such as AMB and the echinocandins, fluconazole is still widely used in the clinic because of its efficacy and low toxicity. Various approaches have recently been proposed to increase the susceptibility of C. albicans to fluconazole, such as its combined use with different classes of non-antifungal agents that overcome fungal resistance or enhance antifungal activity [25].
In the presence of biofilm, antifungal drugs are generally less effective or, in some cases, even ineffective. To overcome this serious clinical problem, novel strategies for preventing biofilm development are beginning to explore the combined use of different antimicrobial compounds in order to increase their efficacy. Biosurfactants, secondary metabolites produced by numerous microorganisms, have drawn the attention of the scientific community thanks to their interesting biological and chemical properties, such as the ability to disturb cell membrane integrity and permeability and to affect the adhesion of microorganisms [12]. In particular, lipopeptides destabilize the lipid packing of membranes. They penetrate the membrane through hydrophobic interactions, perturbing the order of the hydrocarbon chains, varying the membrane thickness, and forming pores that change membrane permeability and decrease the cooperativity of lipid-lipid interactions [26-30]. Furthermore, lipopeptides are able to reduce the hydrophobicity of surfaces, interfering with microbial adhesion and desorption processes [31].
In a previous work by the authors, it was demonstrated that the lipopeptide biosurfactant AC7BS has antiadhesive activity against C. albicans planktonic and sessile cells [13]. Pre-treatment of silicone elastomeric disks with AC7BS inhibited fungal adhesion and biofilm formation without altering the viability of either planktonic or sessile cells. Other studies have successfully demonstrated the antiadhesive and anti-biofilm activity of lipopeptides against bacterial and fungal pathogens on polystyrene and silicone materials [32,33].
In the present study, for the first time, the association of AC7BS with two antifungal compounds extensively used in the treatment of invasive fungal infections, amphotericin B (AMB) and fluconazole (FLC), was explored on medical-grade silicone against planktonic and sessile cells of C. albicans 40, a clinical isolate from a central venous catheter. Moreover, in order to emphasize the clinically oriented approach, the activity of AC7BS and the antifungal agents, alone or in association, was assessed in the presence of fetal bovine serum, to mimic the contact of medical devices with biological fluids during clinical use in internal body areas.
The evaluation of the anti-adhesive and anti-biofilm activity of AC7BS and the antifungals was carried out in co-incubation and pre-coating conditions. Co-incubation assays were applied to assess the biological properties of AC7BS as an adjuvant during antifungal treatments, as well as to measure its ability to dislodge pre-formed biofilms. On the other hand, pre-coating assays were used to evaluate the real efficacy and the possible applicability of AC7BS as a coating agent for medical devices for the prevention of microbial adhesion and biofilm formation, alone or in association with antifungal treatments. Antifungal agents were generally tested at three different concentrations: above the MIC, at the MIC, and below the MIC (sub-MIC). The first two concentrations were tested with the objective of defining whether the lipopeptide could enhance the killing activity of the antifungal drug. The sub-MIC concentration was tested to assess whether the presence of AC7BS could increase the antifungal activity, with a consequent decrease of the MIC value. This combined activity could allow a reduction of the therapeutic dose of antifungal agent, thus limiting the occurrence of adverse reactions and the emergence of resistance.
In general, when AC7BS was used in association with AMB or FLC, a synergistic activity against planktonic cells and biofilm formation was observed. The term synergism, meaning working together, refers to the interaction between two or more molecules whose combined effect is greater than the sum of the effects of the individual compounds.
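The study assessed synergism statistically with ANOVA. As an illustration of the underlying idea only, a common quantitative reference model is Bliss independence, under which two non-interacting agents with fractional inhibitions fa and fb are expected to yield a combined effect of fa + fb − fa·fb; the sketch below applies that criterion to the pre-coating type 1 fractions reported above. The use of Bliss independence here is our assumption, not the paper's method.

```python
# Bliss-independence check (an illustration only; the study itself tested
# synergy with ANOVA). An observed combined inhibition above the Bliss
# expectation is consistent with synergism.

def bliss_expected(fa, fb):
    """Expected combined fractional effect of two non-interacting agents."""
    return fa + fb - fa * fb

fa = 0.537        # AC7BS alone (pre-coating type 1)
fb = 0.314        # AMB at 0.5 ug/ml
observed = 0.947  # AC7BS + AMB at 0.5 ug/ml

expected = bliss_expected(fa, fb)
print(f"expected {expected:.3f} vs observed {observed:.3f}")
# expected ~0.682 < observed 0.947 -> consistent with a synergistic effect
```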
In the case of planktonic cells, although AC7BS alone did not inhibit cell viability, the antifungal activity of AMB and FLC was synergistically increased by the presence of the biosurfactant, and a reduction of MIC values was observed for both antifungal agents.
In anti-biofilm co-incubation experiments, the results demonstrated a significant inhibitory effect of AMB against sessile cells. FLC affected the intermediate phase of biofilm formation but was unable to reduce 24 h-old biofilms. It was also observed that AC7BS alone was not able to inhibit sessile cells but, generally, it increased the killing activity of antifungal agents when used in association with them. In addition, a reduction of SMIC values was observed for AMB.
It can be hypothesized that the antifungal activity of AMB or FLC is synergistically increased by the interaction of AC7BS with membrane lipids leading to a higher permeability of plasma membranes to antifungal agents.
Another work described a synergistic effect of a lipopeptide in co-incubation with various antibiotics against biofilms of the uropathogenic strain E. coli CFT073 on polystyrene [14]. The combined use of the V9T14 lipopeptide and six different antibiotics [cefazolin, ciprofloxacin, ceftriaxone, piperacillin, tobramycin and trimethoprim/sulfamethoxazole (SXT) (19:1)] led to a reduction of biofilms in terms of CFU ml⁻¹ ranging from 1.0 log10 to 2.1 log10, whereas the association of the biosurfactant with ampicillin led to complete eradication. Similarly to AC7BS, the presence of V9T14 did not affect the cell viability of planktonic and sessile cells but decreased the amount of antibiotic required to obtain the same cell reduction detected with the antibiotic alone.
In pre-coating experiments, the adsorption of AC7BS on silicone disks resulted in a significant reduction of biofilm formation, confirming the antiadhesive properties of this biosurfactant. Treatment with AMB alone showed a higher inhibitory effect when the drug was added after the adhesion phase. A less marked, but still significant, activity was observed when the antifungal agent was added directly to the fungal suspension and on 24 h-old biofilms. Interestingly, when AC7BS and AMB were used in association, a synergistic activity of the two molecules against C. albicans 40 biofilm formation was observed. In particular, the most encouraging results were obtained in pre-coating type 2 experiments, wherein the co-use of AC7BS and AMB at all concentrations decreased biofilm formation by more than 90%. Furthermore, it was also possible to observe a reduction of the concentration of antifungal agent needed to achieve the SMIC values.
In this case, the synergism can be attributed not only to the anti-adhesive activity of AC7BS and the antifungal effect of AMB, but also to the previously mentioned ability of the biosurfactant to decrease the cooperativity of lipid-lipid interactions in the bilayer membrane, thus facilitating the entry of AMB into the cells.
Conclusion
The study demonstrated, for the first time, that the association of the lipopeptide AC7BS with antifungal agents leads to a synergistic effect in inhibiting C. albicans 40 planktonic cells and biofilm formation. Although additional studies are required to determine the molecular basis of these observations, the results suggest that the joint activity of AC7BS and antifungal agents might have potential applicability in prophylactic or therapeutic strategies against C. albicans infections related to the use of insertional medical devices.
Natural Killer Cells: From Basic Research to Treatments
Innate Immunity and NK Cells
The immune system is classically divided into innate and adaptive. Adaptive immunity can be defined by the presence of cells (i.e., T and B lymphocytes in higher vertebrates) that clonally express a colossal repertoire of receptors (i.e., the T cell and the B cell antigen receptors), the diversity of which results from somatic DNA rearrangements. Besides T and B cells, natural killer (NK) cells were originally defined as cytolytic lymphocytes that selectively eliminate tumor cells without antigen-specific receptors (Oldham and Herberman, 1973; Herberman et al., 1975; Kiessling et al., 1975). NK cells are lymphocytes of the innate immune system that can kill an array of target cells and secrete cytokines that participate in the shaping of the adaptive immune response (Vivier et al., 2008, 2011). A feature of NK cells resides in their capacity to distinguish stressed cells (such as tumor cells, microbe-infected cells, and cells that have undergone physical or chemical injuries) from normal cells via an array of germline-encoded recognition receptors.
The acquisition of cell cytotoxicity during evolution has been associated with the development of highly sophisticated and robust mechanisms that control the initiation of the cytolytic processes and avoid tissue damage. Along this line, much progress has been made over the last 15 years in the dissection of the mechanisms that allow NK cells to discriminate target cells from other healthy "self" cells. These data have been instrumental in defining several immune recognition strategies and in the emergence of the "dynamic equilibrium concept." The NK cell detection system includes a variety of cell surface activating and inhibitory receptors, the engagement of which regulates NK cell activities. Thus, the integration of antagonistic pathways upon interaction with neighboring cells governs the dynamic equilibrium regulating NK cell activation and dictates whether or not NK cells are activated to kill target cells (Moretta and Moretta, 2004; Vivier et al., 2004; Lanier, 2005).
Missing-Self and NK Cell Education
Natural killer cells use inhibitory receptors to gauge the absence of constitutively expressed self-molecules on susceptible target cells. In particular, NK cells express MHC class I-specific receptors and "lose" inhibitory signals when encountering MHC class I-deficient hematopoietic cells in several in vitro and in vivo models. As a consequence, NK cells can recognize "missing self" on hematopoietic cells (Kärre et al., 1986; Bix et al., 1991). The MHC class I-specific inhibitory receptors include the killer cell immunoglobulin-like receptors (KIRs) in humans, the lectin-like Ly49 dimers in the mouse and the lectin-like CD94-NKG2A heterodimers in both species (Yokoyama and Plougastel, 2003; Parham, 2005). A conserved feature of these inhibitory receptors resides in the presence of one or two intracytoplasmic inhibitory signaling domains called immunoreceptor tyrosine-based inhibition motifs (ITIMs; Burshtyn et al., 1996; Olcese et al., 1996). By interacting with MHC class I molecules that are constitutively expressed by most healthy cells in steady-state conditions but that may be lost upon stress, inhibitory MHC class I receptors provide a way for NK cells to ensure tolerance to self while allowing toxicity toward stressed cells. MHC class I is not the only constitutive self-signal detected by NK cells, as other inhibitory receptors (for example, mouse NKR-P1B, human NKR-P1A, and mouse 2B4) that recognize non-MHC self-molecules (for example, Clr-b, LLT-1, and CD48, respectively) also regulate NK cell activation (Kumar and McNerney, 2005).
MHC class I-specific inhibitory receptors and their ligands (H-2 in mice and HLA in humans) are highly polymorphic molecules encoded by multigenic, multiallelic families of genes that are inherited independently (Yokoyama and Plougastel, 2003; Parham, 2005). NK cells thus have to discriminate self in a context where self-molecules differ from individual to individual. Like T lymphocytes, NK cells are educated to discriminate self from altered self. This education, also termed "tuning, licensing, or arming," leads to the maturation of a NK cell functional repertoire (i.e., the ensemble of stimulations toward which NK cells are reactive), which is adapted to the self-MHC class I environment (Fernandez et al., 2005; Kim et al., 2005; Anfossi et al., 2006; Raulet and Vance, 2006; Yokoyama and Kim, 2006). Consequently, NK cells in MHC class I-deficient hosts are hyporesponsive to stimulatory receptor stimulation and thereby tolerant to self. Other studies have reported that the hyporesponsiveness of NK cells grown in a MHC class I-deficient environment can be overcome by inflammatory conditions in the NK cell environment (Tay et al., 1995; Orr et al., 2010). It remains that two types of self-tolerant NK cells coexist in vivo at steady state: functionally competent NK cells, whose effector responses are inhibited by the recognition of self-MHC class I molecules, and hyporesponsive NK cells that cannot detect self-MHC class I. NK cell education does not result in an on/off switch, but rather in a quantitative tuning of NK cell responsiveness: the more inhibitory receptors recognizing self-MHC class I are expressed, the more NK cells are responsive to cells lacking self-MHC class I (Brodin et al., 2009; Joncker et al., 2009; Hoglund and Brodin, 2010). The molecular mechanisms underlying MHC-dependent NK cell education have been shown in mice to require a functional ITIM in the intracytoplasmic tail of Ly49 inhibitory receptors (Kim et al., 2005). Recently, using spot variable fluorescence correlation spectroscopy to monitor the movement of receptors, we have shown that in NK cells genetically engineered not to be properly educated, inhibitory and activating receptors were confined together in domains where they were associated with an actin network at the plasma membrane (Guia et al., 2011). When these cells were educated by MHC class I recognition, inhibitory receptors remained associated with an actin meshwork at the membrane, while activating receptors were present in nanodomains characteristic of active receptor signaling (Guia et al., 2011). This mechanism, as compared to transcriptional reprogramming, may allow the NK cells greater flexibility to switch between an unresponsive state and a state in which they are competent to respond to stimuli, consistent with recent NK cell adoptive transfer experiments (Elliott et al., 2010; Joncker et al., 2010).

Stress-Induced Self Recognition

Besides the detection of self via inhibitory receptors, NK cells are also equipped with cell surface activating receptors. In addition to the recognition of microbial molecules by a variety of innate immune receptors, the so-called "infectious non-self recognition," it has been shown that some receptors of innate immune cells can detect internal changes, leading to the concept of "stress-induced self recognition" (Bauer et al., 1999; Gasser et al., 2005; Guerra et al., 2008; Raulet and Guerra, 2009). This mode of detection is based on the recognition of molecules whose expression is barely detectable in steady-state conditions, but induced upon various forms of stress. A prototypical example of this mode of detection is illustrated by the activation of NK cells via engagement of the NKG2D receptor, which interacts with self-molecules selectively up-regulated on stressed cells, such as tumor cells. In vivo, NKG2D is critical for immunosurveillance of epithelial and lymphoid malignancies in transgenic models of de novo tumorigenesis (Guerra et al., 2008). In the transgenic Eμ-Myc mouse model of spontaneous B cell lymphoma, tumor expression of NKG2D ligands was shown to represent an early step of tumorigenesis associated with still unknown genetic lesions of cancer cells (Unni et al., 2008). A linkage between tumorigenesis, the DNA damage response (DDR) and the immune response has been proposed: DNA-damaging agents or DNA lesions associated with tumorigenesis activate the DDR, which results in up-regulation of NKG2D ligands, leading NK cells (and other NKG2D+ lymphocytes) to attack the diseased cells (Gasser et al., 2005; Gasser and Raulet, 2006).

Besides NKG2D, NK cells express an array of cell surface molecules, such as the natural cytotoxicity receptors (NCR), which have been shown for more than a decade to be involved in the activation of NK cells by tumor cells (Moretta et al., 2001). The NCR family includes NKp46 (NCR1, CD335), NKp44 (NCR2, CD336), and NKp30 (NCR3, CD337; Bottino et al., 2006). However, the NCR ligands that are expressed on tumor cells and activate NK cells are still unknown, with the notable exception of B7-H6, which we recently identified as a ligand for NKp30 (Brandt et al., 2009). One important aspect of future research on NK cells will be to characterize the nature and the regulation of NCR ligands.

NK Cells and Immunological Memory

The immune system, like the nervous system, has the ability to learn from previous experience, such as a single encounter with the many pathogens that exist. The result is immunological memory that confers long-lasting protection. Until now, immunological memory was thought to be a feature of the adaptive immune system. Unexpectedly, recent studies revealed that NK cells could be players in the persistence of immunity, although they have traditionally been considered to be part of the innate immune system. First, a seminal study showed that in mice lacking T and B cells, a subset of liver NK cells is able to mediate the prototypical "adaptive" immune reaction of hapten-specific contact hypersensitivity (O'Leary et al., 2006; Paust et al., 2010). More recently, in a model of murine cytomegalovirus (MCMV), mouse NK cells expressing the receptor Ly49H, which recognizes the MCMV m157 protein, were shown to clonally expand and subsequently contract while leaving a few long-lived cells able to mount a "secondary" response (Sun et al., 2009; Ugolini and Vivier, 2009). These "virus-experienced" cells are still detectable up to 3 months after infection and display enhanced IFN-γ secretion and degranulation compared to naïve cells when restimulated ex vivo. Finally, when transferred to naïve immunocompromised mice, these cells are more protective against a lethal virus challenge. Following MCMV infection, NK cells can therefore give rise to a population of long-lived cells with an intrinsic ability to exhibit enhanced effector functions when restimulated. Supporting these observations, NK cells activated in vitro by cytokines (IL-12/IL-18) and adoptively transferred in vivo also display some "memory-like" properties (Cooper et al., 2009). In this model, after an episode of "activation-driven" proliferation, a population of apparently resting cells with an enhanced ability to secrete IFN-γ ex vivo is maintained for at least 3 weeks. Notably, this increased ability to secrete IFN-γ was observed regardless of the cell generation. Thus, NK cells can retain in vivo an intrinsic memory of a prior in vitro activation, which is maintained across cell divisions. Altogether, these results prompt research into the mechanisms that allow the boosted effector function of NK cells to be maintained across cell divisions, in particular the epigenetic marks associated with various stages of these cells' activation.

Besides, the "memory-like" features of NK cells also prompt investigation of how they participate in immunological memory. The ultimate goal of a vaccine is to develop long-lived immunological protection, whereby the first encounter with a pathogen is remembered and leads to an enhanced immune response. Novel insights into the cellular and molecular mechanisms controlling the development and function of immunological memory are therefore critical for vaccine development and improvement. There are several reports showing that NK cells interfere with the shaping of the adaptive immune response (Raulet, 2004), but very few address specifically the role of NK cells in immunological memory (Raulet, 2004; Soderquest et al., 2011). Concerning primary responses, NK cells can meet dendritic cells (DC) in peripheral tissues, as well as in secondary lymphoid organs, and can act on them in two distinct ways (Moretta, 2002; Degli-Esposti and Smyth, 2005; Walzer et al., 2005). Upon interaction, NK cells can kill immature DC in humans and mice, thereby influencing DC homeostasis, but also potentially limiting DC-based vaccination efficacy. Conversely, the killing of target cells by NK cells can lead to the cross-presentation of antigens from apoptotic NK cell targets by subsets of DCs. This NK cell-mediated cytotoxicity of target cells induces robust antigen-specific adaptive immune responses involving CD8+ T cells, CD4+ T cells and antibodies (Krebs et al., 2009). Recognition and killing of target cells by NK cells might thus provide a new and powerful strategy for vaccine development. NK cells can also influence adaptive immune responses by directly acting on T and B cells. In the inflamed lymph node, NK cells can promote the priming of CD4+ T helper type 1 cells by secreting IFN-γ. NK cells can also kill activated T cells, unless the T cells express sufficient amounts of classical or non-classical MHC class I molecules. As a consequence, blockade of CD94-NKG2A inhibitory receptors leads to NK cell cytotoxicity against activated CD4+ T cells, suggesting the use of blocking antibodies to NKG2A to prevent CD4+ T cell-dependent autoimmunity (Lu et al., 2007). As the nature of reliable biological markers of protective immunity is still a matter of debate, it is also exciting to consider that NK cells might be monitored as a potential protection correlate for testing the efficiency of vaccines under development.

NK Cell-Based Therapies

Several studies suggest that the manipulation of NK cell missing-self recognition may have important clinical benefit in leukemic patients (Cooley et al., 2007; Ljunggren and Malmberg, 2007; Terme et al., 2008; Zitvogel et al., 2008). In particular, retrospective studies of KIR/HLA-mismatched stem cell transplantation in acute myeloid leukemia patients showed that the lack of KIR engagement on donor NK cells by patient MHC class I molecules was associated with a significantly reduced risk of leukemia relapse (Ruggeri et al., 2002). The manipulation of NK cell alloreactivity in these settings implies haploidentical hematopoietic transplantations, which are associated with considerable adverse effects, including graft versus host disease mediated by allogenic T cells. A safer strategy is to block NK cell inhibitory receptors in an autologous setting, and this is currently being tested in phase II clinical trials with a fully human anti-KIR monoclonal antibody (1-7F9; Romagne et al., 2009; Sola et al., 2009). This monoclonal antibody recognizes KIR2D inhibitory receptors and blocks their interaction with the human MHC class I molecules HLA-C, leading to NK cell-mediated lysis of leukemic cells. However, one of the main concerns for using this therapeutic approach in humans is the risk of generating a strong reactivity against normal self-tissues and/or of interfering with NK cell education. Therefore, the precise understanding of NK cell education mechanisms is not only critical to describe this process as a model of education to self-reactivity for cells of the innate immune system, but it is also pivotal for the development of innovative therapeutic strategies based on the manipulation of NK cell immunity.

Conclusions

Studies on NK cells, which integrate their function as a result of their education and their mode of recognition of target cells, have already provided a novel conceptual framework for the study of innate immunity that can serve as an inspiration for the study of other hematopoietic cells. The experimental evidence of the role of NK cells as innate components of the host defense against some viruses and tumors, and the potential efficacy of manipulating KIR/HLA class I recognition in anti-tumoral hematopoietic stem cell transplantation, has initiated research on NK cell-directed therapies. These protocols are likely to be used in combinatorial regimens with classical anti-cancer approaches. However, more knowledge is required before therapeutic breakthroughs can be envisioned in this direction. The absolute requirement for a better understanding of NK cell biology is also highlighted by the possibility of using NK cell-based therapies in other clinical indications such as autoimmunity, infectious diseases, immune deficiencies, and pathological pregnancy. In addition, the need for novel immuno-monitoring studies has emerged with the burst of tumor vaccines and the search for surrogate markers of efficacy of druggable components. However, given the complexity of NK cell biology, their standard monitoring (e.g., in vitro lysis of K562 leukemia cells) should be revisited, implemented, and internationally standardized. Furthermore, the detection of NK cell ligands on tumor cells could also be evaluated as possible prognostic markers.

Thus, almost 40 years after their discovery and despite the growing interest in both basic and clinical aspects of NK cell biology, several key elements of NK cell function and mode of action remain to be unveiled. Frontiers in NK cell biology provides a platform to highlight new knowledge on key aspects of NK cell biology from the nanoscopic to the organismal scales. In addition, Frontiers in NK cell biology has the ambition to also apply to human immunology by bridging basic and translational research to open novel therapeutic perspectives.

Acknowledgments

Eric Vivier and Sophie Ugolini are supported by grants from the European Research Council (Advanced Grants), Agence Nationale de la Recherche (ANR), Ligue Nationale contre le Cancer (Equipe labellisée "La Ligue"), and institutional grants from INSERM, CNRS, and Université de la Méditerranée to the CIML. Eric Vivier is a scholar from the Institut Universitaire de France.
Radiomic Features Are Superior to Conventional Quantitative Computed Tomographic Metrics to Identify Coronary Plaques With Napkin-Ring Sign
Supplemental Digital Content is available in the text.
Coronary plaques are small structures with complex geometric shapes, which might pose a challenge for radiomic feature analysis. Therefore, we sought to assess whether calculation of radiomic features is feasible on coronary lesions. Furthermore, we aimed to evaluate whether radiomic parameters can differentiate between plaques with or without NRS.
Methods
The institutional review board approved the study (SE TUKEB 1/2017), and because of the retrospective study design informed consent was waived. The data and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure because of intellectual property rights and patient confidentiality. However, we made our analysis software open source and freely accessible for other researchers. 17
Study Design and Population
From 2674 consecutive coronary CT angiography examinations performed because of stable chest pain, we retrospectively identified 39 patients who had NRS plaques. Two expert readers reevaluated the scans with NRS plaques. To minimize potential variation due to inter-reader variability, the presence of NRS was assessed by consensus read. The readers excluded 7 patients because of insufficient image quality and 2 patients because of the lack of NRS; therefore, 30 coronary plaques of 30 patients (NRS group; mean age: 63.07 years; interquartile range [IQR], 56.54-68.36; 20% female) were included in our analysis. As a control group, we retrospectively matched 30 plaques of 30 patients (non-NRS group; mean age: 63.96 years; IQR, 54.73-72.13; 33% female) from our clinical database with excellent image quality. To maximize similarity between the NRS and non-NRS plaques and to minimize parameters potentially influencing radiomic features, we matched the non-NRS group based on degree of calcification and stenosis, plaque localization, tube voltage, and image reconstruction. Detailed patient and scan characteristics are summarized in Table 1, whereas a detailed description of scan characteristics and image quality measurements is given in the Methods 1 section of the Data Supplement.
Traditional Plaque Characteristics
All plaques were graded for luminal stenosis (minimal, 1% to 24%; mild, 25% to 49%; moderate, 50% to 69%; severe, 70% to 99%) and degree of calcification (calcified; partially calcified; noncalcified). Furthermore, plaques were classified as having low attenuation if the plaque cross-section contained any voxel with <30 Hounsfield units and as having spotty calcification if a <3-mm calcified plaque component was visible. Detailed plaque and imaging information is shown in Table 2.
Image Segmentation, Conventional Quantitative Metrics, and Data Extraction
Image segmentation and data extraction was performed using a dedicated software tool for automated plaque assessment (QAngioCT Research Edition; Medis Medical Imaging Systems B.V., Leiden, The Netherlands). After automated segmentation of the coronary tree, the proximal and distal ends of each plaque were set manually. Automatic lumen and vessel contours were manually edited by an expert if needed. 18 From the segmented data sets, 8 conventional quantitative metrics (lesion length, area stenosis, mean plaque burden, lesion volume, remodeling index, mean plaque attenuation, and minimal and maximal plaque attenuation) were calculated by the software. The voxels containing the plaque tissue were exported as a DICOM data set using a dedicated software tool (QAngioCT 3D Workbench; Medis Medical Imaging Systems B.V.).
Smoothing or interpolation of the original Hounsfield unit values was not performed. Representative examples of volume-rendered and crosssectional images of NRS and non-NRS plaques are shown in Figure 1.
Calculation of Radiomic Features
We developed an open-source software package in the R programming environment (Radiomics Image Analysis), which is capable of calculating hundreds of different radiomic parameters on 2- and 3-dimensional data sets. 17 We calculated 4440 radiomic features for each coronary plaque using the Radiomics Image Analysis software tool. A detailed description of how radiomic features were calculated can be found in the Methods 1 section of the Data Supplement, whereas a detailed description of the calculated statistical parameters can be found in the Methods 2 section of the Data Supplement.
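The features themselves were computed with the authors' open-source R package. As a rough illustration of what a co-occurrence-based feature involves, the Python sketch below bins the attenuation values of a synthetic 2-dimensional slice and derives a few standard GLCM statistics with scikit-image; the Hounsfield-unit range, slice size, and bin count are placeholders, not study settings.

```python
# Minimal GLCM illustration on a synthetic 2-D "plaque slice". The study
# used its own R package (Radiomics Image Analysis); this sketch only
# shows the binning/co-occurrence idea with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
hu_slice = rng.integers(-30, 450, size=(32, 32))  # placeholder HU values

# Discretize attenuation into 8 equal-width bins (the paper also used
# equally probable bins at several bin counts).
edges = np.linspace(hu_slice.min(), hu_slice.max() + 1, 9)
binned = (np.digitize(hu_slice, edges) - 1).astype(np.uint8)

glcm = graycomatrix(binned, distances=[1], angles=[0, np.pi / 2],
                    levels=8, symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy"):
    print(prop, graycoprops(glcm, prop).mean())
```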
Statistical Analysis
Binary variables are presented as frequencies and percentages, whereas ordinal and continuous variables are presented as medians and IQRs because of possible violations of the normality assumption. For robust statistical estimates, parameters between the NRS and the non-NRS groups were compared using the permutation test of symmetry for matched samples using conditional Monte Carlo simulations with 10 000 replicas. 19 For diagnostic performance estimates, we conducted receiver-operating characteristics analysis and calculated the area under the curve (AUC) with bootstrapped confidence interval values using 10 000 samples with replacement, and calculated sensitivity, specificity, and positive and negative predictive values by maximizing the Youden index. 20 To assess potential clusters among radiomic parameters, we conducted linear regression analysis between all pairs of the calculated 4440 radiomic metrics. The 1−R² value between each pair of radiomic features was used as a distance measure for hierarchical clustering. The average silhouette method was used to evaluate the optimal number of different clusters in our data set. 21 Furthermore, to validate our results, we conducted a stratified 5-fold cross-validation using 10 000 repeats of the 3 best radiomic and conventional quantitative parameters. The model was trained on a training set and was evaluated on a separate test set at each fold using receiver-operating characteristics analysis. The derived curves were averaged and plotted to assess the discriminatory power of the parameters. The number of additional cases classified correctly was calculated compared with lesion volume. The McNemar test was used to compare the classification accuracy of the given parameters with that of lesion volume. 22 Because of the large number of comparisons, we used the Bonferroni correction to account for the family-wise error rate.
[Table 1 footnote: Data are presented as median with interquartile range or frequency and percentage as appropriate. BMI indicates body mass index; DLP, dose length product; and NRS, napkin-ring sign.]
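To make the performance estimates concrete, the sketch below computes an AUC with a percentile-bootstrap confidence interval and reads off the Youden-optimal sensitivity and specificity. The labels and scores are synthetic stand-ins, and the resample count is reduced from the 10 000 used in the study.

```python
# ROC/AUC with a bootstrap CI and a Youden-optimal cut-off, on synthetic
# stand-in data (30 non-NRS vs 30 NRS, mirroring the group sizes above).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y = np.array([0] * 30 + [1] * 30)
score = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.0, 1.0, 30)])

fpr, tpr, thr = roc_curve(y, score)
best = np.argmax(tpr - fpr)                      # maximize the Youden index
print(f"AUC={roc_auc_score(y, score):.3f}, "
      f"sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")

aucs = []                                        # percentile bootstrap CI
for _ in range(2000):                            # the study used 10 000
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) == 2:              # need both classes drawn
        aucs.append(roc_auc_score(y[idx], score[idx]))
print("95% CI:", np.percentile(aucs, [2.5, 97.5]).round(3))
```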
Bonferroni correction assumes that the examined parameters are independent of each other; thus, the question is not how many parameters are being tested but how many independent statistical comparisons will be made. Therefore, based on methods used in genome-wide association studies, we calculated the number of informative parameters accounting for 99.5% of the variance using principal component analysis. 23,24 Overall, 42 principal components were identified; therefore, P values <0.0012 (0.05/42) were considered significant. All calculations were done in the R environment. 25
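The adjustment above can be reproduced mechanically: run a PCA on the standardized feature matrix, count the components needed to reach 99.5% of the variance, and divide alpha by that count. The sketch below does this for a random placeholder matrix, so the resulting number of effective comparisons is illustrative only, not the study's 42.

```python
# Effective number of comparisons via PCA, as described above. X is a
# random placeholder for the 60 x 4440 radiomic feature matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 200))

Xs = StandardScaler().fit_transform(X)
cum = np.cumsum(PCA().fit(Xs).explained_variance_ratio_)
n_eff = int(np.searchsorted(cum, 0.995) + 1)  # components for 99.5% variance

print(f"effective comparisons: {n_eff}, adjusted alpha: {0.05 / n_eff:.2e}")
```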
Descriptive Results
There was no significant difference between the NRS and non-NRS groups regarding patient characteristics and scan parameters (Table 1). Furthermore, we did not observe any significant difference in qualitative plaque characteristics and image quality parameters (Table 2).
Cluster Analysis of Radiomic Parameters
Results of the linear regression analysis conducted between all pairs of the calculated 4440 radiomic metrics are summarized using a heatmap (Figure 3). Hierarchical clustering showed several different clusters where parameters are highly correlated with each other (represented by the red areas in Figure 3) but only have minimal relationship with other radiomic features (represented by the black areas in Figure 3). Cluster analysis revealed that the optimal number of clusters among radiomic features in our data set is 44.
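A compact way to reproduce this pipeline is to square the feature correlation matrix, use 1−R² as a precomputed distance for average-linkage hierarchical clustering, and scan cluster counts with the silhouette score. The sketch below uses a small random feature matrix, so the selected cluster count is only illustrative, not the 44 found in the study.

```python
# Feature clustering with a 1 - R^2 distance and the average silhouette
# method, on a synthetic stand-in matrix (60 plaques x 40 features).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 40))

r2 = np.corrcoef(X, rowvar=False) ** 2    # pairwise R^2 between features
dist = 1.0 - r2
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")
best_k, best_s = 2, -1.0
for k in range(2, 15):
    labels = fcluster(Z, k, criterion="maxclust")
    s = silhouette_score(dist, labels, metric="precomputed")
    if s > best_s:
        best_k, best_s = k, s
print(f"optimal cluster count: {best_k} (silhouette {best_s:.2f})")
```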
Cross-Validation Results
Five-fold cross-validation using 10 000 repeats was used to simulate the discriminatory power of the 3 best radiomic and conventional parameters. Average receiver-operating characteristic curves of the cross-validated results are shown in Figure 4. Radiomic parameters had higher AUC values and identified lesions showing the NRS significantly better than conventional metrics did. Detailed results are shown in Table 4.
Discussion
We demonstrated that coronary plaques consist of a sufficient number of voxels to conduct radiomic analysis, and 20.6% of radiomic parameters showed a significant difference between plaques with and without NRS, whereas conventional parameters did not show any difference. Furthermore, several radiomic parameters had a higher diagnostic accuracy in identifying NRS plaques than conventional quantitative measures. Cluster analysis revealed that many of these parameters are correlated with each other; however, there are several distinct clusters, which implies the presence of various features that hold unique information on plaque morphology. Cross-validation simulations indicate that our results are robust when assessing the discriminatory value of radiomic parameters, implying the generalizability of our results.
Radiomics uses voxel values and their relationship to each other to quantify image characteristics. On the basis of our results, it seems that not only do radiomic features outperform conventional quantitative imaging markers, but parameters incorporating the spatial distribution of voxels (GLCM, GLRLM, and geometry-based parameters) also have a better predictive value than first-order statistics, which describe the statistical distribution of the intensity values. Among GLCM parameters, the interquartile range, the lower notch, the median absolute deviation from the mean of the GLCM probability distribution, Gauss right focus, and sum energy had the 5 highest AUC values. NRS plaques have many low-value voxels next to each other in a group surrounded by higher-density voxels. This heterogeneous morphology results in an unbalanced GLCM and therefore higher interquartile range values, which also means smaller lower notch values and bigger deviations from the mean. Gauss right focus and sum energy both give higher weights to elements in the lower right of the GLCM, which represents the probability of high-density voxels occurring next to each other. Because NRS plaques do not have many high-value voxels next to each other, they received smaller values, whereas non-NRS plaques have higher values, which resulted in excellent diagnostic accuracy. Among GLRLM statistics, long- and short-run low-gray-level emphasis, long- and short-run emphasis, and run percentage had the best predictive value. Run percentage and long-run emphasis give high values to lesions where there are many similar-value voxels in 1 direction, whereas long-run low-gray-level emphasis adds a weight to the previous parameter by giving higher weights when these voxel runs contain low Hounsfield unit values. NRS plaques' low-density core has many low CT number voxels next to each other in 1 direction; therefore, NRS plaques have higher values compared with non-NRS plaques, which results in excellent diagnostic accuracy. In the case of short-run emphasis and short-run low-gray-level emphasis, the contrary is true, which results in NRS plaques receiving low values, whereas non-NRS plaques have higher values, also leading to high AUC values.
[Table 3 footnote: Component numbers of the geometry-based parameters refer to the specific attenuation bins created by discretizing the attenuation values to a given number of bins; the marked values are based on discretizing to 2, 4, 8, 16, or 32 equally probable bins. AUC indicates area under the curve; CI, confidence interval; GLCM, gray-level co-occurrence matrix; GLRLM, gray-level run-length matrix; NPV, negative predictive value; and PPV, positive predictive value.]
Among geometry-based parameters, the first 5 with the best diagnostic accuracy all represent the surface ratio of a specific subcomponent to the whole surface of the plaque. In all cases, the ratio of high-density subcomponents (eg, subcomponent 2 when the plaque was divided into 2 components) to the whole surface had excellent diagnostic accuracy. Because each subcomponent is composed of equal number of voxels because of the equally probable binning, the difference in surfaces is a result of how the high-intensity voxels are situated to each other. In case of NRS plaques, extraction of low attenuation voxels leaves a hollow cylindrical shape of high CT number voxels, which has a relatively large surface. Non-NRS plaques on the contrary do not have such voxel complexes; therefore, the surface of the high attenuation voxels is smaller, and, therefore, the ratio compared with the whole surface is also smaller.
This kind of transition from qualitative to quantitative image assessment was initiated by oncoradiology. Because studies showed that morphological descriptors correlate with later outcomes, 26 reporting guidelines such as the Breast Imaging Reporting and Data System started implementing qualitative morphological characteristics into clinical practice. 27 However, despite all the efforts at standardization, the variability of image assessment based on human interpretation is still substantial. 28 Radiomics, the process of extracting thousands of different morphological descriptors from medical images, has been shown to reach the diagnostic accuracy of clinical experts in identifying malignant lesions. 10 Furthermore, radiomics can not only classify abnormalities into proper clinical categories but also discriminate between responders and nonresponders to clinical therapy and predict long-term outcomes. 12,15 However, there are major concerns about the generalizability of radiomics. Several studies have shown that imaging parameters, reconstruction settings, segmentation algorithms, etc., all affect the radiomic signature of lesions. [29][30][31][32] Furthermore, it has been shown that the variability caused by these changeable parameters is in the range of, or even greater than, the variability of the radiomic features of tumor lesions. 33 Little is known about cardiovascular radiomics. Several studies will be needed to replicate these results in the cardiovascular domain. The potential of radiomics is extensive; however, the problems of standardized imaging protocols and radiomic analysis need to be solved to achieve robust and generalizable results.
Despite our encouraging results, our study has some limitations that should be acknowledged. All of our examinations were done using the same scanner and reconstruction settings. It is yet unknown how these settings might affect radiomic parameters and therefore influence the applicability of radiomics in daily clinical care. Furthermore, our results are based on a case-control study design. The true prevalence of the NRS is considerably smaller compared with non-NRS plaques in a real population. Therefore, our observed positive predictive values might be higher, whereas our negative predictive values might be smaller, than those expected in a real-world setting. Moreover, our limited sample size might not allow the accurate assessment of the diagnostic accuracy of the different parameters. However, we performed Monte Carlo simulations and cross-validated our results to achieve robust estimates.
Radiomics is a promising new tool to identify qualitative plaque features such as the NRS. As the number of CT examinations increases, we are in dire need of new techniques that increase the accuracy of our examinations without increasing the workload of imaging specialists. We demonstrated that radiomics has the potential to identify a qualitative high-risk plaque feature that currently only experts are capable of recognizing. Furthermore, our findings indicate that radiomics can quantitatively describe qualitative plaque morphologies and therefore has the potential to decrease intra- and interobserver variability by objectifying plaque assessment. In addition, we observed several different clusters of information present in our data set, implying that radiomics might be able to identify new image markers that are currently unknown. These new radiomic characteristics might provide a more accurate plaque risk stratification than the currently used high-risk plaque features. Radiomics could easily be implemented into currently used standard clinical workstations and become a computer-aided diagnostic tool, which seamlessly integrates into the clinical workflow and could increase the reproducibility and the accuracy of diagnostic image interpretation in the future. Further studies are needed to explore the potential of cardiovascular radiomics.
Disclosures
Dr Kolossváry is the creator and developer of the Radiomics Image Analysis software package, which was used for radiomic analysis.
[Table 4 note: AUC values of the averaged ROC curves shown in Figure 4 are presented with the corresponding proportion of additional cases classified correctly by the given parameter compared with the reference lesion volume. P values indicate the statistical significance of the increased diagnostic accuracy compared with lesion volume. AUC indicates area under the curve; and ROC, receiver-operating characteristic.]
[Figure 4 caption: Stratified 5-fold cross-validated receiver-operating characteristic (ROC) curves of the best radiomic and conventional quantitative parameters. Radiomic parameters (blue) have higher discriminatory power to identify plaques with napkin-ring sign compared with conventional quantitative metrics (green). Detailed performance measures can be found in Table 4.]
Abstract. Convective processes profoundly affect the global water and energy balance of our planet but remain a challenge for global climate modeling. Here we develop and investigate the suitability of a unified convection scheme, capable of handling both shallow and deep convection, to simulate cases of tropical oceanic convection, mid-latitude continental convection, and maritime shallow convection. To that aim, we employ large-eddy simulations (LES) as a benchmark to test and refine a unified convection scheme implemented in the Single-column Community Atmosphere Model (SCAM). Our approach is motivated by previous cloud-resolving modeling studies, which have documented the gradual transition between shallow and deep convection and its possible importance for the simulated precipitation diurnal cycle. Analysis of the LES reveals that differences between shallow and deep convection, regarding cloud-base properties as well as entrainment/detrainment rates, can be related to the evaporation of precipitation. Parameterizing such effects and accordingly modifying the University of Washington shallow convection scheme, it is found that the new unified scheme can represent both shallow and deep convection as well as tropical and mid-latitude continental convection. Compared to the default SCAM version, the new scheme especially improves relative humidity, cloud cover and mass flux profiles. The new unified scheme also removes the well-known too early onset and peak of convective precipitation over mid-latitude continental areas.
Introduction
Accurate representation of deep convection with global climate models of coarse resolution remains a nagging problem for the simulation of present-day and future climates.Typical biases include the simulation of a double Inter-Tropical Convergence Zone (ITCZ, see e.g., Bretherton, 2007;Lin, 2007), a too weak, too fast or spatially distorted Madden-Julian Oscillation (MJO, see e.g., Slingo et al., 1996;Bretherton, 2007) and poor timing of convection with a too early onset, peak and decay of precipitation.This last bias is apparent both over the Tropics (e.g., Yang and Slingo, 2001;Bechtold et al., 2004) and mid-latitude continental areas (e.g., Dai et al., 1999;Lee et al., 2007).
Many approaches have been proposed over the years to parameterize deep convection (see e.g., Arakawa, 2004;Randall et al., 2003, for a review).The most popular method remains the use of a mass flux scheme (see e.g., Plant, 2010;Arakawa and Schubert, 1974).The latter aims to predict the vertical structure and evolution of a one-dimensional entraining-detraining plume (bulk mass flux scheme) or spectrum thereof (spectral mass flux scheme).Irrespective of the specific design, convection schemes have to rely on some assumptions to relate the sub-scale cloud behavior to the large-scale resolved flow.Such relations are hard to get from observations and hard to formulate.
Recently, the use of large-eddy or cloud-resolving simulations to characterize the behavior of the cumulus ensemble has allowed the formulation of improved convective parameterizations.Rio et al. (2009) were able to simulate a realistic diurnal cycle of convection for an idealized case of mid-latitude continental convection by adding a density current parameterization to Emanuel (1991)'s convection scheme.Grandpeix et al. (2010) investigated this approach for the Hydrology-Atmosphere Pilot Experiment in the Sahel (HAPEX-Sahel) and the Tropical Ocean Global C. Hohenegger and C. S. Bretherton: Unified shallow-deep convection scheme Atmosphere Coupled Ocean Atmosphere Response Experiment (TOGA COARE) and found good agreement with cloud-resolving model simulations.Several studies also documented improvements in tropical convection, without nevertheless being able to fully remove the ITCZ or MJO biases, by employing more elaborate entrainment/detrainment formulations (e.g., Chikira and Sugiyama, 2010;Bechtold et al., 2008;Li et al., 2007;Wang et al., 2007), revised closures/triggering functions (e.g., Deng and Wu, 2010;Li et al., 2007;Zhang and Mu, 2005;Neale et al., 2008) or by introducing convective momentum transport (e.g., Deng and Wu, 2010;Richter and Rasch, 2008).The possible impacts of such modifications are in general strongly model dependent and confined to certain aspects of the simulated convection.In this respect it is still not clear whether a single convective parameterization can realistically handle both tropical oceanic and mid-latitude continental convection.
This study is geared towards improving the simulation of deep convection in coarse-resolution climate models.In contrast to the approach employed in most such models, we seek to develop a unified convection scheme starting from a parameterization designed for shallow cumulus convection.We regard shallow convection as mostly non-precipitating convection with no ice formation.Deep convection will refer to precipitating convection.Cloud-resolving modeling studies have documented the gradual transition occurring from shallow to deep convection and highlighted its importance for the simulated convective diurnal cycle (e.g., Guichard et al., 2004).This may be best achieved with a unified scheme.Our study is a step in this sense.We will explore how to unify shallow and deep convection and present singlecolumn model experiments to test our results.
The basic hypothesis behind our approach is that the main difference between shallow and deep convection is precipitation (both rain and snow) and its effects.Evaporation of precipitation (hereafter called rain evaporation) modifies the atmospheric environment and especially the structure of the planetary boundary layer (PBL), which feeds back on the convective development.Including such effects in a shallow convection scheme should thus allow the representation of deep convection within the same scheme.We thus see deep convection as highly interactive with the PBL state, like shallow convection.Our parameterization approach is further motivated by the results of recent large-eddy simulations (e.g., Khairoutdinov and Randall, 2006) which have highlighted the importance of rain evaporation for deep convection.
In order to fulfill our goals and test our hypothesis, we will employ large-eddy simulations of different convective events.We will investigate modifications in the PBL structure and in the atmospheric environment due to falling precipitation, and derive appropriate relations to describe them.These relations will then be implemented in the shallow convection scheme developed at the University of Washington (UW) by Bretherton et al. (2004) and Park and Bretherton (2009).Using a single-column version of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM), the performance of the new unified scheme will be assessed against large-eddy simulations, the default version of the CAM single-column model, and a version of the single-column model in which the UW shallow convection scheme is used without modification (but also without any separate deep convection scheme).
As this paper was being written, Mapes and Neale (2011) also presented results of CAM simulations with a unified convection scheme.They extended the UW shallow convection scheme to a two plume model and introduced a new prognostic variable called org to control the transition between shallow and deep convection.org is meant to represent convective organization and acts upon cloud-base properties and lateral mixing rates.The source of org is rain evaporation with an arbitrary set conversion rate.Our approach bears similarities with the one of Mapes and Neale (2011) as it also uses rain evaporation and its effects on cloud-base properties and mixing rates to control the transition from shallow to deep convection.However we stick to the one plume model, do not introduce new prognostic equations and employ largeeddy simulations to quantify the effect of rain evaporation on the subsequent cloud development.
The outline is as follows. Section 2 presents our method with a description of the different models, cases considered, and our experimental set-up. Section 3 focuses on the planetary boundary layer; changes in cloud-base mass flux and cloud-base thermodynamic properties between shallow and deep convection are investigated, parameterized and tested with single-column model experiments. Section 4 repeats the analysis for entrainment and detrainment rates. Conclusions are given in Sect. 5.
Models
The large-eddy simulations (LES) are performed with the System for Atmospheric Modeling (SAM, see Khairoutdinov and Randall, 2003). The model solves the 3D anelastic equations given prescribed large-scale tendencies and surface fluxes/sea surface temperature. As parameterizations, the model includes a bulk microphysics scheme, a Smagorinsky-type scheme to represent subgrid-scale turbulence, and the radiation package (Collins et al., 2006) taken from the NCAR CAM3 global climate model (GCM). A more detailed description of SAM can be found in Khairoutdinov and Randall (2003).
For the single-column model experiments we employ the single-column (one-dimensional) version of the Community Atmosphere Model (SCAM, see Hack and Pedretti, 2000), version 3.5. SCAM comes with the full atmospheric parameterization package of the CAM3.5 GCM. This is a version of CAM3 (see Collins et al., 2006) with a modified treatment of deep convective momentum transport (Richter and Rasch, 2008) and a revised deep convective trigger (Neale et al., 2008). CAM3 includes a surface-driven boundary-layer turbulence scheme based on Holtslag and Boville (1993). Deep convection is parameterized after Zhang and McFarlane (1995), while shallow convection follows Hack (1994).
As alternate parameterizations, the model can be run with the new moist turbulence and shallow convection schemes developed at the UW (see Bretherton and Park, 2009; Bretherton et al., 2004; Park and Bretherton, 2009).
The UW shallow convection scheme
Since the UW shallow convection scheme serves as the starting point for developing a unified convection scheme, it is explained here in more detail. It is a mass flux scheme based on a buoyancy-sorting, entrainment-detrainment plume model. Updraft mass flux $M_u$ and updraft properties $\psi_u$ are computed according to

$$\frac{\partial M_u}{\partial z} = (\epsilon - \delta)\,M_u, \qquad (1)$$

$$\frac{\partial (M_u \psi_u)}{\partial z} = (\epsilon\,\bar{\psi} - \delta\,\psi_u)\,M_u + S_\psi, \qquad (2)$$

with $\epsilon$ the fractional entrainment rate, $\delta$ the fractional detrainment rate, $\bar{\psi}$ the mean environmental property and $S_\psi$ a source term. The mass flux at cloud base $M_{cb}$ is determined by the ratio between convective inhibition (CIN) and mean planetary boundary layer turbulent kinetic energy (TKE) through a closure of the form

$$M_{cb} = \rho\,\tilde{w}\,\exp(-\mathrm{CIN}/\mathrm{TKE}), \qquad (3)$$

with $\rho$ the air density and $\tilde{w}$ a convective velocity scale derived from the PBL TKE (see Park and Bretherton, 2009, for the exact prefactor). CIN is implicitly computed within the scheme (see Park and Bretherton, 2009), while TKE must be provided by the boundary layer scheme. This ensures tight interactions between the planetary boundary layer (PBL) and cumulus convection. If the lifting condensation level (LCL) is much higher than the top of the boundary layer, CIN is very large and air parcels do not have enough kinetic energy to overcome their CIN. The boundary layer height then increases via entrainment until it reaches the LCL. As a result, CIN decreases, the mass flux increases, and the enhanced compensating subsidence prevents the PBL from rising further. The closure thus acts to keep the cumulus base near the top of the PBL and keeps CIN on the same order as TKE (Fletcher and Bretherton, 2010). Cloud properties are expressed in terms of the total water mixing ratio $q_t = q_v + q_l + q_i$ and the ice-liquid water potential temperature

$$\theta_{li} = \theta - \frac{L_v q_l + L_f q_i}{c_p \Pi}$$

(Deardorff, 1976), with $\theta$ the potential temperature; $q_i$, $q_l$ and $q_v$ the ice, liquid water and water vapor mixing ratios; $L_v$ and $L_f$ the latent heats of vaporization and sublimation; $c_p$ the specific heat of dry air at constant pressure; and $\Pi$ the Exner pressure function. Both $q_t$ and $\theta_{li}$ are assumed to be conserved for non-precipitating moist adiabatic processes. At cloud base, $q_t$ is set to its surface value, while $\theta_{li}$ is diagnosed from the lowest value of the virtual potential temperature over the PBL and the value of $q_t$ at cloud base. Updraft vertical velocity $w_u$ is diagnosed by solving

$$\frac{1}{2}\frac{\partial w_u^2}{\partial z} = a\,B_u - b\,\epsilon\,w_u^2, \qquad (4)$$

with $B_u$ the updraft buoyancy, $a$ a virtual mass coefficient and $b$ a drag coefficient. $a$ and $b$ are set to 1 and 2, respectively (see Bretherton et al., 2004, for more detail). The updraft vertical velocity determines the maximum height reached by the plume.
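To make the structure of the scheme concrete, the following sketch integrates the bulk plume Eqs. (1)-(2) upward from cloud base for a single conserved variable, with a cloud-base mass flux of the form of Eq. (3). It is a minimal illustration, not the UW implementation: the grid, the environmental profile, the velocity scale w_scale and the fixed critical mixing fraction are placeholder assumptions.

```python
import numpy as np

def integrate_bulk_plume(z, psi_env, eps, delta, m_cb, psi_cb):
    """Upward Euler integration of the bulk plume Eqs. (1)-(2) for a
    conserved variable (S_psi = 0):
      dM/dz     = (eps - delta) * M
      dpsi_u/dz = eps * (psi_env - psi_u)   (follows from Eqs. 1 and 2)
    """
    m = np.empty_like(z)
    psi_u = np.empty_like(z)
    m[0], psi_u[0] = m_cb, psi_cb
    for k in range(len(z) - 1):
        dz = z[k + 1] - z[k]
        m[k + 1] = m[k] + (eps[k] - delta[k]) * m[k] * dz
        # Entrainment relaxes the updraft property towards the environment.
        psi_u[k + 1] = psi_u[k] + eps[k] * (psi_env[k] - psi_u[k]) * dz
    return m, psi_u

# Illustrative setup: buoyancy-sorting rates with eps0 = 8/z and a fixed
# chi_c = 0.9, and a schematic Eq. (3) closure; w_scale stands in for the
# exact UW prefactor (an assumption of this sketch).
z = np.linspace(700.0, 3000.0, 50)               # heights above a 700 m cloud base
eps = 8.0 / z * 0.9 ** 2                         # Eq. (5a) with chi_c = 0.9
delta = 8.0 / z * (1.0 - 0.9) ** 2               # Eq. (5b)
rho, w_scale, cin, tke = 1.1, 0.5, 10.0, 40.0    # made-up numbers
m_cb = rho * w_scale * np.exp(-cin / tke)        # schematic Eq. (3)
psi_env = 300.0 + 0.003 * (z - 700.0)            # idealized environmental profile
m, psi_u = integrate_bulk_plume(z, psi_env, eps, delta, m_cb, psi_cb=301.0)
```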
Entrainment and detrainment processes are parameterized using buoyancy-sorting principles. Mixing of cloudy air with environmental air generates a spectrum of mixtures with different buoyancies and vertical velocities. It is assumed that only mixtures that can travel a certain vertical distance $l_{crit}$ remain in the updraft. By assuming that the generated spectrum of mixtures is uniform, the fractional entrainment and detrainment rates per unit height are found to be

$$\epsilon = \epsilon_0\,\chi_c^2, \qquad (5a)$$

$$\delta = \epsilon_0\,(1 - \chi_c)^2. \qquad (5b)$$

The critical mixing fraction $\chi_c$ depends on height; at each level it is fully determined by the chosen $l_{crit}$ as well as by the updraft and environmental properties expressed by their buoyancy and humidity (see Eq. B1 in Bretherton et al., 2004). The fractional mixing rate $\epsilon_0$ (m$^{-1}$) is set empirically to $8/z$, with $z$ (m) being the height above ground. The scheme also includes enhanced penetrative entrainment above the level of neutral buoyancy of the bulk updraft (see Eq. D1 in Bretherton et al., 2004). The UW shallow convection scheme employs extremely simple microphysics: condensate larger than 1 g kg$^{-1}$ is removed from the updraft as precipitation, which is partitioned between a fixed fraction that can fall through the updraft (and which can only evaporate below the cumulus base) and a remainder that is detrained into the environment (and which can evaporate above cloud base). In either case, the evaporation rate depends upon the saturation deficit and the precipitation flux. Note that while rain evaporation drives organized downdrafts in reality, there is no explicit downdraft formulation in the scheme; evaporated precipitation homogeneously cools the entire grid cell.
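The buoyancy-sorting relations (5a)-(5b) translate directly into code once $\chi_c$ is known; a sketch follows, with $\chi_c$ supplied as an input since its full computation (Eq. B1 of Bretherton et al., 2004) depends on updraft and environmental buoyancy and humidity not reproduced here.

```python
def mixing_rates(z, chi_c, eps0=None):
    """Fractional entrainment/detrainment (m^-1) from buoyancy sorting,
    Eqs. (5a)-(5b): eps = eps0*chi_c**2, delta = eps0*(1 - chi_c)**2.
    By default eps0 is the empirical 8/z profile of the UW scheme."""
    if eps0 is None:
        eps0 = 8.0 / z
    eps = eps0 * chi_c ** 2
    delta = eps0 * (1.0 - chi_c) ** 2
    return eps, delta

# At z = 1000 m with chi_c = 0.9: eps = 8e-3 * 0.81 = 6.5e-3 m^-1 and
# delta = 8e-3 * 0.01 = 8e-5 m^-1 -- a strongly entraining, weakly
# detraining updraft, as expected for buoyant cumulus cores.
eps, delta = mixing_rates(1000.0, 0.9)
```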
In principle, the UW shallow convection scheme could be directly used to predict deep convection. It contains a representation of precipitation and ice formation processes as well as of evaporation. However, it does not include any feedback between falling precipitation and subsequent convective development, which, as stated in the introduction, might be important for deep convection. Within the framework of a bulk mass flux scheme, cloud-base mass flux, cloud-base properties and entrainment/detrainment rates are key quantities controlling the cloud development. Those are thus the three quantities that we will examine in more detail in Sects. 3 and 4 and modify with appropriate relationships to design a unified convection scheme.
Cases
In order to investigate issues related to the parameterization of moist convection, we consider three cases that have been well observed and extensively studied in the past. They have been chosen to span diverse atmospheric conditions and types of convection.
The first case is taken from measurements made at the Atmospheric Radiation Measurement (ARM) Southern Great Plains station between 19 June and 3 July 1997 (Julian days 170-186). This case typifies continental summertime mid-latitude convection. The period encompasses a wide range of conditions, including clear days, shallow convection, diurnally forced convection, and precipitation associated with the passage of extratropical cyclones and fronts.
The second case represents tropical marine deep convection. The measurements are taken from the Kwajalein Experiment (KWAJEX) over the west Pacific warm pool. We restrict our analysis to the period 24 July-10 September 1999 (Julian days 205-253).
Finally, we also consider the Barbados Oceanographic and Meteorological Experiment (BOMEX), a frequently simulated example of non-precipitating shallow trade-cumulus convection. The forcing data are derived from observations taken on 22-23 June 1969.
Experimental set-up
The three cases are simulated with SAM and with different versions of SCAM, using prescribed time-dependent profiles of large-scale vertical motion and horizontal advective heating and moistening as well as surface fluxes (for ARM) and sea surface temperature (for KWAJEX and BOMEX). Each SAM simulation is doubly periodic in the horizontal but employs a different grid. For the ARM case, SAM is run with a horizontal resolution of 500 m with 384 × 384 grid points and 96 vertical levels going up to 30 km. The vertical grid spacing varies between 50 m near the surface and 250 m in the mid-troposphere. The KWAJEX simulation has a horizontal resolution of 1000 m and a vertical resolution of 100 m near the surface up to 400 m in the mid-troposphere. The domain contains 256 × 256 × 64 grid points. For both ARM and KWAJEX, the domain-mean winds are nudged to the time-varying observational profiles with a one-hour relaxation time. Finally, the BOMEX simulation contains 256 × 256 × 96 grid points with a resolution (both horizontal and vertical) of 40 m. In the upper third of the domain, perturbations to the horizontal mean are linearly damped to help absorb convectively forced gravity waves. For BOMEX, the winds are forced by a geostrophic wind profile rather than through nudging.
Similar SAM simulations have been validated and investigated in detail by Khairoutdinov and Randall (2003) for ARM, Blossey et al. (2007) for KWAJEX, and Siebesma et al. (2003) for BOMEX. These studies show that the SAM model reproduces the overall convective development fairly accurately compared to observations in all three cases. Hence, we will use the SAM simulations as a benchmark both to characterize the behavior of the cumulus ensemble and to validate the SCAM single-column model experiments.
For all cases, SCAM is run with 30 vertical levels and a time step of 5 min, driven by the same large-scale forcing and surface fluxes/sea surface temperature as SAM. For KWAJEX and BOMEX, the start and end times of the SCAM simulations coincide with the SAM integrations. For ARM, only specific rain events are simulated with SCAM instead of the full time period as a whole. This is to ensure that differences obtained between the integrations are due to the convective parameterization rather than to the simulation of different atmospheric conditions. Indeed, SCAM drifts away from SAM with time in ARM, due mainly to different timings and amplitudes of individual rain events. For each rain event, we employ the SAM-simulated mean profiles as initial data for the SCAM simulations. The specific events that we simulate (see, e.g., Fig. 1) are days 174 (05:30 UTC Julian day (JD) 174 to 11:30 UTC JD 175), 176 (05:30 UTC JD 176-11:30 UTC JD 177), 178 (05:30 UTC JD 178-05:30 UTC JD 179), 179 (05:30 UTC JD 179-05:30 UTC JD 180) and 180 (05:30 UTC JD 180-11:30 UTC JD 181). Days with strong large-scale forcing are omitted, since SCAM will tend to perform well for those cases due to the use of prescribed large-scale tendencies.
To investigate the performance of the new unified convection scheme, three main types of SCAM simulations are performed (Table 1). The first experiment employs the default version of the CAM3.5 model, in which PBL processes are parameterized after Holtslag and Boville (1993), shallow convection after Hack (1994) and deep convection after Zhang and McFarlane (1995). This simulation is called CAM and serves as our control experiment.
The second experiment employs the UW PBL scheme, the UW shallow convection scheme and no deep convective parameterization. In this case, precipitation associated with deep convection will only be produced if the full grid cell reaches saturation (through SCAM's microphysical scheme) or if the shallow convection scheme by itself succeeds in producing deep plumes. It can thus be expected that this simulation will underestimate deep convection. The experiment is called UWS and is otherwise identical to the CAM experiment.
Finally, the last set of experiments uses the UW PBL scheme and a modified version of the default UW shallow convection scheme encompassing a unified treatment of shallow and deep convection. Otherwise the integrations are identical in their set-up to CAM and UWS. They are called UWSDpbl, UWSDall, UWSDe0, UWSDe0mf and UWSDe0sq, depending on the modifications made to the UW shallow convection scheme. The modifications are described in the text and in Table 1. Ideally, those simulations should stand in closer agreement with SAM than both the CAM and the UWS integrations.

Table 1. Overview of the different SCAM simulations (columns: Name, PBL, Shallow Cu, Deep Cu, Mass flux, σ_q, Entrainment, Eq. 6). HB stands for Holtslag and Boville (1993), Hack for Hack (1994), ZM for Zhang and McFarlane (1995), UWPBL for the University of Washington PBL scheme (Bretherton and Park, 2009) and UW for the default University of Washington shallow convection scheme (Park and Bretherton, 2009). UWunif corresponds to the new unified convection scheme.
The planetary boundary layer under deep convection
As stated in the introduction, we regard deep convection as shallow convection modified by its production of heavy precipitation. In this view, the cloud-base mass flux in deep as well as shallow convection is regulated by the PBL and the subcloud mixed layer. Bulk instability measures like convective available potential energy (CAPE) are relevant to the vertical structure of cumulus convection, which in turn indirectly modifies the thermodynamic structure of the PBL and the overlying air. However, they are not viewed as direct controls on the cloud-base mass flux. This approach is supported by Kuang and Bretherton (2006), who showed that changes in CIN and TKE were closely correlated in large-eddy simulations of an idealized transition from shallow to deep convection, and by Fletcher and Bretherton (2010), who showed that a closure based on CIN and TKE could predict the cloud-base mass flux in LES simulations of ARM, KWAJEX and BOMEX. We refer especially to the study of Fletcher and Bretherton (2010) for more details on the advantages/disadvantages of employing a closure based on CIN and TKE. We nevertheless note that such a closure allows for a more straightforward implementation of precipitation effects on the cloud-base mass flux than a closure based on CAPE.
In this section, we thus investigate how changes in the PBL structure between shallow and deep convection, especially due to rain evaporation, affect cloud-base mass flux and cloud-base thermodynamic properties. Both are key parameters controlling the convective development. We use the SAM outputs to derive appropriate relations characterizing such effects. Unless noted otherwise, all the quantities are computed from the SAM output statistics. The latter are computed at each time step and averaged both horizontally (if appropriate) and over one-hour time intervals. The derived relations are then implemented in the UW shallow convection scheme and tested in single-column mode.

Figure 1 shows the time series of TKE and precipitation at cloud base $RR_{cb}$ for ARM and KWAJEX obtained from the SAM output statistics (and thus based on hourly and horizontally averaged fields). TKE is averaged over the depth of the planetary boundary layer PBLH and is hereafter simply denoted TKE. PBLH is diagnosed as the height where the resolved-scale turbulent buoyancy flux reaches its minimum. The cloud base is defined following Fletcher and Bretherton (2010) as the lifting condensation level of an air parcel with a potential temperature equal to the potential temperature averaged over the 200-400 m layer and a water vapor mixing ratio $q_v$ equal to the mean 200-400 m $q_v + \sigma_q$, where $\sigma_q$ is the horizontal standard deviation in $q_v$ averaged over the same height range. If the estimated cloud base is lower than the PBL height, we set its value to the height of the PBL, as done in the UW scheme. It is evident in Fig. 1 that TKE increases from shallow to deep convection, i.e. with increasing precipitation. This increase is driven by rain evaporation, which generates cold pools that induce horizontal flows. Together with the associated organized surface convergence along cold pool boundaries, this represents a supplementary energy source for lifting an air parcel and thus favors the development of convection, as is apparent in our SAM simulations and many past studies of deep convection (see e.g., Rio et al., 2009; Khairoutdinov and Randall, 2006).
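The cloud-base diagnosis just described can be sketched as follows; the LCL formula used here is Bolton's (1980) approximation, which is an assumption of this illustration rather than necessarily the expression used in SAM or the UW scheme, and the numbers in the example call are purely illustrative.

```python
import numpy as np

def lcl_height(theta_mean, qv_mean, sigma_q, p_sfc=1.0e5):
    """Cloud base following Fletcher and Bretherton (2010): LCL of a
    parcel with the 200-400 m mean potential temperature and a water
    vapor mixing ratio qv + sigma_q.  Uses Bolton (1980) for the LCL
    temperature (an assumption of this sketch)."""
    cp, g, Rd = 1004.0, 9.81, 287.0
    qv = qv_mean + sigma_q                      # moistest PBL air feeds the updrafts
    T = theta_mean * (p_sfc / 1.0e5) ** (Rd / cp)
    e = p_sfc * qv / (0.622 + qv) / 100.0       # vapor pressure (hPa)
    T_lcl = 2840.0 / (3.5 * np.log(T) - np.log(e) - 4.805) + 55.0
    return cp / g * (T - T_lcl)                 # dry-adiabatic ascent distance (m)

z_cb = lcl_height(theta_mean=300.0, qv_mean=12e-3, sigma_q=0.5e-3)
z_cb = max(z_cb, 900.0)  # never below the (here illustrative) PBL height,
                         # as in the UW scheme
```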
Cloud-base mass flux
The increase in TKE due to cold pool activity is not directly resolved by a coarse-resolution global model. Rio et al. (2009) represented this effect by implementing a density current parameterization and coupling it to Emanuel (1991)'s scheme. Here we follow a simpler, more empirical approach to parameterize this effect.
Figure 2 shows a scatter plot of TKE versus a measure of evaporative potential (and thus cold pool activity), formed as the product of $RR_{cb}$ and PBLH, for our ARM and KWAJEX simulations. The full circles in Fig. 2 are for the onset/mature phase, in which shallow convection is developing into deep precipitating convection, while open circles are for the decay phase. The times classified into the different phases, subjectively determined from the domain-mean precipitation time series, are indicated in Fig. 1 for reference. Figure 2 indicates that TKE scales with $RR_{cb} \cdot \mathrm{PBLH}$ with a similar slope for both KWAJEX and ARM. The value for zero precipitation should correspond to the TKE in a dry convective boundary layer, $\mathrm{TKE}_{dry}$, which is predicted by the PBL scheme. We can thus write

$$\mathrm{TKE} = \mathrm{TKE}_{dry} + \alpha\, RR_{cb}\, \mathrm{PBLH}, \qquad (6)$$

with $\alpha$ the slope fitted to the SAM data, $RR_{cb}$ given in mm day$^{-1}$ and PBLH in m. The correlation coefficient is 0.92 for KWAJEX and 0.83 for ARM during the onset/mature phase. The correlation is quite strong: adding further predictors does not provide any additional skill. The larger scatter in ARM results from the larger variability in the sampled synoptic conditions. The agreement worsens during the decay precipitation phase, as cold pools need time to dissipate after rain evaporation has ceased. The evaporation of convective precipitation induces a positive feedback between convection and boundary-layer processes, embodied in Eq. (6), because it generates TKE that yields more convection and more precipitation. However, rain evaporation also cools and stabilizes the PBL. At a certain point, the PBL collapses and shuts down convection. This effect is expressed by the use of PBLH in Eq. (6).
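In code form the resulting parameterization is a one-liner; the slope alpha below is a stand-in for the regression coefficient fitted to the SAM data, so its default value here is purely illustrative.

```python
def tke_with_cold_pools(tke_dry, rr_cb, pblh, alpha=1.0e-3):
    """Eq. (6): PBL-mean TKE as the dry-convective value plus a cold pool
    contribution proportional to RR_cb (mm/day) times PBLH (m).
    alpha is an illustrative placeholder for the fitted slope."""
    return tke_dry + alpha * rr_cb * pblh
```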
Cloud-base thermodynamic properties
Figure 3 shows example profiles of mass flux as a function of moist static energy (MSE) for ARM day 178 at 11:00 and 14:00 LT. In contrast to the other figures, we employ the instantaneous 3D output from SAM to construct Fig. 3. 11:00 LT corresponds to the shallow convection phase, while 14:00 LT illustrates the situation under deep convection.
We use MSE as it is moist-adiabatically conserved and determines the temperature in saturated air. It is thus a useful and dynamically relevant characteristic of cumulus updrafts. This conservation is approximate in reality, but is exact (except for ice processes) given the thermodynamic equations employed in SAM. Throughout this paper, MSE is rescaled into temperature units by dividing by $c_p$. The profiles in Fig. 3 are obtained by binning at each height the grid points by their MSE and summing their mass flux per bin (see Kuang and Bretherton, 2006). The bin size is 0.25 K. Light to dark red colors in Fig. 3 imply positive values of the vertical velocity and thus represent updrafts, while light to dark blue colors represent downdrafts/subsidence. Figure 3 also displays in white and black the domain-averaged MSE and the domain-averaged saturated MSE, as well as the domain-averaged cloud cover in grey. Equivalently, the shaded portion in Fig. 3 above the black line of the saturated MSE can be interpreted as representing the cloudy points.
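The conditional-sampling procedure behind Fig. 3 can be written compactly; the sketch below assumes 3-D snapshot arrays of MSE (already in temperature units, K) and vertical mass flux with shape (nz, ny, nx), and the 0.25 K bin width quoted in the text.

```python
import numpy as np

def mass_flux_by_mse(mse, rho_w, bin_width=0.25):
    """At each model level, bin the grid points by their MSE (K) and sum
    the mass flux (rho*w) per bin, as in Kuang and Bretherton (2006)."""
    nz = mse.shape[0]
    lo = np.floor(mse.min())
    hi = np.ceil(mse.max())
    edges = np.arange(lo, hi + bin_width, bin_width)
    out = np.zeros((nz, len(edges) - 1))
    for k in range(nz):
        # Sum the mass flux of all points falling in each MSE bin at level k.
        out[k], _ = np.histogram(mse[k].ravel(), bins=edges,
                                 weights=rho_w[k].ravel())
    return edges, out
```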
Comparison of Fig. 3a and b reveals similarities and differences in the partitioning of cloud-base MSE between shallow and deep convective updrafts and downdrafts. The cumulus cloud base is visible in both plots as the altitude of maximum lower-tropospheric cloud fraction; at this level the mean updraft MSE, indicated by $MSE_{cb}$ in Fig. 3, is almost identical to the domain-mean saturation MSE at that height, suggesting the cumulus updrafts have nearly the same temperature (and hence buoyancy) as their environment at cloud base. Above cloud base, the net upward mass flux is carried almost exclusively within cumulus clouds. Since cloudy points are less numerous than cloud-free grid points, the line of the domain-mean MSE does not pass in between the up- and downdrafts but is shifted towards the environment. The typical range of MSE carried by the upward mass flux is also vertically continuous across cloud base at both times.
Before strong precipitation (Fig. 3a), the PBL has a structure akin to that of a dry convective boundary layer. Half of the PBL experiences updrafts with slightly higher MSE, half downdrafts with slightly lower MSE, and $MSE_{cb}$, originating from the warmer part of the MSE spectrum, appears slightly warmer than the domain-mean MSE in the PBL (the white line). Later on (Fig. 3b), precipitation-driven downdrafts bring a broad range of lower MSE into the PBL. Only the remaining high-MSE part of the PBL contributes to the convective cloud-base updrafts, and the difference between $MSE_{cb}$ and the domain-mean MSE in the PBL (the white line) increases.
We find that for both shallow and deep convection, the mean updraft MSE at cloud base $MSE_{cb}$ can be parameterized as follows (using SAM domain- and hourly-averaged statistics):

$$MSE_{cb} = \overline{MSE} + \frac{L}{c_p}\,\sigma_q, \qquad (7a)$$

$$\sigma_q = \min\left(c_0 + c_1\,RR_{cb},\; \sigma_q^{max}\right), \qquad (7b)$$

with $c_0$, $c_1$ and $\sigma_q^{max}$ fitted coefficients, $RR_{cb}$ given in mm day$^{-1}$, $\overline{MSE}$ defined as the MSE averaged over the vertical layer 200-400 m, $\sigma_q$ the horizontal standard deviation in specific humidity averaged over that same vertical layer and $L = 2.5 \times 10^6$ J kg$^{-1}$. The expression in Eq. (7a) is inspired by Fletcher and Bretherton (2010), who, through trial and error, found it the most skillful at predicting cloud-base properties (see their Sect. 3a). Equation (7b) contains the approximation used to compute $\sigma_q$. It is obtained by fitting a first-order polynomial in $RR_{cb}$ to $\sigma_q$. $RR_{cb}$ is chosen as the predictor since the increased PBL variability is mainly due to cold pool formation. Note that, even without precipitation, Eq. (7a) will predict a small increase in $MSE_{cb}$. This is consistent with Fig. 3a and with the presence of turbulent eddies under shallow convection. Equation (7b) also sets an upper bound on $\sigma_q$ to express the fact that the pool of warm air available for updraft formation is limited, especially when cold pools begin to fill up the boundary layer.
The fit described by Eq. (7b) is illustrated in Fig. 4, using points from KWAJEX (full circles), ARM (open circles) and BOMEX (blue cross). Figure 4 indicates that Eq. (7b) is able to capture the overall values of $\sigma_q$ for KWAJEX and ARM. As a numerical example, $MSE_{cb}$ is larger than $\overline{MSE}$ by about 1 K in Fig. 3a, which matches the value of $(L/c_p)\sigma_q$ predicted by Eq. (7b) assuming no precipitation. The large spread at small precipitation amounts, especially in ARM, is due to points from the decay phase, where cold pools need time to dissipate. On the other hand, Eq. (7b) will overestimate $\sigma_q$ and $MSE_{cb}$ in BOMEX. Since this does not seem to negatively impact our results (see Sect. 4.2), use of a more complicated expression for $\sigma_q$ seems unwarranted.
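A sketch of the cloud-base humidity parameterization follows. The intercept, slope and cap of the $\sigma_q$ fit (Eq. 7b) are written as arguments with illustrative defaults, since only the functional form, a capped first-order polynomial in $RR_{cb}$, is asserted here.

```python
def sigma_q(rr_cb, c0=4e-4, c1=5e-5, cap=2e-3):
    """Eq. (7b): horizontal humidity standard deviation (kg/kg) as a capped
    linear function of the cloud-base rain rate (mm/day).  The coefficients
    are placeholders standing in for the values fitted to SAM."""
    return min(c0 + c1 * rr_cb, cap)

def mse_cb(mse_mean_200_400, rr_cb, cp=1004.0, L=2.5e6):
    """Eq. (7a): mean updraft MSE at cloud base (temperature units, K) as
    the 200-400 m mean MSE plus (L/cp)*sigma_q."""
    return mse_mean_200_400 + L / cp * sigma_q(rr_cb)

# With no rain, the placeholder c0 gives an offset of (2.5e6/1004)*4e-4,
# i.e. about 1 K, of the same order as the shallow-convective offset
# noted in the text.
```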
SCAM experiments
We now use the results of Sect. 3.1 to modify cloud-base characteristics of the UW shallow convection scheme to help make it more suitable for deep convection. The new simulation is called UWSDpbl. In contrast to UWS, it employs the mass flux closure developed by Fletcher and Bretherton (2010), based on the same set of LES simulations as we use. This closure, like the default UW shallow cumulus mass flux closure, relates the mass flux to an exponential function of the ratio between CIN and TKE, but multiplies this function by a different prefactor. The closure reads

$$M_{cb} = 0.06\,\rho\, w_{cb}\, \exp(-\mathrm{CIN}/\mathrm{TKE}), \qquad (8a)$$

$$w_{cb} = 0.28\,\sqrt{\mathrm{TKE}} + 0.64, \qquad (8b)$$

with $M_{cb}$ the mass flux at cloud base and $w_{cb}$ the velocity at cloud base. In addition, UWSDpbl employs Eq. (6) to predict the cold pool contribution augmenting TKE in the mass flux closure Eqs. (8a)-(8b). The augmentation is done in the convection scheme, but similar results can be obtained by increasing TKE in the boundary layer scheme. This is because of the tight coupling existing between the two schemes when employing a CIN/TKE closure, as noted in Sect. 2.2. In UWS, TKE simply equals $\mathrm{TKE}_{dry}$, which is provided by the UW boundary layer scheme.
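The Fletcher and Bretherton (2010) closure combined with the cold pool TKE augmentation of Eq. (6) can be sketched as follows; alpha is again the illustrative stand-in slope for Eq. (6), and CIN is assumed to be supplied by the convection scheme.

```python
import numpy as np

def fb2010_mass_flux(rho, tke_dry, rr_cb, pblh, cin, alpha=1.0e-3):
    """Eqs. (8a)-(8b) with the cold pool TKE augmentation of Eq. (6).
    alpha is an illustrative placeholder for the fitted Eq. (6) slope."""
    tke = tke_dry + alpha * rr_cb * pblh             # Eq. (6)
    w_cb = 0.28 * np.sqrt(tke) + 0.64                # Eq. (8b)
    m_cb = 0.06 * rho * w_cb * np.exp(-cin / tke)    # Eq. (8a)
    return m_cb, w_cb
```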
Cloud-base thermodynamic properties are expressed in UWSDpbl as the mean over the 200-400 m layer plus one standard deviation in humidity $\sigma_q$ (see Eq. 7a), instead of their surface or minimum values as in UWS (see Sect. 2.2). $\sigma_q$ is predicted with Eq. (7b). Finally, the proportionality constant scaling the evaporation rate of falling precipitation is increased from $2 \times 10^{-6}$ to $1.5 \times 10^{-5}$ to be consistent with the values obtained from the SAM simulations (not shown).
It is important to note that the modifications in TKE and cloud-base thermodynamic properties introduced in UWSDpbl require PBLH and $RR_{cb}$ as predictors. PBLH is passed over from the boundary layer scheme. For $RR_{cb}$ we employ the precipitation averaged over the last hour, to avoid undesirable effects associated with the on-off nature of convection schemes. The precipitation update also occurs at the end of the convection scheme and not in an iterative way. This prevents the scheme from adjusting within the loop, rather than with time, when transitioning from shallow to deep convection.
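Using the precipitation averaged over the previous hour, rather than the instantaneous value, is what damps the on-off behavior; one simple way to realize this, sketched below under the assumption of a fixed model time step, is a ring buffer of recent cloud-base rain rates.

```python
from collections import deque

class LaggedRainRate:
    """Hour-mean cloud-base rain rate used as the predictor RR_cb.
    Assumes a fixed time step dt (s); 5 min here, as in the SCAM runs."""
    def __init__(self, dt=300.0, window=3600.0):
        self.buf = deque(maxlen=int(window / dt))
    def update(self, rr_cb_now):
        # Called once per time step, at the end of the convection scheme.
        self.buf.append(rr_cb_now)
        return sum(self.buf) / len(self.buf)
```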
Figure 5 shows the diurnal cycle of precipitation for ARM days 176, 178, 179 and 180 for the simulations CAM, UWS, UWSDpbl and the SAM LES simulation. Day 174 exhibits similar features but is not included here for brevity. The default CAM configuration shows too weak a diurnal rainfall modulation, which causes excessive morning precipitation. This problem is especially visible on day 178, which constitutes the most archetypical example of surface-forced convection during the period.
Both UWS and UWSDpbl better capture the timing of precipitation. The onset of precipitation coincides with SAM on days 176, 178 and 180 (Fig. 5a, b, d), while it is delayed on days 179 (Fig. 5c) and 174 (not shown). However, UWS and UWSDpbl also strongly underestimate the precipitation amounts. The cloud-base improvements in UWSDpbl increase the simulated amounts on day 178, but the impact remains generally small. This is understandable; the cloud-base improvements only affect the simulation of strongly precipitating convection; if the convection never produces significant rainfall, these improvements have no chance to modify the simulation.
Hence, the inclusion of precipitation-related modifications in cloud-base properties is insufficient to transform a shallow convection scheme into a realistic deep convection scheme. Analysis of the different days suggests that UWS and UWSDpbl have difficulties in transitioning to precipitating deep convection due to too large entrainment/detrainment rates. We address this problem in the next section.
Entrainment
As in the previous section, we first employ the SAM simulations to derive formulations for entrainment and detrainment that work for both shallow and deep convection. We then implement them, in combination with our cloud-base property modifications, and test them with single-column model experiments.
SAM results
Our approach retains the idea of buoyancy sorting described in Sect. 2.2, in which entrainment and detrainment rates are computed as $\epsilon = \epsilon_0 \chi_c^2$ and $\delta = \epsilon_0 (1 - \chi_c)^2$ (i.e., Eqs. 5a and b), but SAM is used to revise the formulation of $\epsilon_0$.
In order to estimate $\epsilon_0$ from our SAM experiments, we first compute $\epsilon$ and $\delta$ using the equations for a simple plume model, as given in Eqs. (1) and (2) and as done in previous LES studies. We sample all the cloudy points to compute the updraft mass flux and average it over one-hour time intervals. For the updraft property $\psi_u$, we choose the mass-flux weighted frozen moist static energy, since it is approximately conserved ($S_\psi = 0$). The mass-flux weighted frozen moist static energy is again sampled over all cloudy points and hourly averaged, while $\bar{\psi}$ corresponds to the domain- and hourly-averaged frozen moist static energy. Solving Eqs. (1) and (2) for $\epsilon$ and $\delta$, we can then compute $\epsilon_0$ from the buoyancy sorting relations (5a)-(5b). This presupposes that entrainment and detrainment rates indeed follow buoyancy sorting principles.
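This inversion can be sketched with finite differences. From Eq. (2) with $S_\psi = 0$, $\epsilon = (\mathrm{d}\psi_u/\mathrm{d}z)/(\bar{\psi} - \psi_u)$; Eq. (1) then gives $\delta = \epsilon - (\mathrm{d}M/\mathrm{d}z)/M$; and combining Eqs. (5a)-(5b) yields $\epsilon_0 = (\sqrt{\epsilon} + \sqrt{\delta})^2$. The profiles are assumed to be the hourly means described above.

```python
import numpy as np

def invert_plume(z, m_u, psi_u, psi_bar):
    """Diagnose eps, delta and eps0 from hourly LES profiles.
    Valid where the updraft property differs from its environment
    (psi_bar != psi_u); eps and delta are clipped at zero before
    inverting the buoyancy-sorting relations."""
    dpsi = np.gradient(psi_u, z)
    dm = np.gradient(m_u, z)
    eps = dpsi / (psi_bar - psi_u)           # from Eq. (2), S_psi = 0
    delta = eps - dm / m_u                   # from Eq. (1)
    eps0 = (np.sqrt(np.clip(eps, 0.0, None)) +
            np.sqrt(np.clip(delta, 0.0, None))) ** 2   # Eqs. (5a)-(5b)
    return eps, delta, eps0
```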
The result of this procedure is shown in Fig. 6, which shows as an illustration profiles of $\epsilon_0$ obtained for two different times in the KWAJEX simulation. The black solid line is associated with shallow cumuli with cloud tops reaching up to 2 km. The red solid line is under deep convection. Figure 6 serves to illustrate that $\epsilon_0$ varies both with height and with the convective phase. At any given height, the values are larger during shallow cumulus convection. This is consistent with previous LES studies (e.g., Kuang and Bretherton, 2006; Khairoutdinov and Randall, 2006). Such studies have hypothesized that deep convective clouds, because of their larger size, entrain less than shallow cumuli.
Based on Fig. 6, the following generalized profile is used to diagnose $\epsilon_0$:

$$\epsilon_0(z) = \epsilon_0(z_{cb})\left(\frac{z_{cb}}{z}\right)^{\alpha}, \qquad (9)$$

with $z_{cb}$ the height of the cloud base. $\alpha$ is implicitly computed by specifying $\epsilon_0$ at two "anchor" heights within the cumulus layer, namely the cloud base $z_{cb}$ and a reference height $z_1$ that roughly corresponds to the minimum height of a cumulus updraft that will generate significant precipitation. Evaluating Eq. (9) at $z_1$ gives

$$\alpha = \frac{\ln\left[\epsilon_0(z_{cb})/\epsilon_0(z_1)\right]}{\ln(z_1/z_{cb})}. \qquad (10a)$$

The anchor relations read

$$\epsilon_0(z_{cb}) = \frac{4.1 \times 10^{-3}}{\rho_{cb}\, g\, w_{cb}}, \qquad (10b)$$

$$\epsilon_0(z_1) \propto \frac{1}{\max(RR_{cb},\, 0.1)}. \qquad (10c)$$

In these formulae, $\epsilon_0$ is in Pa$^{-1}$, $RR_{cb}$ in mm day$^{-1}$, $w_{cb}$ is the updraft velocity at cloud base (m s$^{-1}$), $\rho_{cb}$ is the air density at cloud base (kg m$^{-3}$) and $g$ is gravity. The velocity at cloud base is computed from the SAM mass-flux weighted velocity, sampled at all cloudy points and hourly averaged. Our specific choice of $z_1$ is somewhat arbitrary; other choices can produce similar results as long as Eq. (10c) is appropriately adapted.
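Putting Eqs. (9)-(10) together, the sketch below reconstructs an $\epsilon_0$ profile from the two anchors. The proportionality constant beta in Eq. (10c) and the reference height z1 are placeholders for the fitted values, and the power-law form of Eq. (9) is taken as written above.

```python
import numpy as np

def eps0_profile(z, z_cb, w_cb, rho_cb, rr_cb, z1=2000.0, beta=1e-6, g=9.81):
    """eps0(z) in Pa^-1 from Eqs. (9)-(10).  Anchors: Eq. (10b) at cloud
    base and Eq. (10c) at z1; the exponent alpha follows from evaluating
    Eq. (9) at z1 (Eq. 10a).  beta and z1 are illustrative placeholders."""
    e_cb = 4.1e-3 / (rho_cb * g * w_cb)          # Eq. (10b)
    e_z1 = beta / max(rr_cb, 0.1)                # Eq. (10c), bounded below
    alpha = np.log(e_cb / e_z1) / np.log(z1 / z_cb)   # Eq. (10a)
    return e_cb * (z_cb / z) ** alpha            # Eq. (9)

# Stronger cloud-base rain lowers eps0 at z1, flattening the profile and
# letting plumes reach deeper -- the positive feedback discussed below.
```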
Figure 7 shows scatter plots supporting Eqs. (10b)-(10c). Beginning with Fig. 7b and the corresponding Eq. (10c), $\epsilon_0(z_1)$ is set proportional to the inverse of the precipitation at cloud base. The correlation coefficient amounts to 0.6. An upper bound, obtained in Eq. (10c) by setting $RR_{cb} = 0.1$ mm day$^{-1}$, is set on $\epsilon_0(z_1)$ to avoid large values for small precipitation amounts.
Covariability between $\epsilon_0$ and precipitation, as displayed in Fig. 7, is expected, because higher precipitation amounts foster cold pool development, which organizes the boundary layer. This produces larger and more coherent updrafts, which have a lower bulk-mean entrainment rate, as noted above. Lower entrainment rates in turn favor the development of deeper clouds, hence sustaining a strong positive feedback between $\epsilon_0$ and $RR_{cb}$.
Note that Fig. 7b only includes the onset/mature precipitation phase, as marked in Fig. 1, to determine $\epsilon_0(z_1)$. During the decay phase, precipitation amounts are small, as in the onset phase, but the mixing rates are also small. Including those points in the regression reduces the slope of the regression line and results in too small mixing rates during the onset phase. This manifests itself as an overly rapid transition to deep convection in the single-column model experiments. The overestimation implied by Eq. (10c) for the decay phase does not seem to have any detrimental effect on the simulations.
At cloud base, $\epsilon_0$ is chosen inversely proportional to the velocity at cloud base, as indicated in Fig. 7a and the corresponding Eq. (10b). The correlation coefficient is 0.8. We do not use $RR_{cb}$ as a supplementary predictor, since it does not add significant skill to this regression. Using $w_{cb}$ is analogous to the approach of Neggers et al. (2002), who proposed $\epsilon = 1/(w_u \tau_c)$, where $w_u$ is the updraft velocity (m s$^{-1}$) and $\tau_c = 300$ s is an empirical mixing timescale. In fact, our formulation would imply $\epsilon = 4.1 \times 10^{-3}\,\chi_c^2/w_{cb}$, which yields the same result for a typical cloud-base value $\chi_c = 0.9$ ($4.1 \times 10^{-3} \times 0.9^2 \approx 3.3 \times 10^{-3} \approx 1/300$).
We also note that for values $w_{cb} = 0.5$ m s$^{-1}$ and $z_{cb} = 500$ m typical of BOMEX, our formulation implies $\epsilon_0 = 8 \times 10^{-3}$ m$^{-1} = 4/z_{cb}$, which is at the low end of the range of possible cloud-base values given in Table 1 of Park and Bretherton (2009) for the default UW scheme.
As a final illustration, the profiles of $\epsilon_0$ reconstructed by using Eqs. (9) and (10) and the SAM values for $\rho_{cb}$, $w_{cb}$ and $RR_{cb}$ have been plotted as dotted lines in Fig. 6. Although not perfect, the fit captures the overall shape of the bulk entrainment rate profile and the corresponding difference between the shallow and the deep phase.
The formulation of $\epsilon_0$ in Eqs. (9)-(10) is admittedly empirical and tuned to our SAM simulations and to the way we computed it, which is a contentious issue by itself. It would be desirable in the future to use a more theoretically elegant approach tuned against a broader ensemble of simulations and observational constraints. However, our approach does try to build in some theoretically expected relationships between mixing rate and environmental variables and, as shown later, seems to produce plausible results. Equations (9)-(10) keep the essence of a bulk entrainment rate varying with height and implicitly with cloud size. The use of precipitation at cloud base generalizes the specification of an inverse cloud radius as a predictor for entrainment rates (as in, e.g., Kain, 2004) by allowing this radius to vary based on precipitation. Our approach can also produce results similar to decreasing the entrainment rate at high ambient relative humidity, a method successfully applied by Bechtold et al. (2008), to the extent that higher environmental relative humidity will correlate with deeper clouds that yield more precipitation. Due to the strong feedback existing between entrainment and precipitation, there is obviously a causality issue. Given that removing rain evaporation has been shown to yield smaller clouds, larger entrainment rates and less precipitation (e.g., Khairoutdinov and Randall, 2006), there is some justification for using $RR_{cb}$ as a predictor. This is also consistent with principles of organization (Mapes and Neale, 2011).
The main difference to entrainment/detrainment formulations currently applied in convective parameterizations is that Eqs. (9)-(10) do not require an explicit distinction between shallow and deep convection. Current formulations multiply their mixing rates by different prefactors. Here, through the production of precipitation and through changes in the environmental properties (as expressed by $\chi_c$), the mixing rates are allowed to vary with time and can support both shallow and deep convection. Equations (9)-(10) embody buoyancy sorting and organizational principles, which should apply to convection in general, independently of the cloud depth. To what extent such a unified formulation can actually reproduce convection is investigated in the next section.
SCAM experiments
The revised entrainment-detrainment formulation is tested in SCAM by introducing it into UWSDpbl. As in UWSDpbl, we employ the precipitation averaged over the last hour as a predictor for $RR_{cb}$. $w_{cb}$ is diagnosed with Eq. (8b), while the other terms in Eqs. (9)-(10) are directly available. Two other changes are made to the default mixing scheme. First, no water is detrained before performing buoyancy sorting, as this tends to improve the results. Second, $\chi_c$ is limited to a maximum value of 0.5 above 6 km to avoid the development of instabilities due to compensating subsidence in cases of a mass flux increasing with height. The new simulation is called UWSDall (see Table 1).
Figure 8 shows the diurnal cycle of precipitation for ARM days 176, 178, 179 and 180 for UWSDall, CAM, UWS and SAM. Comparison to Fig. 5 reveals a strong impact of the new entrainment formulation. UWSDall produces stronger precipitation than UWSDpbl. The amounts are of comparable magnitude to the SAM simulation. Despite a tendency to produce too large precipitation amounts at the beginning of the onset phase, UWSDall clearly improves the simulated precipitation diurnal cycle as compared to CAM. This is especially true on day 178 (see Fig. 8b), where most convective parameterizations would fail (see Guichard et al., 2004).
UWSDall, in contrast to UWSDpbl, can realistically transition to deep convection. In principle, the moistening of the environment during the day through detrainment from previous shallow convection should increase $\chi_c$, so that the mass flux decreases less rapidly with height and at some point significant mass flux reaches into the mid-troposphere. Nevertheless, this effect did not appear sufficient in our single-column model experiments, in contrast to results from cloud-resolving studies (see especially Chaboureau et al., 2004). An additional and explicit sensitivity of fractional entrainment and detrainment rates to precipitation is required for the UW scheme to realistically transition from shallow to deep convection with the right diurnal timing.
Figure 9a-d shows cloud cover, mass flux (from the cloudy points), relative humidity and temperature profiles for UWSDall, CAM, UWS and SAM on day 178, averaged over the precipitation phase (10:00 to 18:00 LT). CAM simulates excessive cloud cover at all levels (see Fig. 9a) and an unrealistic mass flux profile (see Fig. 9b) compared to SAM. UWSDall underestimates the cloud cover above 2 km. Since the computed cloud cover contains contributions from convective clouds, layered clouds and stratocumulus, where the cloud amount of the latter two categories is parameterized as a function of relative humidity, the observed underestimation is sensitive to the chosen relative humidity threshold for the onset of cloud formation. The mass flux profile in UWSDall is much more similar to SAM, with only a slight remaining underestimate of the mass flux between 1.5 and 10 km. This good agreement implies that the new entrainment formulation is able to capture typical entrainment and detrainment rate profiles in ARM. Similar conclusions hold for other times and ARM days.
In terms of relative humidity and temperature, Fig. 9c, d indicates that UWSDall outperforms CAM and UWS. The UWSDall curve tends to agree well with the SAM results. The relative performance of the simulations is case-dependent. Significant improvements are obtained on days 178 and 179 (in which the diurnal cycle of surface fluxes is the main convective forcing), while all simulations perform similarly on the remaining days, on which large-scale advective forcing is more important (not shown).
One of the main biases of the simulations is visible in Fig. 9d and especially in Fig. 9e. Figure 9e shows specific humidity profiles at 15:00 LT, the time of maximum precipitation. CAM, UWSDall and UWS are all moister than SAM. They all exhibit a well-mixed boundary layer (see the profile below about 1 km), while SAM only remains well mixed in the upper part of the PBL (between about 300-900 m). This bias is a fundamental consequence of the interaction of the boundary layer scheme with the deep convection. Both the UW PBL scheme and Holtslag and Boville (1993) do not consider horizontal heterogeneity within the boundary layer. To maintain convection, they must sustain a convective PBL that extends from the surface to the convective cloud base, or else the CIN will become too large to allow further cloud-base mass flux. The convective PBL must be nearly well mixed. On the other hand, the SAM humidity profile is due to cold pools, in which moist, cool air spreads out along the surface in some parts of the domain, while updrafts are driven by surface fluxes and organized surface convergence in other parts of the domain. This does not mean that UWSDall does not feel the presence of cold pools. Cold pools only require spatially localized rain evaporation. Rain evaporation is present in the UW convection scheme and directly feeds back into the layer-mean temperature and moisture equations at each grid level, thereby affecting the PBL. Through the implemented relations, rain evaporation will also influence the development of moist convection. The resulting changes in convective activity will then feed back onto the PBL, mainly through changes in PBL height (see Sect. 2.2). This again affects the mean PBL properties and the future development of convection. The feedback loop is thus consistent, but, due to the design of the PBL scheme, the full PBL has to respond uniformly to such changes.
Finally, Fig. 10 shows time series of MSE averaged over the lowest 1 km of the atmosphere, as a rough estimate for the PBL, for ARM. Figure 10 illustrates the other main deficiency of the single-column model experiments. All the SCAM simulations exhibit warmer MSE than SAM during the phase of heavy precipitation (compare to the precipitation time series in Fig. 8). The apparent missing stabilization of PBL MSE in SCAM is a direct consequence of not having explicit downdrafts in UWS and UWSDall. CAM does include downdrafts, but only saturated downdrafts. Yet most of the downdrafts appear to be unsaturated in SAM.
The absence of downdrafts in UWSDall does not preclude the use of Eqs. (6), (7), (9) and (10). Our approach recognizes that cold pools, whether created by subcloud evaporation, as represented in CAM, or also by organized convective downdrafts, as visible in SAM, affect the convective development. The fact that UWSDall can track precipitation and exhibits some reduction in MSE in Fig. 10 indicates that our modifications can indeed introduce a feedback between convective rainfall and changes in the boundary layer structure. The reported biases in MSE, especially towards the end of the different days, have no strong influence, since we use prescribed large-scale forcing and simulate each day separately.
Figure 11 displays the results obtained for KWAJEX for the different simulations. We do not show precipitation, since all the simulations perform well due to the use of a prescribed omega field. The different profiles in Fig. 11a-d have been averaged over the full time period. As in ARM, we can recognize the improvements in the simulated cloud cover and mass flux profiles in UWSDall as compared to CAM and UWS. UWSDall also captures the relative humidity profile very well, while both CAM and UWS tend to overmoisten the troposphere, especially above 3 and 1 km, respectively. Finally, no strong biases can be detected in the simulated temperature profile in UWSDall.
As in ARM, Fig. 11e reveals the bias toward a well-mixed PBL in the SCAM simulations. CAM and UWSDall appear too cold and too dry, while they were too warm and too moist in ARM (Fig. 9d, e). Time series of mean PBL MSE (not shown) reveal that the depletion of MSE in CAM and UWSDall during the precipitating phase is similar in both ARM and KWAJEX. Since the depletion is much stronger in SAM in ARM than in KWAJEX, due to stronger downdrafts, this results in a warm and moist (cold and dry) bias in ARM (KWAJEX). We thus conclude that the ventilation of the PBL is too strong in UWSDall, which partly compensates for the missing downdrafts. In contrast, UWS never exhibits a strong depletion in MSE and is thus characterized by a warm and moist bias in all the cases.
Finally, the results for BOMEX are displayed in Fig. 12 with profiles of liquid water potential temperature, total specific humidity, cloud cover and mass flux for UWS, CAM, UWSDall and SAM. The profiles have been averaged over hours 3 to 6 of the BOMEX integrations, as in Park and Bretherton (2009). CAM exhibits biases similar to those noted in Park and Bretherton (2009), with excessive cloud cover throughout the cumulus layer. This bias is mainly removed in UWS and UWSDall. Although differences exist between the profiles simulated by UWS and UWSDall in Fig. 12, UWSDall is still able to simulate a typical case of shallow convection as well as UWS. In particular, with UWSDall, as with UWS, the simulated clouds remain shallow. Employing the Zhang and McFarlane (1995) scheme as the sole convective parameterization in CAM would erroneously simulate some deep convection for BOMEX.
Hence, in terms of large-scale variables, UWSDall agrees well with SAM in many respects. It provides better single-column simulations of tropical oceanic, mid-latitude continental and shallow convection than the default version of the CAM model. It also gives more realistic simulations than UWS of both deep convection cases.
Sensitivity
In the previous section, we demonstrated that UWSDall compares better to SAM than either CAM or UWS. However, it remains to be shown whether all the included modifications are important for these improvements. From the results in Sect. 3 it is clear that the mixing rates need to be reformulated. The necessity of the changes in cloud-base mass flux and cloud-base thermodynamic properties is investigated in this section.
To that aim, we perform three sensitivity experiments, called UWSDe0, UWSDe0mf and UWSDe0sq (see Table 1). UWSDe0 is identical to UWSDall except that it only includes the entrainment/detrainment effects, not the modifications to TKE (Eq. 6) and thermodynamic properties (Eqs. 7a, b). UWSDe0mf and UWSDe0sq build on UWSDe0: UWSDe0mf adds only the changes in cloud-base mass flux via changes in TKE (Eq. 6), while UWSDe0sq adds only the changes in cloud-base thermodynamic properties (Eqs. 7a, b) via changes in $\sigma_q$.
Figure 13 shows the corresponding time series of precipitation for the ARM days 176, 178, 179 and 180. The differences between UWSDe0, UWSDe0mf and UWSDe0sq are larger on days 178-179, which are dominated by surface flux forcing, than on days 176 and 180 (and in the KWAJEX simulation), which have stronger advective forcing. All simulations initiate convection at the same time, which is expected, since both cloud-base changes only affect the parameterization when there is already convective rainfall. However, for days 178-179, all three new cases produce a period of rainfall with too weak a maximum and lasting too long compared to both SAM and UWSDall. We conclude that both cloud-base changes are required to produce a sufficiently strong feedback between convective rainfall and changes in the boundary layer structure. The increase in precipitation in UWSDe0mf and UWSDe0sq versus UWSDe0 follows from an increased mass flux at all heights. This stands in better agreement with the SAM values (not shown). The enhanced mass flux in UWSDe0mf is a direct consequence of both an enhanced cloud-base mass flux and a more frequent triggering of convection, as expected from Eq. (6). The enhanced mass flux in UWSDe0sq follows from an enhanced entrainment rate and a decreased detrainment rate at cloud base, which thus allow more plume mass to be retained in the updraft. The latter changes in $\epsilon$ and $\delta$ result from a larger value of $\chi_c$ in UWSDe0sq than in UWSDe0, as expected from the use of moister updraft parcels.
For most other variables, the differences between UWSDe0mf, UWSDe0sq and UWSDe0 are small, both in ARM and KWAJEX. The exceptions are of course the TKE values and the cloud-base thermodynamic properties.
Figure 14 displays scatter plots of TKE in SCAM versus SAM for the ARM, KWAJEX and BOMEX cases. On the left, we show UWS as an example of the simulations that do not include the TKE increase due to cold pool activity (i.e., UWS, UWSDe0, UWSDe0sq). On the right, UWSDall is chosen as an example for the two remaining simulations, where Eq. (6) is used.
As indicated by Fig. 14 and as expected, TKE is strongly underestimated in UWS (or equivalently UWSDe0 and UWSDe0sq), while UWSDall (and UWSDe0mf) are in better agreement with SAM. The latter two simulations are able to capture the increase in TKE during precipitation events and thus confirm the appropriateness of Eq. (6). The overall underestimation in Fig. 14b is due to a slight underestimation of the boundary layer height in UWSDall. The points where a strong discrepancy between SCAM and SAM values remains visible in Fig. 14b correspond to those times where UWSDall produces no or only weak precipitation, while SAM records strong precipitation.
In terms of cloud-base thermodynamic properties, the use of Eq. (7b) yields an increase in cloud-base MSE. This increase amounts to up to 2 K in UWSDe0sq (and UWSDall) with respect to UWSDe0 (or UWS, UWSDe0mf). Given the existing biases in the PBL (see Sect. 4.2), this agrees better with SAM for KWAJEX, but less well for ARM.
Conclusions
This study has aimed to improve the simulation of deep convection with coarse-resolution climate models. Our specific goal has been to develop and assess the suitability of a unified convection scheme, capable of handling both shallow and deep convection. Our approach is based on the hypothesis that the main difference between shallow and deep convection is precipitation, so that improving the representation of some key effects of precipitation in a shallow convection scheme can allow it to be extended into a unified scheme.
We considered previously studied cases of shallow convection (BOMEX), tropical oceanic convection (KWAJEX) and mid-latitude continental convection (ARM). We used large-eddy simulations of the three cases as benchmarks for parameterization formulation and improvement. We implemented our improved relations in the UW shallow convection scheme and tested the results in the SCAM single-column modeling framework.
We included three main effects of precipitation on convective development, encompassing cloud-base mass flux, cloud-base humidity and entrainment/detrainment rates. Rain evaporation generates cold pools in the PBL, forcing convergence and thus favoring cloud formation. This expresses itself as an increase in boundary-layer TKE, which in the UW scheme is a primary control on cloud-base mass flux. We found that the increase of TKE compared to that in the dry convective boundary layer scales with the precipitation at cloud base times the height of the PBL (see Eq. 6). Rain evaporation also modifies the probability distribution function of cloud-base thermodynamic properties, increasing the horizontal humidity variance. Cumulus updrafts tend to form over the moister parts of the PBL, so to predict cumulus base humidity we explicitly include a parameterization of humidity variance in terms of the cloud-base precipitation rate (see Eq. 7). Finally, the formation of cold pools organizes the planetary boundary layer and the entire cumulus ensemble and indirectly lowers the bulk entrainment rate $\epsilon_0$. This effect is represented through a dependence of the cumulus updraft lateral mixing rate on precipitation at cloud base (see Eqs. 9-10). These modifications were implemented in the UW shallow convection scheme. In all cases, the new scheme performs as well as or better than the default CAM version. It also outperforms the simulations using the default UW shallow convection scheme as the sole convective parameterization. For our tropical oceanic convection case, the new unified scheme especially improves the relative humidity, cloud cover and mass flux profiles. The performance in terms of mid-latitude continental convection is more case-dependent. The main improvement is in the simulated timing of the diurnal cycle when surface fluxes are the dominant forcing for convection. The new unified scheme removes the premature onset of precipitation, which is a common pitfall of deep convective parameterizations, and is able to simulate the peak rainfall rate and duration of rainfall reasonably well. Finally, the scheme can still realistically simulate shallow oceanic trade-cumulus convection.
The main biases, which are present not only with the new scheme but in all of our single-column model experiments, are that the simulated PBL structure tends both to be too well mixed and to insufficiently reduce the boundary-layer MSE during deep convection as compared to LES, especially for mid-latitude continental convection. We attribute those biases to a combination of two factors. First, to maintain convection, the PBL schemes must sustain a convective PBL that extends from the surface to the convective cloud base. Second, the UW convection scheme does not explicitly consider downdrafts, while the Zhang and McFarlane (1995) scheme only includes saturated downdrafts. Yet most of the downdrafts appear to be unsaturated in the LES.
Of the three tested modifications (i.e., in cloud-base mass flux, cloud-base thermodynamic properties and bulk entrainment rate), changing the bulk updraft lateral mixing rate has the largest impact. Without this, the UW scheme has difficulty in simulating a realistic transition from shallow to deep convection. This is true even though its buoyancy sorting algorithm should allow it to be sensitive to free-tropospheric relative humidity, and previous cloud-resolving modeling studies (e.g., Chaboureau et al., 2004) have indicated that moistening of the troposphere through detrainment from shallow and/or congestus clouds controls the transition to deep convection. In other words, precipitation (or its evaporation) is a strong positive feedback in the transition from shallow to deep convection in our single-column model experiments, which helps explain why this transition is rather difficult for cumulus parameterizations to simulate. The impacts of our modifications to the cloud-base mass flux and cloud-base thermodynamic properties are subtler. Separately, they only have small impacts, but taken together, they enhance the sensitivity of convection to prior precipitation and enhance the precipitation peaks. Their inclusion seems especially important for the timing and amplitude of the convective diurnal cycle over mid-latitude continental areas.
All in all, our approach does allow for a unified representation of moist convection. It also allows for a representation of the organizational effects of precipitation, which have been shown to be important for convection and are generally not included in convective parameterizations. Finally, it allows for tighter interactions between the planetary boundary layer and convection. Although included in the convection scheme, our modifications directly affect the mean boundary layer properties through the tight coupling produced by the use of a CIN/TKE closure. As indicated in Fletcher and Bretherton (2010), this type of closure maintains the cumulus base near the top of the PBL: an increase in cloud-base mass flux due to cold pool effects will feed back on the height of the PBL, thereby affecting the PBL properties. This is an advance over existing PBL schemes. Our proposed modifications are consistent even without explicitly including a downdraft scheme. Our approach recognizes that cold pools, whether created by subcloud evaporation or also by organized convective downdrafts, affect the convective development. Cold pools only require spatially localized rain evaporation in the PBL, not coherent downdrafts descending from high above the PBL top; in fact, the downdrafts in tropical marine convection are not very organized or deep.
Our approach may be criticized as quite empirical and biased towards the sampled data employed. As KWAJEX contains many data points and exhibits weak variability, it has the strongest influence on the estimated coefficients. Nevertheless, we considered quite a large data sample and built our different relations on theoretical expectations. The simplicity of the derived relations allows for easy implementation and testing with other mass flux schemes, as long as such schemes employ a closure related to the PBL state. It also serves as a good proof of concept for our working hypothesis, leaving room for more elaborate future refinements. Key unresolved issues remain the formulation of unsaturated downdrafts and a better theoretical foundation for formulating appropriate entrainment/detrainment rates, both issues with which deep convective parameterizations have been struggling for a long time. As a next step, global climate model simulations with CAM will be performed with the new unified scheme.
Fig. 1. Time series of precipitation at cloud base (black curve for the onset and mature precipitation phase, grey for the decay phase) and of turbulent kinetic energy averaged over the planetary boundary layer (red curve for the onset and mature phase, orange for the decay phase), for (a) ARM and (b) KWAJEX.

Fig. 2. Scatter plot of PBL-averaged TKE versus RR_cb · PBLH for ARM and KWAJEX; full circles are for the onset/mature phase, open circles for the decay phase (see text).

Fig. 3. Profiles of mass flux as a function of MSE for ARM day 178 at (a) 11:00 and (b) 14:00 LT (local time). White and black lines represent domain-averaged MSE (K) and saturation MSE (K). The grey line indicates the profile of cloud fraction (CLD), while the dashed arrow indicates MSE_cb.

Fig. 4. Scatter plot of σ_q versus precipitation at cloud base for KWAJEX (full circles, 1169 points), ARM (open circles, 410 points) and BOMEX (blue cross, 1 point) based on hourly statistics. The red line denotes the fit through the points (see Eq. 7b).
Fig. 6. Profiles of ε0 for two illustrative examples during KWAJEX: black line for the shallow phase, the other line for the deep convection phase. The solid lines are from the SAM output, while the dashed lines are obtained using Eqs. (9) and (10).

Fig. 11. Same as Fig. 9 but for KWAJEX. The profiles in (a)–(d) have been averaged over the full time period, while panel (e) displays a specific time under strong precipitation (hour 230 in the simulation).

Fig. 12. Profiles of (a) liquid water potential temperature (K), (b) total specific humidity (g/kg), (c) cloud cover and (d) mass flux (kg m−2 s−1) averaged over hours 3 to 6 of BOMEX, for the same simulations as in the previous figures.

Fig. 14. Scatter plots of PBL-averaged TKE in (a) UWS and (b) UWSDall versus SAM values. Black, white and red circles are for KWAJEX, ARM and BOMEX, respectively. For KWAJEX and ARM, only points with precipitation are plotted. The BOMEX point corresponds to the mean over simulation hours 3 to 6. A 1:1 line has also been added to the plots.
U-drawing of Fortiform 1050 third generation steels. Numerical and experimental results
The elasto-plastic behavior of the third generation Fortiform 1050 steel has been analysed using cyclic tension–compression tests. In parallel, the evolution of the pseudo elastic modulus with plastic strain was analysed using cyclic loading–unloading tests. From the experiments, it was found that the cyclic behavior of the steel is strongly kinematic and that the decrease of the elastic modulus with plastic strain is relevant for numerical modelling. In order to numerically analyse a U-drawing process, strip drawing tests have been carried out at different contact pressures, and the Filzek model has been used to fit the experimental data and implement a pressure-dependent friction law in the AutoForm software. Finally, numerical predictions of springback have been compared with the experimental ones obtained using a sensorized U-drawing tooling. Different material and contact models have been examined, and the most influential parameters for modelling the forming of these new steels have been identified.
Introduction
During the last decades, many new grades of high-strength steel have been developed [1][2][3][4]. However, it is well known that the formability of steels decreases with increasing strength. This also holds for the newly developed third generation high-strength steels [5]. The lightweighting potential of these new commercial steels is said to be around 20% in comparison to the Dual Phase steels already in use. For example, the Dual Phase 780 steel has a yield strength of 480 MPa and an ultimate strength of 830 MPa. Having the same formability and a comparable forming limit curve, the Fortiform 1050 steel, the material studied in this paper, has a yield strength of 760 MPa and an ultimate strength of 1100 MPa.
Besides the significant decrease in formability, the higher post-forming springback is one of the biggest technological problems when defining and developing new high strength sheet metal components. Current industrial problems when using these materials are premature cracks and excessive set-up times needed for springback compensation.
In order to achieve good accuracy when numerically predicting the final geometry of the components, two main aspects must be considered: the material model and the restraining forces due to the friction between the tool and the material. If these variables are not accurately defined, the numerical predictions can be far from the experimental results [6][7].

Concerning the material, a good definition of the hardening behavior is very important when the material undergoes alternating tension and compression cycles [8]. This is the case in deep drawing processes, where the material goes through the drawbeads and/or the die radius. While mild steels present a nearly isotropic hardening behavior, high-strength steels present a kinematic or mixed hardening behavior [9][10]. Consequently, a poor definition of the hardening behavior may result in very inaccurate springback predictions. The coefficient of friction (COF) is also a significant parameter to take into account when trying to obtain accurate predictions in numerical simulation [11][12]. The COF influences how much the material flow through the tools is restrained, and an inaccurate definition of this parameter can induce undesirable splits, insufficient deformations and, moreover, unexpected springback phenomena. A lower COF induces lower stress states and, as a consequence, higher elastic recovery [13]. Therefore, it is necessary to correctly define the COF in order to accurately predict the final geometry of the component through numerical simulation.
Among the several works published on AHSS steels, no scientific paper has been found in which the above-mentioned aspects are studied for a third generation steel. For this reason, and because new third generation steel grades are currently being launched to the market by several steel makers, the current work studies the behavior of the Fortiform 1050 third generation steel under stamping conditions. Advanced material and tribological characterization have been performed, and a U-drawing operation is studied numerically and experimentally to analyse the effect that the different numerical models have on the final springback predictions.
Material characterization
The studied material is an electrolytically galvanized third generation Fortiform 1050 steel from ArcelorMittal, with a thickness of 1.2 mm. Chemical composition and mechanical properties are shown in table 1. Besides the mechanical properties, the Lankford or anisotropy coefficients of the material have also been obtained following the ASTM E 517-00 standard and using the GOM ARAMIS digital image correlation technique. The Lankford coefficients at different directions and the monotonic hardening curve of the Fortiform 1050 material are shown in figure 1a. Based on these results, the monotonic behaviour of the material has been modelled using a combined Swift-Hockett/Sherby hardening model (see equation 1). The parameters of the model are: ε0 = 0.00312, m = 0.14, C = 1725, σi = 766.8. As explained in the introduction, the cyclic hardening of the material is very important to predict the springback and therefore the final geometry of the deep drawn components. Therefore, tension–compression tests have been carried out in order to identify the kinematic behavior of the material. A servo-hydraulic MTS 810 Material Test System has been used for the experiments. Force data have been acquired through an axial load cell, and strain data have been measured with small strain gauges to obtain a continuous measurement.
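For reference, one plausible way to write such a combined law is sketched below. The Swift part and the quoted parameters (C, ε0, m, σi) come from the text; the blending weight α and the Hockett-Sherby saturation parameters σ_sat, a and p are placeholders, since the exact form of equation 1 as implemented in AutoForm is not reproduced here.

```latex
% A plausible combined Swift / Hockett-Sherby hardening law (sketch):
\sigma(\varepsilon_p) = (1-\alpha)\, C\,(\varepsilon_0 + \varepsilon_p)^{m}
  + \alpha \left[ \sigma_{\mathrm{sat}} - (\sigma_{\mathrm{sat}} - \sigma_i)\,
    e^{-a\,\varepsilon_p^{\,p}} \right]
```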
During the experimental test, the material has been subjected to cyclic tension–compression loading for hardening characterization. A maximum strain of +2% in tension and −2% in compression has been reached during the tests. The experimental results and the test equipment used to avoid specimen buckling are shown in figure 1b. The experimental results have been fitted to the kinematic model implemented in AutoForm R7. The parameters used in the model are K = 0.012 and ξ = 0.8.
Finally, the evolution of the pseudo elastic modulus with plastic strain has also been characterized. For this purpose, cyclic loading and unloading tests have been carried out; the evolution of the pseudo elastic modulus can be observed in figure 2. These experimental results have also been fitted to the kinematic model implemented in AutoForm R7. The parameters used in the model are g = 0.166 and c = 4.43.
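A widely used description of this effect is the chord-modulus decay of Yoshida et al., sketched below; whether AutoForm's g and c parameters map directly onto the saturated modulus E_a and the rate ξ of this form is an assumption of this sketch.

```latex
% Chord-modulus decay with plastic strain (Yoshida-type form, sketch):
E(\varepsilon_p) = E_0 - \left( E_0 - E_a \right)
  \left[ 1 - e^{-\xi\,\varepsilon_p} \right]
```

Here E_0 is the virgin elastic modulus (205 GPa in the simulations below) and E(ε_p) saturates at E_a for large plastic strains.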
Tribological characterization
1.2379 tool steels hardened at 60 HRc have been used for the strip drawing tests. Same machining protocol as followed in industrial toolmakers has been used for machining and polishing the tool inserts. Roughness in longitudinal direction of tool inserts is approximately Ra0.4.
The Fortiform 1050 steel specimens are electrolytically galvanized and EDT textured. The longitudinal and transversal surface roughness of the as-received material is Ra1.2. Mild oil conditions have been used to perform the strip drawing tests and the experimental U-drawing tests; the lubricant amount on the sheets is 1.5–2.0 g/m². Strip drawing tests have been performed to identify the friction coefficient to be used in the numerical simulations. A range of contact pressures from 1 MPa to 20 MPa has been covered, and a sliding velocity of 10 mm/s was used during the tests. Exemplary curves obtained from the normal and tangential force sensors are shown in figure 3a, and the friction coefficient for the Fortiform 1050 material is shown in figure 3b.
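To illustrate how such measurements can be condensed into a pressure-dependent friction law, the sketch below fits a simple power-law decay of the type used in AutoForm-style models to hypothetical strip-drawing data; whether this matches Filzek's exact formulation is an assumption, and the intermediate data points are invented for illustration (only the ~0.145 low-pressure and ~0.12 at 20 MPa values reflect the measurements reported later in the paper).

```python
import numpy as np
from scipy.optimize import curve_fit

def mu_pressure(p, mu1, e):
    """COF as a power-law function of contact pressure p [MPa].
    mu1 is the COF at 1 MPa; e < 1 gives a decay with pressure."""
    return mu1 * p ** (e - 1.0)

# Hypothetical strip-drawing results over the tested 1-20 MPa range:
p_meas  = np.array([1.0, 5.0, 10.0, 20.0])        # contact pressure [MPa]
mu_meas = np.array([0.145, 0.135, 0.128, 0.120])  # measured COF (illustrative)

popt, _ = curve_fit(mu_pressure, p_meas, mu_meas, p0=(0.145, 0.95))
print("COF at the initial blankholder pressure (7.5 MPa):",
      mu_pressure(7.5, *popt))
```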
U-Drawing experimental test
Experimental U-drawing tests have been performed at Mondragon University, aiming to identify the best numerical models to predict springback when using third generation high-strength steels. The tooling is modular: die inserts, punch inserts and drawbead inserts can be exchanged to obtain different test variables. For the present study, no drawbead inserts have been used in order to avoid the influence of the drawbeads on the results. The characteristic dimensions of the configuration used for this study are summarized in table 2, while a schematic view of the tooling is shown in figure 4. A hydraulic press has been used to perform the drawing experiments. The drawing speed has been set to 1 mm/s, the drawing stroke has been 70 mm, and 110 mm wide specimens have been used for all the tests. The blankholder force has been set to 186 kN, which resulted in a contact pressure in the blankholder rising from 7.5 MPa at the beginning of the drawing operation to 17.5 MPa at the end of the operation, due to the flow of the material (the contact area decreases as the blank is drawn in).
U-Drawing test numerical simulation
Eight different numerical models have been compared in the current study using the AutoForm R7 software. All simulations were defined with a sheet thickness of 1.2 mm, elastic-plastic shell elements, an initial element size of 20 mm with a maximum of 4 refinement levels, and 11 layers through the thickness (final validation conditions in AutoForm R7). For all the models, the elastic modulus has been set to 205 GPa, and the Hill48 yield criterion has been defined by means of the above-mentioned Lankford coefficients.
Regarding the hardening behavior of the material, four different material models have been created, named Conventional, Young, Kinematic and Full. All the models have been defined using a combined Swift-Hockett/Sherby hardening model, for which monotonic tensile test data have been used. The Conventional model does not consider any kinematic behavior of the material, while the other three models do.
In the case of the Full model, both the pseudo elastic modulus evolution and the kinematic behavior have been considered. For this purpose, the results of the tension-compression and cyclic loading-unloading tests have been fitted to the kinematic model of AutoForm R7, and the four parameters of the kinematic hardening model, shown in table 3, have been calculated. In the case of the Young model, only the evolution of the pseudo elastic modulus has been considered, by fitting two of the parameters of the kinematic hardening model in AutoForm R7 to the results of the cyclic loading-unloading tests. Finally, the Kinematic model only considers the kinematic evolution of the material, and the other two parameters of the kinematic model in AutoForm R7 have been fitted to the results of the tension-compression test. The coefficients of all the models are summarized in table 3.
Regarding the tribological behavior, two models have been used in the simulations. On the one hand, a constant coefficient of friction of 0.15, named Constant, has been defined; this COF value is widely used in industry nowadays. On the other hand, a more advanced pressure-dependent coefficient of friction, named Pressure, has been used. The coefficients of both models can also be found in table 3. The experimentally deep-drawn specimens have been digitalized using a Mitutoyo 3D measurement machine. For the geometrical accuracy comparison, the GOM ATOS software and technique have been employed. The numerical and experimental results are shown in figure 5. In order to quantify the geometrical differences, the distance between the experimental component and the numerical results has also been calculated. The deviation for the different models is shown in figure 6.
Conclusions
Many authors have recently demonstrated the importance that the hardening law, the apparent elastic modulus change and the coefficient of friction have on springback predictions. In this article the importance of using a conventional or a mixed kinematic hardening model and a constant or variable coefficient of friction has been analyzed using the Fortiform 1050 third generation steel. For the selection of the best model the final springback prediction has been used.
Regarding the material hardening model, it has been found that the conventional model, in which neither the evolution of the pseudo elastic modulus nor the kinematic behavior is considered, results in an underestimation of the springback. On the other hand, if both aspects are taken into consideration, the springback predicted by the simulation is overestimated, resulting in poor geometrical accuracy. Regarding the friction between the material and the tooling, the coefficient of friction has been measured by means of strip drawing tests. At low contact pressures the coefficient of friction is about 0.145, while when increasing the contact pressure up to 20 MPa the coefficient of friction decreases down to 0.12. The Filzek model is able to accurately represent this evolution of the coefficient of friction. Furthermore, the introduction of the pressure-dependent coefficient of friction increases the predicted springback for all material models.
As a final conclusion, and based on the material and component geometry used in the present study, it can be stated that the material model which best predicts the final geometry of the component is the one that only considers the evolution of the pseudo elastic modulus, named the Young model in the present study. In terms of the friction model, the use of the pressure-dependent coefficient of friction increases the predicted springback of the component for all the analyzed material models. Therefore, engineers should take into consideration the change of the pseudo elastic modulus and the variable coefficient of friction when simulating drawing processes for third generation steels.
Measures of the vibration transfer function in wall element diagnostic tests

Building constructions, as well as their constituent structural elements, must meet strength requirements so that the safety of their use is not jeopardized. As part of the research experiment, the focus was on testing the strength properties of masonry elements using a non-invasive test method based on the measurement of vibrations. Forty brick samples were used; half of them were deliberately damaged so that the suitability of the measurement method, and its variability resulting from damage to the masonry element, could be assessed.
Introduction
Modal analysis is widely used in removing defects caused by infrastructure vibrations, in structure modification, in updating analytical models, and in condition monitoring; it is also used to monitor the vibrations of structures in the aviation industry and in civil engineering mechanics [2,5,3].

Vibration testing of building structures is not limited to the measurement of the vibrations of interest. In order to obtain the measurement quantities of interest (such as the FRF function, stabilization diagrams and the accompanying vibration estimators), a whole sequence of operations must be carried out, which then makes it possible to obtain the desired data and results.

As part of this study, the methodology for investigating masonry elements, and the course of action required to obtain reliable results from the vibration tests, are described. The process is illustrated by an example of the use of experimental modal analysis in the study of brick elements.

Traditional experimental modal analysis (EMA) uses the input (excitation) and the output (response), measuring both to estimate the modal parameters: modal frequencies, damping and mode shapes. However, traditional EMA has some limitations, such as [1,6,8]:
- in traditional EMA, artificial excitation is normally applied to measure the vibration frequencies;
- traditional EMA is usually carried out in a laboratory environment, but in many cases the true state of degradation may differ significantly from that tested in a laboratory environment.
This article presents the results of testing brick masonry elements in a dry environment using experimental modal analysis and appropriate software (partly developed by the authors) used to carry out such research and visualize its results.
Building structure dynamics
One of the basic criteria used in the design of modern building structures is the set of dynamic properties of the structure. These have a direct impact on system vibrations, emitted noise, fatigue strength and structural stability. In most cases encountered in practice, analyses of dynamic properties are based on the analysis of the behavior of an aggregate model [4,7,10].

In most cases, finite element models (FEM) are used to describe the dynamics of the structure; this consists in discretizing a system with a continuous distribution of parameters under some simplifying assumptions, e.g. related to the deflection line of the modeled element (transfer method). However, for the purposes of dynamic analysis, models built in this way give only approximate results whose use is very limited. They require tuning based on knowledge of properties measured on the actual object.

The use of vibrations in the study of the degradation (quality) of building structures is motivated by the following:
- vibration processes reflect the physical phenomena occurring in structures (deformations, stresses, cracks), on which the degree of destruction (fitness) and proper functioning depend; this follows from the nature of vibration propagation;
- vibration processes are easy to measure under normal operating conditions of the facility, without having to shut it down or specially prepare it, which enables a non-destructive assessment of the state of destruction;
- vibration processes are characterized by a high rate of information transfer per unit of time, defined by the Shannon formula

C = F log2(1 + N_S / N_Z),    (1)

which depends on the spectral width F of the process and on the ratio of the power of the useful signal N_S to the power of the interference noise N_Z;
- vibration processes have a complex time, amplitude and frequency structure, which enables a proper evaluation of the state of the entire structure as well as of its individual elements.
During the operation of a structure, due to a number of external factors (excitation from the environment, excitation from other structures) and internal factors (aging, wear, interaction of elements), disturbances of the equilibrium states arise in the structure and propagate in the elastic medium, i.e., the material from which the structure is built. The disturbance is dynamic and maintains the balance between inertia, elasticity, damping and excitation. This results in the dissipation of wave energy, and in deflections, reflections and mutual overlapping of the waves. The existence of sources and the propagation of disturbances cause vibrations of the structural elements and of the surrounding environment.
When separating the input processes, the structure and the output processes in the analysis of the dynamic state of a structure, their random character should be kept in mind.

The internal input, treated as a set of forcing quantities defining the structure (shape, quality, clearance, etc.) and the way its elements interact, is shaped under random conditions during production, which reveals itself as random properties during operation.

The external input, defining the conditions of interaction of structural elements with other elements of the system (load changes, speed, environmental impact), is in practice also random.

The wide range of possible randomness and the existence of disturbances are the reason for additional assumptions about the inputs and about the transformations of the destruction states of the structure. These refer to assumptions about the linearity, stationarity and ergodicity of the object models and processes [6,8,9].
Measuring software
The SIGVIEW measurement software from SignalLab was used to measure the time histories of the force and of the system response, as well as to determine the FRF function. This software makes it easy to perform a modal analysis of brick elements, as well as of any other building structure (fig. 1). In the measurement system, all the required data were calibrated and the measuring path was defined. For the needs of the research carried out at this stage, the number of active measuring channels was defined first. Their number is limited only by the number of inputs on the measurement card, which differs between models of measurement segments.

Two measurement channels were defined for the measurement using experimental modal analysis. According to the theoretical assumptions of experimental modal analysis, the first channel was reserved for the modal hammer (vibration excitation), and a piezoelectric sensor was connected to channel 2 to measure the element's response to the excitation (Fig. 2). As part of the examinations that served to describe the course of the investigation, brick wall elements were examined (Fig. 3). Experimental modal analysis was used for the measurements in this kind of experiment. For this purpose, 40 bricks were tested, of which 20 samples were damaged intentionally in order to show differences in the obtained test results, which demonstrates the suitability of the given test method for assessing masonry component degradation. After calibrating the measurement system, it was possible to start the measurements. According to the assumptions of modal analysis, the samples were suspended on a non-stretchable line, which released all constraints. Tests were carried out only along the Z axis: since these were brick masonry elements, the transmission of the vibration signal in the direction of the compressive forces acting on the wall is of primary interest. During the measurement, the vibrations were excited by a modal hammer in the −Z direction, while the response was captured in the +Z direction by the sensor glued to the bottom of the element. As a result of the tests, the time history of the excitation force (modal hammer) and the time histories of the response (piezoelectric sensor) were obtained; their visualizations are shown in Fig. 4.
Methodology of data acquisition: generation of FRF functions and calculation of their surface area
After recording the time histories of force and response, these results are subjected to further transformation. The aim of these operations is to obtain the FRF transfer function and, later, to create stabilization diagrams on its basis, from which the natural frequencies of the tested material can be generated. The course of action needed to obtain the results of interest is described in detail below; a minimal sketch of this processing chain is also given after this paragraph. Two new programs operating in the MATLAB environment were developed for further data processing.
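The sketch below is a generic textbook H1 estimator in Python, not the authors' MATLAB code, and the window length is an arbitrary choice.

```python
import numpy as np
from scipy.signal import csd, welch

def frf_h1(force, response, fs, nperseg=4096):
    """H1 estimate of the FRF and the ordinary coherence from the hammer-force
    and sensor-response time histories sampled at fs [Hz]."""
    f, s_ff = welch(force, fs=fs, nperseg=nperseg)          # input auto-spectrum
    _, s_fx = csd(force, response, fs=fs, nperseg=nperseg)  # cross-spectrum
    _, s_xx = welch(response, fs=fs, nperseg=nperseg)       # output auto-spectrum
    h1 = s_fx / s_ff                                        # FRF, H1 estimator
    coherence = np.abs(s_fx) ** 2 / (s_ff * s_xx)           # 0..1, quality check
    return f, h1, coherence
```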
During the vibration tests, the possibility of acquiring new diagnostic quantities, which may be useful for assessing the destruction of the tested wall elements, was analyzed. Besides the time histories and the stabilization diagrams with the generated natural frequencies of the tested elements, such information may be carried by the numerical value of the surface area under the transfer (FRF) function of the signal passing through the tested element. To calculate the area under the function curves, new proprietary software named FUNCTIONS ANALYSIS was created; the program interface is shown in the figure below. The frequency range of the functions selected in the program for computing the surface area of interest is marked graphically in red. In the example below, all the functions have been selected and the area of the functions over the full frequency range has been calculated (Fig. 6). A minimal sketch of this area computation is given after this paragraph.
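Our reading of what the FUNCTIONS ANALYSIS tool computes can be reproduced with a few lines; the band limits below are placeholders.

```python
import numpy as np

def frf_area(f, h, f_lo=None, f_hi=None):
    """Area under |FRF| over a selected frequency band (trapezoidal rule)."""
    f = np.asarray(f)
    mag = np.abs(np.asarray(h))
    mask = np.ones_like(f, dtype=bool)
    if f_lo is not None:
        mask &= f >= f_lo
    if f_hi is not None:
        mask &= f <= f_hi
    return np.trapz(mag[mask], f[mask])

# Example: area over the full range vs. a 0-2 kHz band (band chosen arbitrarily):
# a_full = frf_area(f, h1); a_band = frf_area(f, h1, 0.0, 2000.0)
```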
Results
During the tests, the transfer function of the vibration signal through the structure (the FRF function) and stabilization diagrams with the identified natural frequencies were generated for each element.

Below, the averaged test results are presented in the form of stabilization diagrams, prepared from 10 measurements (10 excitations and 10 responses) for the intact components and for the damaged wall elements.
Summary
The publication describes in detail the complex process of obtaining measurement data, which can be considered a unified system for exploiting test results, validating them, and generating significant natural frequency values. The presented results indicate that it is possible to distinguish between material properties, which translates into the ability to distinguish their strength properties. The tests also confirmed the suitability of the measuring equipment for tests using experimental modal analysis performed on real structural elements.
Fig. 5. The main screen of the FUNCTIONS ANALYSIS program.

Fig. 7. Area of the FRF function for solid and damaged brick in the X axis.

Fig. 8. Coherence function surface fields for solid and damaged brick in the X axis.

Fig. 10. Coherence function surface fields for solid and damaged brick in the Y axis.

Fig. 11. Area of the FRF function for solid and damaged brick in the Z axis.

Fig. 12. Coherence function surface fields for solid and damaged brick in the Z axis.
Elizabethkingia anophelis Infection in Infants, Cambodia, 2012–2018
We describe 6 clinical isolates of Elizabethkingia anophelis from a pediatric referral hospital in Cambodia, along with 1 isolate reported from Thailand. Improving diagnostic microbiological methods in resource-limited settings will increase the frequency of reporting for this pathogen. Consensus on therapeutic options is needed, especially for resource-limited settings.
platelet count 45 × 10⁹/L; C-reactive protein 195 mg/L; and total bilirubin 252 μmol/L. Lumbar puncture was omitted because of thrombocytopenia. A blood culture was transferred with her from the local hospital.
The day after transfer, she experienced symptoms of meningitis, including fever and seizures. We initiated anticonvulsant therapy and changed her antimicrobial therapy to intravenous meropenem (40 mg/kg 3×/d). Blood culture microscopy subsequently showed gram-negative bacilli, identified as E. anophelis on hospitalization day 3 by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry using the bioMérieux VITEK MS in in vitro diagnostic mode with spectrum knowledge base version 3.2.0 (bioMérieux, https://www.biomerieux.com). At this stage, antimicrobial drugs were changed to intravenous ciprofloxacin (10 mg/kg, 2×/d) and vancomycin (15 mg/kg, 1×/d); a blood culture collected before the change confirmed bacteremia caused by E. anophelis.
The patient was extubated on day 6 and underwent lumbar puncture because her platelet count had improved. Cerebrospinal fluid was cloudy, with a leukocyte count of 265 cells/µL (75% polymorphs), glucose of 1 mmol/L, and protein of 13 g/L. Gram stain microscopy revealed no organisms, and culture was negative. After 28 days of ciprofloxacin/vancomycin, she was clinically well and discharged home.
At her 1-month follow-up appointment, she displayed clinical features of raised intracranial pressure, including neurologic deficits. Cranial ultrasound showed hydrocephalus, a suspected sequela of meningitis, and she was referred for neurosurgical opinion.
After the case described was identified, we retrieved all isolates in −80°C storage that had been identified since January 2012 as Chryseobacterium meningosepticum, C. miricola, or Elizabethkingia spp. We included in our study the first isolates from a given clinical episode: 4 identified as C. meningosepticum, 3 as E. meningoseptica, and the isolate already identified as E. anophelis. From subculture, we analyzed these using VITEK MS MALDI-TOF mass spectrometry and identified 6 of these isolates as E. anophelis.
To provide regional context for these results, 2 microbiology laboratories in Mae Sot, Thailand, and Vientiane, Laos, also reanalyzed stored clinical isolates as we described. In Mae Sot, a single isolate of E. meningoseptica from a neonatal blood culture was reidentified as E. anophelis. In Vientiane, 9 isolates of C. meningosepticum were reidentified as E. meningoseptica, and the identity of 1 E. meningoseptica isolate remained the same.
Conclusions
Although reports of E. anophelis are rare, cases are reported from countries in southern Asia, including Singapore (3), Taiwan (5), and Hong Kong (6). Our findings are consistent with reports of E. anophelis infection from other countries demonstrating it to be an opportunistic organism affecting more vulnerable patient groups (6). The mortality rate associated with E. anophelis is high (50%), and isolation of E. anophelis from blood in two thirds of the children in this study demonstrates its importance as a human pathogen.
Previous reports of community-and hospital-acquired E. anophelis infection among infants have proposed a range of transmission routes, including vectorborne (Anopheles mosquitoes) (1,2,7), waterborne (8), and vertical transmission (9). With no temporal clustering, and with most cases occurring among older infants, we suspect that unidentified environmental reservoirs are possible sources of these cases.
Previously, studies relied on 16S rRNA testing to identify E. anophelis, with biochemical phenotypic methods unable to distinguish between Elizabethkingia spp. (10). Although this method provides high discriminatory power, its use in diagnostic microbiology is limited to established laboratory settings. It also requires highly trained staff to interpret results, which are rarely available within a clinically useful timeframe. Until late 2017, oxidase-positive gram-negative isolates were identified at the microbiology laboratory at Angkor Hospital for Children by biochemical phenotypic methods (API 20NE, bioMérieux); identification is now done by MAL-DI-TOF mass spectrometry. Misidentification of Elizabethkingia spp. using biochemical methods has been reported (2,6); however, updated MALDI-TOF databases provide reliable differentiation (10). As the resolution that MALDI-TOF mass spectrometry provides in pathogen identification expands, and its use becomes available in low-and middle-income countries, we expect to see higher reported incidence of E. anophelis infection. Conversely, it may become apparent that the burden of E. meningoseptica is not as high as previously thought, with retrospective studies already showing E. anophelis as the predominant species of its genus (6,10,11). In our study, this possibility was not found to be the case in Laos, suggesting possible regional variation. E. anophelis demonstrates phenotypic and genotypic resistance to multiple antimicrobial drugs, and, without epidemiologically based interpretive cutoffs, selection of therapeutic options is challenging (4,5,10,12). High MICs to ceftriaxone are consistent with β-lactam resistance reported elsewhere, and carbapenem resistance should also be expected (4,5,10,12). Following Clinical and Laboratory Standards Institute guidelines (M100-29; 2019) (13) for "other non-Enterobacteriaceae," these isolates were susceptible to ciprofloxacin and sulfamethoxazole/trimethoprim. This finding is not consistent with other regional data that show greater rates of resistance to these drugs (5,10). E. anophelis has been shown to be susceptible to piperacillin/tazobactam and to rifampin (4,10), which were not tested against in this study and are not currently available as therapeutic options in the study setting. It is unusual for gram-negative organisms to exhibit susceptibility to vancomycin, and interpretation of MICs to this drug should be approached with caution. Use of Etest in this study was a methodological limitation; the preferred method of broth microdilution was not available.
In summary, updates of mass spectrometry platforms have enabled identification of clinical E. anophelis isolates in Cambodia and Thailand. As diagnostic microbiology capacity expands in low-and middleincome countries, further reports of this organism are expected. Because of the associated high mortality rates for this pathogen, consensus on therapeutic options for infection caused by E. anophelis is needed, especially in resource-limited settings with restricted choices for antimicrobial drugs.
Comprehensive Analysis of the Carcinogenic Process, Tumor Microenvironment, and Drug Response in HPV-Positive Cancers
Human papillomavirus (HPV) is a common virus, and about 5% of all cancers worldwide are caused by persistent high-risk HPV infections. Here, we report a comprehensive analysis of the molecular features of HPV-related cancer types using TCGA (The Cancer Genome Atlas) data with HPV status. We found that HPV-positive cancer patients had a unique oncogenic process, tumor microenvironment, and drug response compared with HPV-negative patients. In addition, HPV improved overall survival for four cancer types, namely, cervical squamous cell carcinoma (CESC), head and neck squamous cell carcinoma (HNSC), stomach adenocarcinoma (STAD), and uterine corpus endometrial carcinoma (UCEC). Stronger activity of cell-cycle pathways and lower driver gene mutation rates were observed in HPV-positive patients, which implied different carcinogenic processes between HPV-positive and HPV-negative groups. The increased activities of immune cells and differences in metabolic pathways help explain the heterogeneity of prognosis between the two groups. Furthermore, we constructed HPV prediction models for different cancers using the virus infection score (VIS), which was linearly correlated with HPV load, and found that VIS was associated with drug response. Altogether, our study reveals that HPV-positive cancer patients have unique molecular characteristics, which will help the development of precision medicine in HPV-positive cancers.
INTRODUCTION
Human papillomavirus (HPV) is an important carcinogen, since the HPV proteins E6 and E7 are intimately related to the events that cause malignant transformation of HPV-infected cells (1,2). Global case statistics indicate that cancers caused by HPV infection account for at least 5% of all cancers (3). Persistent high-risk HPV infection can cause cancer in many different anatomical sites, including the cervix, penis, head and neck, lungs, prostate, bladder, and breast (4)(5)(6)(7)(8)(9)(10)(11). Therefore, HPV has received increasing attention as an independent carcinogen.
Present pan-cancer studies mainly focus on the impact of HPV on the tumor immune microenvironment, and most of them explain the possible benefits of HPV infection to patients from the perspective of immunotherapy. Gameiro et al. explained that the antitumor immunity activated by HPV might be the main source of the improved prognosis of HPV-positive patients in head and neck squamous cell carcinoma and that such patients were suitable for immunotherapy (12). Varn et al. highlighted the changes in tumors caused by diverse virus infections and suggested that different families of viruses should be distinguished when designing immunotherapy methods (13). Cao et al. stated that viruses might help tumors escape the PD-1 immune checkpoint pathway in multiple cancer types (14). Tumorigenesis depends not only on alterations of the tumor microenvironment but also on gene mutations and the synergy of multiple carcinogenic pathways (15)(16)(17). However, those studies considered neither the differences in the carcinogenic processes between HPV infection and other factors nor the possible impact of the HPV expression level. Therefore, there is an urgent need for comprehensive and detailed analyses of the carcinogenic process, the tumor microenvironment, and even the treatment outcome as affected by both HPV and its expression level.
Here, we analyzed a total of 3,542 human samples representing 10 different cancers to describe how HPV caused cancers and shaped the tumor microenvironment at the genomics and transcriptomics level in The Cancer Genome Atlas (TCGA). Survival analysis showed that HPV played an important role for patients' prognosis. Furthermore, we analyzed the differences in the carcinogenic processes between HPV-positive and HPVnegative groups from three aspects: driver genes, genome instability, and mitotic carcinogenic pathways. The results implied that HPV might trigger cancer through the cell-cycle disorder rather than genome instability. The tumor microenvironment is significantly related to the improvement of cancer patient survival and treatment effect (18,19). In order to explain why HPV-infected patients' survival was better than that of non-infected patients, we explored the impact of HPV on immune cell infiltration and metabolic pathway activity in the tumor microenvironment. We also found that the differences in the carcinogenic process and the tumor microenvironment mostly tended to appear in the tumor types with high HPV expression level. Next, we constructed HPV status prediction models to yield a virus infection score (VIS) for each cancer. VIS was positively correlated with HPV expression, and the classification efficiency of VIS was verified by both internal data from TCGA and external data from Gene Expression Omnibus (GEO). These models were also extended to Genomics of Drug Sensitivity in Cancer (GDSC) data and yielded VIS which represented the HPV-like status of GDSC cell lines. The higher VIS was related to the chemotherapy effect of TCGA patients and the drug sensitivity in the GDSC cell lines. In general, our research will help researchers to better understand the impact of HPV on the host genome and tumor microenvironment, and it will also be helpful in chemotherapy and immunotherapy for tumor patients with high HPV expression.
Datasets
TCGA samples were collected from the UCSC Xena pan-cancer project (http://xena.ucsc.edu/). The expression data were transcript per million (TPM) values with log 2 (x+0.001) transformation, and non-silent mutation was defined from gene-level mutation calls, where 1 represents a non-silent mutation and 0 represents wild type. HPV expression (normalized reads per million, NRPM) was collected from a previous study (20), and samples with more than 10 NRPMs were defined as infected by HPV. Only tumor types with at least 10 HPV-positive samples were considered, including cervical squamous cell carcinoma (CESC), uterine corpus endometrial carcinoma (UCEC), colon adenocarcinoma (COAD), rectum adenocarcinoma (READ), glioblastoma multiforme (GBM), ovarian serous cystadenocarcinoma (OV), esophageal carcinoma (ESCA), stomach adenocarcinoma (STAD), head and neck squamous cell carcinoma (HNSC), and kidney renal clear cell carcinoma (KIRC). In total, we collected 3,254 tumor samples (Supplementary Tables S1, S2) with matched clinical data and chemotherapy response data from previous studies (21,22). The driver genes and viral integration sites for HNSC were collected from two other studies (23,24).
External data with HPV status for model validation were obtained from the gene expression omnibus (GEO) with accession numbers GSE117973 and GSE151666. Cell line expression data and drug sensitivity data were downloaded from Genomics of Drug Sensitivity in Cancer (GDSC: https:// www.cancerrxgene.org/, release8.2).
Survival Analyses
The log-rank test was performed to evaluate the prognosis difference between HPV-positive and HPV-negative patients in each cancer type. In order to further explore the importance and impact of HPV on patient survival compared with other common clinical indicators, we used multivariate Cox regression for HPV status, age, gender, clinical stage, and TNM staging. Next, we performed stepwise regression based on the Akaike information criterion (AIC) to select the variables with an important impact on patients' survival. Survival analysis was performed with the "survival" package in R.
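The paper performs these analyses with the R "survival" package; for readers working in Python, an equivalent sketch using the lifelines library is shown below. The column names are placeholders, not TCGA field names.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def hpv_survival_analysis(df: pd.DataFrame):
    """df: one row per patient with columns 'time' (days), 'event' (1 = death),
    'hpv' (1 = HPV-positive), plus covariates such as 'age' and 'stage'."""
    pos, neg = df[df["hpv"] == 1], df[df["hpv"] == 0]
    lr = logrank_test(pos["time"], neg["time"],
                      event_observed_A=pos["event"],
                      event_observed_B=neg["event"])
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "hpv", "age", "stage"]],
            duration_col="time", event_col="event")
    return lr.p_value, cph.summary   # summary holds hazard ratios and CIs
```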
Calculation of Pathway Activity Scores and Collection of Immune Indicators
We collected gene sets for DNA damage repair (DDR) pathways (25), mitotic oncogenic pathways (17), and metabolic pathways (26). Pathway activity scores were calculated using the single-sample gene set enrichment analysis (ssGSEA) method in the R package "GSVA". The abundances of immune cells were derived from xCell, a gene signature-based method that quantifies 64 cell types through ssGSEA (27).
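For illustration, a bare-bones single-sample enrichment score in the spirit of ssGSEA (Barbie et al., 2009) can be written as below; the paper used the GSVA R package, whose weighting and cross-sample normalization differ in detail, so this sketch is not a drop-in replacement.

```python
import numpy as np
import pandas as pd

def ssgsea_score(expr: pd.Series, gene_set, alpha=0.25):
    """Simplified ssGSEA-like score for ONE sample.
    expr: expression values indexed by gene symbol; gene_set: iterable of
    symbols, assumed to overlap with expr.index."""
    n = len(expr)
    rnk = expr.rank(method="average").to_numpy()   # highest expression -> rank n
    order = np.argsort(-rnk)                       # walk genes from high to low
    hit = np.isin(expr.index.to_numpy()[order], list(gene_set))
    w = rnk[order] ** alpha                        # rank-based weights
    p_hit = np.cumsum(np.where(hit, w, 0.0)) / w[hit].sum()
    p_miss = np.cumsum(~hit) / max(n - hit.sum(), 1)
    return float(np.sum(p_hit - p_miss))           # ssGSEA sums the deviation
```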
Construction of the HPV Status Prediction Model
In order to observe the spatial proximity of the samples, we projected all samples into a two-dimensional coordinate system using Uniform Manifold Approximation and Projection (UMAP) dimensionality reduction through the "umap" package, and clustering analysis was then performed with the "dbscan" package.
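A Python equivalent of this embedding-plus-clustering step (the paper used the R "umap" and "dbscan" packages) might look like the following; the eps and min_samples values are guesses, not the paper's settings.

```python
import umap                          # umap-learn
from sklearn.cluster import DBSCAN

def embed_and_cluster(X, eps=0.5, min_samples=10, seed=0):
    """X: samples x genes expression matrix. Returns the 2-D embedding and
    density-based cluster labels (-1 marks noise points)."""
    emb = umap.UMAP(n_components=2, random_state=seed).fit_transform(X)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(emb)
    return emb, labels
```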
The differential gene expression analysis between HPV-positive and HPV-negative samples within a given cancer type was performed with the "DEseq2" package (29). Genes with both |log2 fold change| > 1 and adjusted p-value < 0.05 represented the transcriptome features that differ between the two groups. To increase the reliability and accuracy of HPV status prediction, a lasso regression model was constructed for each cancer type with the sample's HPV status as the response variable and the gene expression levels as the predictor variables, using the "glmnet" package. HPV signature gene sets (the predictor variables) were derived by stepwise regression and were used to calculate the virus infection score (VIS) with the corresponding lasso model in each cancer type. VIS was defined as the sum of (regression coefficient × signature gene expression level) over the signature genes in each sample (Supplementary Table S3); a minimal sketch of this scoring scheme is given below. The relationship between VIS and NRPM was estimated by the Spearman correlation coefficient. AUC was calculated with the "pROC" package to verify the performance of each cancer prediction model, and two external GEO datasets (GSE117973 and GSE151666) were further used. In addition, the prediction models were extended to GDSC cell line data to capture HPV-like samples with transcriptomic features similar to those of HPV-positive cancer patients.
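The paper fitted lasso models with glmnet in R; in the sketch below, an L1-penalized logistic regression from scikit-learn stands in as an analogue, so the regularization parameterization differs, and the strength C is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_vis_model(X, y, C=1.0):
    """X: samples x genes log-expression matrix; y: 1 = HPV-positive.
    Returns the model, per-sample VIS, AUC, and the signature gene indices."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    coef = clf.coef_.ravel()
    vis = X @ coef                       # VIS = sum(coefficient x expression)
    auc = roc_auc_score(y, vis)
    signature = np.flatnonzero(coef)     # genes kept by the L1 penalty
    return clf, vis, auc, signature
```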
Connection Between VIS and Drug Response
To evaluate the connection between VIS and drug response, we combined all cancer types' VIS after z-score transformation and divided TCGA samples into four groups according to chemotherapy response: "complete response" (CR), "partial response" (PR), "stable disease" (SD), and "clinical progressive disease" (CPD). We explored the distribution of VIS in the four groups and calculated the proportion of chemotherapy response in different groups segmented by scaled VIS. GDSC data were divided into two categories according to the threshold of scaled VIS = 1, and the difference of the half maximal inhibitory concentration (IC50) was analyzed between the two categories.
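The grouping and test described here reduce to a few lines; the sketch below assumes VIS and IC50 arrays aligned over cell lines, with the threshold at scaled VIS = 1 as in the text.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_ic50_by_vis(vis, ic50, threshold=1.0):
    """Split cell lines at a z-scored VIS threshold and test the IC50 shift."""
    z = (vis - np.mean(vis)) / np.std(vis)
    hi, lo = ic50[z > threshold], ic50[z <= threshold]
    stat, p = mannwhitneyu(hi, lo, alternative="two-sided")
    return np.median(hi), np.median(lo), p
```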
Statistical Analyses
Fisher's exact test was used to evaluate the difference in gene mutation frequency between HPV-positive and HPV-negative groups. All comparisons of pathway activity and other indicators between the two groups were performed with the two-tailed Wilcoxon rank-sum test. In the GDSC data set, the difference in IC50 between two groups was assessed with the two-tailed t-test, or with the Mann-Whitney U test when the data were not normally distributed. All statistical analyses were performed in R.
HPV Improves Overall Survival for Four Cancer Types
Clinically, HPV-positive patients with HNSC have a better overall survival than HPV-negative patients (30). To confirm the impact of HPV on the prognosis for HPV-related cancers, we applied the log-rank test to analyze the differences in survival times between HPV-positive and HPV-negative groups. In 4/10 cancer types, namely CESC (p = 0.076), HNSC (p = 0.00075), STAD (p = 0.012), and UCEC (p = 0.013), HPV-positive patients exhibited a better prognosis (Figure 1A), although in CESC the difference did not reach conventional significance. In particular, in HNSC the survival rate of HPV-positive patients did not drop as rapidly as that of HPV-negative patients within the first 5 years. To further demonstrate the importance of HPV infection for patient survival, we examined the hazard ratio of HPV infection compared with other common clinical indicators through a multivariate Cox proportional hazards model in the above four cancer types. HPV remained a favorable prognostic factor for the four cancer types after stepwise regression screening based on the Akaike information criterion (Figure 1B). This result implies that HPV could be an indicator of patient prognosis that is as important as the clinical stage. These analyses hint that HPV infection induces an underlying mechanism that makes the prognosis of the hosts better than that of non-infected samples.
HPV-Positive Patients Have Stronger Cell Cycle Activity in the Carcinogenic Process
The mutation frequencies of several driver genes in the HPV-positive group with CESC and HNSC were significantly lower than those in the HPV-negative group (Figure 2A and Supplementary Table S4). The lower mutation frequency of TP53 in the HPV-positive group of CESC and HNSC indicated that their genomes were more stable. In the HPV-positive group, the lower mutation frequency of ARID1A in CESC, as well as of FAT1, CDKN2A, and FGFR3 in HNSC, demonstrated that the abnormal cell proliferation of HPV-positive patients did not arise from driver gene mutations. Although CYLD and ZNF750 mutations were enriched in HPV-positive HNSC, the samples with these two gene mutations together accounted for only 20% of the HPV-positive ones. These results indicated that those genes in the HPV-positive group did not play the role that TP53, the main cause of cancer, played in the HPV-negative group. Additionally, TP53 mutations in UCEC were enriched in the HPV-negative group, and PTEN mutations were enriched in the HPV-positive group. The total number of mutations in TP53 or PTEN exceeded 80% in both HPV-positive and HPV-negative patients. This result illustrated that there was no difference at the driver gene level between HPV-positive and HPV-negative groups in UCEC. We further applied the t-test to compare the expression levels of the differentially mutated driver genes between the HPV-positive and HPV-negative groups in the three cancer types. As shown in Supplementary Figure S1, the expression of these genes, including FGFR3 and TP53, was higher in the HPV-positive group than in the HPV-negative group in HNSC. The expression of PTEN and TP53 was higher in the HPV-positive group than in the HPV-negative group in UCEC, but there were no differences in CESC.

(Figure 1B legend: the red horizontal lines correspond to the 95% CI, on which the dot reflects the hazard ratio. Nx, regional lymph nodes could not be evaluated; N1, lymph node metastases with a maximum diameter of less than 3 cm; N2, lymph node metastases with a maximum diameter of less than 6 cm and greater than 3 cm; N3, the maximum diameter of metastatic lymph nodes is greater than 6 cm.)
We examined the differences in DDR pathway activity and other genomic instability indicators between HPV-positive and HPV-negative groups (Figure 2B). The DDR pathway activity in CESC and HNSC was enhanced in the HPV-positive group, which might be related to the lower mutation rate of TP53. The DDR pathway activity was decreased in the HPV-positive group of UCEC, COAD, and READ, and there were few changes in the remaining cancer types. It is worth noticing that the alteration of the genome instability indicators was consistent with the DDR repair pathway activity only in HNSC. One possible explanation is that there may be other DDR repair mechanisms in addition to the 10 DDR repair pathways.
Next, we compared the differences in mitotic oncogenic pathways between HPV-positive and HPV-negative groups (Figure 2B). HNSC was the cancer most affected by HPV infection, and the activity of the P53 pathway was also related to the low mutation rate of the TP53 gene in the HPV-positive group. In CESC and HNSC, HPV activated the cell cycle through different pathways, such as the PI3K and MYC signaling pathways in HNSC and the TGF-β signaling pathway in CESC. In addition, the cell-cycle activity of the HPV-positive group in COAD was lower than that of the HPV-negative group, indicating that the impact of HPV infection in this cancer differs from that in CESC and HNSC. In GBM, STAD, and UCEC, the cell-cycle activity did not change significantly, indicating that although individual mitotic oncogenic pathways of these cancers can be affected by HPV, this is not reflected in the cell cycle. In addition, we observed a considerable number of overlaps between DE genes and these essential pathways (Supplementary Table S5). Interestingly, we noticed that the changes in the DDR and mitotic oncogenic pathways were related to HPV expression. The cancer types with large-scale variations in DDR and carcinogenic pathway activity exhibited a high HPV expression level. In most cases of CESC, HNSC, UCEC, COAD, and READ, the NRPM value exceeded 100 or even reached 1,000 (Figure 2C). This phenomenon suggests that the impact of HPV on the host carcinogenic process might depend on its expression level. In summary, the carcinogenesis of the HPV-positive group in CESC and HNSC was triggered by the active cell cycle after HPV infection rather than by genome instability, which was a major difference between HPV-positive and HPV-negative patients in CESC and HNSC.
HPV Affects the Tumor Microenvironment
To obtain insights into the immune infiltration affected by HPV, we examined the differences in the abundance of immune cell infiltration between HPV-positive and HPV-negative groups. The results showed that HPV infection affected the tumor immune microenvironment in 8/10 cancer types and 49/64 cell types (Figure 3). The immune cell infiltration of HNSC was the most widely affected by HPV. The HPV-positive groups of CESC and HNSC shared the characteristics of elevated B cell and CD8+ Tcm infiltration. The upregulation of immune cell infiltration may be the reason for the better prognosis in HPV-positive patients, for example B cells and CD8+ Tcm in CESC, CD8+ Tcm in HNSC, NKT cells in STAD, and B cells in UCEC; these cells can directly or indirectly kill tumor cells. At the same time, the stromal cells of CESC and HNSC were decreased on a large scale, which is helpful for improving the prognosis of patients (31,32).
We next examined the alteration of immune indicators by HPV infection (Figure 4A). The CYT score increased in the HPV-positive group of CESC, HNSC, and COAD, indicating that HPV stimulated the enhancement of cytotoxic T cells (CTL) in these three cancer types. Studies have shown that cancer-testis antigen (CTA) contributes to tumorigenic signal transduction (33), and it has been regarded as a potential target of treatment (34,35). The CTA score in HPV-positive patients of CESC, HNSC, COAD, READ, and UCEC was decreased, implying that treatment strategies targeting CTA antigens might not work for these cancers. The reduction of neoantigens in the HPV-positive group of HNSC may be due to its lower mutation load. TCR is responsible for the detection of human "non-self" antigens (36). The increased TCR in the HPV-positive group of CESC, HNSC, and ESCA indicates an enhanced ability of T cell recognition. The higher BCR of the HPV-positive group in HNSC, COAD, and ESCA also indicates that HPV, as a foreign substance, stimulated the activation of the host humoral immune system.
The metabolic pathways were also affected by HPV infection. HNSC, UCEC, COAD, and READ patients received the energy for tumor cell growth through at least one metabolic pathway for the integration of energy or the tricarboxylic acid cycle. The upregulation of carbohydrate metabolism, nucleotide metabolism, and vitamin and cofactor metabolism metabolic subtypes is always associated with poor prognosis (37). The downregulation of these pathways in the HPV-positive group of HNSC and UCEC may be another reason for their better prognosis.
The integration of HPV DNA into the host genome is an important event that leads to abnormal proliferation and malignant progression during HPV-mediated carcinogenesis (38,39). The NHEJ (non-homologous end joining) pathway was more active in the HPV-positive group of HNSC (FDR = 2.84E-09), which provided the necessary conditions for HPV integration. HPV-integrated coding genes in HNSC tended to be enriched in GO terms that negatively regulate the host's immune response and cell adhesion (Supplementary Figure S2). Among the 60 HPV-integrated protein-coding genes, 47 were upregulated and 13 were downregulated according to the Tukey standard. The expression of HPV-integrated genes was abnormally increased in HPV-positive patients, including the well-known immune checkpoint genes CD274 and PDCD1LG2 (Figure 4C).
Construction of the HPV Status Prediction Model by Transcriptome Characteristics
To explore the potential connections within and between tumor types of HPV-related cancer patients, we used the UMAP method to reduce the dimensionality of the transcriptome in all samples and then projected the samples into a two-dimensional coordinate system (Figures 5A, B). Cancer samples tended to cluster according to cancer type and were also closer for similar tissues among cancer types (the top-left corner was occupied by four types of digestive tract cancers, and the center of the coordinate system by gynecological cancers). It is worth noting that CESC samples were close to HNSC samples and that HPV-infected samples in HNSC tended to cluster with CESC, implying that HPV-positive samples in HNSC and CESC samples were relatively similar at the transcriptomic level (Figure 5B). This is reasonable, as HNSC and CESC are both squamous cell carcinomas in terms of cell origin (40). This also explains the similar changes in several pathways and in the tumor microenvironment in the HPV-positive groups of the two cancers. In order to construct and evaluate the performance of HPV prediction models based on HPV-related transcriptome features, we used lasso regression to screen differential genes related to HPV infection status and obtained the signature gene set and prediction model for the 10 cancers (Supplementary Table S3). Next, lasso regression combined with the signature gene sets was used to calculate the virus infection score (VIS) for each sample. The VIS was significantly positively correlated with the NRPM value (correlation coefficients from 0.47 to 0.96, Figure 5C). We also applied the prediction model for a specific cancer type to other cancer types and used the AUC value to evaluate the accuracy of each model across cancer types (Figure 5D). Each prediction model had the best efficiency in predicting the HPV infection status for its own cancer. However, some models still had high AUC values (AUC > 0.90) when applied to other cancer types, such as the models established in CESC and HNSC as well as the models in COAD, READ, and UCEC (Figure 5D).
High AUC values were still achieved when two sets of external data (HNSC: GSE117973, CESC: GSE151666) were used to verify the classification efficiency of the models (Figure 5E), indicating that the models of CESC and HNSC were interchangeable (AUC >0.9).
VIS Is Associated With Drug Response
To explore whether there is a relationship between VIS and drug response, we divided the TCGA samples into four groups according to the RECIST standard. We found that VIS was related to the chemotherapy response of the TCGA samples. When the scaled VIS was >2, the rate of complete response (CR) to chemotherapy among TCGA samples in stage III and stage IV was 94%, with no drug resistance (Figures 6A, B), and the CR rate was much greater than that of the scaled VIS <2 group (OR = 11.10, p = 0.0034). Scaled VIS was also connected with drug response in GDSC cell line data for high-confidence cancer types, i.e., those whose correlation coefficients between VIS and NRPM exceeded 0.8. Some drugs had lower IC50 values in cell lines with high scaled VIS, indicating that these drugs were more effective in such cell lines (Figure 6C). Interestingly, more than half of the samples (59%) in the CR group with scaled VIS >2 had received platinum-based chemotherapy drugs in the TCGA dataset, and a lower IC50 value for cisplatin was also found in high-scaled-VIS cell lines. Expression of the efflux gene ATP7B was significantly lower in the scaled VIS >2 group, which may explain the different chemotherapy outcomes with cisplatin (Supplementary Figure S3). No immunotherapy drugs were included in the above analysis, so we explored the feasibility of immunotherapy for scaled VIS >2 samples. Studies have shown that the HLA family (41), immune cells (42), and immune checkpoints (43) can affect immunotherapy outcomes. We found significantly increased expression of the HLA family, abundance of immune cells (except NKT cells and macrophages), and expression of immune checkpoints (except CD276) in the scaled VIS >2 group in TCGA data (Figures 6D-F). These differences indicate that patients with higher VIS may be more likely to benefit from immunotherapy.
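The odds-ratio comparison of CR rates between the two VIS groups can be reproduced with a standard 2x2 test. The counts below are purely illustrative (chosen to roughly match the reported 94% CR rate), not the paper's data.

```python
from scipy.stats import fisher_exact

# Illustrative 2x2 table: rows = scaled VIS > 2 vs <= 2;
# columns = complete response (CR) vs non-CR, stage III/IV samples only.
table = [[17, 1],
         [60, 42]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```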
DISCUSSION
We have discovered that HPV contributes to favorable prognosis in CESC, HNSC, UCEC, and STAD (Figure 1B), implying that even though HPV is a carcinogen, it can also activate as-yet-uncertain host mechanisms that prolong survival. However, HPV viral load was not significantly correlated with overall survival (Spearman's rank correlation, p = 0.2012). Clinical associations of HPV positivity were further analyzed with the chi-square test or Fisher's exact test in R (Supplementary Table S6). HPV-positive tumors were more associated with lower stages (staging in UCEC, pathological T in STAD and HNSC, pathological N in HNSC, and pathological M in CESC) than HPV-negative tumors. These associations are the composite effects of the carcinogenic process and the tumor microenvironment.
The HPV E6 and E7 oncoproteins are the dominant paradigm for tumorigenesis. The expression of E6 stimulates p53 degradation, while the expression of E7 degrades Rb, leading to an increase in E2F-dependent transcription and a deregulation of the cell cycle without control of DNA replication, DNA repair, and apoptosis (44). In HPV-positive cervical cancer cell lines, knocking down E6/E7 increases p53 at the protein level, thereby hindering cell growth and triggering cell death in vitro and in vivo (45). The significantly lower mutation frequency (Figure 2A) and higher expression (Supplementary Figure S1) of TP53 in the HPV-positive groups of HNSC and UCEC imply more functional p53 and might partly explain the increased chemosensitivity and radiosensitivity (46, 47). The loss of cell-cycle control induced by upregulated E2F1 in the HPV-positive groups of CESC, HNSC, and UCEC, and by downregulated RB1 in the HPV-positive group of STAD, played an important role in the formation and progression of these four cancers. The third oncoprotein, E5, expressed together with two regulatory proteins (E1 and E2), contributes to p53-dependent enhanced proliferation in vitro and activates the FGFR pathway to accelerate tumorigenesis in vivo (48). Members of the FGFR family were upregulated in the HPV-positive groups of the four cancers (Supplementary Table S6), suggesting combined inhibition of FGFR and mTOR as a targeted therapy (48, 49).
We identified two cancer types, CESC and HNSC, whose driver gene mutations were enriched in HPV-negative patients; in particular, a low frequency of TP53 mutation was a common feature of HPV-positive patients in CESC and HNSC. The P53 protein plays an important role in maintaining genome stability (50, 51); accordingly, activation of DNA damage-repair pathways was observed in HPV-positive patients of these two cancers (Figures 2A, B). Mutated ARID1A can cause abnormal cell proliferation and block immune checkpoint therapy (52-54), and the co-occurrence of a lower mutation frequency of ARID1A and higher infiltration of CD8+ Tcm may make HPV-positive patients in CESC more suitable for immune checkpoint therapy (Figures 2A, 3). We also found that the mutation frequency of the antitumor gene PTEN (55, 56) in the HPV-positive group of CESC was significantly lower than in the HPV-negative group. The low mutation frequencies of the two well-known tumor-suppressor genes TP53 and PTEN in the CESC HPV-positive group indicate that these patients received more "help" in the process of fighting tumor cells. The mutation frequencies of several tumor-suppressor genes, including FAT1, CDKN2A, FGFR3, and CASP8, were lower in the HPV-positive group of HNSC, and the mutation frequencies of FAT1, CDKN2A, and CASP8 were even 0 (Supplementary Table S4), implying that the processes regulated by these genes were not disrupted. Knocking down FAT1 and CASP8, separately or together, results in enhanced cell motility and clonal development (57). Given the significantly lower TP53 mutation frequencies in the HPV-positive groups of CESC and HNSC, we conjectured that the DNA damage-repair machinery was stronger in these groups. We therefore calculated the activity of 10 DNA damage-repair pathways with the ssGSEA method and examined differences in pathway activity and other genomic instability indicators between HPV-positive and HPV-negative groups. The results were consistent with our conjecture. We did not observe changes in the genomic instability indicators in CESC, which might be because the DDR pathways were not activated as strongly as in HNSC. These results indicate that genomic instability might not be the major cause of tumor occurrence in the HPV-positive groups of CESC and HNSC. Together with the differences in carcinogenic pathways, we found that HPV infection in CESC and HNSC allowed patients to bypass genomic instability in the carcinogenic process and directly acquire the characteristics of an active cell cycle, thereby causing abnormal proliferation (58-60). These results also remind us that TP53 mutation testing alone is not appropriate for everyone at risk of cancer; combining gene mutation testing with HPV status is a better way to predict HNSC risk, given the low TP53 mutation frequency in HPV-positive patients. We also found that the most common mutations in TP53 were R248Q/W (19 of 431 mutations), E285K (3 of 28 mutations), and R273C/H/S (23 of 217 mutations) for HNSC, CESC, and UCEC, respectively. R248 in p53's DNA-binding domain (DBD) interacts directly with DNA's minor groove, and the R248Q mutation causes conformational alterations in areas of the DBD far from the mutation site (61).
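The pathway activities above were computed with ssGSEA. As a rough, self-contained stand-in (not the authors' implementation), a per-sample mean-rank score over a pathway's member genes approximates relative pathway activity. The NHEJ gene list below is illustrative.

```python
import pandas as pd

def mean_rank_pathway_score(expr: pd.DataFrame, gene_set: list) -> pd.Series:
    """Crude per-sample pathway activity: average percentile rank of the
    gene_set members within each sample (higher = more active).

    expr: genes x samples expression matrix.
    """
    ranks = expr.rank(axis=0, pct=True)          # percentile rank per sample
    members = ranks.index.intersection(gene_set)
    return ranks.loc[members].mean(axis=0)

# Hypothetical usage with an illustrative NHEJ gene list:
# nhej = ["XRCC4", "XRCC5", "XRCC6", "LIG4", "PRKDC", "NHEJ1"]
# scores = mean_rank_pathway_score(expr, nhej)
```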
Tumor mutations at site E285 in the H2 region of p53 may decrease essential interactions that stabilize H2, implying that the inactivation mechanisms may be linked to the loss of local structure around H2, reducing overall stability to a meaningful degree (62). Garg et al. found that the oncogenic p53 variations R273 (R273H, R273C, and R273L) not only lose their DNA-binding capabilities but also have different structural stability, aggregation, and toxicity profiles and lead to different types of cancer pathogenesis in vivo (63).
It is important to explore the impact of HPV infection on the tumor microenvironment (TME), because immune infiltration and metabolism in the TME are associated with patients' prognosis (64-66). HPV infection stimulates an immune system response, which may be why HPV-infected patients have better prognosis than uninfected patients in CESC, HNSC, UCEC, and STAD. The immune system in HNSC showed the strongest response after stimulation by HPV (Figure 3). Interestingly, CD8+ memory T cells were increased in HPV-positive patients in both CESC and HNSC, implying that HPV vaccination may have the potential to prevent the HPV infections that lead to the occurrence of HNSC. We also found that the increase in CD8+ T cells required the cooperation of dendritic cells: upregulated dendritic cells presented more antigens to CD8+ T cells, in turn upregulating them (Figure 3). Stromal cells in the tumor microenvironment were also affected by HPV. In patients with HPV-positive CESC and HNSC, the reduction of multiple stromal cell types (Figure 3) exerted a positive effect on prognosis (31, 32). TCR and BCR increased in the HPV-positive group of HNSC without an increase in mutation load or neoantigens, implying that HPV might express viral antigens that are recognized by T cells and B cells. Notably, a general trend could be observed in which significant differences in the carcinogenic process and tumor microenvironment occurred in cancers with high HPV expression levels (Figures 2A-C, 3). Expression analysis revealed that HPV integration disrupted gene expression, but the upregulation of CD274, PDCD1LG2, FOXA1, and TNFSF4 provided opportunities for tumor immunotherapy (Figure 4C). Although HPV-integrated genes were enriched in GO terms that negatively regulate immunity, the presence of HPV still irreversibly activated the cell-mediated immune response (Supplementary Figure S2 and Figure 3).
Since HPV has a significant impact on the tumor microenvironment, which is crucial to the chemotherapy outcomes of cancer patients (67-69), we analyzed whether HPV could affect chemotherapy response. After developing HPV prediction models from transcriptome characteristics, the prediction score VIS was positively correlated with the abundance of virus expression, with correlation coefficients ranging from 0.47 to 0.96. The prediction models of cancer types with a high correlation coefficient (R > 0.8) were extended to GDSC data to estimate the HPV-like propensity of each cell line. When scaled VIS reached a certain level (scaled VIS > 2), patients were highly sensitive to chemotherapy in TCGA (Figure 6A). We further studied the sensitivity of HPV-like cell lines to drugs and screened out drugs that were associated with scaled VIS (Figure 6C). Although we have not collected appropriate immunotherapy data, we analyzed the relationship between VIS and immunotherapy signatures, and the results showed that patients with high VIS may benefit more from immunotherapy (Figures 6D-F). If suitable data become available in the future, the potential application of VIS as an immunotherapy marker can be explored further.
Although there was a higher occurrence of HPV infection in males with STAD or HNSC (Supplementary Table S6), gender was not an important factor for overall survival (Figure 1B). We also explored whether there were differences in immune cell infiltration and drug response between males and females with and without HPV in STAD and HNSC. The results showed no significant difference in immune cell infiltration between males and females among HPV-positive patients, but there were some significant differences (such as for CD8+ Tcm) among HNSC HPV-negative patients (Supplementary Table S8). Similarly, there was no significant difference in drug response in either the HPV-positive group (Fisher's exact test, p = 0.72) or the HPV-negative group (Fisher's exact test, p = 0.83). Thus, gender does not appear to be a major factor in HPV-positive cancer.
In conclusion, we conducted a multilevel analysis of a variety of cancer types with HPV infection, covering both the carcinogenic process and the tumor microenvironment, and we propose that high HPV expression levels may serve as a reference for precision medicine in related cancer patients.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
PUMICE LAYER: A SOLUTION TO DIMINISH THERMAL ON HORIZONTAL LEFTOVER PLACE IN ROOFTOP
There are many ways to mitigate thermal load on buildings, such as installing horizontal and vertical sun-shading devices on facades facing all four directions. However, rooftops are often ignored. On the rooftops of low-rise to high-rise buildings, there are leftover places exposed to solar heat radiation all day. Some rooftops are equipped with polymer thermal roof insulation, and some lack outer thermal insulation altogether. This research aims to find a solution that diminishes horizontal thermal radiation by using an eco-friendly material, pumice, as outer thermal insulation. An experimental method was used with one roof model as a conventional rooftop (without outer insulation) and another as a modified rooftop model covered with pumice. Paired HOBO U12-012 data-logger temperature sensors were used to measure rooftop surface temperatures and room model temperatures. Results show that thermal radiation was blocked efficiently: the pumice-covered rooftop stayed at around 26°C and reduced room temperature by 8.4°C.
INTRODUCTION
Over the last decades, due to high demand, small shops and office spaces have been built quickly in the urban and suburban areas of Surabaya. They are usually three- to four-story buildings; the first and second floors are used for business, while the third and fourth are living areas (see Figures 1B and 1C), though most shop-houses are used for business purposes only. The rooftops of shop-houses are mostly horizontal concrete constructions, built without any external insulation layers. The rooftop space usually serves as storage for water tanks and outdoor air-conditioning units, with leftover (empty) spaces; see Figure 1A. Many shop- or office-houses have a large leftover place on the rooftop that is exposed to solar radiation every day, month, and year. Each flat or tilted outer concrete rooftop responds to its climate zone and solar irradiance, which affects the room temperature beneath the uninsulated flat rooftop. The horizontal solar irradiance on the roof in the tropical zone (Surabaya, 7°17'-21') is greater than the vertical solar irradiance on the façade. By using an online solar irradiance calculator that has collected solar insolation data for 22 years for cities around the world, we obtained solar insolation data for all directions and times (http://www.solarelectricityhandbook.com/solar-calculator.html). Based on this calculator, the average solar insolation from January to July ranges from 4.68 to 4.98 kWh/m²/day, and fluctuates from 5.48 to 5.97 kWh/m²/day from July to November, except in December (4.9 kWh/m²/day) (see Figure 2). The yearly average solar insolation is 4.71 kWh/m² per day according to the solar calculator, while the result from the solar insolation experiment was 5.1 kWh/m² per day (Figure 4) (Mintorogo, 2009). If one shop-house measures 5 m × 20 m, the flat rooftop area is 100 m². This means about 510 kWh/day of solar energy heats the rooms beneath the rooftop. This phenomenon would be different if the solar insolation hit photovoltaic (PV) panels laid on the flat rooftop, which absorb photons and release electrons in the form of electric current (renewable energy) (Carl, 2014). The research objective is to find an eco-friendly, sustainable insulation that can be applied as outdoor rooftop insulation while keeping the environment clean and free of chemical substances. Celik (2016) stated that pumice, owing to its volcanic origin, is permeable and has low thermal conductivity, making it a better thermal solution for roof insulation. Roof insulation can be built up from roofing membranes made of hot tar, single-ply material, polyurethane, asphalt, or bitumen. However, these are not natural insulation materials, which means they are not environmentally friendly (see Figures 5A and 5B).
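The rooftop heat-load estimate above (100 m² at about 5.1 kWh/m²/day gives roughly 510 kWh/day) can be reproduced with a one-line calculation; all values follow the text.

```python
# Daily solar energy incident on a flat shop-house rooftop (values from the text).
length_m, width_m = 20.0, 5.0
area_m2 = length_m * width_m                  # 100 m2
insolation_kwh_m2_day = 5.1                   # measured yearly average
daily_energy_kwh = area_m2 * insolation_kwh_m2_day
print(f"{daily_energy_kwh:.0f} kWh/day")      # -> 510 kWh/day
```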
Materials
Pumice grains are used as outdoor eco-insulation on a horizontal flat concrete roof in this research. Pumice is very light and porous; it is a natural igneous rock formed from glassy volcanic lava that solidifies after eruption, and it readily adsorbs water and air (Kitis et al., 2007). Typically, pumice is light-coloured and non-crystalline. Figure 6A shows pumice obtained from the mountain areas of Lombok (Eastern Indonesia). The grain size ranges from 3 cm to 6 cm (Figure 6B). The porosity of pumice reaches 90% of its volume, and pumice floats on water (Ismail et al., 2014).
Experimental Models
There were two experimental models: 1) a bare rooftop model used as a reference (fundamental state, no insulation applied), and 2) a modified rooftop model covered with pumice as outer eco-insulation (see Figures 7A and 7B). Four temperatures were measured across the two models: 1) the reference rooftop surface temperature and its room model temperature, and 2) the pumice rooftop surface temperature and its room model temperature (Figure 7C).
Measured Tools
Measurement tools were from ONSET (USA): HOBO U12-012 data loggers with thermocouple probes. Four data loggers were used to record the surface and room model temperatures. For each model, one HOBO data logger measured the rooftop surface temperature (via thermocouple probe), and another measured the room model temperature with the sensor attached to the HOBO unit (Figures 8A and 8B).
Methods
The research is an in situ experiment on the seventh-floor horizontal flat concrete rooftop at Petra Christian University. The two rooftop models (conventional and pumice-covered) were measured simultaneously over several months. Significant thermal data were obtained in the critical months: June (less heat on rooftops because the sun is at 23.5° north) and September/October (the hottest months because the sun is above the equator). The measured outcomes were: 1) the conventional model's roof surface temperature and room temperature, and 2) the pumice-covered model's roof surface temperature and room temperature.
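HOBO U12-012 loggers export time-stamped readings; the sketch below shows one plausible way to align the paired surface series and summarize the midday reduction. File and column names are assumptions, not details from the paper.

```python
import pandas as pd

# Assumed CSV exports from the HOBO U12-012 loggers; column names hypothetical.
bare = pd.read_csv("bare_roof.csv", parse_dates=["timestamp"], index_col="timestamp")
pumice = pd.read_csv("pumice_roof.csv", parse_dates=["timestamp"], index_col="timestamp")

hourly = pd.DataFrame({
    "bare_surface_C": bare["surface_C"].resample("1h").mean(),
    "pumice_surface_C": pumice["surface_C"].resample("1h").mean(),
})
hourly["surface_diff_C"] = hourly["bare_surface_C"] - hourly["pumice_surface_C"]
print(hourly["surface_diff_C"].max())   # peak reduction, cf. ~23.5°C in June
```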
RESULTS AND DISCUSSIONS
With pumice sheltering the rooftop as outer insulation, the room temperature should drop by several degrees Celsius. Table 1 shows that the roof surface temperature on the bare rooftop during June 2018 reached 48.9°C at noon. The rooftop surface temperature can be cut down by 23.5°C by putting pumice with 0.015 m thickness on top of the flat roof. By cooling the roof surface by 23.5°C, the room temperature of the pumice model was lowered by 6.4°C (see Table 2). Table 1 also shows that from 11 am to 1 pm the rooftop surface temperature of the reference model increased by 3°C (45.9 to 48.9°C), whereas the roof surface covered with pumice increased by merely 1.4°C, and its surface temperature remained stable throughout the day (morning-evening-night). The rooftop surface temperature with pumice decreased by 3.4°C from 6 pm to 5 am, while the surface temperature of the reference model decreased by 4.7°C over the same period. A conventional roof surface (with no material on the rooftop) releases its stored heat faster than the pumice roof (which hinders thermal outflow) during the night (cool night radiation). However, the pumice roof achieves a better thermal load during the day (lower and more stable thermal loads). Table 3 shows that the reference roof model reached its highest temperature, 53.5°C, at 11 o'clock in September due to the high concentration of solar radiation on the flat surface. By comparison, the highest roof temperature of the reference model in June 2018 was only 48.9°C, at noon (Table 1). In addition, the pumice-layered rooftop gradually lost a small amount of stored roof heat to the cool sky (after sunset) from 6 pm to 6 am (the next morning); the pumice rooftop lost about 3°C less heat than the conventional rooftop.
CONCLUSION
The results show that in June and even in September (the hottest month), a rooftop covered with pumice as outer insulation on a flat roof can effectively decrease room temperature by 6.5°C to 8.4°C during the daytime compared to a conventional bare flat rooftop (a rooftop without insulation). Nevertheless, the conventional rooftop releases its stored roof mass heat faster than the pumice rooftop, by about 5°C during the night-time. Because of pumice's very porous structure, the stored heat radiation is released continuously. The pumice layer on top of the flat leftover roof sustained a stable average rooftop temperature of around 26°C during the day and a room temperature of 29°C at night in September 2018.
Dog Breed Differences in Visual Communication with Humans
Domestic dogs (Canis familiaris) have developed a close relationship with humans through the process of domestication. In human-dog interactions, eye contact is a key element of relationship initiation and maintenance. Previous studies have suggested that canine ability to produce human-directed communicative signals is influenced by domestication history, from wolves to dogs, as well as by recent breed selection for particular working purposes. To test the genetic basis for such abilities in purebred dogs, we examined gazing behavior towards humans using two types of behavioral experiments: the 'visual contact task' and the 'unsolvable task'. A total of 125 dogs participated in the study. Based on the genetic relatedness among breeds, subjects were classified into five breed groups (Ancient, Herding, Hunting, Retriever-Mastiff, and Working). We found that it took longer for Ancient breeds to make eye contact with humans, and that they gazed at humans for shorter periods of time than any other breed group in the unsolvable situation. Our findings suggest that spontaneous gaze behavior towards humans is associated with genetic similarity to wolves rather than with recent selective pressure to create particular working breeds.
Introduction
Domestic dogs (Canis familiaris) have been living close to humans (Homo sapiens) for at least 15,000 to 50,000 years, a relationship that probably came about through multiple domestication events [1][2][3][4][5]. Dogs are currently thought to be one of the best models for understanding cognitive skills in cross-species communication [6][7][8][9], and a number of studies have focused on the ability of dogs to comprehend and respond to various types of human communicative signals (e.g., [6,8,10,11]). For instance, it is known that dogs are able to process many types of human gestures including pointing, bowing, nodding, head turning and gazing as cues for finding the location of hidden food [11].
It has been suggested that the skills required by dogs to interact with humans were acquired through the process of domestication (e.g., [8,12]). Comparative studies of dogs and their closest living relative, the wolf (Canis lupus), have shown that hand-reared wolves are less responsive to human social cues and less prone to showing human-directed gaze signals than domestic dogs (i.e., the 'wolf remnant' hypothesis). In a previous study of breed differences in gazing behavior, Primitive and Molossoid groups showed similar gazing behavior, while both groups were outranked by Hunting/Herding breeds. On the other hand, studies evaluating gaze responses in a direct human-to-dog feeding interaction (with food in sight but out of reach) found significant breed differences in human-directed gazing behavior (e.g., [32,33]). For instance, in one study, Retrievers (a hunting breed specialized in retrieving prey) spontaneously gazed at humans for longer periods of time than German Shepherds (a herding or livestock-protecting breed) or Poodles (a companion breed) [34]. Although these results seem to support the 'working purpose' hypothesis, the limited number of breeds and working types included in these studies does not allow any firm conclusion to be drawn.
In summary, previous data from studies on breed differences in communicative behavior provides partial support for both the 'wolf remnant' and the 'working purpose' hypotheses, and hence it is not clear whether genetic similarity to wolves or to working types has a greater influence on modern dogs' abilities to communicate with humans. Given that spontaneous gazing at humans can facilitate the initiation and maintenance of dog-human communication and bonding [16,32,33,35], further research examining how the domestication process has contributed to modern dog's use of gazing behavior towards human is warranted.
The aim of the present study was to estimate the influence of selective pressures on the ability of dogs to spontaneously produce communicative signals such as eye contact and gazing towards humans. In particular, we predict that if the genetic remnant of wolves has a significant influence on modern dogs' behavior, then Ancient breeds will show less human-directed gazing behavior than other purebred dogs. In contrast, if selection for specific working purposes had a significant influence on the development of dog communicative abilities, then particular working breed groups will display a greater capacity for human-directed gazing behavior. Although dogs' genetic similarity with wolves and their selection history for working purposes have been closely intertwined, recently published data on the genetic clustering of dog breeds offer a tentative way to estimate the indirect impact of genetic factors on dog behavior [21].
To address the current scarcity of data on the production of communicative signals by dogs, the present study also aims to resolve some of the methodological issues of previous studies. Firstly, we tested a wide range of 26 pure breeds, including major modern pure breeds as well as ancient breeds. These breeds were further classified into broader breed groups (i.e., Ancient, Herding, Hunting, Retriever-Mastiff and Working) that cluster genetically according to recent genomic analysis [21]. Grouping breeds in this way is important in order to estimate the possible effect of selective pressures that may be shared by more than one breed, as well as to compare inter-breed variation in the ability to exchange communicative signals with humans. Secondly, we used two different experimental paradigms to elicit spontaneous gaze responses towards humans when requesting out-of-reach food rewards: the 'unsolvable task' and the 'visual contact task'. The use of multiple behavioral tasks allows us to examine whether each breed group has a consistent behavioral pattern for sending communicative signals to humans independently of the situation or task. Finally, for comparative purposes, we also analyze our data using the same breed classification used in the previous study that examined dogs' communicative abilities in a relatively large number of breeds [29].
Ethical Statement
The current study was conducted in strict accordance with the 'Guidelines for the Treatment of Animals in Behavioural Research and Teaching' by the Animal Behavior Society/Association for the Study of Animal Behaviour, and was approved by the ethical committee at the Wildlife Research Center, Kyoto University (WRC2010EC001). Dogs were recruited through advertisements in veterinary clinics, trimming salons, local parks and breed specialists. Signed informed consent for participation in this study was obtained from the owners.
Subjects
All subjects were purebred dogs living as companion animals at their owner's home. Highly trained dogs (i.e., dogs that engage in sport activity with their owners such as agility, disc, and other games and/or dogs that have training experience for working purpose) were not included in this study. A total of 125 adult dogs participated in this study. Subjects comprised 60 females and 65 males from 26 different breeds. Five dogs (1 Border Collie, 1 Doberman, 1 Portuguese Water Dog, and 2 Shiba Inu) were excluded from the analysis because they were not able to complete the behavioral experiments: two dogs did not take the food rewards from the experimenter's hand, and the other three never approached the apparatus used in one of the experiments. As a result, 120 dogs consisting of 57 females and 63 males with a mean age of 68.26 months (5.67 years old) were included in this study (Table 1).
Based on the recently published data on the genetic clustering of dog breeds [21], subjects were classified into five breed groups: Ancient, Herding, Hound, Retriever-Mastiff and Working. The Ancient group consisted of five breeds with similar genetic components to gray wolves [1,21,22], and that were originally from outside central Europe (i.e., Middle-East Asia, East Asia or Siberia). The other four breed groups included 21 breeds originally from European countries that differed in their primary use (i.e., working function) as well as genetic relatedness [21].
Behavioral Experiments
To evaluate breed differences in producing visual signals, we tested the dogs' spontaneous gaze at human faces using two experimental tasks: the visual contact task and the unsolvable task (see S1 and S2 Movies). Small pieces of food (e.g., chipped beef) were used as rewards in both tasks. To test the dog's motivation for the reward, the experimenter offered the dog one piece of food before and after each task, confirming that all dogs were highly attracted to the reward. Each dog was tested separately in a familiar environment (e.g., in a room (N = 115) or garden (N = 5) at the owner's home) with no leash. In all cases, the test was carried out in a restricted area of at least 2 square meters. To counterbalance the effect of task order, half of the subjects were given the visual contact task first, and the other half were given the unsolvable task first. This counterbalancing was conducted within each breed group. The owner was present throughout the experimental session and was instructed not to give any feedback to the dog for any of its responses. All experimental sessions were videotaped. Visual Contact Task. The visual contact task used in this study is a modification of the one used in Study 2 by Jakovcevic et al. [34]. The task consisted of two phases lasting 90 seconds each. In the first, warm-up phase, the experimenter moved around the test area while calling the dog's name and making physical contact with the dog in a friendly manner. The dog was off leash and free to move around the testing area. During this phase, the subject received, at random intervals (mean 14.56 seconds), a total of four pieces of food directly from the hand of the experimenter. Food rewards were placed in a container visible to the subject but out of his/her reach. To focus the dog's attention on the feeding place, the experimenter stood at the exact same position, i.e., next to the food container, when giving the food rewards to the subject. Importantly, during this warm-up phase, the experimenter avoided any eye contact with the dog.
Right after the 90-second warm-up phase had elapsed, the test phase started. At this point, the experimenter took one last piece of food and gave it to the dog while standing by the food container. Immediately after that, the experimenter stopped moving and initiated eye contact with the dog. The experimenter offered continuous eye contact but the dog was able to move freely and was not forced to make and/or maintain eye contact with the experimenter until the end of the second phase. The dog's gaze responses during the second phase were subjected to analysis.
Unsolvable Task. The 'unsolvable task' [16] consisted of six consecutive 'solvable' trials (i.e., the dog could reach the food reward) followed by a single 'unsolvable' trial. The experimental apparatus comprised a 12 × 20 cm transparent plastic container and a 30 × 30 cm wooden board. After calling the dog's name, the experimenter set a piece of food at the center of the wooden board and then put the plastic container over it. The bait of the apparatus was visible to the dogs, but out of their reach (i.e., the experimenter held up the apparatus in front of his/her face while baiting it, and prevented the dogs from touching the apparatus). The experimenter then placed the apparatus on the ground so that the subject was able to manipulate it and get the food reward by removing the container. During the solvable trials all dogs learned how to get the food reward from the apparatus. After the sixth solvable trial, the experimenter presented to the dog one unsolvable trial in which the container was fixed to the wooden board in such a way that the dog could not get the food anymore. During the unsolvable trial, both the experimenter and the owner stood quietly behind the dog at a distance of approximately 1.5 m, while the dog (off leash during the whole experimental session) was free to move around the experimental area. The owner was instructed not to respond to any of the dog behaviors except for eye contact. The dog's behavior was recorded for 60 seconds after the unsolvable trial was presented.
Analysis
The dogs' behavior in the two experiments was coded from the subsequent video analysis. Behavioral coding was performed on a 0.3-second time scale by two independent observers naïve to the purpose of the study.
For the visual contact task, we measured: (1) duration of the first gazing (i.e., time from the moment the dog turned/lifted its head towards the experimenter for the first time until the moment it turned its head away from him), and (2) total duration of gazing at the experimenter during the 90-second test phase.
For the unsolvable task, we measured three behavioral variables: (1) latency to the first gazing (i.e., the time elapsed from the moment the unsolvable trial started to the moment the dog turned/lifted its head for the first time back towards the experimenter or the owner), (2) total duration of gazing at the person, and (3) total duration of physical contact with the apparatus (i.e., the time the dog spent manipulating the apparatus including touching, scratching, pushing, sniffing and licking). To evaluate the general tendency of the dog's gaze responses towards humans, gazing at the experimenter and gazing at the owner were combined.
A subset of the videos (N = 30; 25.0%) was randomly selected and coded by an observer naïve to the purpose of the study. Inter-observer reliability testing using Cohen's Kappa indicated a strong agreement between coders (visual contact task, first gazing duration: k = 0.691, p < 0.001; total gazing duration: k = 0.935, p < 0.001; unsolvable task, latency to the first gazing: k = 0.760, p < 0.001; total gazing duration: k = 0.862, p < 0.001; total duration of apparatus manipulation: k = 0.715, p < 0.001).
To examine the effect of breed group on the dogs' gaze responses, we used generalized linear models (GLM). The explanatory variables were breed group (Ancient, Herding, Hound, Retriever-Mastiff or Working), age, their interaction, and sex, while the response variables comprised each of the five behavioral variables. According to the distribution of the response variables, we applied a negative binomial error structure with a log link function for the five behavioral variables. 'Ancient' and 'female' were entered as reference categories when constructing the parameter estimates (β) using GLM. To test the fixed effect of each explanatory variable, a likelihood ratio test with chi-square statistics was carried out (type III test). We used the Steel-Dwass test as a supplementary post-hoc test. The effect of age was estimated by calculating Spearman's ρ or Pearson's r. Analyses were run in R version 2.15.2 (R Foundation for Statistical Computing).
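The authors fit these negative binomial GLMs in R. A comparable sketch in Python with statsmodels is shown below; the data file and column names are placeholders, and the dispersion parameter is fixed at the statsmodels default rather than estimated from the data as MASS::glm.nb does in R.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder columns: total_gaze (count of 0.3-s coding units), breed_group,
# age (months), sex. 'Ancient' and 'female' are the reference categories,
# mirroring the paper's setup.
df = pd.read_csv("gaze_data.csv")  # hypothetical file
model = smf.glm(
    "total_gaze ~ C(breed_group, Treatment('Ancient')) * age"
    " + C(sex, Treatment('female'))",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```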
Finally, the mean duration of physical contact with the apparatus was not affected by any of the explanatory variables included in the analysis (breed group: β = -0.31).
Comparison with a previous study
We then followed the classification of Passalacqua et al. [29] and re-ran the analysis using only the dog breeds that were used in their study. Unlike in [29], there was no significant difference between Hunting/Herding and Molossoid breeds in any communicative behavior in this additional analysis.
Discussion
To estimate the influence of selective breeding on modern dog's ability to exchange visual communicative signals with humans, the present study examined potential breed group differences in human-directed gazing behavior using two behavioral tasks. During the tests, almost all dogs gazed spontaneously at humans suggesting that modern domestic dogs frequently send visual signals to humans as communicative cues when seeking food rewards [16,36].
However, not all breeds were equally prone to using these social cues. We found that it took longer for Ancient breed dogs to establish eye contact with humans, and that they gazed at human faces for shorter periods of time than other breed groups in the unsolvable situation. It could be argued that these inter-breed differences are merely the result of inter-individual differences in motivation to seek food rewards and/or persistence in engaging with a problem-solving task [37,38]. However, this is unlikely since breed differences were not found in the duration of physical manipulation of the apparatus during the unsolvable task, and all dogs consumed the piece of food offered by the experimenter at the end of the experimental sessions. Thus, these results suggest that the level of engagement in the unsolvable task did not differ between breed groups, but that Ancient breeds were particularly less prone to using gaze signals with humans even though they were equally motivated to seek the reward.
To explore the situation-dependency of behavioral patterns among the different breed groups, we used multiple behavioral tasks. Results indicated that a statistically significant effect of breed group was found only in the unsolvable situation. Thus, dog breed differences in human-directed gazing behavior seem to vary depending on the task or situation. In the visual contact task, the human's gaze preceded the dog's gaze, and the dogs had only to keep eye contact with humans to beg for the reward. In the unsolvable task, on the other hand, the dogs had to divert their attention from the experimental apparatus and spontaneously produce gazing behavior (turning back to look at the humans) in an attempt to send them communicative signals. The latter could be considered more complex due to the involvement of a problem-solving component, and the maintenance of the dogs' gaze was lower than in the former (i.e., the allocation of time to human-directed gazing was shorter in the unsolvable task [20.96%] than in the visual contact task [51.85%]; Steel-Dwass test, P < 0.001). It is possible that, regardless of breed group differences, dogs have commonly developed an ability to maintain eye contact with humans in response to human-given gaze, whereas dogs' ability to spontaneously produce gazing behavior towards humans has been partially influenced by genetic factors associated with breed clustering.
Previous studies using the 'unsolvable task' yielded similar results when analyzing species or breed differences in the use of gaze signals towards humans. Miklósi et al. [16] found that wolves showed a longer latency to the first gazing behavior, and a shorter duration of total gazing towards humans compared to domestic dogs. Passalacqua et al. [29] examined breed difference in human-directed gazing behavior and reported that Primitive breeds, which were comparable to Ancient breeds used in this study, gazed at humans for shorter periods of time than Hunting/Herding breeds, although the total duration of gazing behavior did not differ between Primitive and Molossoid breed groups. Together with our results on Ancient breeds, these findings support the 'wolf-remnant' hypothesis since both non-domesticated canine species and Ancient dog breeds are less likely to produce spontaneous gaze signals towards humans.
Recent genomic studies of modern purebred dogs have identified major breed clusters distinguishing dogs with similar genetic signatures to wolves (i.e., Ancient breeds) from those under more recent intense artificial selection [21,22]. In the present study, we found a clear behavioral distinction between Ancient breeds and the other breed groups, which corresponds to the larger genetic distance between them. Given that dogs of the Ancient breed group are diverse in geographical origin, morphology and working purpose [1], it is likely that a genetic component shared among those breeds (i.e., genetic similarity with wolves) has a significant impact on dogs' human-directed gazing behavior. Thus, the finding that Ancient breeds engaged in less gazing behavior suggests that a dog's communicative ability to convey visual signals to humans may be linked to its genetic similarity to wolves, providing further support for the 'wolf remnant' hypothesis.
The idea that canine behavior has been significantly altered by the divergence between wolf-like Ancient breeds and other modern primary breeds is also supported by several sources of published data on the sensitivity of dogs to human-given social cues. Studies of wild dog breeds, which have experienced less artificial selection, have shown that although dingoes and New Guinea singing dogs are able to respond to human social cues, they seem to be less sensitive than other domestic dogs [30,31]. Therefore, it seems plausible that dogs' predisposition for communicating with humans has been enhanced by the artificial selection involved in the creation of modern European breeds [30,31].
However, other studies of breed differences have reported a greater influence of selective breeding on dogs' social cognitive skills involved in specific 'cooperative' work with humans, such as retrieving prey, hunting with human partners, and herding or guarding sheep [28-30,34]. This 'working purpose' hypothesis is supported by the findings of Passalacqua et al. [29], in which the Hunting/Herding group was found to look towards humans for longer than the Primitive or Molossoid breeds. In contrast, the current study showed no clear differences among the different working groups (Herding, Hunting, Retriever-Mastiff and Working). The discrepancy among the studies could be due to the different categories of breeds used in each study, as the inclusion of different breeds may have led to the discrepant results. For instance, the three mastiff-type breeds (Boxer, Bull Terrier, and Rottweiler) that were included and classified as Molossoid breeds in Passalacqua et al. [29] were not present in our study.
It is possible that the complicated breeding history of modern purebred dogs makes it hard to detect any clear genetic or behavioral signature created during the selection process for particular working purposes. In fact, researchers face the difficult challenge of estimating a single original purpose, along with the resulting selective force, for each breed [20]. For instance, the German Shepherd is thought to have been originally bred for herding and guarding livestock but has subsequently also been used for search and rescue, as well as for police and military roles [24,25]. Moreover, the current breeding of show dogs and companion dogs may also be associated with modified behavioral traits in purebred dogs, an idea that has recently received support from a study of dog personality [39]. If this is the case, then lineage differences within a single breed could also lead to behavioral differences. Since modern purebred dogs have been established through various selective pressures at different points in their breeding history, the domestication of dogs can be considered to be still in progress [20,39]. Further investigations focusing on a more detailed analysis of breeding processes are warranted to elucidate the influence of specific selective pressures on canine behavior. For instance, with further progress in canine genomic studies, it would be important for future studies to take into account the actual genetic distance of each particular breed from the wolf.
Our results also show an effect of age on the use of visual signals towards humans, which may reflect the effect of a dog's prior experiences on communication with humans. The present study tested only adult dogs (more than 12 months old), with the assumption that their behavior has already been fully formed by social experience through everyday interaction with their human partners. We found that older dogs gazed for longer times at the experimenter in the visual contact task. This result may be in line with previous findings showing that dog's performance is associated with living conditions and early experiences [40][41][42]. For instance, household dogs gaze at humans for longer than shelter-housed dogs in a similar visual contact situation [41]. Moreover, while dog's performance utilizing human gestural cues to locate hidden food appears at an early age and does not improve with developmental changes [15], dog's use of gazing behavior towards humans greatly improves with age [29]. Although all subjects were household pet dogs that had not received any professional training, we cannot rule out the possibility that differences in everyday interaction with their owners and/or previous experience in requesting help from humans could have accounted for part of the observed variability. In fact, it is likely that the ability to interact with humans has been shaped by a complex interaction between the breed's inherited character and the individual dog's experience during ontogeny [7,29,43]. Further research should try to evaluate the degree to which prior experience in similar scenarios (e.g., how much they beg for food while their owners are eating) is relevant, and incorporate that measure into the analyses.
In conclusion, the present study shows that the difference in human-directed gazing behavior between Ancient breeds and the other breed groups is much larger than the differences among non-Ancient purebred breeds. This pattern is particularly apparent in the unsolvable situation, with Ancient breeds less prone to sending spontaneous gaze signals towards humans than the other, European breeds. Our findings suggest that this cross-specific communicative ability was acquired during the earlier split between wolf-like Ancient breeds and other primary breeds, although it might have been enhanced over the course of breed creation, which continues up to the present day.
Inhibitory Control and the Adolescent Brain: a Review of Fmri Research
Adolescence is a developmental period frequently characterized by impulsive behavior and suboptimal decision making, aspects that often result in increased rates of substance abuse, unprotected sex, and several other harmful behaviors. Functional magnetic resonance imaging (fMRI) studies have attempted to reveal the brain mechanisms that underlie the typical inhibitory control limitations associated with this developmental period. In the present review, all available studies in the PsycINFO, PubMed, and Web of Science databases that investigated this issue utilizing fMRI were analyzed. In contrast to adults, adolescents exhibited decreased activity in several brain regions associated with inhibitory control such as the dorsolateral prefrontal cortex, anterior cingulate cortex, and fronto-striatal regions. The decreased activity found in these regions may underlie the diminished inhibitory control abilities associated with this development period.
Introduction
During adolescence, important neurodevelopmental processes such as myelination and gray matter pruning still take place in regions typically involved in cognitive control (Gogtay et al., 2004). The immaturity of these regions is thought to underlie the suboptimal decision making and actions that are typically encountered in this population (Casey & Jones, 2010), which can ultimately result in increased risky and harmful behaviors such as experimentation with drugs and criminal activity (Eaton et al., 2006). Adolescence is also a period during which the symptoms of major psychiatric disorders such as schizophrenia and attention-deficit/hyperactivity disorder begin to manifest (Insel, 2010). Thus, understanding the neurobiological aspects of the adolescent brain may provide a better understanding of healthy adolescents and help to provide potential treatments for neural and psychological disorders typically associated with this developmental period (Casey & Jones, 2010).
One aspect often implicated in the onset and maintenance of certain mental disorders during adolescence is the ability to suppress the cognitive processing of undesired information (Luna, Padmanabhan, & O'Hearn, 2010), a cognitive function often referred to as inhibitory control (Miller, 2000). Prior behavioral studies showed that adolescents differ from adults in their inhibitory control abilities (e.g., Luna, Garver, Urban, Lazar, & Sweeney, 2004). These differences are usually attributed to the protracted development of brain regions that may be necessary for the full operation of this function. As shown by prior research (Gogtay et al., 2004), myelination and gray matter pruning processes are still ongoing during adolescence and early adulthood in prefrontal regions that are often engaged in tasks that require a certain level of inhibitory control (Ridderinkhof, Ullsperger, Crone, & Nieuwenhuis, 2004; Badre & Wagner, 2004).
Specifically in adults, functional magnetic resonance imaging (fMRI) studies that have investigated inhibitory control suggest critical roles for the dorsolateral prefrontal cortex and anterior cingulate cortex. The dorsolateral prefrontal cortex is thought to be involved in the implementation of control (MacDonald, Cohen, Stenger, & Carter, 2000), whereas the anterior cingulate cortex is often engaged during conflict resolution and error monitoring (Botvinick, Braver, Barch, Carter, & Cohen, 2001). Parietal areas also appear to be relevant for inhibitory control in adults (Garavan, Ross, & Stein, 1999), apparently by supporting attentional processes that enable the implementation of inhibitory control (Corbetta & Shulman, 2002). Motivated perhaps by the implications of the development of inhibitory control for overall mental health, accumulating neuroimaging research has investigated the brain regions that are differentially activated in adolescents compared with adults when inhibitory control is exerted (e.g., Velanova, Wheeler, & Luna, 2009). Research on such a topic can potentially clarify, for example, whether the suboptimal behavior often presented by adolescents is caused by impaired error monitoring and diminished engagement of the anterior cingulate cortex, or whether potential functional alterations of the dorsolateral prefrontal cortex are associated with difficulties preparing for and implementing inhibitory control during this developmental period.
Thus, the goal of the present article is to expand the understanding of the functional aspects of the healthy adolescent brain that underlie the diminished inhibitory control capacities typically found in this population. To achieve this, a review was conducted of fMRI studies that investigated inhibitory control in healthy adolescents and that are indexed in the PubMed, PsycINFO, and Web of Science databases. To facilitate the comprehension of the procedures adopted by the studies covered herein, the Results section of the present article is divided into subsections according to the behavioral task utilized to engage inhibitory control. Studies that employed anti-saccade tasks are discussed first, followed by studies that employed Go/No-Go tasks and other experimental manipulations.
Methods
A literature search was conducted by selecting articles from the Web of Science, PubMed, and PsycINFO databases that reported fMRI experiments that manipulated inhibitory control in adolescents. The search was performed in July 2012 and updated in December 2012 using the keywords "inhibitory-control" or "response-inhibition" in combination with the keywords "fMRI" and "adolescents." To find potentially relevant studies that were not indexed in these databases, searches of the reference lists of the selected articles were performed after selecting articles from the databases.
The inclusion criteria were the following: (1) the article must be published in a peer-reviewed journal; (2) the article must report experiments in which inhibitory control was investigated in adolescents using behavioral tasks and brain activity monitoring by fMRI; and (3) only healthy participants were studied (i.e., a non-clinical sample). Articles that reported pharmacological studies and manuscripts that did not meet the aforementioned inclusion criteria were excluded.
Results
From the initial database search, 361 articles were found using the aforementioned keyword combinations. Of these, 159 were found in PubMed, 163 in Web of Science, and 39 in PsycINFO. After eliminating duplicate manuscripts, the articles were selected based on the inclusion and exclusion criteria mentioned above, yielding 11 papers for the final analysis. The searches of the reference lists did not yield articles that were not already included from the database search.
Anti-saccade task
In the anti-saccade task, participants initiate each trial by looking at a fixation point in the center of a computer screen. They are subsequently required to look at the opposite side (i.e., the mirror position) of a target that can appear on either the left or right side of the fixation point. To perform this task, the participants must first inhibit the automatic response of looking in the target direction (i.e., a pro-saccade) and then convert the input of the target's location into a motor command to look in the opposite direction from the target (i.e., an anti-saccade; Munoz & Everling, 2004). To examine anti-saccade inhibitory responses, anti-saccade performance is often contrasted with performance on a pro-saccade task (i.e., the participants are instructed to look at the target stimulus). Eye-tracking equipment is utilized to monitor eye movements, enabling the recording of response accuracy and response time, which are usually the dependent variables in studies that utilize this paradigm.
Neuroimaging and neurophysiological findings from human and primate studies demonstrated that certain brain regions such as the lateral parietal areas, superior colliculus, frontal eye fields, supplementary eye fields, and dorsolateral prefrontal cortex are critical for pro-saccade and anti-saccade processing and performance in adults. More specifically, lateral parietal regions appear to represent an interface between sensory and motor processing (Colby & Goldberg, 1999). The superior colliculus plays a role in the generation of saccadic activity by integrating exogenous and endogenous inputs (Trappenberg, Dorris, Munoz, & Klein, 2001). The frontal eye field is critical for the motor execution of voluntary saccades (Pierrot-Deseilligny, Rivaud, Gaymard, Muri, & Vermersch, 1995). The supplementary eye field plays a role in monitoring the context and consequences of oculomotor movements (Stuphorn, Taylor, & Schall, 2000). The dorsolateral prefrontal cortex plays an important role in the preparation of anti-saccade movements, the inhibition of automatic pro-saccade responses, and the decision processes that support oculomotor movements (Pierrot-Deseilligny, Muri, Nyffeler, & Milea, 2006; Munoz & Everling, 2004).
Prior behavioral studies demonstrated that performance on this task differs according to age (Fischer, Biscaldi, & Gezeck, 1997). Children exhibit slower reaction times and make more errors in the anti-saccade task than adolescents and adults. Adult-like performance on these measures appears to be reached during mid-adolescence (Munoz, Broughton, Goldring, & Armstrong, 1998), although some improvements are still found until 25 years of age (Fischer et al., 1997), presumably reflecting the protracted development of regions that are intrinsically related to inhibitory control (Gogtay et al., 2004). These findings are consistent with the characterization of maturity as not simply having the ability to perform a task but performing it at adult-like levels. Children are already capable of performing response inhibition tasks, but these tasks are not mastered until adolescence (Luna & Sweeney, 2004).
In an early attempt to investigate the brain correlates of differences in inhibitory control between age groups, Luna et al. (2001) utilized fMRI to examine the brain regions that were differentially activated during the performance of an anti-saccade task in three age groups: 8-13 years old (11 children; mean age = 10.9 years; standard deviation [SD] = 1.5 years; eight females), 14-17 years old (15 adolescents; mean age = 15.7 years; SD = 1.2 years; six females), and 18-30 years old (10 young adults; mean age = 24.2 years; SD = 2.9 years; six females). These authors found that adults showed more activated voxels and a higher percent signal change during the correct performance of anti-saccade trials compared with the other age groups in the superior frontal eye field, lateral cerebellum, and superior colliculus. Adolescents showed more activated voxels and a higher percent signal change than the other age groups in the inferior frontal eye field, pre-supplementary motor area, and right dorsolateral prefrontal cortex (Table 1).
Notably, these authors found differences in hemodynamic activation as a function of age in brain regions that are typically associated with oculomotor control, such as the frontal eye field and supplementary motor area (Pierrot-Deseilligny et al., 2006; O'Driscoll, Alpert, Matthysse, Levy, Rauch, & Holzman, 1995). The experimental design used by these authors, however, hindered any strong interpretations of these findings. They used a blocked design (i.e., experimental blocks with only one experimental condition each) instead of an event-related design (i.e., experimental blocks with all conditions intermixed). The main problem with results from such a design is the impossibility of separating the brain activation elicited by correct anti-saccade responses from the brain activity elicited by incorrect anti-saccade responses, precluding the analysis of "pure" anti-saccade responses (Henson, 2006).
In a more recent study, Velanova, Wheeler, & Luna (2008) used an anti-saccade task in participants with ages similar to those of the participants included in the study reported by Luna et al. (2001): 18-27 years old (28 adults; mean age = 20.8 years; SD = 2.79 years), 13-17 years old (35 adolescents; mean age = 15.32 years; SD = 1.63 years), and 8-12 years old (35 children; mean age = 10.50 years; SD = 1.39 years). No significant differences in IQ were found across groups as measured by the Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler, 1999). The data analysis, however, utilized an event-related design, allowing the opportunity to separately analyze brain activity elicited by correct and incorrect anti-saccade responses (Henson, 2006). Although this study showed that regions typically associated with oculomotor control had greater activation when participants performed correct anti-saccade responses, no differences as a function of age were found. The engagement of oculomotor regions such as the frontal eye field and parietal regions increased similarly in all age groups when their responses were correct.
Interestingly, correct anti-saccade trials elicited greater engagement of the dorsal anterior cingulate cortex in adults than in adolescents and children. Prior research suggested that this region is important for error processing (Polli, Barton, Cain, Thakkar, Rauch, & Manoach, 2005), and increased activation of this region as a function of age was interpreted as an improvement in error control function (i.e., error-regulatory function). Furthermore, this study showed that a shift occurs during development from predominantly frontal activity to predominantly posterior activity during anti-saccade responses. These results suggest that the improvement in inhibitory control during adolescence results from the augmented engagement of the anterior cingulate cortex for error control and the involvement of posterior regions to support attentional and sensory processing (Table 1).
Similar to inhibitory control, reward processing is an important function that can underlie impulsive behavior in adolescents. Prior research showed that regions that support cognitive control and higher order processing remain immature during adolescence, whereas reward-related regions appear to be well-developed during this period (Casey, Getz, & Galvan, 2008). An important question is how regions that support these processes interact in a task that engages both inhibitory control and reward. Another question is how reward can influence inhibitory control.
To examine these issues, Geier, Terwilliger, Teslovich, Velanova, & Luna (2010) conducted an experiment in which 18 adolescents (13-17 years old; mean age = 15.3 years; eight females) and 16 young adults (18-30 years old; mean age = 21.7 years; 10 females) performed an anti-saccade task in which reward was manipulated probabilistically. More specifically, before each anti-saccade trial, a cue indicated whether a monetary reward would be given in the case of a correct response. This manipulation allowed the researchers to examine whether developmental differences existed in the anti-saccade trials when the reward was given and to investigate brain activity during different stages of reward processing, such as the processing of incentive cues and response preparation.
Behaviorally, both groups were faster and made more correct anti-saccade responses in rewarded trials compared with neutral trials, although this difference reached significance in the adolescent group only. During the processing of incentive cues presented at the onset of each trial, which indicated whether the trial was rewarded, adults showed more positive activity in the ventral striatum than adolescents when the trials were rewarded. When the preparation of responses was analyzed (i.e., a blank screen that preceded the anti-saccade task by 1500 ms), adolescents exhibited greater activation in the ventral striatum during rewarded trials than during neutral trials, whereas adults exhibited reduced activity in this same region during rewarded trials (Table 1). Overall, Geier et al. (2010) demonstrated that when reward is provided during the performance of an inhibitory control task, adolescents exhibit reduced activity compared with adults in the ventral striatum when initially processing the incentive cues. During response preparation, however, this pattern was reversed (i.e., adolescents exhibited enhanced activity in the ventral striatum compared with adults). Given that the ventral striatum is heavily involved in reward processing, this finding can be interpreted as reflecting weaknesses in the initial assessment of reward and increased reactivity to the anticipation of reward in adolescents compared with adults.
Although Geier et al. (2010) demonstrated that regions that support reward in an inhibitory control task remain immature during adolescence, they did not determine whether the immaturities found in adolescents are similarly exhibited by children. To examine this possibility, Padmanabhan, Geier, Ordaz, Teslovich, & Luna (2011) administered the same task developed by Geier et al. (2010) to 10 adults (18-25 years old; mean age = 20.6 years; SD = 2.2 years; six females), 10 adolescents (14-17 years old; mean age = 15.8 years; SD = 1.2 years; six females), and 10 children (8-13 years old; mean age = 11.1 years; SD = 1.5 years; six females). No significant differences in IQ were found across groups as measured by the WASI (Wechsler, 1999). Behaviorally, these authors showed that although both children and adolescents exhibited inferior performance compared with adults when the trials were not rewarded (i.e., neutral trials), they reached adult-like performance when reward was added. In contrast to Geier et al. (2010), no separate neuroimaging analyses of preparation, incentive, and anti-saccade responses were performed. These authors simply contrasted blood oxygen level-dependent (BOLD) responses for all rewarded and non-rewarded anti-saccade trials. All age groups engaged oculomotor control regions such as the frontal eye field, supplementary eye field, inferior parietal sulcus, parietal regions, and dorsal anterior cingulate cortex. Similarly, regions involved in reward, such as the ventral striatum, orbitofrontal cortex, and anterior cingulate cortex, were also activated across ages (Table 1), suggesting that the fundamental circuitry that sustains reward and inhibitory control is already developed during childhood and adolescence.
In contrast to adults and children, adolescents exhibited enhanced BOLD responses during rewarded trials compared with neutral trials across the right inferior parietal sulcus, bilateral putamen, and bilateral ventral striatum. As suggested by Padmanabhan et al. (2011), the enhanced activity in the inferior parietal sulcus and putamen during rewarded trials indicates that these regions actually support the improvement in performance in these trials; both regions have been previously associated with oculomotor control, response planning (Everling & Munoz, 2000), and reward processing (Delgado, Locke, Stenger, & Fiez, 2003). The authors also suggested that ventral striatal activity may underlie the tendency of adolescents to favor immediate over delayed rewards because this region is heavily associated with various aspects of reward processing (e.g., Bjork, Knutson, Fong, Caggiano, Bennett, & Hommer, 2004).

The studies discussed above found differences between age groups in regions involved in the performance of anti-saccade tasks, but they did not investigate whether these differences reflect transient trial-by-trial activations or activations that persist during the entire task. Velanova et al. (2009) investigated this issue by studying three age groups: 8-12 years old (26 children; mean age = 10.5 years; SD = 1.4 years), 13-17 years old (25 adolescents; mean age = 15.3 years; SD = 1.6 years), and 18-27 years old (27 adults; mean age = 20.7 years; SD = 2.7 years). No significant differences in IQ were found between adolescents and adults as measured by the WASI (Wechsler, 1999). These authors found that some of the regions that exhibited sustained effects also exhibited increased activation during development. These regions consisted of the right dorsolateral prefrontal cortex, left anterior prefrontal cortex, right superior temporal/parietal cortex, and bilateral occipital regions. The authors suggested that the protracted development of these regions results in suboptimal sustained inhibitory control during adolescence.
To investigate the effective connectivity (i.e., the direct influences between neural populations) that supports inhibitory control across development, Hwang, Velanova, & Luna (2010) analyzed the data previously reported by Velanova et al. (2009) using Granger causality analysis (Roebroeck, Formisano, & Goebel, 2005). These authors found an increase in the strength and number of top-down connections from frontal regions to other cortical and subcortical regions from adolescence to adulthood. As suggested by the authors, these increases in frontal top-down effective connectivity may support the improvement in inhibitory control across development (Hwang et al., 2010).
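For readers unfamiliar with the method, the following is a minimal sketch of pairwise Granger causality between two simulated region time series, assuming the Python statsmodels library; the actual analysis of Hwang et al. (2010) was applied to fMRI data with multivariate modeling, and the signals below are invented for illustration.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
frontal = rng.normal(size=n)  # stand-in for a prefrontal ROI signal
parietal = np.empty(n)
parietal[0] = rng.normal()
for i in range(1, n):  # parietal partly driven by past frontal activity
    parietal[i] = 0.5 * frontal[i - 1] + rng.normal(scale=0.5)

# Tests whether the second column (frontal) helps predict the first
# (parietal) beyond the latter's own history.
data = np.column_stack([parietal, frontal])
results = grangercausalitytests(data, maxlag=2, verbose=False)
for lag, (tests, _) in results.items():
    print(f"lag {lag}: F-test p = {tests['ssr_ftest'][1]:.4f}")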
Overall, studies that utilized anti-saccade tasks to investigate inhibitory control in adolescents found that adolescents exhibited decreased activation in the dorsal anterior cingulate (Velanova et al., 2008) and dorsolateral prefrontal cortices (Velanova et al., 2009; Luna et al., 2001) compared with adults, a finding that presumably reflects their diminished capacities for error monitoring and task implementation, respectively. When reward was provided during the performance of the anti-saccade task (Geier et al., 2010; Padmanabhan et al., 2011), adolescents exhibited reduced ventral striatum activity compared with adults during the initial processing of incentive cues but enhanced activity in this region compared with adults during the later preparation of responses. These findings can be interpreted as neural evidence of adolescents' limitations in the assessment of reward and enhanced reactivity to the anticipation of reward (Geier et al., 2010). Although differences across ages in regions involved in the motor execution of saccadic movement were not evident, with the exception of the study that adopted a blocked design (Luna et al., 2001), Granger causality analysis demonstrated an increase in effective connections from frontal regions to other cortical and subcortical regions during development (Hwang et al., 2010). This aspect, in conjunction with the heightened functionality of the dorsolateral prefrontal and dorsal anterior cingulate cortices, may underlie the inhibitory control advantages that are typically found in adults compared with adolescents.
Go/No-go task
The Go/No-go task consists of the presentation of a series of stimuli. For a given stimulus type, the participants are required to make a motor response (Go). For another stimulus type, the participants are required to withhold a motor response (No-go; Watanabe et al., 2002). Trials that require a Go response are typically more frequent than trials that require a No-go response. Accuracy and reaction time are recorded for each response type. Brain responses to this task have been widely investigated using both event-related potentials (ERPs) and fMRI (Falkenstein, Hoorman, & Hohnsbein, 1999; Simmonds, Pekar, & Mostofsky, 2008), suggesting that the No-go trials in this task elicit brain activity that supports inhibitory control (cf. Nieuwenhuis, Yeung, van den Wildenberg, & Ridderinkhof, 2003). Specifically, ERP studies have typically shown that this task (No-go trials) elicits early negative effects (~200 ms post-stimulus onset) distributed over frontally located electrodes, an ERP component named the N200 (Luck, 2005). Recent ERP findings suggested that this negative effect is generated by neural activity in the left anterior region of the mid-cingulate cortex (Huster, Westerhausen, Pantev, & Konrad, 2010). fMRI research, however, indicates that several other regions also appear to be activated during No-go trials, such as ventrolateral and dorsolateral prefrontal regions (Liddle, Kiehl, & Smith, 2001) and posterior intraparietal and occipitotemporal areas (Watanabe et al., 2002).
In an early attempt to determine whether the involvement of regions that are engaged during the Go/No-go task is modified during development, Tamm, Menon, & Reiss (2002) studied 19 participants (8-20 years old; mean age = 14.4 years; SD = 3.1 years; 11 females) who performed a Go/No-go task in the MRI scanner. Cognitive function was assessed using the Wechsler Intelligence Scale for Children (WISC-III) and Wechsler Adult Intelligence Scale (WAIS-III); between-group IQ differences were not reported in the manuscript. Specifically, in their Go/No-go task, a series of letters were presented (2 s each). The participants pressed a key in response to every letter except the letter "X." In the Go block, the participants were presented a series of letters that did not include the letter "X." In the No-go block (i.e., the experimental block), the participants were presented the letter "X" during half of the trials, thus requiring the emission of responses during half of the trials and the suppression of responses in the other half (i.e., when an "X" was shown). Behaviorally, no accuracy differences were found across development, although response times decreased with age. When the Go/No-go blocks were contrasted with the Go blocks, age-related increases in activation were found in the left inferior frontal gyrus/insula area, extending to the orbitofrontal gyrus. In other words, as age increased, activation in these areas during inhibitory control increased. A limitation of this work, however, was that a blocked design rather than an event-related design was utilized (Henson, 2006). Thus, activation reflected a mixture of regions involved in the task and possibly regions involved in error processing, conflict processing, response preparation, response state (set), and stimulus analysis. Furthermore, with the experimental design used by these authors, identifying activation elicited purely by No-go trials was not possible because the experimental blocks were a mixture of Go and No-go trials. This limitation in their study design may be the reason for the lack of replication of prior adult findings in this experiment, such as the findings reported by Liddle et al. (2001) and Watanabe et al. (2002).
In a more recent study, Stevens, Kiehl, Pearlson, & Calhoun (2007) used a Go/No-go task to investigate the functional neural networks that support inhibitory control in 50 healthy participants who were grouped by age into adolescents (11-17 years old; mean age = 14.7 years; SD = 2.0 years) and adults (18-37 years old; mean age = 25.1 years; SD = 5.7 years), with no significant difference in the gender proportion between adolescents and adults. They first identified functionally connected regions wherein activation was elicited by the Go/No-go task using a multivariate analysis method (i.e., independent component analysis) that identified brain regions with similar temporal patterns of signal changes. Dynamic causal modeling (Friston, Harrison, & Penny, 2003) was then applied to these regions to identify their influences on each other (i.e., effective connectivity). The authors found that response inhibition in this task was led by the control exerted by fronto-striatal-thalamic networks over parietal-premotor networks. When the demand for response inhibition increased, fronto-striatal-thalamic circuits released parietal-premotor networks from their control, resulting in greater engagement of the latter regions in performance on the Go/No-go task. Compared with adults, adolescents exhibited diminished integration between the regions that comprised the fronto-striatal-thalamic network, an aspect that was associated with decreased behavioral performance. This finding was interpreted as evidence that these regions are less specialized in inhibitory control during adolescence compared with adulthood.
These findings are consistent with the study reported by Rubia et al. (2006), in which a Go/No-go task was also used to investigate inhibitory control in 23 adults (20-43 years old; mean age = 28.0 years; SD = 6.0 years) and 25 adolescents (10-17 years old; mean age = 15.0 years; SD = 2.0 years). Raven's Standard Progressive Matrices Intelligence Questionnaire (Raven, 1960) revealed between-group differences, so analyses of covariance were conducted with IQ as the covariate to determine group differences in the Go/No-go performance measures. Similar to the findings reported by Stevens, Kiehl, Pearlson, & Calhoun (2007), Rubia et al. (2006) found that adults exhibited an increase in activation compared with adolescents in fronto-striatal regions, including the anterior cingulate gyrus and caudate. These differences between adolescents and adults were interpreted as reflecting the protracted maturation of fronto-striatal networks that are engaged during inhibitory control.
The study reported by Tamm et al. (2002) found increased activation across development in frontal regions that are often involved in cognitive control, such as the left inferior frontal gyrus and orbitofrontal gyrus; however, this study adopted a blocked design that hindered stronger interpretations of these findings. Nevertheless, as demonstrated by Rubia et al. (2006) and Stevens, Kiehl, Pearlson, & Calhoun (2007), fronto-striatal regions such as the anterior cingulate and caudate appear to play an important role in response inhibition during performance of the Go/No-go task, and the participation of these regions in the inhibition of undesired responses appears to increase significantly as individuals become older.
In addition to studies that utilized anti-saccade and Go/No-go tasks, by the time the present review was prepared for submission, one fMRI study that used the Stroop task (Adleman et al., 2002) and one that used a stop-signal task (Rubia, Smith, Taylor, & Brammer, 2007) to investigate inhibitory control in adolescents were available in the literature. The study that used the Stroop task analyzed 11 adolescents (12.6-16.8 years old; mean age = 14.7 years; SD = 1.3 years; seven females) and 11 adults (17.4-22.7 years old; mean age = 20.0 years; SD = 1.7 years). This study included only individuals with a full-scale IQ above 80 as measured by the WISC-III and WAIS-III. The authors found that adolescents and young adults exhibited similar involvement of parietal regions while performing the task, although adults exhibited an increase in activation of the left middle frontal gyrus compared with adolescents (Adleman et al., 2002). The blocked design used by these authors, however, precluded a strong interpretation of these data. The study that utilized a stop-signal task analyzed 26 adolescents (10-17 years old; mean age = 15.0 years; SD = 2.0 years) and 21 adults (20-42 years old; mean age = 28.0 years; SD = 5.0 years). All participants were male, and IQ was measured using Raven's Standard Progressive Matrices Intelligence Questionnaire (Raven, 1960); IQ scores were entered as a covariate in the analysis. Activation in the bilateral insula, left thalamus, putamen, and posterior cingulate gyrus was negatively correlated with age, a finding that may indicate compensatory mechanisms. Young adults in this study also exhibited greater activation in the right inferior prefrontal cortex during successful inhibition and in the rostral anterior cingulate gyrus during inhibition failure compared with adolescents. The age range of the adolescents in this study, however, was 10-17 years, comprising an excessively variable sample in contrast to the other studies reported in the present review (see Rubia et al., 2006), a fact that can hinder the interpretation of the data.
Discussion
The studies reviewed herein demonstrate that the performance of inhibitory control in adolescents engages regions that are typically involved in inhibitory control in adults, such as the dorsal anterior cingulate cortex (Velanova et al., 2008), the dorsolateral prefrontal cortex (Velanova et al., 2009; Luna et al., 2001), and fronto-striatal regions (Rubia et al., 2006; Stevens, Kiehl, Pearlson, & Calhoun, 2007). In contrast to adults, however, adolescents exhibited a decrease in activation in these regions, a finding that can be interpreted as reflecting the protracted development of these regions during this developmental period (Gogtay et al., 2004). Furthermore, investigations of the functional connectivity between regions involved in inhibitory control suggest that adolescents, in contrast to adults, exhibit reduced connectivity from frontal regions to other brain regions (both cortical and subcortical; Hwang et al., 2010) and from fronto-striatal-thalamic networks to parietal-premotor networks (Stevens, Kiehl, Pearlson, & Calhoun, 2007). The impaired connectivity between these regions in adolescents, in addition to the aforementioned diminished activity of distinct prefrontal regions in this population, may be a likely cause of the inhibitory control limitations found in this developmental period.
The two experiments in which reward was provided (Padmanabhan et al., 2011; Geier et al., 2010) suggest that the ventral striatum plays an important role in reward processing during the exertion of inhibitory control, in both adults and adolescents. As reported by Geier et al. (2010), this region exhibited a reduction of activity in adolescents compared with adults during the assessment of reward cues, suggesting that adolescents do not process reward cues as thoroughly as adults. During the preparation to respond to rewarded trials, however, adolescents exhibited greater activity in this region compared with adults, a finding interpreted as enhanced reactivity to the expectancy of the forthcoming reward (Geier et al., 2010). Unfortunately, Padmanabhan et al. (2011) did not report separate analyses of the preparation and assessment of cues as did Geier et al. (2010), precluding verification of whether the findings reported by Geier et al. (2010) are replicable.
Despite the potential benefits of a broader understanding of the peculiarities of the healthy adolescent brain (Insel, 2010), few fMRI studies have investigated this issue utilizing inhibitory control tasks. Only 11 empirical articles that investigated this issue were found in the selected databases by the time this article was submitted and revised. Although these are highly informative studies, more research is necessary to elucidate the role of specific regions in the performance of inhibitory control in adolescents, such as the dorsolateral prefrontal cortex, which was clearly less activated in adolescents compared with adults in two experiments (Velanova et al., 2009; Luna et al., 2001) but apparently not in the other nine reports.
Inhibitory control, together with other cognitive control functions, is deeply entangled with other cognitive processes (Badre & Wagner, 2004). Future research would benefit from the development of tasks in which interactions between inhibitory control and other cognitive functions can be investigated in adolescents. An example of such an approach would be to develop experimental paradigms that examine memory performance as a function of different levels of cognitive control (Jaeger, Cox, & Dobbins, 2012; Jaeger, Selmeczy, O'Connor, Diaz, & Dobbins, 2012; Ghetti, DeMaster, Yonelinas, & Bunge, 2010). Another research possibility in adolescents would be the investigation of inhibitory control for emotional information or of the influence of emotional state on inhibitory control abilities (Ochsner & Gross, 2005). To date, these issues have not been studied in this population, but such findings could be a valuable addition to the current knowledge on the functional organization of the adolescent brain.
Table 1. Brief description of the experiments reviewed in the present study. Adults > Adolescents indicates regions that are more active in adults than in adolescents; Adolescents > Adults indicates regions that are more active in adolescents than in adults.
Eidos, INDRA, & Delphi: From Free Text to Executable Causal Models
Building causal models of complicated phenomena such as food insecurity is currently a slow and labor-intensive manual process. In this paper, we introduce an approach that builds executable probabilistic models from raw, free text. The proposed approach is implemented through three systems: Eidos, INDRA, and Delphi. Eidos is an open-domain machine reading system designed to extract causal relations from natural language. It is rule-based, allowing for rapid domain transfer, customizability, and interpretability. INDRA aggregates multiple sources of causal information and performs assembly to create a coherent knowledge base and assess its reliability. This assembled knowledge serves as the starting point for modeling. Delphi is a modeling framework that assembles quantified causal fragments and their contexts into executable probabilistic models that respect the semantics of the original text, and can be used to support decision making.
Introduction
Food insecurity is an extremely complex phenomenon that affects wide swathes of the global population, and is governed by factors ranging from biophysical variables that affect crop yields, to social, economic, and political factors such as migration, trade patterns, and conflict.
For any attempt to combat food insecurity to be effective, it must be informed by a model that comprehensively considers the myriad of factors influencing it. Furthermore, for analysts and decision makers to truly trust such a model, it must be causal and interpretable; that is, it must provide a mechanistic explanation of the phenomenon, rather than just being a black-box statistical construction. Currently, however, these models are hand-built for each new situation and require many months to construct, resulting in long delays for much-needed interventions.
Here we propose an end-to-end system that combines open-domain information extraction (IE) with a quantitative model-building framework, transforming free text into executable probabilistic models that capture complex real-world systems. All code and data described here are open-source and publicly available (Eidos: https://github.com/clulab/eidos; INDRA: https://github.com/sorgerlab/indra; Delphi: https://github.com/ml4ai/delphi), and we provide a short video demonstration.
Contributions:
(1) We introduce Eidos, a rule-based open-domain IE system that extracts causal statements from raw text. To maximize domain independence, Eidos is largely unlexicalized (with the exception of causal cues such as promotes), and implements a top-down approach where causal interactions are extracted first, followed by the participating concepts, which are grounded with specific geospatial and temporal contexts for model contextualization. Eidos also extracts quantifiable adjectives (e.g., significant) that can be used to form a bridge between qualitative statements and quantitative modeling.
(2) We describe an extension of the Integrated Network and Dynamical Reasoning Assembler (INDRA; Gyori et al., 2017), an automated knowledge and model assembly system which implements interfaces to Eidos and multiple other machine reading systems. INDRA was originally developed to assemble models of biochemical mechanisms; we generalized it to represent general causal influences as INDRA Statements, and to load a taxonomy of concepts to align related Statements from multiple readers and documents.
(3) We introduce Delphi, a Bayesian modeling framework that converts the above statements into executable probabilistic models that respect the semantics of the source text. These models can help decision-makers rapidly build intuition about complicated systems and their dynamics. The proposed framework is interpretable due to its foundation in rule-based IE and Bayesian generative modeling.
Architecture: In Fig. 1, we show a high-level depiction of the information flow pipeline. First, natural language texts serve as inputs to Eidos, which performs causal relation extraction, grounding, and spatiotemporal contextualization. The extracted relations are subsequently aggregated by INDRA into data structures called INDRA Statements for downstream modeling. These serve as an input to Delphi, which assembles a causal probabilistic model from them.
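To make the data flow concrete, the following is a self-contained toy version of the three stages in Python (the language of INDRA and Delphi); the record fields and the merging logic are illustrative stand-ins, not the actual Eidos, INDRA, or Delphi schemas or APIs.

# Stage 1 (Eidos-like): causal relations extracted from raw text.
relations = [
    {"cause": "conflict", "effect": "human migration",
     "cause_polarity": +1, "effect_polarity": +1,
     "adjectives": ["significantly"],
     "time": "2017", "location": "South Sudan",
     "sentence": ("The significantly increased conflict seen in "
                  "South Sudan forced many families to flee in 2017.")},
]

# Stage 2 (INDRA-like): merge relations that share grounded concepts
# into single statements and aggregate their evidence.
statements = {}
for rel in relations:
    key = (rel["cause"], rel["effect"])
    stmt = statements.setdefault(key, {"evidence": []})
    stmt["evidence"].append(rel["sentence"])

# Stage 3 (Delphi-like): the statement keys become the edges of a
# causal analysis graph that the modeling layer then quantifies.
edges = sorted(statements)
print(edges)  # [('conflict', 'human migration')]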
Causal Information Extraction
Eidos was designed as an open-domain IE system (Banko et al., 2007) with a top-down approach that allows us to not be limited to a fixed set of concepts, as determining this set across multiple distinct domains (e.g., agronomy and socioeconomics) is close to impossible. First, we find trigger words signaling a relation of interest and then extract and expand the participating concepts (2.1), link these concepts to a taxonomy (2.2), and annotate them with temporal and spatial context (2.3). This has some similarities to FrameNet (Baker et al., 1998), whose Causation frame has targets (triggers) and frame elements (participating concepts) that are associated with a taxonomy (the FrameNet hierarchy); in our case, the concepts come from a domain-specific taxonomy. We assume here that causal relations are specified within sentences rather than across sentences at the document level, and that the concepts involved in the causal relations can be linked to an appropriate taxonomy. In addition to an API that can be used for machine reading at scale, Eidos has a webapp that provides users a way to see what rules were responsible for the extracted content, as well as brat visualizations (Stenetorp et al., 2012) of the output, facilitating rapid development of the interpretable rule-grammars.
Reading Approach
To understand our top-down approach, let us consider the individual steps involved in processing the following sentence: The significantly increased conflict seen in South Sudan forced many families to flee in 2017.
(1) We begin by preprocessing the text with dependency syntax using Stanford CoreNLP (Manning et al., 2014) and the processors library.
(2) Then, Eidos finds any occurrences of quantifiers (gradable adjectives and adverbs). These are common in the high-level texts relevant to food insecurity, such as reports from UN agencies and nonprofits, but they are difficult to use in quantitative models without additional information. In the example above, the word significantly is found as a quantifier of increased. Delphi uses these quantifiers to construct probability density functions using the crowdsourced data of Sharp et al. (2018), as detailed in Section 4.
(3) Next, Eidos uses a set of trigger words to find causal and correlational relations with an Odin grammar (Valenzuela-Escárcega et al., 2016). Odin is an information extraction framework which includes a declarative language supporting both surface and syntactic patterns and a runtime system. Eidos's grammar was based in part on the biomedical grammar developed by Valenzuela-Escárcega et al. (2018) but adapted to the open domain and our representation of concepts. This rule grammar is fully interpretable and easily editable, allowing users to make modifications without needing to retrain a complex model (an illustrative rule sketch follows this list of steps). In the example sentence from earlier, the extraction of a causal relation would be triggered by the word forced, with conflict and families identified as the initial cause and effect, respectively.
(4) The initial cause and effect are then expanded using dependency syntax, following the approach of Hahn-Powell et al. (2017). Namely, from each of the initial arguments, we traverse outgoing dependency links to expand the arguments into their dependency subgraph. Here, the resulting arguments are "significantly increased conflict seen in South Sudan" and "many families to flee in 2017". (5) Relevant state information is then added to the expanded concepts. Representing the polarity of an influence on the causal relation edge (i.e., in terms of promotes or inhibits) can be lossy, so Eidos instead uses concept states (i.e., concepts can be increased, decreased, and/or quantified). In the example above, Eidos marks the concept pertaining to conflict as being increased and quantified. If desired, the promotion/inhibition representation with edge polarity can be straightforwardly recovered. The final output of the Eidos system for the running example sentence, as displayed in the Eidos webapp, is shown in Fig. 2.
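As a rough illustration of the rule grammar described in step (3), the snippet below holds an Odin-style rule in a Python string; the trigger lexicon and argument patterns are invented for exposition and are not the actual Eidos grammar.

# An invented Odin-style rule for the running example; real Eidos
# rules are more elaborate and live in separate grammar files.
CAUSAL_RULE = """\
- name: causal-force-verb
  label: Causation
  pattern: |
    trigger = [lemma=/force|cause|lead/]
    cause: Concept = nsubj    # e.g., "conflict"
    effect: Concept = dobj    # e.g., "families"
"""
print(CAUSAL_RULE)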
Concept linking
The Eidos reading system, with its top-down approach, was designed to keep extracted concepts as close to the text as possible, intentionally allowing downstream users to make decisions about event semantics depending on their use cases. As a result, linking concepts to a taxonomy becomes critical for preventing sparsity.
Eidos's concept linking is based on word-embedding similarities. A given concept (with stop words removed) is represented by the average of the word embeddings of its words. A vector for each node in the taxonomy is similarly calculated (using the provided "examples" for the node), and the taxonomy node whose vector is closest to the concept vector is considered to be the grounding. In practice, Eidos returns the top k groundings, allowing for downstream disambiguation. The concept linking strategy is modular and allows for grounding to any taxonomy provided in the human-readable YAML format. With this method, Eidos is able to link to an arbitrary number of taxonomies, at both high and low levels of abstraction.
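A minimal sketch of this grounding strategy follows, assuming word vectors are available; the toy embeddings and the two-node taxonomy are invented for illustration.

import numpy as np

EMBEDDINGS = {  # stand-in for real pretrained word vectors
    "conflict": np.array([0.9, 0.1]),
    "war": np.array([0.8, 0.2]),
    "families": np.array([0.1, 0.9]),
    "migration": np.array([0.2, 0.8]),
}

def embed(phrase, stopwords=("the", "of", "in")):
    # Average the embeddings of the phrase's known, non-stopword tokens.
    vecs = [EMBEDDINGS[w] for w in phrase.lower().split()
            if w not in stopwords and w in EMBEDDINGS]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ground(concept, taxonomy, k=2):
    # Return the k taxonomy nodes whose example-based vectors are
    # closest to the concept vector, mirroring the top-k grounding.
    c = embed(concept)
    scored = [(cosine(c, embed(examples)), node)
              for node, examples in taxonomy.items()]
    return sorted(scored, reverse=True)[:k]

taxonomy = {"conflict": "war conflict",
            "human_migration": "migration families"}
print(ground("increased conflict", taxonomy))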
Temporal and geospatial normalization
Time normalization

The context surrounding the extractions is often critical for downstream reasoning. Eidos integrates the temporal parser of Laparra et al. (2018), which uses a character recurrent neural network to identify time expressions in the text; these are then linked together with a set of rules into semantic graphs that follow the SCATE schema (Bethard and Parker, 2016) and can be interpreted using temporal logic to obtain the intervals referred to by the time expressions.
After the time expressions are identified and normalized, an Odin grammar attaches them to the causal relations extracted by Eidos. If the document creation time is provided, it is also parsed by our model and used as the default temporal attachment for those causal relations without a temporal expression in their close context.

Geospatial normalization

Eidos's geospatial normalization module (Yadav et al., 2019) has two components: a detection component consisting of the word-level LSTM named entity recognition (NER) model of Yadav and Bethard (2018), and a normalization component that implements population heuristics (i.e., selecting the most populous location (Magge et al., 2018)) and filters using a distance-based heuristic (Magge et al., 2018).
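As a toy version of the population heuristic mentioned above, the snippet below picks the most populous gazetteer entry among candidates sharing a detected place name; the candidate list and populations are invented.

# Choose the most populous candidate for an ambiguous place name.
candidates = [
    {"name": "Juba", "country": "South Sudan", "population": 525_000},
    {"name": "Juba", "country": "XX", "population": 12_000},  # invented namesake
]
best = max(candidates, key=lambda c: c["population"])
print(best["country"])  # "South Sudan"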
Assembly of causal relations
The output of Eidos is processed by INDRA into a collection of INDRA Statements, each of which represents a causal influence relation. INDRA is also able to process the output of multiple other reading systems that extract causal relations from text (these systems are not described in detail here). INDRA implements input processor modules to extract standardized Statements from each reading system. A Statement represents a causal influence between two Concepts (a subject and an object), each of which is linked to one or more taxonomies (see Section 2.2). The Statement also captures the polarity and magnitude of change in both subject and object, if available. Finally, one or more Evidences are attached to each Statement capturing provenance (reader, document, sentence) and context (time, location) information. This common representation establishes a link between diverse knowledge sources and several model formalism endpoints.
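The following is a hedged sketch of how the Statement representation described above might be laid out as a data structure; the class and field names are illustrative, not INDRA's actual definitions.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Concept:
    text: str                       # surface text from the document
    groundings: List[str]           # linked taxonomy nodes (possibly several)

@dataclass
class Evidence:
    reader: str                     # which reading system produced it
    document: str
    sentence: str
    time: Optional[str] = None      # temporal context, if extracted
    location: Optional[str] = None  # geospatial context, if extracted

@dataclass
class InfluenceStatement:
    subj: Concept                   # cause
    obj: Concept                    # effect
    subj_polarity: int              # +1 increase, -1 decrease
    obj_polarity: int
    evidence: List[Evidence] = field(default_factory=list)
    belief: float = 0.0             # overall support, filled in by assembly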
Given the attributes of each Statement and a taxonomy to which Concepts are linked, INDRA creates a Statement graph whose edges capture (i) redundancy between two Statements, (ii) hierarchical refinement between two Statements, and (iii) contradiction between two Statements. Statements that are redundant, or in other words capture the same causal relation, are merged and their evidences are aggregated. A probability model that captures the empirical precision of each reader is then used to calculate the overall support (a "belief" score) for a Statement, given the joint probability of correctness implied by the evidence. As a seed to this probability model, INDRA loads empirical precision values collected via human curation for each Eidos rule. INDRA exposes a collection of methods to filter Statements that can be composed to form a problem-specific assembly pipeline, including (i) filtering by Statement belief and Concept linking accuracy, (ii) filtering to more general or specific Statements (with respect to a taxonomy), and (iii) filtering contradictions by belief. INDRA also exposes a REST API and a JSON-based serialization of Statements.
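On a simplified reading of that calculation: if each piece of evidence comes from a reader or rule with empirical error rate e_i, and errors are assumed independent, the probability that at least one evidence is correct is 1 - (e_1 * ... * e_n). The sketch below uses invented error rates; INDRA's actual probability model is more detailed.

import math

def belief(error_rates):
    # Probability that at least one supporting evidence is correct,
    # assuming independent errors across evidences.
    return 1.0 - math.prod(error_rates)

print(belief([0.35]))        # one evidence   -> 0.65
print(belief([0.35, 0.35]))  # two evidences  -> 0.8775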
INDRA contains multiple modules that can assemble Statements into causal graphs (for visualization or inference) and executable ODE models. In the architecture presented here, Delphi (our Bayesian modeling framework) takes INDRA Statements directly as input, and serves as a probabilistic model assembly system.
Causal Probabilistic Models from Text
Statements produced by INDRA are assembled by Delphi into a structure called a causal analysis graph, or CAG. In Fig. 3, we show the CAG resulting from our running example sentence (cell [1]). The node labels (conflict and human migration) in the CAG correspond to entries in the high-level taxonomy that the concepts have been grounded to.
Representation

We represent abstract concepts such as conflict and human migration as real-valued latent variables in a dynamic Bayes network (DBN) (Dagum et al., 1992), and the indicators corresponding to these concepts as observed variables. By an indicator, we mean a tangible quantity that serves as a proxy for the abstract concept (note that these are not the same as the indicator random variables encountered in probability theory). For example, the variable Net migration (as defined in World Bank (2018)) is one of several indicators for the concept of human migration. To capture the uncertainty inherent in interpreting natural language, we take the transition model of the DBN itself to be a random variable with an associated probability distribution. We interpret sentences about causal relations as saying something about the functional relationship between the concepts involved. For example, we interpret the running example sentence as giving us a clue about the shape of ∂(human migration)/∂(conflict).
Assembly

To assemble our model, we do the following (a toy sketch of these steps follows the list):

(1) We construct the aforementioned distribution over the transition model of the DBN using the extracted polarities of the causal relations as well as the gradable adjectives associated with the concepts involved in the relations. The transition model is a matrix whose elements are random variables representing the coefficients of a system of linear differential equations (Guan et al., 2015), with distributions obtained by constructing a Gaussian kernel density estimator over Cartesian products of the crowdsourced responses collected by Sharp et al. (2018) for the adjectives in each relation.
(2) To provide more tangible results, we map the abstract concepts to indicator variables for which we have time series data. This data is gathered from a number of databases, including the World Bank (2018) indicators. The mapping is done using the OntologyMapper tool in Eidos, which uses word embedding similarities to map entries in the high-level taxonomy to the lower-level variables in the time series data.
(3) Then, we associate values with indicators using a parameterization algorithm that takes as input some spatiotemporal context, and retrieves values for the indicators from the time series data, falling back to aggregation over a (configurable) set of aggregation axes in order to prevent null results. In Fig. 3, we show the indicators automatically mapped to the conflict and human migration nodes (conflict incidences and net migration, respectively) and their values for the spatiotemporal context of South Sudan in April 2017.
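The toy sketch below walks through steps (1)-(3) under strong simplifications: the crowdsourced adjective responses, the indicator value, and the single-coefficient transition model are all invented for illustration.

import numpy as np
from scipy.stats import gaussian_kde

# (1) Distribution over a transition-model coefficient, built from
# (invented) crowdsourced responses for the adjective "significantly".
responses = np.array([0.6, 0.7, 0.75, 0.8, 0.9])
kde = gaussian_kde(responses)
beta_samples = kde.resample(1000)[0]  # samples of d(migration)/d(conflict)

# (2)-(3) Map the abstract concept to an indicator and parameterize it
# for a spatiotemporal context (the value below is invented).
indicators = {("human migration", "South Sudan", "2017-04"):
              ("Net migration", 823_000.0)}
name, value = indicators[("human migration", "South Sudan", "2017-04")]
print(name, value, beta_samples.mean())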
Conditional forecasting

Once the model is assembled, we can run experiments to obtain quantitative predictions for indicators, which can build intuition about the complex system in question and support decision making. The outputs take the form of time series data with associated uncertainty estimates. An example is shown in Fig. 4, in which we investigate the impact of increasing conflict on human migration using our model, with ∂(conflict)/∂t = 0.1e^(-t). The predictions of the model reflect (i) the semantics of the source text (increased conflict leads to increased migration) and (ii) the uncertainty in interpreting the source sentence. The confidence bands in the lower plot reflect the distribution of the crowdsourced gradable adjective data.
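The minimal simulation below is consistent with the experiment described above: conflict is driven by ∂(conflict)/∂t = 0.1e^(-t) and propagated to migration through sampled coefficients, yielding a median trajectory with uncertainty bands; all numbers are illustrative rather than Delphi's actual output.

import numpy as np

rng = np.random.default_rng(1)
dt, steps, n_samples = 0.1, 50, 500
betas = rng.normal(0.75, 0.1, size=n_samples)  # stand-in for KDE samples

t = np.arange(steps) * dt
conflict = np.cumsum(0.1 * np.exp(-t) * dt)  # integrate the forcing term

# Each sampled coefficient yields one candidate migration trajectory.
migration = np.array([np.cumsum(b * np.gradient(conflict, dt) * dt)
                      for b in betas])

median = np.median(migration, axis=0)
lo, hi = np.percentile(migration, [5, 95], axis=0)
print(median[-1], (lo[-1], hi[-1]))  # end-point forecast with a 90% band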
Assessment
We are currently in the process of developing a framework to quantitatively evaluate the models assembled using this pipeline, primarily via backcasting. However, the systems have been qualitatively evaluated by MITRE, an independent performer group in the World Modelers program charged with designing and conducting evaluations of the technologies developed. For the evaluation, a causal analysis graph larger than the toy running example in this paper (≈ 20 nodes) was created and executed. Noted strengths of the system include the ability to drill down into the provenance of the causal relations, the integration of multiple machine readers, and the plausible directionality of the produced forecast (given the sentences used to construct the models). Some limitations were also noted: the initialization and parameterization of the models were somewhat opaque (which hindered explainability), and some aspects of uncertainty are captured by the readers but not fully propagated to the model. We are actively working on addressing both of these limitations.
Conclusion
Complex causal models are required in order to address key issues such as food insecurity that span multiple domains. As an alternative to expensive, hand-built models which can take months to years to construct, we propose an end-to-end framework for creating executable probabilistic causal models from free text. Our entire pipeline is interpretable and intervenable, such that domain experts can use our tools to greatly reduce the time required to develop new causal models for urgent situations.
Rules Governing the Occupation of Nationalized Territory Subject to the Forest Nationalization Act
Nasrollah
All forests, meadows, natural woods and woodlands are part of the public property belonging to the state; the protection, restoration and development of these resources and the exploitation of forests rest with the Forestry Organization of Iran (1) (see Article 2 of the Law on the Protection and Exploitation of Forests, adopted in 1346). In practice, these national lands are subject to unauthorized occupation by natural and legal persons, both public and private, against whom the responsible organization may bring complaints under its legal obligations. Regarding the possible criminal prosecution of occupiers of national territory, different approaches have been proposed. This paper attempts to examine and analyze the jurisprudential views, the relevant legal provisions, and the advisory opinions of the judiciary's legal department on this question.
1. Introduction
By virtue of the Forest Nationalization Act adopted on 10/27/1341: "The title to all natural forests, pastures, woodlands and forest lands of the country is public property (2) and belongs to the state, even if individuals had taken possession of them and obtained ownership documents before that date." A natural forest, grassland or woodland, in the words of the regulations, is one not created by human action (Paragraph A of the Implementing Regulations of the Forest Nationalization Act). Forest lands fall into three categories:
- Forest lands: lands on which forest trees, seedlings or shoots exist, in groups or scattered, at the time of the nationalization of forests, provided the land is not under cultivation or annual fallow.
- Sparse forest lands: lands on which trees exist but the volume of trees per hectare, in the north of the country (from Astra to Glydaghy), is less than one hundred cubic meters, and in other areas less than twenty cubic meters.
- Planted woodlands: lands on which planted trees exist, whether dense or sparse (Paragraph 5 of the above article). Pastures, in turn, are divided into non-wooded and wooded:
- Non-wooded pasture: land, whether on mountain slopes or flat ground, that is covered in season by forage plants and is, with regard to its history, customarily known as pasture.
- Wooded pasture: pasture on which trees exist and the volume of trees per hectare is over one hundred cubic meters (Paragraphs 6, 7 and 8 of the above article).
With the Islamic Revolution, the adoption of the Constitution and the incorporation of Sharia into the legal system, forests were counted among the public wealth. Article 45 of the Constitution provides that public wealth and property, such as uncultivated or abandoned land, forests, marshlands, natural woods and pastures, are at the disposal of the Islamic government, which is to deal with them in accordance with the public interest. The nationalization of meadows, forests, forest lands and natural woods is thus an instance in which the rule of private property over land gives way, because of the general interest, to public ownership. In other words, under Article 22 of the Registration Act, government departments and courts recognize as owner only the person in whose name the property is registered at the real estate registry; the recognition of land as forest or rangeland, however, operates as an exception to this rule. Under this principle, private ownership of such land, because of its public character, comes to an end.
Considering the importance of these provisions on national forests and the need for their strict enforcement, the procedure for recognizing national lands and the procedure for handling objections to such decisions are explained first.
2. First Speech: National Lands and the Procedure for Objecting to the Nationalization of Land
Although, in accordance with Article 1 of the Forest Nationalization Act, all forests, fields and woodlands are public property owned by the state, land cannot be treated as national so long as it has not been identified as such in accordance with the criteria set forth in that law and in other laws, including the Law on the Protection and Exploitation of Forests, and in compliance with the procedures stipulated in the regulations relating to natural resources such as forests and grasslands. In other words, the provisions of the Forest Nationalization Act, contrary to the appearance of the text, take material effect only upon the detection and identification of meadows and forests by the competent offices (Shamsol, 1385, pp. 65). The recognition of national resources and of the exceptions (3) under the Forest Nationalization Act is carried out, in compliance with the definitions set forth in the Law on the Protection and Exploitation of Forests, by the Ministry of Natural Resources (currently the Ministry of Agriculture) (see Article 56 of the Law on the Protection and Exploitation of Forests). The recognition of land as forest, grassland or forest land must therefore comply with the legal formalities.
3. First Paragraph: Recognition of National Lands and Its Procedure
In recognizing national territory subject to the Forest Nationalization Act, compliance is required with the implementing regulations of Article 56 of the Law on the Protection and Exploitation of Forests, adopted on 28/04/54, and with the implementing regulations of Article 2 of the Law on the Protection of Natural Resources and Forest Reserves, adopted on 16/12/71.
Visit, reporting and issuance of the detection certificate
The Ministry of Agriculture (the Forest, Rangeland and Watershed Management Organization) deploys an agent to the zone to be identified. The agent must visit the area with a map of it or, if no map is available, prepare a sketch; carry out the necessary research by appropriate means; and, if any structure built after the adoption of the nationalization law is found, record it and its reasons in writing, submitting the visit report with his comments to the authority that commissioned the investigation (Article 3 of the implementing regulation of Article 52 of the law).
After receiving the visit report and verifying the accuracy of its contents, the unit must issue a detection certificate in accordance with the provisions of Article 56 (Article 4 of these regulations). The detection certificate describes the status of the land as a national natural resource under the legal definition, the registration records, the layout, location, extent and area visited, and the exceptions under Article 2 of the Forest Nationalization Act, based on inquiries addressed to the cited authorities; where the area borders a mountain, river, highway or public road, a description of the location and of any exceptions to it is given (Article 2 of the rules of procedure).
Publishing notice
After the detection certificate is issued, a public notice of the decision is published. The notice referred to in Article 56 of the Law on the Protection and Exploitation of Forests and in Article 2 of the Law on the Protection of Natural Resources and Forest Reserves is, in addition to publication in mass-circulation and local newspapers, posted separately by police officers in streets and public places to inform the population. The notice must state the period for objections and the consequences of the absence of any objection. Police officers are required, within one week, to attach a written report of the posting and to declare the notice to the issuing authority (Note 3 of the above article).

4. Section II: Handling Objections to the Recognition of Land as National
The recognition of land belonging to natural or legal persons as forest, forest land, rangeland or meadow under the Forest Nationalization Act, with the resulting loss of private property, has always prompted objections from interested parties.
Interested parties
Article 56 of the Law on the Protection and Exploitation of Forests and Rangelands states: "Interested parties may submit objections to the Ministry (the Ministry of Natural Resources)...". The law, however, did not define "interested parties"; the matter was settled by the Law on Determining the Duty of Land Disputes arising from the Implementation of Article 56 of the Law on the Protection and Exploitation of Forests and its regulations, which identified the relevant persons.
Farmers holding a cultivation layout
Farmers holding a cultivation layout are persons to whom, under the rules and regulations of the agrarian reform, an agricultural layout has been entrusted, or who appear in the census of farmers holding a cultivation layout (Section A of the amended executive regulations of the Law on Determining the Duty of Land Disputes arising from the Implementation of Article 56). Given that the Forest Nationalization Act covers forests and uncultivated lands and excludes cultivated land from its scope, persons to whom agricultural land or gardens were granted under the agrarian reform regulations may object to the recognition of such land as national (uncultivated) land by the Forest, Rangeland and Watershed Management Organization and may demand the annulment of such decisions.
Owners
These are natural or legal persons whose ownership of agricultural lands and buildings, whether by title deed, certificate of ownership or a definitive court ruling, was established before the date of approval of the Forest Nationalization Act. (Paragraph 2 above was, however, annulled by decision No. 79/99 of the General Board of the Administrative Justice Court.) Such persons are mentioned in Article 3 of the implementing provisions of Article 2 of the Law on National Forests, under which the owners of buildings, rural homes, lands and gardens whose ownership documents predate the adoption of the Act may object to the recognition of that land as national (uncultivated) land.
The owners of gardens and installations
These are persons who, relying on the documents referred to in paragraph 2, claim ownership of gardens and installations on the land (Paragraph 3 of Article 1 of the amended by-law on land disputes arising from the implementation of Article 56 of the law). These three groups can present evidence of their rights over lands recognized as national and lodge objections against that recognition. In addition to the aforementioned persons, governmental organizations and institutions affected by the implementation of Article 56 of the Law on the Protection and Exploitation of Forests and Rangelands and its subsequent amendments may also bring their objections before the board established under the single-article Law on Determining the Duty of Land Disputes arising from the Implementation of Article 56.
The deadline for objections
Formerly, interested parties had to submit their objections to the authority that issued the notice within one month of the written notice, or of its publication by the Ministry of Natural Resources in a mass-circulation newspaper and a local newspaper and by posting in the customary and appropriate places of the locality (Article 56 of the Law on Conservation and Utilization of Forests). With the adoption of the single-article Law on Determination of Land Disputes Arising from the Implementation of Article 56, approved on 22/06/67, that deadline was abolished: interested parties may now challenge determinations made under Article 56 without any time limit.
The authority competent to hear objections
Under the single-article Law on Determination of Land Disputes Arising from the Implementation of Article 56 of the Law on Conservation and Utilization of Forests, interested parties may bring their objection before a board formed in each city under the Ministry of Construction Jihad and composed of the head of the agriculture office, the head of the forestry office, a member of the Construction Jihad, a member of the land allocation board, one judge, and two members of the village Islamic council or of the local tribes, as the case may be. The board deliberates with at least five of its seven members present, and the opinion of the board's judge is binding, except in the cases covered by Articles 284 and 284 bis of the Code of Criminal Procedure approved on 06/06/61 (cf. Article 326 of the Code of Civil Procedure). Although under the final part of the original single article the judge's ruling could be challenged only in the three cases of Articles 284 and 284 bis of the Code of Criminal Procedure of 1361, the amendment of 05/03/87 to the single-article law reworded that final part as follows: "The ruling of the board's judge may be objected to in the branches of the court of first instance and of the court of appeal. The board may obviously use unofficial local experts as expert witnesses." Accordingly: first, the one-month deadline for objecting to the recognition of land as national has been removed; second, the interested parties, namely farmers holding cultivation rights, owners, owners of gardens and installations, and public organizations and institutions affected by the implementation of Article 56 of the Law on Conservation and Utilization of Forests and Rangelands, may bring their objection before the board of the single-article law; and third, the ruling of the board's judge may be challenged before the general court of the place where the property is located, whose judgment may in turn be appealed, an objection filed within the appeal period being heard as an appeal and one filed out of time being dismissed as late.
Part Two: Seizure of lands recognized as national under the Forest Nationalization Act
5. Lands recognized as national under the Forest Nationalization Act, whether forest, rangeland or forest land, are in practice sometimes seized (4) and exploited without authorization by the very interested parties described above, including farmers holding cultivation rights, owners, owners of gardens and installations, and companies and government agencies.
The Forests, Rangelands and Watershed Management Organization, pursuant to Article 2 of the Law on Conservation and Utilization of Forests and other natural resources legislation, including the Law on Protection and Exploitation of Natural Resources and Forest Reserves approved on 12/07/71, acts to protect the national natural resources and files complaints of forcible seizure against such occupiers.
The question is whether the handling of such complaints and the criminal prosecution and punishment of the occupiers depend on whether the provisions of Article 56 of the Law on Conservation and Utilization of Forests and its amendments have been fully implemented. Each alternative is therefore examined in a separate paragraph below.
Paragraph One: Seizure of lands whose recognition as national has become final
6. The recognition of land as national by the Ministry of Agriculture (formerly the Natural Resources Organization) becomes final either when no objection is lodged, or when the board of the single-article Law on Determination of Land Disputes Arising from the Implementation of Article 56 of the Law on Conservation and Utilization of Forests and Rangelands has examined the objection and upheld the recognition. Such a determination has the force of a final judgment (5).
Article 55 of the Law on Conservation and Utilization of Forests provides: "Whoever seizes national resources that have been recognized as national under the Forest Nationalization Act shall be sentenced to one to three years' imprisonment." Seizure here appears to mean taking possession of national lands with the aim of acquiring and asserting rights over them. Possession is analyzed into a material element and a mental element: the material element consists of the physical acts by which a person takes control of the land, such as keeping, using and exploiting it or altering it, while the mental element is the intention to possess the property as its owner (Shams, 1385, p. 123). After the Ta'zirat Law enacted in 1361, Article 690 of the Islamic Penal Code criminalized the forcible seizure of nationalized land. It provides: "Whoever, by staging acts such as digging foundations, building walls, altering boundary lines, effacing boundary markers, plot division, digging irrigation channels or wells, planting trees or crops and the like, creates the appearance of possession over arable land, whether cultivated or fallow, nationalized forests and rangelands, mountains, gardens, nurseries, water resources, springs, natural streams and national parks, agricultural, livestock and agro-industrial installations, barren and uncultivated lands, and other lands and property belonging to the State or to companies... or who without right holds himself out as entitled to possession, shall be punished with imprisonment of one month to one year." Under this text, the unauthorized forcible seizure of land and property belonging to the State or to private persons is punishable. A criminal complaint can lead to the desired result, however, only if the complainant proves that the property belongs to him: the complainant must first establish ownership, and in claims of dispossession, disturbance or obstruction of a right, the evidence required by the rules and standards governing civil actions for forcible dispossession must likewise be assessed and judged (Shams, 1382, p. 386). Under Article 158 of the Code of Civil Procedure, "an action for forcible dispossession is the claim of a former possessor that another has, without his consent, taken immovable property out of his possession, together with a request for the restoration of his possession."
In an action for forcible dispossession the parties need not prove a right of ownership over the property; the claim does not depend on ownership, even though the dispute concerns property that may or may not belong to the claimant (Shams, 1382, p. 355). By the application of Article 56 of the Law on Conservation and Utilization of Forests and the subsequent regulations, the former possession of private persons over national lands is terminated, and possession is established in the public interest in favor of the State, represented by the Ministry of Agriculture. Accordingly, although a person may once have possessed land later recognized as national, once the Forest Nationalization Act applies to it his possession no longer enjoys legal protection; under Article 36 of the Civil Code, "possession which is proven not to derive from an ownership-creating cause or a lawful transfer is not valid." Since the State's lawful possession has superseded that of the former occupier, the State may pursue a person who forcibly seizes such land under Article 690 of the Penal Code, and the occupier deserves the punishment prescribed there.
Paragraph Two: Seizure of lands whose recognition as national is not yet final
7. As to the criminal prosecution of persons occupying lands whose recognition as national is not yet final, the views of legal scholars, and judicial practice itself, diverge. Some consider the criminal prosecution of such occupiers possible; others hold that no prosecution is possible until the provisions of Article 56 have been fully implemented. Each of these views is described in this section.
The view that prosecution of the occupiers is possible
The argument for allowing criminal proceedings against occupiers of lands whose national status is not yet final is that, upon the adoption of the Forest Nationalization Act, forests, forest lands and rangelands ceased by operation of law to be private property and became public property. On this view, the rights of the public are to be preferred over individual rights, and encroachment must be prevented wherever it occurs.
Moreover, failure to prosecute the occupiers of lands pending final recognition would encourage widespread encroachment on public lands by profit-seekers; prosecution is therefore considered appropriate in order to prevent such abuse.
Furthermore, since under Article 2 of the Law on Conservation and Utilization of Natural Resources the protection of national forests and rangelands is the responsibility of the Forests, Rangelands and Watershed Management Organization (the former Forest Service), the performance of that duty requires the Organization to file complaints of forcible seizure of national lands, and the courts to address them and to prosecute and punish the perpetrators.
According to this group, a failure to object to the recognition of the land as national amounts to an implied acceptance of the determination; and a person who, despite knowing of the determination, objects to it must, while the objection is pending and before a final ruling has been issued, refrain from seizing the national land and must remove his possessions from it. Otherwise, the occupier acts without right and is subject to criminal prosecution. Some courts, adopting this view, have convicted occupiers of lands whose national status was not yet final under Article 690 of the Penal Code and sentenced them to imprisonment (e.g., Branch 1059 of the Tehran Criminal Court, judgment No. 198 dated 17/07/86).
The view that prosecution of the occupiers is not possible
According to this view, the prosecution of occupiers is possible only once the provisions of Article 56 of the Law on Conservation and Utilization of Forests and Rangelands and its subsequent amendments have been fully implemented. Its proponents rely on the following evidence.
First) Under the note to Article 55 of the Law on Conservation and Utilization of Forests, "the Ministry of Agriculture and Natural Resources is required, as soon as it becomes aware of an encroachment, to repel it through its forestry officers... criminal prosecution of the encroacher is subject to the full implementation of the provisions of Article 56." According to this note, prosecution of the occupier is possible only if Article 56 has been fully implemented.
Second) Opinion No. 97698/7 dated 23/12/78 of the Legal Office of the Judiciary states: "Before the definitive recognition of natural resources, the prosecution of the encroacher has no justification; once the recognition is definitive, the perpetrator should be prosecuted" (Golduzian, 1382, p. 411).
Third) Under Unifying Precedent Decision No. 35 dated 29/03/1353 of the General Board of the Supreme Court, "since Article 56 of the Law on Conservation and Utilization of Forests of 1348 explicitly assigns the recognition of national resources to the Ministry of Natural Resources, and interested parties may object within the period following the written notice or publication, and a procedure exists for handling such objections, criminal prosecution in the case contemplated by Article 55 is permitted only if the provisions of Article 56 concerning the recognition of national resources have been implemented" (Hodjati Palmer, 1385, p. 9) (6).
Fourth) The principle of strict interpretation of criminal law in favor of the accused requires, where applicable, that the prosecution of the accused be stayed until the recognition of the land as national has become final.
Fifth) Opinion No. 2047/7 dated 17/03/1382 of the Legal Department of the Judiciary states: "The prosecution of the accused and the imposition of punishment under Article 690 of the Penal Code are permitted only where the property is in the possession of the criminal complainant and not of other parties." Since the State's possession of national lands is established only upon the full implementation of the provisions of Article 56 and the finality of the recognition, and before then the State's possession of the property is not established, the prosecution of the occupier must be stayed until the recognition becomes final.
It may be added that where objectors attempt to prove ownership before the board of the single-article law, inquiries are addressed to the Real Estate Registration Office, which has reported, for instance, that "the location of the sub-plot does not correspond to the ownership documents issued, and ownership documents have been issued to persons for areas several times the actual size." According to Minutes No. 77 dated 09/11/70 of the Board of Inquiry concerning the single-article law, "so long as the competent registration office has not issued a certificate of ownership, the board of the single-article law may not entertain the objector's claim." Despite the express wording of the law staying the prosecution of the accused until the full implementation of Article 56, Unifying Precedent Decision No. 35 dated 29/03/1353 of the General Board of the Supreme Court, and the legal principles mentioned above, judicial practice is unfortunately not uniform in this area: some courts have convicted occupiers of lands whose national status was not final, among them judgment No. 198 dated 17/07/1386 of Branch 1059 of the Tehran Criminal Court (the special court for public treasury rights in national lands and natural resources), upheld by Branch 42 of the Tehran Court of Appeal, even though the recognition of the land as national remained, pending the final application of Article 56, open to renewed objection by the interested parties, including the owners, before the board of the single-article law.
In 1383, the Tehran Department of Natural Resources filed a complaint of forcible seizure with the Boomehen court against the occupiers of national lands comprising plots 78 and 79 of Tehran.
The assistant prosecutor of the Boomehen General and Revolutionary Prosecutor's Office, after making inquiries of the Gholhak Real Estate Registration Office, found that the registration of the above plots in the name of the Iranian government was in doubt and that the registration office had no record of the plots in the government's name; because the complainant's ownership of the occupied land was not established, the branch issued an order barring prosecution (No. 250 dated 29/02/84), which was upheld by Branch 1086 of the Tehran Public Court.
The New Town Development Company subsequently filed a separate complaint seeking the prosecution of the occupiers of the same plots. Branch 1 of the Boomehen investigation office, again because the complainant's ownership of the occupied land was not established, issued an order barring prosecution on 15/06/84, which was upheld and became final by judgment No. 1008 dated 30/07/84 of Branch 1086 of the Tehran Public Court.
In 1385, at the request of the former Department of Natural Resources, the District 3 Prosecutor's Office of Tehran (the special unit for treasury rights) authorized the renewed prosecution of the defendants under paragraph 3 of Article 3 of the Law on Public and Revolutionary Courts, on the ground of newly discovered evidence, and the matter was pursued further.
Branch 5 of the District 3 Prosecutor's Office of Tehran (the special unit for protecting the rights of the public treasury in national lands and natural resources) found the defendants culpable and, in the indictment, sought their punishment under Article 690 of the Penal Code.
Branch 1059 of the Tehran Criminal Court, relying on the determination sheets for the plots dated 21/09/52 and 16/05/52, and reasoning that Unifying Precedent Decision No. 681 dated 26/07/84 of the General Board of the Supreme Court — which holds that "under the Forest Nationalization Act and Article 2 of the Law on Conservation and Utilization of Forests and Rangelands, read with the Law on Protection of Natural Resources and Forest Reserves, national resources, apart from the exceptions specified in Article 2, belong to the Government of the Islamic Republic of Iran, and the absence of a title deed issued in the name of the government does not negate State ownership" — sufficed to establish the government's ownership of the plots, held the defendants' possession to be without right and, citing the destructive social effects of such conduct and the loss caused to the public treasury, convicted them under Article 690 of the Penal Code, sentencing them to one year's imprisonment, removal of possession, demolition of the structures erected on the national land, and restoration of the land to its former condition.
Defense counsel appealed, arguing: First) the provisions of Article 56 of the Law on Conservation and Utilization of Forests and Rangelands and its subsequent amendments had not been fully implemented with respect to the occupied plots, since the objections brought before the board of the single-article law had not yet been decided; the recognition of the lands as national was therefore neither final nor proven.
Second) Under the note added by the amendment to Article 55 of the Law on Conservation and Utilization of Forests and Rangelands, as well as Unifying Precedent Decision No. 35 dated 29/03/1353 of the General Board of the Supreme Court, the prosecution and punishment of occupiers of national lands is suspended until the provisions of Article 56 have been fully implemented.
Third) The strict interpretation of criminal law in favor of the accused requires that, so long as the recognition of the land as national is not final, the prosecution of the occupier be stayed.
Fourth) Unifying Precedent Decision No. 681 dated 26/07/84 of the General Board of the Supreme Court likewise presupposes the full implementation of the provisions of Article 56: it holds only that the non-issuance of a title deed in the name of the State does not negate State ownership where the provisions of Article 56 have been fully applied. On these grounds they sought reversal of the sentence on appeal.
On appeal, Branch 42 of the Tehran Court of Appeal, by judgment No. 128 dated 15/02/87, upheld the trial court's judgment, except that it overturned the orders for removal of possession and demolition of the structures, on the ground that under Clause 2 of Article 690 of the Penal Code such relief requires the filing of a petition; in that respect the trial judgment was reversed.
Conclusion
8. As has been seen, two different approaches have been proposed regarding the possibility of criminally prosecuting the occupiers of national lands before the full implementation of the provisions of Article 56 of the Law on Conservation and Utilization of Forests and Rangelands.
The first view emphasizes the need to safeguard the rights and interests of the community and, relying on the Forest Nationalization Act and Article 2 of the Law on Conservation and Utilization of Forests and Rangelands, allows the Forests, Rangelands and Watershed Management Organization to file complaints against the occupiers and prescribes their criminal prosecution.
The second view, by contrast, protects the rights of the accused and suspends his prosecution and punishment until the provisions of Article 56 of the Law on Conservation and Utilization of Forests and Rangelands have been fully implemented. On this view, pursuant to the note to Article 55 of that law, Unifying Precedent Decision No. 35 dated 29/03/1353 of the General Board of the Supreme Court, Opinion No. 97698/7 dated 23/12/78 of the Legal Department of the Judiciary, the strict interpretation of criminal law in favor of the accused, and Unifying Precedent Decision No. 681 dated 26/07/84 of the Supreme Court, proceedings brought before the definitive recognition of natural resources lack legal justification and must be stayed.
In our opinion, the principles of legal certainty and compliance with the governing rules and regulations favor the second view, and it is to be hoped that the courts will follow Unifying Precedent Decision No. 35 dated 29/03/1353 of the General Board of the Supreme Court and the letter of the legal texts, staying the prosecution of occupiers of national lands until the provisions of Article 56 of the Law on Conservation and Utilization of Forests and Rangelands have been fully implemented.
Footnotes
1- The Forests, Rangelands and Watershed Management Organization of Iran, now operating under the Ministry of Agriculture, performs the duties of the former Forest Service.
2- Public property is property dedicated to the direct use of all people or to the protection of the public interest, which the State administers on behalf of the public.
3- Under Note 3 of Article 2 of the Forest Nationalization Act, "cultivated areas and gardens, as well as installations and rural homes within such areas, whose ownership had been established by the date of enactment of this Act, are not subject to this law...".
4- Possession is the customary dominion and authority of a person over property; this authority rests on outward manifestations such that, by custom, the possessor is regarded as the owner of the property.
5- A final judgment is a ruling that is no longer open to objection or appeal, whether because the legal remedies have been exhausted or because the time for objection and appeal has expired (Article 22 of the Registration Act).
6- The majority of the judges of the Qazvin judiciary, addressing the possibility of punishing occupiers of national lands, have held that, since the provisions of Article 56 have not been fully implemented so long as the determination remains open to objection before the commission of the single-article Law on Determination of Land Disputes Arising from the Implementation of Article 56 — an objection for which no statutory deadline exists — and having regard to Unifying Precedent Decision No. 35 dated 29/03/1353, persons accused of encroaching on such lands cannot be prosecuted before the commission of the single-article law has issued its ruling. This view was approved by the Judicial Commission. Source: Judicial Meetings, the Penal Code, Vol. 2, p. 1461.
Role of liver stem cells in hepatocarcinogenesis.
Abstract
Liver cancer is an aggressive disease with a high mortality rate. Management of liver cancer is strongly dependent on the tumor stage and underlying liver disease. Unfortunately, most cases are discovered when the cancer is already advanced, missing the opportunity for surgical resection. Thus, an improved understanding of the mechanisms responsible for liver cancer initiation and progression will facilitate the detection of more reliable tumor markers and the development of new small molecules for targeted therapy of liver cancer. Recently, there has been increasing evidence for the "cancer stem cell hypothesis", which postulates that liver cancer originates from the malignant transformation of liver stem/progenitor cells (liver cancer stem cells). This cancer stem cell model has important significance for understanding the basic biology of liver cancer and has profound importance for the development of new strategies for cancer prevention and treatment. In this review, we highlight recent advances in the role of liver stem cells in hepatocarcinogenesis. Our review of the literature shows that identifying the cellular origin and the signaling pathways involved is a challenging issue in liver cancer, with pivotal implications for therapeutic perspectives. Although the dedifferentiation of mature hepatocytes/cholangiocytes in hepatocarcinogenesis cannot be excluded, neoplastic transformation of a stem cell subpopulation more easily explains hepatocarcinogenesis. Elimination of liver cancer stem cells in liver cancer could result in the degeneration of downstream cells, which makes them potential targets for liver cancer therapies. Therefore, liver stem cells could represent a new target for therapeutic approaches to liver cancer in the near future.
INTRODUCTION
Liver cancer is one of the most common tumors and represents the second leading cause of cancer-related death worldwide. Its incidence continues to increase while the prognosis remains gloomy [1] . Management of liver cancer is strongly dependent on the tumor stage and underlying liver disease. Unfortunately, most cases are discovered when the cancer is already advanced, missing the opportunity for surgical resection. For patients with unresectable or metastatic disease, however, no systemic treatment has been found to prolong survival in randomized studies and no systemic chemotherapy provides a sustained remission [2] . Although Llovet et al [3] showed that sorafenib, an oral multikinase inhibitor, prolonged the median survival and the time to progression in patients with advanced hepatocellular carcinoma (HCC), most of the recent phase Ⅲ trials of multi-targeted tyrosine kinase inhibitors (TKIs) have obtained disappointing results [4][5][6] . Thus, an improved understanding of the mechanisms responsible for liver cancer initiation and progression will facilitate the detection of more reliable tumor markers and the development of new small molecules for targeted therapy of liver cancer [3] .
Primary liver cancer (PLC) is a form of liver cancer that begins in the liver. The molecular mechanism associated with initiation and progression of PLC remains obscure. HCC is the most common type of PLC, representing more than 80% of the cases of PLC. Cholangiocellular carcinoma (CCC), the second most common PLC, accounts for approximately 15% of PLC cases worldwide [7] . Combined HCC and cholangiocarcinoma (cHCC-CC) is an uncommon subtype of PLC that displays components of both HCC and CCC and now accounts for 0.4% to 14.2% of all PLC cases, with significant variations from country to country [8][9][10] . Although all three subtypes of PLC begin in the liver, they show very different biological characteristics that have remained unexplained until now.
Stem cells are undifferentiated biological cells with the capacity to undergo extended self-renewal through mitotic division (to produce more stem cells) and to differentiate into mature cells. There are two broad types of stem cells in mammals: embryonic stem (ES) cells that are found in the inner cell mass of blastocysts, and adult stem cells that are found in various adult tissues. In adult organisms, stem cells are responsible for tissue renewal and repair, replenishing aged or damaged tissues [11]. Fifty-six years ago, Wilson and Leduc suggested that liver stem cells (LSCs) are present in the adult liver [12]. Later, accumulating evidence suggested that LSCs play a pivotal role in the initiation and progression of PLC. This review summarizes and discusses current knowledge regarding the role of LSCs in the hepatocarcinogenesis of PLC.
LSC CANDIDATES
The liver is known to comprise two epithelial cell lineages, hepatocytes and cholangiocytes, which are known to originate from hepatoblasts during embryonic development. LSCs are bi-potential stem cells that are able to differentiate towards the hepatocyte and the biliary lineages. Under normal physiologic conditions, LSCs are quiescent stem cells with a low proliferating rate, representing a reserve compartment [13]. Upon acute injury, the mature hepatocytes and cholangiocytes, which can be considered conceptually as unipotent stem cells, acquire unexpected plasticity by direct dedifferentiation into LSCs, compensating for the loss [14,15]. However, when the mature epithelial cells of the liver are continuously damaged or in cases of severe cell loss, LSCs are activated as a consequence and contribute to liver regeneration [13]. There are two possible sources of liver stem cells: endogenous or intrahepatic LSCs and exogenous or extrahepatic LSCs (Figure 1) [13,16].
Intrahepatic LSCs
Included in the intrahepatic LSC compartment are the adult liver stem/progenitor cells (referred to as oval cells), which are present in great numbers but with a short term proliferation capacity. In 1956, the term oval cell was first assigned by Farber [17] , who observed a population of nonparenchymal cells in the portal area of the rat liver after being fed ethionine, and described them as small oval cells with scanty, lightly basophilic cytoplasm and pale blue-staining nuclei. Over the past several decades, oval cells have been shown to be localized within the canals of Hering (the most peripheral branches of the intrahepatic biliary tree) [18,19] , interlobular bile ducts [20] , or in the periductular/intraportal zone of the liver [21] . These cells are called into action when hepatocytes/cholangiocytes are insufficient or unable to respond. Numerous investigators have concluded that oval cell activation was the first step in liver regeneration in response to certain types of injury [18,22,23] .
In addition, it has been reported that mature hepatocytes have the capacity to dedifferentiate into LSCs through a transient oval cell-like stage both in vitro and in vivo, which indicates that mature hepatocytes are direct contributors to the LSC pool [14] . Moreover, some investigators observed that liver regeneration also can proceed from a novel cell type, the small hepatocyte-like progenitor cells (SHPCs), which are phenotypically distinct from fully differentiated hepatocytes/cholangiocytes and oval cells [24,25] . However, some other researchers suggest that SHPCs may represent an intermediate cell type between mature hepatic parenchymal cells and oval cells rather than a distinct stem/progenitor cell population [26,27] . Thus, further studies are required to better understand this phenomenon.
Extrahepatic LSCs
Extrahepatic LSCs comprise ES cells and bone marrow stem cells (BMSCs), which are usually present in small numbers but have a long-term proliferation capacity. These cells have been reported to be capable of self-renewal, giving rise to oval cells and mature, fully functioning liver cells both in vitro and in vivo [22,28,29].
ES cells, continuously growing pluripotent stem cells derived from the inner cell mass of blastocysts, are capable of indefinite continuous culture and can generate all cell types in the body. Utilizing liver-specific marker staining and subsequent functional analysis, Jones et al [30] proved that murine ES cells can differentiate into hepatocytes. Using immunohistochemical assays and reverse transcription-polymerase chain reaction tests for hepatocyte-specific proteins and mRNAs, Kuai et al [31] confirmed that mouse ES cells can differentiate into functioning hepatocytes in the presence of hepatocyte growth factor and nerve growth factor-β. Similarly, increasing evidence shows that human ES cells can be progressively differentiated into definitive endoderm, LSCs, and hepatocytes/cholangiocytes [32,33]. Recently, several newly developed techniques have been reported to facilitate the in vitro maturation of human ES cell-derived hepatocyte-like cells [34-36]. BMSCs mainly contain two types of multipotent stem cells: hematopoietic stem cells (HSCs), which give rise to the three classes of mature blood cells; and mesenchymal stem cells (MSCs), which can differentiate into a variety of cell types such as osteoblasts (bone cells), chondrocytes (cartilage cells), myocytes (muscle cells), and adipocytes (fat cells) [37,38]. Both HSCs [39] and MSCs [40,41] have been shown to differentiate/transdifferentiate into oval cells and mature hepatic parenchymal cells, although these phenomena occur weakly and infrequently [42]. In addition, MSCs can be found in nearly all tissues, and various lines of experimental evidence have shown that non-bone marrow-derived MSCs such as adipose-derived MSCs (AD-MSCs) [43], umbilical cord-derived MSCs [44,45], and peripheral blood-derived MSCs [46] also can give rise to oval cells and mature liver parenchymal cells [47].
Other cell sources
Strikingly, LSCs also can be transdifferentiated from non-hepatic sources such as pancreatic cells and induced pluripotent stem cells. Rao and Reddy first reported that massive depletion of the acinar cell pool causes a change in the oval and ductular cells that results in transdifferentiation into hepatocytes. Pancreatic hepatocytes exhibit all the morphological and functional properties of liver parenchymal cells. The cells that generate hepatocytes have been thought to be pancreatic oval cells [48]. The results of the studies by Shen et al [49] and Marek et al [50] demonstrated that a rat pancreatic cell line, AR42J-B13, can be transdifferentiated into functional hepatocytes in vitro, expressing albumin and functional cytochrome P450s, in response to treatment with dexamethasone. Induced pluripotent stem cells (also known as iPS cells or iPSCs) are a type of pluripotent stem cell that can be generated directly from adult cells [51]. Yu et al [52] reported that liver organogenesis transcription factors (Hnf1β and Foxa3) are sufficient to reprogram mouse embryonic fibroblasts into induced hepatic stem cells. These reprogrammed cells can be stably expanded in vitro and possess the potential for bidirectional differentiation into both hepatocyte and biliary lineages. However, pluripotent stem cells readily form a teratoma when injected into immunodeficient mice, which is considered a major obstacle to their clinical application [53]. On this basis, Zhu et al [54] reported the generation of human fibroblast-derived hepatocytes that can proliferate extensively and function similarly to adult hepatocytes by cutting short reprogramming to pluripotency, generating an induced multipotent progenitor cell from which hepatocytes can be efficiently differentiated.
Induced pluripotent stem cells (also known as iPS cells or iPSCs) are a type of pluripotent stem cell that can be generated directly from adult cells [51] . Yu et al [52] reported that liver organogenesis transcription factors (Hnf1β and Foxa3) are sufficient to reprogram mouse embryonic fibroblasts into induced hepatic stem cells. These reprogrammed cells can be stably expanded in vitro and possess the potential for bidirectional differentiation into both hepatocyte and biliary lineages. However, pluripotent stem cells readily form a teratoma when injected into immunodeficient mice, which is considered a major obstacle to their clinical application [53] . On this basis, Zhu et al [54] reported the generation of human fibroblast-derived hepatocytes that can proliferate extensively and function similarly to adult hepatocytes by cut short reprogramming to pluripotency to generate an induced multipotent progenitor cell from which hepatocytes can be efficiently differentiated. self-renewal of LSCs generates a CSC population and highlight the important role of LSCs in hepatocarcinogenesis. A study by You et al [66] showed that inactivation of the tumor suppressor gene Tg737 results in the malignant transformation of fetal LSCs by promoting cellcycle progression and differentiation arrest. In a clinical study, Ward et al [67] concluded that PLC in children often arises from the malignant transformation of LSCs at an early stage. In a similar study, Ishikawa et al [68] considered that CCC may derive from the oncogenic transformation of normal LSCs. Collectively, extensive animal modeling and clinical studies have demonstrated that PLC is a disease derived from maturation arrest of LSCs [61] . This theory has been confirmed by the discovery of putative CSCs in the liver. Analysis of the cells in PLC supports the presence of cells with functional properties of somatic CSCs (e.g., immortality, resistance to therapy, and efficient transplantability), which indicates that PLC derives from liver CSCs (LCSCs) [61] . Suetsugu et al [69] isolated CD133+ cells from human HCC cell lines and demonstrated that these cells possess cancer stem/progenitor cell-like properties. Ma et al [70,71] and Yin et al [72] also identified a CSC population in HCC characterized by a CD133 phenotype, suggesting that CD133 might be one of the markers for HCC cancer stem-like cells. Side population (SP) cells are a sub-population of cells that are distinct from the main population and exhibits distinguishing stem cell-like characteristics. In a study of SP cells in different hepatoma cell lines, Chiba et al [73] concluded that SP cells in hepatoma cell lines possess extreme tumorigenic potential, which suggests that a minor population of liver cancer cells harbors LCSC-like properties. A variety of recent studies of hepatoma cell lines and clinical samples suggest that epithelial cell adhesion molecule (EpCAM) [74][75][76] , CD13 [77][78][79][80] , CD24 [81][82][83] , CD44 [84,85] , CD90 [86,87] , intercellular adhesion molecule-1 (ICAM-1) [88] , α2δ1 subunit of voltage-gated calcium channels [89] , and OV6 [90] may serve as putative LCSC markers. The CSC theory emphasizes the role of LSCs in the hepatocarcinogenesis of PLC. Although the aforementioned proteins and/or molecules have been postulated as putative LCSC markers, no definitive markers have yet been identified directly and widely recognized. Moreover, no LCSCs have been isolated [61] . 
Therefore, additional studies are needed to obtain a definitive molecular marker of LCSCs and to isolate LCSCs from PLC cell lines, animal models, and clinical samples.
OF LSCS
Based on the studies mentioned above, we can scientifically conclude that PLC may derive from neoplastic transformation of LSCs. However, the underlying molecular mechanisms are poorly understood. Studies investigating cancer and CSCs show that several key genes and regulatory signaling pathways are oncogenic, such as
THE STEM-CELL ORIGIN OF PLC
Several cell types in the liver, i.e., hepatocytes, cholangiocytes, and LSCs, have the longevity that is needed to be the cellular origin of PLC [19] . Determining the identity of the founder cells for PLC is more problematic and difficult. Therefore, unveiling the mechanisms by which these cells are activated to proliferate and differentiate during liver regeneration is important for the development of new therapies to treat liver diseases.
It is well known that different tumor cells can show distinct morphological and physiological features, such as cellular morphology, gene expression (including the expression of cell surface markers, growth factors and hormonal receptors), metabolism, proliferation, and immunogenic, angiogenic, and metastatic potential. This heterogeneity occurs both within tumors (intra-tumor heterogeneity) and between tumors (inter-tumor heterogeneity) [55] . In 1937, Furth et al [56] first demonstrated that a single malignant white blood cell is capable of producing leukemia. Afterwards, the cancer stem cell (CSC) hypothesis was proposed to explain the tumor heterogeneity phenomenon [57,58] . This model postulates that most cancer cells have only a limited proliferative potential. However, a small subset of tumor cells has the ability to selfrenew and is able to generate diverse tumor cells. These cells are defined as cancer stem cells (CSCs) to reflect their stem cell-like properties: indefinite potential for selfrenewal and pluripotency. This theory assumes that only CSCs have the ability to initiate new tumors, both at primary and metastatic sites. Thus, this theory indicates that only elimination of all CSCs is fundamental to eradicate tumors [57] .
Over the past few years, there is a growing realization that many cancers contain a small population of CSCs. However, the cellular origin of PLC is controversial and whether PLC contains cells that possess properties of CSCs requires further exploration. Numerous observations indicate that any proliferative cell in the liver can be susceptible to neoplastic transformation. In the past, it has been considered that HCC is derived from dedifferentiation of hepatocytes and CCC originates from the dedifferentiation of intrahepatic biliary epithelial cells. In contrast, cHCC-CC is thought to be derived from transformed LSCs [59,60] . More recently, due to the rapid progress of stem cell research, it is widely accepted that cancer is a disease of stem cells, as these are the only cells that persist in the tissue for a sufficient length of time to acquire the requisite number of genetic changes for neoplastic development [61] .
Previous studies reported by Steinberg et al [62] have shown that transfection of an active Ha-ras protooncogene into oval cells can lead to their malignant transformation. By using hepatitis B virus X (HBx) transgenic mice and a drug-induced liver injury model, Wang et al [63] found that HBx may enable malignant transformation and the acquisition of tumorigenic potential in LSCs, suggesting that liver cancer cells are of LSC origin. The results of Chiba et al [64,65] implied that disruption of the Bmi1, Wnt, Notch, Hedgehog, and transforming growth factor-β (TGF-β), and therefore are potentially involved in the malignant transformation of LSCs [91] . Here, current knowledge of these pathways is discussed.
Polycomb group gene Bmi1
Polycomb group (PcG) proteins are a family of transcriptional repressors that epigenetically remodel chromatin and participate in the establishment and maintenance of cell fates. These proteins play a central role in hematopoiesis, stem cell self-renewal, cellular proliferation and neoplastic development. To date, four distinct PcG-encoded protein complexes have been purified from different species: Polycomb repressive complex 1 (PRC1), PRC2, Pho repressive complex (PhoRC), and Polycomb repressive deubiquitinase (PR-DUB) [92] .
Bmi1, encoded by the BMI1 gene (B cell-specific Moloney murine leukemia virus integration site 1), is the most important core subunit of the PRC1 complex, which plays a pivotal role in the self-renewal of both normal stem cells and CSCs. Increasing evidence indicates that Bmi1 protein is elevated in many human malignancies including PLC and has a vital effect on tumorigenesis, cancer progression, and the malignant transformation of stem cells. Therefore, Bmi1 was identified as an important stem cell factor and a proto-oncogene [93] .
In PLC, a number of studies have shown that Bmi1 contributes to the maintenance of tumor-initiating SP cells [94] and can cooperate with other oncogenic signals to promote hepatic carcinogenesis in vivo [95]. Our empirical results suggest that Bmi1 is highly expressed in patients with PLC and correlates positively with the proliferation and invasiveness of human hepatoma cells [96,97]. Furthermore, Chiba et al [64,65] observed that forced expression of Bmi1 promotes the self-renewal of LSCs, and the transplantation of such cells that have been clonally expanded from a single LSC produces tumors that exhibit the histologic features of cHCC-CC. The above results indicate that Bmi1 plays a crucial role in the oncogenic transformation of LSCs and therefore drives cancer initiation.
Wnt signaling pathway
The Wnt signaling pathways are ancient and evolutionarily conserved pathways that transmit signals from outside of a cell through cell surface receptors to the inside of the cell and regulate cell-to-cell interactions. Wnt signaling is one of the most well studied molecular pathways during the human life span and involves a large number of proteins that are required for basic developmental processes such as embryonic development, cell fate determination, cell proliferation, cell migration, and cell polarity, in a variety of species and organs [98] .
Three major categories of Wnt signaling pathways are recognized: the canonical Wnt pathway in which the cytoplasmic protein β-catenin is a key mediator, the noncanonical planar cell polarity pathway (β-catenin independent), and the noncanonical Wnt/calcium pathway. Activation of the canonical Wnt/β-catenin pathway causes an accumulation of β-catenin in the cytoplasm and its eventual translocation into the nucleus to act as a transcriptional coactivator of transcription factors. Without Wnt signaling, β-catenin would not accumulate in the cytoplasm because it would be degraded by a destruction complex [99] . Ever since its initial discovery, Wnt signaling has had an association with cancer [100] . There is substantial evidence to suggest that dysregulation of Wnt signaling is critical for the initiation and progression of PLC [101,102] .
Wnt signaling pathways, particularly the canonical Wnt/β-catenin pathway, are also involved in the self-renewal and maintenance of embryonic and adult stem cells, and, as recent findings demonstrated, in CSCs. Functional characterization of LCSCs has revealed that Wnt/β-catenin pathways are critical for inducing the stem cell properties of hepatoma cells and in promoting self-renewal, tumorigenicity, and chemoresistance [103]. In the aforementioned HBx-mediated tumorigenic effects, Wang et al [63] suggest that HBx may enable LSCs with tumorigenic potential via activation of the Wnt/β-catenin signaling pathway. As shown in several in vivo and in vitro experiments, the Wnt/β-catenin signaling pathway contributes to the activation of normal and tumorigenic LSCs [104]. Moreover, Chiba et al [64] demonstrated that Wnt/β-catenin signaling activation strongly enhances the self-renewal capability of LSCs and generates a CSC population as an early event, thereby contributing to the initiation of PLC.
Notch signaling pathway
Notch signaling is a complex, highly conserved signal transduction pathway in multicellular organisms. In mammalian cells, the pathway is initiated when Notch ligands (Jagged-1, Jagged-2, and Delta-like 1, 3, and 4) bind to the epidermal growth factor (EGF)-like receptors Notch1-4. Signaling is processed by the enzyme γ-secretase, which results in the subsequent activation of downstream target genes [105,106]. The Notch signaling pathway functions as a major regulator of cell-fate decisions during embryonic development and adult life, and it is crucial for the regulation of self-renewing tissues. Accordingly, dysregulation of Notch signaling underlies a wide range of human disorders from developmental syndromes to adult-onset diseases and cancer [105,107].
As in other solid tumors, misregulation of the Notch pathway in PLC has been described as both oncogenic and tumor suppressive, depending on the cellular context [108]. Qi et al [109] reported that overexpression of Notch1 inhibits the growth of HCC cells by inducing cell cycle arrest and apoptosis. In 2009, the same authors showed that Notch1 signaling sensitizes HCC cells to tumor necrosis factor-related apoptosis-inducing ligand (TRAIL)-induced apoptosis [110]. In addition, Viatour et al [111] demonstrated that activation of the Notch pathway serves as a negative feedback mechanism to slow HCC growth during tumor progression. At odds with these findings, however, some recent studies have provided strong evidence in favor of the pro-oncogenic activity of Notch in PLC. For example, Wang et al [112] showed that aberrantly high expression of Notch1 is significantly associated with metastatic disease parameters in HCC patients, and shRNA-mediated silencing of Notch1 reverses HCC tumor metastasis in a mouse model. In human HCC cell lines, Gao et al [113] demonstrated that Notch1 activation contributes to tumor cell growth. In accordance, we have shown that Notch1 is overexpressed in human intrahepatic CCC and is associated with its proliferation, invasiveness and sensitivity to 5-fluorouracil in vitro [114]. Taken together, these data highlight the concept that the Notch pathway plays an essential yet controversial role in PLC, presumably depending on the tumor cell type, local inflammatory microenvironment and the status of other signaling pathways [115,116].
The aforementioned hypothesis was further supported by recent studies examining Notch signaling in the regulation of stem cells and in the development of LSC-driven PLC [117,118]. Utilizing a genetically engineered mouse model and comparative functional genomics, Strazzabosco et al [115], Villanueva et al [119] and Razumilava et al [120] observed that liver-specific Notch activation in mice recapitulates different stages of human hepatocarcinogenesis and results in HCC, including histological features associated with stem cell expansion. They also confirmed that Notch1 is a bona fide oncogene in experimental liver cancer. Using a transgenic mouse model, Zender et al [116] proved that stable overexpression of Notch1 in bipotential LSCs causes the formation of intrahepatic CCCs. Dill et al [121] and Cardinale et al [122] also reported that liver-specific expression of the intracellular domain of Notch2 (N2ICD) in mice is sufficient to induce HCC formation, while DEN N2ICD (diethylnitrosamine-induced HCCs with constitutive Notch2 signaling) mice develop large hepatic cysts, dysplasia of the biliary epithelium, and eventually CCC. These studies also suggested that the LSC compartment is the most likely candidate for oncogenic events [115,116,119-122].
Nevertheless, these newly published studies raise one question: how can one pathway, Notch signaling, contribute to two different subtypes of PLC, HCC and CCC? Of note, the balance between Notch and Wnt signaling has been proposed to be crucial for the determination of LSC cell fate in liver disease. Activation of Notch signaling in LSCs leads to biliary specification; in contrast, Wnt signaling activation inhibits default-activated Notch signaling via Numb (a target of canonical Wnt signaling), allowing LSCs to escape the biliary cell fate and acquire a hepatocellular specification [123-125]. Therefore, based on previous studies and to the best of our knowledge [123-126], we propose that the balance between the Notch and Wnt signaling pathways determines the oncogenic transformation of LSCs into the HCC, CCC, or cHCC-CC phenotype. The predominance of Notch over Wnt signaling in LSCs leads to the CCC phenotype, whereas activation of Wnt signaling likely prevents activation of the Notch pathway and thus leads to the HCC phenotype. When the two signaling pathways are balanced, the cell has a higher probability of adopting the cHCC-CC phenotype. In summary, the role of such a pleiotropic pathway in liver regeneration and liver diseases seems to be highly context dependent. Additional research is required to clearly establish the effects of the Notch signaling pathway during hepatocarcinogenesis.
Hedgehog signaling pathway
The Hedgehog signaling pathway is one of the key regulators of embryonic development. Mammals have three Hedgehog homologues, Sonic (SHH), Indian (IHH), and Desert (DHH), of which Sonic is the best studied. Like the Wnt and Notch pathways, the Hedgehog signaling pathway also plays significant roles in stem cell selfrenewal [127] and cancer cell proliferation [128,129] .
Sicklick et al [130] showed that Hedgehog signaling is conserved in hepatic progenitors from fetal development through adulthood and is essential for the maintenance of LSC survival. In a study reported by Jeng et al [131] , the SHH pathway is activated in CD133+ mouse liver cancer cells that harbor stem cell features. In human CCC tissues and cell lines, El Khatib and colleagues [132] demonstrated that inhibition of Hedgehog signaling attenuates carcinogenesis in vitro and increases necrosis in CCC. Chen et al [133] showed that enhanced Hedgehog signaling activity may be responsible for the invasion and chemoresistance of hepatoma subpopulations. In a fibrosis-associated hepatocarcinogenesis model, Philips et al [134] further established that Hedgehog signaling pathway activation promotes hepatocarcinogenesis while inhibiting Hedgehog signaling safely reverses this process even in advanced HCC.
TGF-β signaling pathway
The TGF-β signaling pathway is involved in various cellular functions in both the developing embryo and the adult organism including cell growth, cell differentiation, apoptosis, and cellular homeostasis. The pathway is activated upon binding of TGF-β to its receptors, TGF-β receptor Ⅰ (TGFBR1) and TGFBR2, resulting in the translocation of Smad proteins to the nucleus where they act as transcription factors and participate in the regulation of target gene expression [135,136] .
The role of TGF-β in tumors is rather complicated. In healthy tissue, it acts as a tumor suppressor controlling the cell cycle and inducing apoptosis. During carcinogenesis, TGF-β acts as a potent inducer of cell motility, invasion and metastasis. In liver cancer, TGF-β has been shown to have both tumor-promoting and tumor-suppressing effects, and its expression is decreased in early but increased in later stages of carcinogenesis. Although the underlying molecular mechanisms remain largely undefined, it had been speculated that the dual role of TGF-β signaling in liver cancer results from its effect on the tumor microenvironment [135,136] .
It has long been known that TGF-β signaling is vitally involved in stem cell renewal and lineage specification, including in LSCs [137] . Recently, TGF-β signaling has also been linked to the malignant transformation of LSCs in hepatocarcinogenesis. Nishimura et al [138] reported that TGF-β treatment increases the percentage of SP cells in a hepatoma cell line. Yuan et al [139] reported that HCC cells with aberrantly high expression of TGF-β signaling that are positive for Oct4 (octamer-binding transcription factor 4) are likely cancer progenitor cells with the potential to give rise to HCC. Using several experimental approaches, Wu et al [140] confirmed that long-term treatment of oval cells with TGF-β impaired their LSC potential but granted them tumor-initiating cell (TIC) properties including the expression of TIC markers, increased selfrenewal capacity, stronger chemoresistance, and tumorigenicity in nude mice. In opposition to these findings, however, Tang et al [141,142] showed that activation of the interleukin-6 (IL-6) signaling pathway induces neoplastic transformation of LSCs along with inactivation of the TGF-β signaling pathway. Lin et al [143] suggested that disruption of TGF-β signaling is an important molecular event in the transformation of normal LSCs to cancer progenitor/stem cells. These data suggest an important but contradictory role for TGF-β signaling in LSC-driven hepatocarcinogenesis, potentially due to the interaction with other signaling pathways.
ENDOTHELIAL TRANSDIFFERENTIATION
Interestingly, CSCs can potentially transdifferentiate into cell types other than the original type from which the tumor arose. Several recent studies have shown that CSCs also can transdifferentiate into functional vascular endothelial cells that line the tumor vasculature, mediating tumor growth and metastasis [144][145][146] . In 2010, Wang et al [147] and Ricci-Vitiani et al [148] provided strong evidence that a proportion of the endothelial cells that contribute to blood vessels in glioblastoma originate from the tumor itself, having differentiated from tumor stemlike cells. Wang et al [147] also demonstrated that blocking VEGF (vascular endothelial growth factor) or silencing VEGFR2 (VEGF receptor 2) inhibits the maturation of tumor endothelial progenitors into endothelium but not the transdifferentiation of tumor stem-like cells into endothelial progenitors, whereas γ-secretase inhibition or Notch1 silencing blocks the transition into endothelial progenitors. Subsequently, multiple studies have confirmed the presence of tumor-derived endothelial cells in several other malignancies, such as renal [149,150] , ovarian [151] , and breast cancers [152,153] , which suggests that this is a general phenomenon in CSCs.
Similarly, Marfels et al [154] found that chemoresistant hepatoma cells show increased pluripotent capacities and the ability to transdifferentiate into functional endothelial-like cells both in vitro and in vivo. These tumor-derived endothelial cells possess increased angiogenic capacity and resistance to drugs (including chemotherapeutics and angiogenesis inhibitors) compared with normal endothelial cells [155,156] . Taken together, these data may provide new perspectives on the biology of CSCs and reveal new insights into the mechanisms of resistance to anti-angiogenesis therapy.
CONCLUSION
Our review of the literature shows that identifying the cellular origin of PLC and the signaling pathways involved remains a challenging issue, with pivotal implications for therapeutic perspectives. Although dedifferentiation of mature hepatocytes/cholangiocytes in hepatocarcinogenesis cannot be excluded, neoplastic transformation of a stem cell subpopulation more easily explains hepatocarcinogenesis. Elimination of LCSCs in PLC could result in the degeneration of their downstream progeny, making LCSCs potential targets for liver cancer therapies. Therefore, LSCs could represent a new target for therapeutic approaches to PLC in the near future. However, although the outlook for LSC-directed therapy is promising, its efficient clinical application will demand further scientific advances.
Vulnerabilities experienced by family members/caregivers of children with chronic conditions
The objective was to know the vulnerabilities experienced by family members/caregivers of children with a chronic condition. This was a qualitative research study supported by the theoretical framework of the French philosopher Roselló, in which 15 family members/caregivers of children with chronic conditions participated. The information was collected in the years 2018 and 2019 and submitted to thematic analysis. The results are presented in three themes: the disease as an expression of the vulnerability of being a child; the child's chronic illness as a condition of vulnerability of the family member/caregiver; and the aid of support networks: potentialities and vulnerabilities in the daily life of children with chronic conditions and family members/caregivers. Knowing the components of vulnerability experienced by the families of children with chronic conditions is complex, as it requires analyzing and reflecting on the situations these families face, considering their peculiarities, feelings, family organization and the accessibility they have to health services. Therefore, knowledge about the context in which these families are inserted is essential to establish adequate planning of health actions aimed at promoting their well-being.
Introduction
Added to the diagnosis of a chronic condition, illness in childhood imposes on children a life different from the one imagined/idealized, as living with this condition generates a series of extremely complex feelings and situations, both for children and for their family members/caregivers (BELLATO et al., 2015). Over time, chronic diseases in childhood cause sequelae that impose limitations on children, requiring special care skills and competencies from their family members/caregivers (XAVIER et al., 2020). A family member/caregiver is considered to be any person with a strong personal connection with the child, such as a close relative, for example the parents. Caregivers provide extensive assistance in all aspects of everyday life, performing direct care (ADASHEK; SUBBIAH, 2020).
These family members/caregivers experience what Roselló (2009) calls existential vulnerability when they realize the ontological vulnerability of the child as a human, finite and vulnerable being. Consequently, this distress arises from the perception of the child as exposed to existential facticities, vulnerable to suffering and illness.
Suffering is enhanced when family members/caregivers realize that the child's disease is incurable and when living with uncertainties, insecurities and continuous needs to reorganize everyday life in order to meet the care demands (BROCK et al., 2018). Therefore, regardless of the chronic condition, a change in life is assumed, one that is not only related to the somatic structure of the human being but also to their integrity, associated with suffering. Each family interprets illness through their own perceptions, culturally incorporated in and influenced by their way of being in the world and by the affective relationships established with and among its members (ROSELLÓ, 2009).
Coping with a chronic condition causes transformations in the life of the children and their family, requiring intense emotional involvement, from the impact of the diagnosis to the implications of the disease throughout life. Thus, health care becomes complex, requiring accountability from the family members/caregivers, which can go beyond the competencies inherent to caring for a child (PIMENTA et al., 2020).
Thus, including these family members/caregivers in the care provided by health professionals does not mean a resolution of the painful experience faced, but the possibility of helping them bear this experience, overcoming it in the physical, emotional, moral, social and spiritual senses (ROSELLÓ, 2009).
Despite all the limitations imposed by vulnerability, this experience can become something positive, impelling human beings to seek recovery of their autonomy, threatened by the human existential condition. The experience of vulnerability can occur along different axes: Ontological (the constitution of the being, which is limited, dependent and determined by its finitude); Ethical (related to the moral duty to protect the weakest individuals); Social (the possibility of the human being becoming an object of violence in the social environment); Natural (the environmental setting affects the life of the human being and vice versa); and Cultural (ignorance about the different orders of knowledge, rendering the individual manipulable and unprotected from abuse of power by others). Four of these vulnerabilities were identified in this research, namely: Ontological, Ethical, Social and Cultural (ROSELLÓ, 2009).
Considering that vulnerability has varied permanence and intensity, it is important to understand the spheres that produce human vulnerability in the health field. In this way, health professionals and services will be able to offer autonomy to the children and their families, in order to manage the experience of living with the chronic condition.
Based on the above, and in order to contribute to the discussion on this theme, the following research question was established: Which are the vulnerabilities experienced by family members/caregivers of children with chronic conditions? The objective was to know the vulnerabilities experienced by family members/caregivers of children with chronic conditions.
Methodology
A qualitative research study supported by Roselló's theoretical framework (2009), which describes vulnerability as an experience intimately rooted in the human condition. Thus, human beings live in vulnerability, exposed to the danger of getting sick, being attacked, failing and dying. The study was conducted and structured according to the Consolidated Criteria for Reporting Qualitative Research (COREQ) (SOUZA, 2021).
This study is part of a multicenter research study developed in four municipalities from Rio Grande do Sul (Porto Alegre, Santa Maria, Palmeira das Missões and Pelotas) and in one from Santa Catarina (Chapecó), entitled "Vulnerabilities of children and adolescents with chronic diseases: Assistance in a health care network". The data presented in this manuscript refer to the information collected in Pelotas.
The information was collected in 2018 and 2019 by the research group members, previously trained to do so. The study participants were the family members/caregivers of children with chronic conditions (admitted to Pediatrics units of public hospitals in the municipality in question), with the following inclusion criterion: being a family member/caregiver of a child aged from six to 12 years old with a chronic condition. Family members/caregivers of children in palliative care or under critical life situations were excluded.
For data collection, semi-structured interviews were used, with open and closed questions about the perspective of the family members/caregivers on the experience of the child's chronic condition. The interviews were conducted in the homes of the family members/caregivers, and the meetings were previously scheduled. The interviews lasted a mean of 60 minutes, were recorded on a cell phone and manually transcribed in full (with double-checking). A total of 15 family members/caregivers took part in the study: ten mothers, three fathers and two grandmothers. The information reached saturation when no new element was found in the new participants' speeches, with no need to add new information to understand the phenomenon studied (HENNINK; KAISER; MARCONI, 2017).
The ethical precepts set forth in Resolution No. 466/12 were respected (BRAZIL, 2012). To this end, before conducting the research, the project was submitted to and approved by the Research Ethics Committee under CAEE 54517016.6.1001.5327 and opinion number 1,523,198. The participants' identity was preserved by naming them with the consonant "F" (Family member), followed by an increasing numeral according to the order of the interviews (F1, F2, ...). The information was analyzed in an inductive way, using thematic analysis and following six stages: (1) Familiarization of the researcher with the data (reading and re-reading of the data with note-taking of initial ideas); (2) Generation of initial codes (systematic coding of the pertinent characteristics of all data, as well as collection of important data for each code); (3) Search for the topics (compilation of the codes into possible topics, joining the important data for each potential topic); (4) Creation of the thematic map (checking whether the topics work in the coded extracts and in the full dataset, originating a thematic map of the analysis); (5) Refinement of the topics (the analysis is deepened to improve the particularities of each topic); and (6) Preparation of the final report (final examination of the selected extracts and association between the analyses, the research question and the scientific bibliography to prepare the analysis report) (BRAUN et al., 2019).
The thematic map prepared is presented below (Figure 1).
Results and Discussion
It was found that the family members/caregivers experience situations of ontological, ethical, cultural and social vulnerability, which are closely interconnected and influence each other. It is not possible to treat them separately, and the entire situation in which the person finds themselves should be understood, as these conditions interfere with the child's health and with the care provided by the family member/caregiver.
The disease as an expression of the vulnerability inherent to being a child
This topic presents the reports of family members/caregivers about their perspectives on the vulnerabilities experienced by children with chronic conditions. The participants cited as care difficulties the limitations that affect both speech and locomotion, as well as the child's agitation, thus reaching the child's ontological constitution:
Then everything becomes more difficult, because the child doesn't walk or speak [...].

Another issue evidenced with the ontological vulnerability inherent to human beings was the difficulty faced by children in not accepting the restrictions imposed by their conditions and/or treatments. Diabetes, for example, implies changes and restrictions in habits, especially in terms of food:

[...] so the child goes to a little party, for example, she has to take care of what she eats [...] we have to take care of what she eats, because otherwise she messes around, easily gets to eat a candy. [...] so we take care of that [...] but it's control [...] sometimes it becomes a difficulty [...]. (F7)

Children do not understand the severity of the disease and the consequences it can generate; caregiver F11 needs to always be "quarreling", because the child does not understand the reason for not being able to do things that other children do, evidencing social vulnerability:

He's not mature enough to understand his disease, he's always angry, always quarreling [...] he already has the rebellion of his age, right, he already has this aggravating factor generating a lot of things. He can do nothing, nothing I allow, because I don't trust him because he's lied to me several times, his lie is to say that he can when I'm not with him and he can't eat or take anything. [...] Always this rebellion of wanting to do something that can't be done [...]. (F11)

F11's speech also reveals the mother's overprotection of the child, who can only do something if she is with him, limiting and restricting the child's socialization with other children, thus increasing his social vulnerability.
Living with a chronic condition gains a greater proportion when it affects a preadolescent. This scenario is characterized as ontological vulnerability, due to not understanding the new existential condition. As chronic diseases in childhood and adolescence are unexpected and unwanted by the family, many difficulties arise to be faced (FREITAG et al., 2020).
These children experience many changes in their everyday lives, such as changes in their diet, physical exercise, blood glucose monitoring and insulin application, which are oftentimes performed by the pre-adolescents themselves. Therefore, self-care regarding the hemoglycotest (HGT) and insulin application can be perceived negatively, as it generates pain and discomfort, demotivating and causing depressive symptoms. However, the pre-adolescents' autonomy generates positive feelings, such as the ability to take care of themselves and be self-responsible. Independence also increases adherence to the treatment, contributing to positive results (BERTOLDO et al., 2020).
It is difficult for a child to accept the treatment, which is initially frightening; it evidences ontological vulnerability due to the fear of being injured, as well as cultural vulnerability due to not understanding the chronic condition, as observed in F7's report about the child's fear of needles.
It was the initial acceptance [...]. The fear of needles all the time, the child was panicking about it. (F7)

Chronicity exposes children to invasive procedures, which involve painful and frightening experiences. In an attempt to reduce these negative experiences, it is necessary to prepare the child, and resorting to therapeutic toys is an excellent tool. Playing assists in children's healthy development, offering broad physical, emotional, cognitive and social benefits; in addition, it allows them to develop their motor skills and to simulate scenarios and their consequences in a safe and engaging way, reducing stress and preparing them for the procedures to be performed (NIJHOF et al., 2018).
It is believed that, when experiencing a disease, human beings recognize their own vulnerability and the unpleasant and frail character of the human body. A sick person realizes the constitutive frailty of their own being and thus comes to know themselves better (ROSELLÓ, 2009).
Humans are potentially sick beings, as they can fall ill at any time precisely because of their intrinsic vulnerability; thus, the disease and the process of becoming ill are evident and touching forms of human vulnerability. This human capacity to get sick can be understood as the hallmark of human vulnerability (ROSELLÓ, 2009). However, there are conditions that favor the care of children with chronic conditions, such as their acceptance and understanding of the disease and treatment:

It was easier at home, we explained it to the child and she understood better [...] since she was very young she already controls her own insulin, already measures it and everything, she does everything. (F7)

Thus, when children understand their condition and the limitations they are exposed to, their vulnerabilities are minimized.
The child's chronic disease as a vulnerability condition of the family member/caregiver
The participants outlined their ontological vulnerabilities as human beings who care for children experiencing illness and a chronic condition. The initial shock of the chronicity diagnosis and the questions about whether or not there was some failure in their care, added to the feeling of guilt or blame for the onset of the disease, also evidence their ethical vulnerabilities as the people responsible for the child's care:

it wasn't negligence because what we hear most is condemnation, what did you do? Why didn't you take care? What did this kid eat? [...] they all think it's the mother's negligence. [...] I keep blaming myself. (F11)

The chronic condition causes changes in family dynamics; the family members seek strategies to face and adapt to this reality, aiming to provide the most appropriate care (GOMES et al., 2017). Roselló (2009) brings up the reflection that human beings' vulnerability is closely linked to disease, as it supposes a change in the person's life, which does not only refer to the somatic structure of human beings, but also to their integrality.
Faced with the child's chronic condition, vulnerability is directly linked to care, as well as to the idea of responsibility. Regarding the child's illness, the blame that falls on the caregiver generates distress, and the condition is identified as negligence. In this sense, it is noticed that support is often lacking for these caregivers, adding to the ethical vulnerability they experience, as they see themselves as having the moral duty to protect children (ROSELLÓ, 2009).
The reports present feelings triggered in the family members/caregivers facing the diagnosis of the child's chronic condition, mainly fear of losing the child or that some complication resulting from the chronic condition will leave sequelae. Thus, ontological vulnerability is characterized in the caring human being, perceived in view of the possibility of the child's finitude. In addition to that, ethical vulnerability is also perceived, referring to the need to protect a child in their frailty:

[...] he has no idea of the severity of the disease. He's already been hospitalized nine times in these three years and has been to the ICU (Intensive Care Unit) four times. I thought I was going to lose him that last time he was hospitalized, it was very difficult, I've never seen him so bad [...] we know that a lot of things happen, it's amputation, it's the organs that stop little by little and then I'm already terrified that he's only thirteen years old and has already been hospitalized so many times [...]. Then the doctor tells me that I'm overestimating it, but I think it's fear. (F11)

If it was no use [...] they would take it and remove it, amputate it, which was to avoid the risk of going up because it goes up, it becomes general [...]. I sleep, I wake up every day thinking terrified, how must it be [...], but who can sleep peacefully, can lay their head on the pillow peacefully knowing everything that can happen [...]. (F13)

In these reports, the constant concern of the family members/caregivers with the hospitalizations and the complications inherent to the pathology is evidenced, potentially traumatic situations, even affecting sleep due to worry. In addition to that, F11's speech reflects overprotection of the child, evidencing the caregiver's ethical vulnerability as the person responsible for dealing with his frailty.
The family members/caregivers faced with the diagnosis of the chronic condition encounter doubts, fears and uncertainties, but time is too short to understand and organize these feelings, get to know the disease, provide resources and adapt to the reality imposed. Therefore, ontological/ethical vulnerability oftentimes goes unnoticed, may increase over time and not be resolved (DIAS et al., 2020).
Seeing the child's suffering revolts family members/caregivers, as they recognize themselves as incapable of managing it, organizing it and submitting it to logic. Pain erupts into human existence without a prologue; it appears on the stage of individual life and alters the dimensions of being (ROSELLÓ, 2009). The parents experience several feelings after the diagnosis of a chronic disease in their children, such as anguish, fear, guilt, hopelessness, impotence and insecurity. These feelings deserve attention from health professionals, as understanding the experience of patients and their families is fundamental to provide care according to their needs and to identify better strategies according to their unique experiences (HAWKINS et al., 2020).

Through the reports, the anguish suffered by the family members/caregivers is perceived; F11's speech shows that, at times, the caregiver states that they are fine in order to remain strong in the face of the adversities experienced but, in fact, they feel physically and emotionally overloaded:

And we understand that it's difficult, the child's side is not easy either. But we have to be a little tougher, because if we let it fade, it's worse. We have to be strong. (F11)

The participants report a feeling of helplessness when they do not notice any improvement in their children, since, even doing their best so that the children do not suffer or feel pain, they sometimes cannot solve the difficulties and do not know who to turn to. Thus, ethical vulnerability intensifies:

Pulled the information and I didn't even know what to do [...] the least from a mother, I think, is to take care and always stay on top watching things over [...] and it's not for lack of care, because I'm always on top of him [...] I don't miss any consultation. (F11)

[...] he was with a fever, he was in pain and he (doctor) said that all of this was normal, that it would pass with time, that there was nothing else to do. [...] they know that I'm going to stomp my feet, if I have to go to the prosecution again I will, like I did a report on television when they were denying assistance to him [...]. (F13)

[...] everyone blames me, they tell me that I had to go further, that it was necessary to demand. But no matter how much you demand, you're not in charge [...]. I just ran doing everything I could from side to side several times, but [...] I don't see it advancing [...]. Just me alone is no use. (F13)

this is something that hurts, because you know it's your child who needs it [...] you have nowhere to run. [...] they say they've already tried to do everything and that they don't know what they're going to do now [...]. (F13)

For being responsible for the child's care, when the diagnosis of the chronic condition appears, the main caregivers feel guilty, assuming responsibility for the disease, believing that they did not do everything possible to avoid it or that they did something wrong. The feeling of guilt arises with the onset of the pathology due to the negative meaning of this experience. It is precisely this feeling of guilt that causes parents/caregivers to overdo care, overprotecting (PIMENTEL et al., 2017). Witnessing a child's suffering and pain generates feelings of anguish, fear and, above all, impotence in parents, as they are unable to do anything to stop their child's pain/suffering (FAIRFAX et al., 2019; MEDEIROS et al., 2020).
Human beings are frail because they are finite, and they only live safely when they identify their own vulnerability and that of others, learning to deal with it in order to live with it. Thus, their greatest vulnerability to finite existence is revealed, and its only certainty is death (ROSELLÓ, 2009).
In F2's report, it is possible to perceive the vulnerabilities faced due to the caregiver's visual limitation. When the school staff calls warning about any complication with the child, the caregiver needs the help of her other daughter to be able to fetch the child from school:

There's no way I'm going alone to fetch the child, where I leave my girl would go with me, because I'm alone so I can walk, like this the holes I can twist my foot. Or when walking like this I can stumble, I may fall. [...] because of the sight difficulty. (F2)

Complementarily, other conditions inherent to the caregiver also bring about weaknesses in the care of a child with a chronic condition, as in the case of F1: when she was pregnant, she had to stop her daughter's treatment, as she was unable to accompany the child:

[...] there was even a time when she stopped doing it because I couldn't bring her anymore, because I was pregnant and then we slept both in the same [...] hospital bed, or in that armchair there and I was at the end of my pregnancy, so everything was very uncomfortable and then she stopped going, that's when we went to the blood center again. (F1)

Another important aspect that contributes to care discontinuity and to difficulties during hospitalization is the family member/caregiver having a job and/or other children:

The only difficulty I had was because I work [...] I had to go out to work [...] and I left the other two at home, because I have another two children and I went to stay with her [...] all the time with her, she was afraid to be alone. (F5)

My difficulty, as I have three children, of staying with her all the time, I can't, I have the other two, so I even can't manage [...] be always in my role, and my husband is always working. (F15)

The parents' presence during children's hospitalization generates several concerns, as it is necessary to understand that most of them are not able to stay all the time with the hospitalized children due to unfavorable socioeconomic conditions that do not allow their absence from work or even from the care of the other children, having no one to leave them with.
It is generally the mother who gives up her job to take care of a hospitalized child; in many cases, the workplace does not accept the child's medical certificate. In addition to that, the other children also undergo changes in their everyday lives, ceasing to attend school and being cared for by family members and friends, even outside their homes (MELLO; FRIZZO, 2017).
The aid of support networks: potentialities and vulnerabilities in the everyday life of children with chronic conditions and their family members/caregivers
The family members/caregivers point to the support networks as minimizing the vulnerabilities they face. Most often, these networks are made up of family members, godmothers, colleagues and friends:

It's just me and my husband and my mother-in-law who send the little pump, there's no one else. (F2)

[...] staying with my mother, my mother-in-law. (F3)

Only my family. (F4)

I have my family [...] there are seven of us. (F5)

There is the godmother who's always here, she's always attentive too [...] she's very careful [...]. (F8)

It's quiet, in the role like this at home [...] everyone I think has become used to the role of taking care of her. So each one takes care a little bit. (F10)

It was all very natural, very correct, everyone helped, friends, family members, everyone helps, the colleagues are an example, they're always helping. (F12)

His father, I myself, sometimes my neighbors [...] and my mother and my father. (F13)

It's me, his father, and his eight-year-old sister. It's the four of us. (F11)

No, it's just me and his father. (F14)

My husband [...] helps me with her, who stays with them so I don't have to leave the service. Staying with her, staying with the other one, so I can be with her. [...] there are my sisters-in-law, my sisters-in-law work. [...] they'd like to help more. (F15)

The child's chronic condition generates in the families the need to overcome ontological vulnerability, imposing transformations and struggles, with obstacles that need to be overcome. In addition to that, when they gradually realize their vulnerability, human beings can seek shelter against it, empowering themselves with their rights and thus reducing their vulnerabilities. Thus, when identifying their weaknesses and difficulties, family members/caregivers look for support networks in order to minimize the situations of vulnerability they experience (ROSELLÓ, 2009). Social support is important for the family in coping with the chronic condition (SILVA et al., 2017).
In its very essence, the care process implies the virtue of responsibility. Caring for a frail or vulnerable human being is exercising a form of social responsibility (ROSELLÓ, 2009). For the families of children with chronic conditions, the social support network is the one that helps them face this condition (GOMES et al., 2019).
In addition to family members and friends, health professionals and religion are also seen as a support network:

Nutritionist, endocrine surgeons, doctors. (F12)

And I have a second family, because we're Jehovah's Witnesses, right, and we're a family, so when one is in need the other is always there. (F5)

I read a lot of Spiritist books and it's still some support [...]. (F15)

It is observed that religiousness and spirituality are important and constitute a support network for these families, representing a source of hope in coping with the difficulties. In addition to that, they contribute to minimizing the pain and anguish that mark the everyday life of the family member/caregiver, relieving the fears arising from the disease (NEVES et al., 2017).
Lack of bonding and support from the health services to these families enhances the social vulnerability faced, weakening them even more. The family members/caregivers report not using the Basic Health Units (BHUs) for several reasons: not having a specialist physician (pediatrician), the units not being resolutive, or not having enough records (appointment slots); thus, they end up resorting to the Emergency Departments (ERs) when they need care, especially when the chronic condition is exacerbated.
She hardly goes to the health center; when [...] I feel that she's sick, that she has something wrong, I already take her straight to the ER [...] there they always treat her, the pediatricians already know the case and everything. (F6)

What makes it difficult is what sometimes you think you can consult on a small health center and then sometimes there's no doctor, it's difficult now. [...] sometimes you have to take out a record and there are three, four records, and there's no way, then the resource is the ER itself [...]. (F8)
There's no pediatrician. And the information here by the health center is much more difficult [...] we tried, but I had a problem when collecting, because they collected it wrong [...]. Then I gave up on this function in the health center [...] I think that then it's the thing of not having a pediatrician, right, in the center, with no way for you to do any follow-up. [...] then either you take it to the ER [...] to be treated by a pediatrician or you pay. (F10)

BHUs should be the gateway to the health service, but what happens is a great demand for medium- and high-complexity services. Such demand can be explained by the low resolutiveness of Primary Care, professionals without specific training for this care, and insufficient devices and materials, with migration of users to urgency and emergency services (FREIRE et al., 2020).
In addition to that, when there is a bond with a specific health professional, people seek the service where that professional works. This is often only established after a pilgrimage through multiple health services.
[...] then we returned and she continued undergoing treatment with [...] (name of the physician), who's the one treating her to this day. We always consult in the medical school [...] there was a time that I went to [...] (name of the health service), then she started to do the transfusions in the blood center, then she stopped doing it in the blood center and started to hospitalize because in the blood center there were no more doctors, the doctor had left. Then she stayed [...] a year and a little, I think, with the hospitalizations at [...] (name of hospital) every month and now we managed to move to the blood center again. (F1)

In this sense, health professionals should articulate support networks so that family members/caregivers do not feel helpless. Nursing may contribute with diverse information on the pathology, treatment, care and prevention of future complications, also paying attention to the caregivers, who suffer together with the children (GOMES et al., 2016).
It is noticed that many families do not have any health service and/or professional as a support network and that, without due support for care and continuous monitoring of the child, they seek the emergency care service when the disease is exacerbated, or try to circumvent the situation with their own knowledge:

[...] he does the treatment at home, now these days he used the little pump for not needing to take him to the hospital, because if I take him [...] he'll end up hospitalized [...]. (F2)

When it's like this, something lighter [...] then we give predsim, there's nasonex also, which he uses and that I have it prescribed. So we control [...] what you can control at home [...], but sometimes, when you see that you can't [...] you have to take him straight to the ER, when things can't wait. (F8)

The mothers provide care to their children based on the knowledge they have and use medications that are part of the home pharmacy as a way to complement the treatment and mitigate the acute symptoms; in addition, they are influenced by the health professionals' behavior, and it is indispensable that the professionals provide them with the necessary support (NEVES et al., 2017).
A potentiality in the support network is the tertiary-level care, which is sought due to the trust in the help it provides:

It was easy for the doctors, the most specialized one for her has already arrived [...] the doctor already knowing her condition, bronchitis [...]. Pulmonologist, right? And then I find this practicality [...]. So I think it's good, I'm always assisted, I feel like that there. (F15)

Another facilitating aspect mentioned by the caregivers is the good care provided by the professionals, which strengthens the bond, increasing the potential to face the vulnerabilities experienced.
[...] we were very well treated there, the child's tests didn't take long, the results were very fast, and we had follow-up. Several types of follow-up for her to be entertained, from the nurses, even from the dentist who went there to visit her. So, there I really enjoyed the care provided [...]. The practicality was the care they provided there, which was good, it was quite well-oriented. (F12)

Of course that there in the hospital, anywhere we went we were very well received, really very good care from the girls, it was huge affection with her [...]. (F1)

The health team should be a support reference for these family members/caregivers, through effective strategies and health actions aimed at the needs of the child and family (KALANTAR-ZADEH et al., 2021).
When asked about health services as effective support networks, the family members/caregivers point out that, although they resort to urgency and emergency services (ERs), these places do not constitute support networks, as the environment is usually not appropriate for children, in addition to the delay of care in these places:

[...] one difficulty is the environment [...] the hospital environment is bad for a child [...] it's not a cool environment. How much we see. Apart from the contamination risk, as the child's immunity is already low, and there's also the delay. (F1)

In the ER it's more difficult, because, [...] going through those corridors there, it's bad [...]. You see everything there, right? And the x-ray that takes a lot of time [...]. And she sees certain things that I think are bad, [...] she already comes home terrified [...]. She had horrible moments to see, even more for her [...]. (F15)

It is important to emphasize that the ER environment can be frightening for family members and especially for children, as the physical space, the organization of the materials, the observation room, the invasive procedures performed, the noise and even the excessive luminosity can cause discomfort. In addition to that, whether pediatric or adult, the patients are exposed to common areas, a context characterized as a frightening place, with situations that a child has never experienced (LIMA et al., 2018).
The family members/caregivers pointed out that, as they do not have a support network in the health services, they face difficulties treating their children, such as access to medications, physical therapy or surgical procedures. Many times, lawsuits are required to access services, medications and devices:

And in this drainage, they did a biopsy and I didn't even know about it. Then over time I went to the prosecutor's office, to the child's health care council, and then I got them to do the child's treatment [...] fighting because no one wanted to assist. [...] the difficulty that we find the most is to get things [...] there's a lot of legal stuff [...]. There's a lot that there's a month that sometimes doesn't come so it's always a problem [...] something from the State, the Municipality, you go to the pharmacies and things are so always missing [...]. (F13)

[...] you have to always be running after [...] the physiotherapist also entered the justice, I got it too. (F3)

In this speech, the cultural vulnerability experienced by F13 is made evident, as she was unaware of the biopsy to which the child was subjected. In addition to that, the limited access to the health services is configured as a social vulnerability, as care is in many cases not linked to any health network. Therefore, the family needs to undertake this path independently and, if it lacks resources, the child may have his/her health condition worsened, with the possibility of dying without receiving the necessary care (ROSELLÓ, 2009).
In some cases, the treatment involves surgical procedures that are sometimes not performed because there is no anesthesiologist in the Unified Health System (Sistema Único de Saúde, SUS) service:

She had to have the surgery. There's a big adenoid [...] but there's the anesthesiologist. [...] OK, but how much is the anesthesiologist? [...] it was just four thousand for the anesthesiologist. No way for me to do this. (F15)

The absence of professionals who assist through the SUS is common, increasing the vulnerability of families and children with chronic conditions. The SUS physician advised the family to seek a private doctor to provide care to the child, which was unfeasible, leaving the child without care. This is related to the social vulnerability experienced by the families of children with chronic conditions, linked to the scarcity of resources that are indispensable for the effective prevention and treatment of diseases. The social dimension seeks to show how services can improve the life of the population through public policies for the care and autonomy of people, which did not happen in these cases (AYRES, 2009).
In this sense, considering that health services are the spaces where the family members/caregivers of these children seek care, diagnosis and treatment for the chronic conditions, and that these loci cannot meet their health needs, the social vulnerability situations of these families are amplified, as collective coping practices and adversities, such as unavailability of resources and access to them, are social components.
Final considerations
Seeking to know the components of the vulnerability experienced by the families of children with chronic conditions is complex, as it requires analyzing and reflecting on the situations faced, in view of the peculiarities, feelings, family organization and accessibility to health services.
It was found that vulnerability is something intrinsic to human beings, experienced by everyone to a lesser or greater degree. For the children's families, the vulnerabilities were accentuated after the children's illness, exposing them to difficult periods. In this context, it is considered that, through welcoming in health services, it is possible to minimize the ontological, ethical, cultural and social vulnerability situations experienced by family members/caregivers and children.
The study's limitations are that it was not possible to interview other members of the family nucleus and the difficulty in finding the families, as many of them lived in rural areas.
It is believed that, through knowledge of the components of the vulnerabilities experienced by these families and children, health professionals can act, plan and evaluate ways to minimize these situations, offering tools for the family members/caregivers to develop their empowerment and, thus, be able to recognize, confront and minimize them.
Figure 1. Thematic map corresponding to the organization of the results.
Slowly developing depression of N-methyl-D-aspartate receptor mediated responses in young rat hippocampi
Background Activation of N-methyl-D-aspartate (NMDA) type glutamate receptors is essential in triggering various forms of synaptic plasticity. A critical issue is to what extent such plasticity involves persistent changes of glutamate receptor subtypes, and many prior studies have suggested a main role for alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptors in mediating the effect. Our previous work in hippocampal slices revealed that, under pharmacological unblocking of NMDA receptors, both AMPA and NMDA receptor mediated responses undergo a slowly developing depression. In the present study we have further addressed this phenomenon, focusing on the contribution via NMDA receptors. Pharmacologically isolated NMDA receptor mediated excitatory postsynaptic potentials (EPSPs) were recorded for two independent synaptic pathways in the CA1 area, using perfusion with low Mg2+ (0.1 mM) to unblock NMDA receptors. Results Following unblocking of NMDA receptors, there was a gradual decline of NMDA receptor mediated EPSPs for 2–3 hours towards a stable level of ca. 60–70 % of the maximal size. If such an experimental session was carried out twice in the same pathway, with a period of NMDA receptor blockade in between, the depression attained in the first session was still evident in the second one and no further decay occurred. The persistency of the depression was also validated by comparison between pathways. It was found that the responses of a control pathway, unstimulated in the first session of receptor unblocking, behaved as novel responses when tested in association with the depressed pathway in the second session. In similar experiments, but with AP5 present during the first session, there was no subsequent difference between NMDA EPSPs. Conclusions Our findings show that merely evoking NMDA receptor mediated responses results in a depression which is input specific, induced via NMDA receptor activation, and maintained for several hours through periods of receptor blockade. The similarity to key features of long-term depression and long-term potentiation suggests a possible relation to these phenomena. Additionally, a short-term potentiation and decay (<5 min) were observed during sudden start of NMDA receptor activation, supporting the idea that NMDA receptor mediated responses are highly plastic.
Background
Hippocampal synapses display a variety of activity dependent changes that may represent basic elements of memory. Of foremost interest are long-term potentiation (LTP) and depression (LTD), especially forms that depend on N-methyl-D-aspartate (NMDA) receptor activation and therefore can attain "associative" properties [1-3]. The selective induction of LTP versus LTD has been attributed to differing amounts of Ca2+ ions entering via postsynaptic NMDA receptor channels [4]. Depending on the type of stimulation, enzymes with different sensitivities to Ca2+ may be engaged and change the balance between kinase and phosphatase activities, leading to either phosphorylation or dephosphorylation of postsynaptic target proteins, such as ionotropic receptors [2]. It has been shown that afferent stimulation at frequencies in the range 0.5 to 5 Hz reliably produces LTD whereas higher frequencies, 50-100 Hz, lead to LTP [5]. Several studies suggest that temporal factors are also important, implying that LTD requires a longer time to be induced than LTP [6]. We have previously demonstrated that, under conditions of facilitated activation of NMDA receptors by low extracellular Mg2+, synaptic plasticity can be induced by frequencies as low as 0.1-0.2 Hz when applied for prolonged periods of time [7]. Following an initial phase of transient potentiation there was a substantial depression that developed gradually during several hours and that remained stable after termination of NMDA receptor activation. Although the relation to "standard LTD" was not fully clarified, such slowly developing depression in low Mg2+ solution may provide a useful model for studying certain forms of NMDA receptor dependent depression. In the present study, we will further develop the concept of gradually decaying responses.
One critical issue regarding LTP, LTD, as well as other forms of glutamatergic synaptic plasticity, is the relative contribution of different glutamate receptor subtypes in creating the synaptic modification. Knowledge about this matter may be helpful in elucidating the underlying modification. While a selective change of alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptors has been cherished [8-10], especially in the case of LTP, several studies also observed NMDA receptor mediated changes in both LTP and LTD [11-14]. Previous work on LTD in our lab described an equal change of AMPA and NMDA responses [15]. However, it was reported by others that the relative contributions of AMPA and NMDA responses during LTD depend on experimental conditions, an equal change being one possible outcome [12]. In our recent examination of a slowly developing depression using composite AMPA-NMDA excitatory postsynaptic potentials (EPSPs) [7], the two responses declined in close parallel, indicating a common factor. Such an equal change is compatible with both a coordinated change of receptors and a presynaptic one via a decrease of glutamate release. However, in view of other studies reporting a coupling between responses via AMPA and NMDA receptors [16,17], one may ask whether our observation of declining NMDA responses could be secondary to the change of the AMPA responses. In the present study, isolated NMDA receptor mediated EPSPs were shown to decline progressively during prolonged low frequency activation (0.1 Hz). Moreover, following a sudden start of stimulation there was an initial, transient potentiation. Our findings also resolved some questions regarding input specificity and durability of the slow decay, which were previously addressed only for AMPA EPSPs.
Isolated NMDA EPSPs show a progressive decay
AMPA EPSPs were initially recorded in low Mg2+ solution in the presence of the NMDA receptor antagonist AP5 to allow for pathway equalization (see Methods) without evoking NMDA EPSPs. Synaptic transmission was then entirely blocked by adding the AMPA receptor antagonist CNQX, followed by unblocking of NMDA receptors via washout of AP5. During this time, only one pathway was stimulated, keeping the other one silent for later use. As illustrated in Fig. 1A (upper part), an NMDA receptor mediated EPSP appeared within 10 min and reached its maximum about 30 min after switching to AP5-free solution. During the following recording period of nearly 2 h, the NMDA EPSP decayed substantially, on average down to 58 ± 6 % of the peak value (n = 8). Control experiments showed that isolated AMPA EPSPs recorded in low Mg2+ remained stable for several hours (changing to 96 ± 5 % of baseline after 2 h, n = 5, not illustrated).
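Decay curves of this kind can be thought of as a two-step computation: average the individual 0.1 Hz responses into fixed time bins and scale the bins to the maximum. The following Python sketch illustrates this under assumed names and synthetic data; it is not the authors' analysis code.

```python
import numpy as np

def bin_and_normalize(amplitudes, times_s, bin_s=60.0):
    """Average per-stimulus EPSP measurements into fixed time bins and
    scale so that the largest bin equals 100 % (percent of peak)."""
    edges = np.arange(times_s.min(), times_s.max() + bin_s, bin_s)
    idx = np.digitize(times_s, edges) - 1        # bin index per response
    binned = np.array([amplitudes[idx == i].mean()
                       for i in range(len(edges) - 1) if np.any(idx == i)])
    return 100.0 * binned / binned.max()

# Synthetic example: a response decaying from 100 % toward ~60 % over 2 h,
# sampled at 0.1 Hz (one stimulus every 10 s), with measurement noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 7200.0, 10.0)
amp = 0.6 + 0.4 * np.exp(-t / 2500.0) + rng.normal(0.0, 0.02, t.size)
curve = bin_and_normalize(amp, t)
print(f"final level: {curve[-10:].mean():.0f} % of peak")
```

With these assumed parameters the synthetic curve settles near 60 % of peak, mimicking the reported magnitude of the decay.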
Reinduction in a naive pathway
To exclude the possibility that the observed decay of NMDA responses were due to deterioration of slices, implying a general decrease of essential physiological processes, the experiment was repeated for the other pathway, i.e. the one that had not previously expressed NMDA responses (continuation in the same set of 8 slices). As seen in Fig. 1A (lower part), a similar result was obtained as above. The NMDA EPSP peaked at 98 ± 7 % and declined to 63 ± 8 % relative to the peak in the first experimental session. As illustrated in Fig. 1B, the curve obtained for a naive pathway during the second session of NMDA unblocking was similar to the one obtained for the pathway activated during the first session, the two curves overlapping closely for the entire recording period.
Comparison between pathways
Specificity and persistency
The same experiment was used to address the question of persistency as well as input specificity of the slowly developing depression of NMDA EPSPs. As can be seen in Fig. 1A, the pathway that was active during the first experimental session was retested during the second one together with the pathway receiving novel NMDA receptor activation. This allowed for a comparison between the pathways (for convenience, the peak in the first session is still used as the reference for the values in the following). During the second session of NMDA receptor unblocking, the previously treated pathway displayed a substantially smaller peak than the naive one (48 ± 3 % vs. 98 ± 7 %, p < 0.05, n = 8), and the two curves were still different by the end of the session (44 ± 3 % vs. 63 ± 8 %; difference 19 ± 4 %, p < 0.05). Since the latter time point was located 3 h after the end of the first session, it is evident that the depression of NMDA EPSPs lasted for at least 3 h. It is noteworthy that the previously depressed pathway showed no significant decay during the second session, passing from 48 to 44 %, i.e. a relative change of 44/48 = 92 % (p > 0.05), as if it had already been saturated. For comparison, the naive pathway changed by a factor of 63/98 = 64 % (p < 0.05) (see curves in Fig. 1C). A graphic summary of all "peak" and "end-of-session" values is given in Fig. 3B. It can also be noted that NMDA EPSPs recorded for contiguous time intervals up to 4 h reached a saturation level after 2-3 h (n = 3, not illustrated).

Figure 1. Decay of NMDA receptor mediated EPSPs in low Mg2+ solution. (A) Experimental design: measurements of field EPSPs from a representative experiment are plotted for two independent pathways, referred to as input 1 and input 2. By appropriate use of the specific blockers CNQX and AP5, either isolated AMPA EPSPs (used for initial pathway equalization) or isolated NMDA EPSPs (used in testing sessions N1 and N2) were recorded, with periods of fully blocked responses in between. The pathways were stimulated alternately, each at 0.1 Hz, except for silencing input 2 for a 3 h period that contained session N1. Each point represents the average of measurements within 1 min. As seen, during N1 the responses to input 1 decayed. During session N2 both inputs were stimulated, revealing a novel decay for input 2 and an occluded depression for input 1. Samples of recorded potentials of both inputs are given for the indicated time points a-e. (B) Superimposed, averaged time courses of NMDA EPSPs for input 1 during session N1 and input 2 during session N2. (C) Superimposed, averaged time courses of NMDA EPSPs for input 1 and input 2, both during session N2. For B and C, each point represents the average of measurements within 3 min intervals. The peak during session N1 was used as 100 %. Values are expressed as mean ± S.E.M. (n = 8 experiments).

Figure 2. Activation of NMDA receptors is necessary for inducing persistent depression. (A) Experimental design: measurements of field EPSPs are plotted for a representative experiment as in Fig. 1A. The experiment conformed with the previous one except that AP5 was present in the solution during session N1. Accordingly, synaptic transmission was blocked for the entire period when input 2 was stopped. In session N2, AP5 was eventually washed out and both inputs displayed decaying NMDA EPSPs. Samples of recorded potentials of both inputs are given for the indicated time points a-e. (B) Superimposed, averaged time courses of NMDA EPSPs from both inputs during session N2. Each point represents the average of measurements within 3 min intervals. The peak of input 2 was chosen as 100 %. The equal appearance of the curves shows that no pathway-specific depression remained during N2 as a result of the differential treatment during N1 (compare the corresponding depression in the reference case in Fig. 1C). Values are expressed as mean ± S.E.M. (n = 5 experiments).
Instantaneous versus persistent depression
The above results suggest that the progressive decay observed in a single pathway as an instantaneous event actually represents a long-term, pathway-specific change that can be assessed much later by comparison across pathways. To further examine the relation between the instantaneously recorded depression and the one measured about 90 min later, the two were plotted against each other as illustrated in Fig. 3C. The two variables were found to be positively correlated (r = 0.71, p < 0.05, n = 8), implying that depression to a lower level in one pathway led to smaller responses in that pathway at later times compared to another pathway. The regression line, passing below rather than through the point of no depression (100 %, 100 %), indicates that even a slight instantaneous decay may be coupled to a noticeable change in the long term. Possibly, the declining trend was partially masked by recovery from AP5, leading to an underestimation of it.
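As a sketch of this kind of correlation analysis, an ordinary least-squares regression of the persistent depression on the instantaneous depression can be computed per experiment as below. The eight value pairs are invented placeholders chosen only to mimic the reported trend, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical values (percent of the first-session peak), one pair per
# experiment: x = depression at the end of session N1 (relative to peak),
# y = peak in session N2 for the same pathway (interpathway comparison).
x = np.array([50.0, 55.0, 58.0, 60.0, 62.0, 65.0, 56.0, 58.0])
y = np.array([42.0, 47.0, 49.0, 50.0, 55.0, 58.0, 44.0, 45.0])

fit = stats.linregress(x, y)
print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f}")
print(f"regression: y = {fit.slope:.2f} * x + {fit.intercept:.1f}")

# If the fitted line passes below the point of no depression (100, 100),
# even a slight instantaneous decay maps onto a long-term change:
print(f"predicted y at x = 100: {fit.slope * 100 + fit.intercept:.1f}")
```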
Effect of NMDA receptor blockade on subsequent NMDA EPSP decay
To pinpoint the induction mechanism, in terms of a pre- versus postsynaptic location, experiments were carried out in a similar way as above except for keeping AP5 in the solution during the first session (see Fig. 2A). Hence, the stimulus pattern included a 3 h long interval with no stimulation in one of the pathways. The other pathway was stimulated during that time, most likely releasing glutamate, but no postsynaptic response was expressed due to blockade of NMDA receptors. It can be argued that successful blockade of the depression would indicate a postsynaptic mechanism whereas a failure to block it would indicate a presynaptic one. Fig. 2B shows that the depression was indeed blocked, demonstrating the importance of NMDA receptor activation in the induction process. As illustrated, the two curves obtained during unblocking of NMDA receptors in the second session were quite similar. The continuously stimulated pathway, which was depressed in the standard case, peaked at a level of 105 ± 5 % relative to the control pathway (n = 5; see also the graphic summary of values in Fig. 3B).
Phase trajectories as indicators of waveform change
Depending on the type of synaptic modification, EPSP waveforms may change in different ways, and previous work in our lab has demonstrated that NMDA EPSPs are more prone than AMPA EPSPs to show such changes [18]. Taking advantage of such waveform analysis, which might shed light on the underlying mechanism, we examined phase plots based on measures of the initial part and the later part of the NMDA EPSP (see Methods) on the X- and Y-axis, respectively. The curve in Fig. 3D is based on a total of 20 experiments (pathways), including some with only a single session. As shown, the phase trajectory displayed a loop, indicating a difference between the effect of AP5 and the gradual decay of NMDA EPSPs. On average, the encircled area was 15 ± 2 % (p < 0.05, n = 20; scaling peak × peak as 100 %, clockwise being positive).
While our data show that the time window matters for measuring NMDA EPSPs, all the above results were qualitatively similar regardless of which window was used. It can be noted, however, that the depression of NMDA EPSPs by the end of a recording session was less pronounced using early measurements than late ones (responses attaining 72 ± 3 % vs. 61 ± 3 % of the peak value, p < 0.05, n = 20).
In 12 out of the 20 experiments, the fiber volley was well separated from the stimulus artifact, allowing it to be properly measured. No significant change was detected, the end-of-session value amounting to 103 ± 2 % of the value at the EPSP peak (p > 0.05, not illustrated).
Short-term effects induced by sudden onset stimulation
In the above, NMDA receptor activation occurred gradually while the antagonist AP5 was washed out. This is in line with the experimental protocol used in our previous work on composite EPSPs containing both AMPA and NMDA components [7]. However, a natural question is whether sudden, novel activation of NMDA receptors is equivalent in producing the results observed here. We therefore pursued experiments in which stimulation was silenced until washout of AP5 was complete. One pathway, receiving such sudden stimulation, was compared to a control pathway subjected to gradual NMDA receptor activation during a single recording session. Fig. 4A shows an essential difference in behavior between the pathways, the sudden start of activation leading to substantially larger responses for about 5 min. Fig. 4B reveals additional complexity, the initial responses showing actual growth for about a minute before they started to decay, implying an early potentiation process. The total range of responses was substantial, from a peak above 200 % to about 70 % by the end of the recording session (relative to the peak of the control pathway), i.e. about 3-fold.

Fig. 3 caption. Summary of NMDA EPSP depressions during unblocking of NMDA receptors (cf. Fig. 2). Pathways are defined as "treated" or "naive" depending on whether stimulation was "on" (solid) or "off" (dashed) during session N1. Each number (1-10) in A1 and A2 represents a portion where measurements were taken for analysis. (B) Comparison between NMDA EPSP measurements, each bar representing an average obtained for 30 consecutive responses (5 min). Value no. 1 was chosen as 100 % for experiments depicted in A1. Since a corresponding value was lacking for experiments depicted in A2, value no. 9 was chosen as 100 %. All values are expressed as mean and S.E.M. (n = 8 and n = 5, respectively, for A1 and A2 type experiments). (C) Relation between instantaneous depression in session N1 (abscissa, final value relative to peak) and persistent depression in session N2 (ordinate, interpathway comparison of peaks). Each dot represents a single experiment. The analysis revealed a significant correlation between the variables, as illustrated by the superimposed regression line (n = 8). (D) Relation between early and late measurements obtained with two different time windows (initial 5 ms and 35-45 ms, respectively) plotted as a "phase trajectory" (abscissa, early measure; ordinate, late measure). The curve represents an average of 20 experiments with additional smoothing to reduce noise (5 min moving average). Samples of recorded NMDA EPSPs corresponding to locations a, b of the trajectory are overlaid. Bars above EPSP curves indicate the time windows used for early and late measurements, respectively. The curve at b is shown superimposed on a dotted copy of the curve at a.
Transient potentiation induced by sudden activation of NMDA receptors
In order to determine whether the transient potentiation had any obvious relation to the slow depression of NMDA EPSPs, the relation between the two was examined. Thus, the degree of initial potentiation was calculated by comparing the pathways just after stimulation was started and the depression was determined, as before, by comparing the end-of-session value with the peak value (see legend of Fig. 4 for further details). The two variables, illustrated by the XY-plot in Fig. 4C, were found to have no significant correlation (r = 0.35, p > 0.05).
Discussion
Our study revealed a progressive decline of pharmacologically isolated NMDA EPSPs, as observed for several hours in response to low rate (0.1 Hz) activation of afferents. The decline was found to be a form of long-term synaptic depression with an induction linked to NMDA receptor activation and with an expression that was maintained through periods without such activation. Several of its basic characteristics were similar to those of conventional LTP and LTD, suggesting a possible relation to these phenomena.
Synapse specificity and NMDA-dependent induction
Decaying responses are a potential side effect of long-term electric recording in vitro, due to declining viability of the biological tissue or other experimental imperfections. Such unspecific "run down" cannot account for the present findings, since the gradual depression of responses could be repeated in the same slice using a previously undepressed pathway. On the other hand, if the experiment was repeated twice in the same pathway, the second occasion revealed a diminished NMDA EPSP that showed little further decay. Together, these results show that the depression is input specific and long lasting and that it can saturate. Moreover, the lack of associated changes of the fiber volley speaks against a failure of axon conductance [19], favoring a synaptic localization of the process.
While both pre- and postsynaptic expression mechanisms appear feasible, certain mechanisms of induction can be excluded. For instance, a decrease in the probability of glutamate release due to a direct depletion of the vesicle pool is unlikely, since AMPA EPSPs could be evoked for several hours without significant decay (see also [7,20]). Even so, a use-dependent reduction of vesicle content may affect NMDA responses selectively under certain conditions by restricting "glutamate spillover" [21]. The most critical observation with respect to the induction mechanism is that a period of conditioning stimulation, normally leading to reduction of NMDA EPSPs in the same pathway later on, was ineffective if delivered during blockade of NMDA receptors. This implies that the induction of the depression requires activation of NMDA receptors, most likely postsynaptically.
Other observations of decaying NMDA EPSPs
The input specificity and NMDA dependent induction of the current depression conform with basic properties of conventional LTP and LTD [22]. The depression might then be a case of LTD, although induced by an alternative protocol. In fact, LTD was shown to be associated with changes involving both AMPA and NMDA receptors, although the linkage between the two contributions is controversial [12,15]. Moreover, both of the cited studies demonstrated LTD of isolated NMDA EPSPs induced by 1-2 Hz stimulation. In contrast, experiments in cultures, inducing LTD by field stimulation at a higher frequency (5 Hz), reported only AMPA receptor mediated changes [10]. Direct interaction tests may further clarify the relation between the present depression and LTD.
Gradually decaying, NMDA receptor mediated responses were observed previously in our lab during recording of composite AMPA-NMDA EPSPs for several hours [7,23]. Attempts to relate the decay to LTD demonstrated a weak reduction of subsequent LTD of AMPA responses, suggesting at least some elements in common [7]. In view of studies reporting forms of AMPA-NMDA coupling [16,17], it is arguable that the studies demonstrating a decay of both components could have been influenced by the use of composite responses. In one of our studies [7], the observed depression of the AMPA component of composite EPSPs was verified by an additional comparison between isolated AMPA EPSPs obtained under blockade of NMDA receptors. A similar verification was lacking for the depression of the NMDA response. By recording isolated NMDA EPSPs, the present study ascertains that NMDA receptor mediated responses undergo a use-dependent depression, which is manifested in the absence of AMPA receptor activation. However, the decay was less pronounced than that reported previously for the NMDA component of composite EPSPs (average reduction to 60 % of peak as compared to 40 % in the previous study [7]).
While we observed that isolated NMDA EPSPs decay "spontaneously", most prior studies employing such EPSPs did not report a decay. It might be that limitations of recording time concealed the effect; cell dialysis during whole-cell recording could also be a limiting factor. Indeed, a recent study recording "novel" responses under whole-cell conditions reported decaying AMPA EPSPs but constant NMDA EPSPs [24]. The possibility of AMPA receptor LTD under the present conditions could not be excluded, as the blockade of the receptors may simply conceal the effect. Further studies may help to resolve this matter.
Persistency and saturability
Standard LTP/LTD experiments compare relatively stable periods of recording before and after induction of the synaptic modification. This was not possible in the present case, since mere test stimulation evoked the decay. Therefore, comparisons were generally made between synaptic pathways subjected to different stimulus paradigms. The induction of depression in a single pathway during an initial 2 h period caused a difference between the NMDA responses of the two pathways throughout a subsequent test period. The degree of initial decay was closely related to the later difference between pathways, suggesting that once depression occurred it could be maintained through periods of receptor blockade until testing was performed. Our data suggest a duration of the depression of more than 3 h after the initial induction period. This is in the range commonly referred to as "late", which is believed to involve special biochemistry such as gene expression and protein synthesis [25,26]. Whether the presently studied depression involves such changes remains to be determined.
The gradual depression of NMDA EPSPs was found to saturate after 2-3 h, as evidenced by both single- and double-session experiments. This is in line with several other forms of NMDA-dependent plasticity, including LTP, LTD and chemically induced variants, which have been shown to be saturable [27-29]. Whether the saturation observed here is a "true" one at the level of expression is not known. Alternatively, it could be a phenomenon at the induction level, related to weaker induction due to the diminished NMDA response.
Possible expression mechanisms
Previous work on conventionally induced LTD revealed an essential role for protein phosphatases in mediating the synaptic modification [30,31]. Consistent with the idea that changes of AMPA receptors mediate NMDA-dependent synaptic plasticity [32,33], it was demonstrated that certain sites of the GluR1 subunit are targeted in LTP/de-potentiation and other ones in LTD/de-depression [2,34]. Less is known about the mechanisms underlying NMDA receptor changes in LTP/LTD as well as in the current depression. A previous study in our lab recording composite EPSPs reported that LTD of the NMDA component was blocked by a phosphatase inhibitor in a similar manner as "standard LTD" [15]. Hence, one can envisage that NMDA receptors are controlled via dephosphorylation in a similar manner as inferred for AMPA receptors.
NMDA receptors also have a number of other regulatory sites, allowing for modulation by glycine, polyamines, calcium, and redox agents [35], and they have been shown to be mobile as well [36-38], in keeping with the idea of mobile AMPA receptors in LTP/LTD [39,40]. Regardless of details, additional factors are needed to stabilize the synaptic modification in the long term, perhaps via synthesis of new proteins, as previously demonstrated for LTP and LTD lasting longer than about 3 h [25,28,41]. Changes in synaptic morphology and altered subunit composition of receptors are examples of protein synthesis dependent mechanisms that have been implicated in late forms of plasticity [32,42].
Although a postsynaptic modification appears to be the primary candidate, a presynaptic one that is initiated postsynaptically is also conceivable. In previous attempts to distinguish between pre- and postsynaptic mechanisms, LTD was compared with depression caused by various pharmacological agents with respect to the ability to influence the waveform of EPSPs [18]. While LTD in that study was found to affect isolated NMDA EPSPs in a uniform manner, i.e. with no waveform change, the present data appeared to be less clear-cut. Nevertheless, the relation between early and late EPSP measurements differed between the initial AP5 washout period and the following period of actively induced depression, indicating a change in EPSP waveform. The depression therefore appeared to be distinct from a postsynaptic modification via modulation of channel gating. However, a definitive test of the pre- versus postsynaptic issue is still lacking. Unfortunately, the MK-801 test of release probability [43] does not appear useful when dealing with decaying responses as in the present case.
Short-term changes and their possible mechanisms
While the main line of experiments employed a smooth start of NMDA receptor activation following the gradual washout of AP5, another set of experiments made use of sudden activation by awaiting full washout until stimulation was started. Compared to smooth activation, there was an additional, transient potentiation that largely decayed within 20-30 stimuli. This is in accord with a previous study in hippocampal slices showing that stopping stimulation of composite AMPA-NMDA EPSPs for 10-60 min (and one case of isolated NMDA EPSP for 10 min) resulted in a transient potentiation when stimulation was resumed [23].
Several other studies describe decaying NMDA responses in relation to inactivation or desensitization of receptors [44,45]. Accordingly, synaptically evoked NMDA responses in cell cultures were found to inactivate (i.e. decay) within a few minutes in much the same manner as observed here [45], a process shown to be triggered by postsynaptic influx of Ca2+ via the NMDA channels. Similar mechanisms of receptor desensitization/inactivation might be responsible in the present case in forming the transient phase after starting stimulation. Some details remain unexplained by this simple model, such as the biphasic character of the transient phase in terms of initial growth and subsequent decay. One can speculate that a minor LTP, or short-term potentiation, might be induced by the sudden activation of NMDA receptors and so would contribute to the initial growth, although the underlying cause is not addressed in this kind of explanation.
Conclusions
The above results emphasize that NMDA receptor mediated responses are highly plastic and that mere test stimulation can induce a short-term potentiation as well as a slowly developing depression that persists for several hours. The depression was input specific and saturable, and its induction required NMDA but not AMPA receptor activation, in conformity with conventionally induced LTP and LTD, suggesting a relation to these phenomena. While a low Mg2+ solution was used in our case to unblock NMDA receptors, similar unblocking may occur naturally in response to depolarization. Several important issues remain unsettled. Is the saturation of the NMDA EPSP depression absolute, or can it be overcome, leading to further downregulation and possibly silencing of synapses? Conversely, is it possible to reverse, i.e. de-depress, the change by LTP or similar processes, allowing for bidirectional control? Further research is needed to resolve these questions.
Methods
Experiments were performed on 12 to 18 day old Sprague-Dawley rats. The animals were decapitated after isoflurane (Forene) anesthesia in accordance with the guidelines of the Swedish Council for Laboratory Animals. All animal procedures were approved by the Local Ethics Committee at Göteborg University. The brain was removed and placed in an ice-cold artificial cerebrospinal fluid solution containing (in mM) NaCl 119, KCl 2.5, CaCl2 2, MgCl2 2, NaHCO3 26, NaH2PO4 1, and glucose 10, oxygenated by 95% O2 / 5% CO2. The hippocampus was dissected out and transverse 400 µm thick slices were prepared by a vibratome or tissue chopper. The slices were initially kept in the same solution at room temperature for at least 60 min. As required, slices were then transferred to one or several "submerged type" recording chambers. During the experiment, slices were perfused at 30°C by a solution similar to that above except that the concentration of Mg2+ was 0.1 mM. The use of low Mg2+ allowed for expression of NMDA receptor mediated responses.
Stimulation was delivered as 0.1 ms negative constant current pulses via monopolar tungsten electrodes. For each slice, two stimulating electrodes were placed in the apical dendritic layer of CA1 pyramidal cells on either side of the recording electrode to provide for stimulation of two separate sets of afferents. Field EPSPs were recorded by using a glass micropipette filled with 3 M NaCl (4-10 MΩ resistance). The basal test stimulus frequency was 0.1 Hz with stimuli delivered alternately to the two electrodes, successive stimuli being separated by 5 s. To test the effect of stimulus interruption, one of the two electrodes was given no stimulation during a certain time, the other one remaining stimulated at 0.1 Hz.
Recording commenced by monitoring isolated AMPA EPSPs in the presence of AP5 (50 µM) to block NMDA responses. A low concentration of CNQX (1 µM) was used to partially suppress the AMPA responses. In this way, somewhat larger stimulus strengths could be applied, suitable for evoking isolated NMDA EPSPs in the later part of the experiment. During the time of AMPA EPSP recording, the stimulus strengths were adjusted for each slice to equalize the synaptic inputs of the two pathways. This was essential for later comparison of NMDA EPSPs across pathways. After obtaining a baseline of equal AMPA responses, the concentration of CNQX was raised to 10 µM, which entirely blocked synaptic responses. The remaining nonsynaptic response, consisting of stimulus artifact and presynaptic volley, was used to define "true zero".
To study NMDA receptor mediated responses, CNQX (10 µM) was maintained in the solution while AP5 was washed out for one or several 2 h periods, referred to as sessions in the following. In between the sessions as well as afterwards, synaptic transmission was again blocked by applying AP5 (50 µM), framing the sessions with periods of recording non-synaptic responses. During the sessions, various tests were made depending on the purpose of the investigation. Usually one input remained silent during the first session, and stimulation was not resumed until after synaptic transmission had been reblocked. In another kind of experiment, the initially silent pathway was reactivated in the early part of the first session, after NMDA receptors were unblocked, providing a means for a sudden start of NMDA receptor activation.
Signals were amplified, filtered, and transferred to a PC-compatible computer for on-line and off-line analysis by specially designed electronic equipment (based on an Eagle Instruments multifunction board) and in-house developed computer software. AMPA EPSPs were measured using an early time window (first 1.5 ms after the fiber volley) while NMDA EPSPs were measured using both an early (first 5 ms after the volley) and a late (35-45 ms after the artifact) time window. The late measurement was used in presenting most of the results, allowing easy comparison with previous work in our lab that estimated the NMDA component of composite EPSPs via a late measurement [7]. Similar, albeit not identical, results were obtained with early and late measurements (see illustration in Fig. 3D).
Measurements were calculated by integrating the curve along the specified time window after subtraction of the prestimulus baseline. All values were corrected by subtracting the corresponding measurements of the non-synaptic potential obtained after total blockade of the EPSPs (except when measuring the fiber volley). The final data were quantified as relative values compared to a reference level defining 100 %. While the initial baseline formed a natural reference for AMPA responses, the choice was less obvious for NMDA responses, leading us to use the highest level of responses of one of the pathways in one of the experimental sessions (selected as appropriate for each comparison). Results are expressed as mean ± S.E.M. Statistical comparisons were made using Student's t-test.
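A schematic reimplementation of this measurement procedure in Python (array names, units, and window defaults are our assumptions, not the original analysis software):

```python
import numpy as np

def epsp_measure(trace, t, window, baseline_window=(-20.0, 0.0), nonsynaptic=0.0):
    """Quantify one EPSP as described above.

    trace           -- recorded field potential (mV)
    t               -- time base in ms, stimulus at t = 0
    window          -- (start, stop) analysis window in ms, e.g. (35, 45)
                       for the late NMDA measure
    baseline_window -- prestimulus interval used for baseline subtraction
    nonsynaptic     -- the same integral taken from responses recorded under
                       total EPSP blockade (stimulus artifact + fiber volley)
    """
    baseline = trace[(t >= baseline_window[0]) & (t < baseline_window[1])].mean()
    in_window = (t >= window[0]) & (t < window[1])
    integral = np.trapz(trace[in_window] - baseline, t[in_window])
    return integral - nonsynaptic

def to_percent(values, reference):
    """Express measurements relative to a chosen reference level (= 100 %)."""
    return 100.0 * np.asarray(values, dtype=float) / reference
```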
Drugs were obtained from Tocris Cookson, UK; prefabricated stimulating electrodes were obtained from World Precision Instruments, FL USA, type TM33B.
Authors' contributions
MD planned and carried out most of the experiments including data analysis, and compiled the manuscript. RL carried out experiments, participated in the planning process and helped in shaping the final manuscript. HPX carried out the initial experiments establishing the effect of NMDA EPSP depression. BJ was responsible for logistics planning and participated in experiments. HW conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.
The hunt for formamide in interstellar ices: A toolkit of laboratory infrared spectra in astronomically relevant ice mixtures and comparisons to ISO, Spitzer, and JWST observations
This work aims at characterizing the mid-IR spectra of formamide in its pure form as well as in mixtures of the most abundant interstellar ices via laboratory simulation of such ices, as well as demonstrating how these laboratory spectra can be used to search for formamide in ice observations. Mid-IR spectra (4000 - 500 cm$^{-1}$, 2.5 - 20 $\mu$m) of formamide, both in its pure form as well as in binary and tertiary mixtures with H$_2$O, CO$_2$, CO, NH$_3$, CH$_3$OH, H$_2$O:CO$_2$, H$_2$O:NH$_3$, CO:NH$_3$, and CO:CH$_3$OH, are collected at temperatures ranging from 15 - 212 K. Apparent band strengths and positions of eight IR bands of pure amorphous and crystalline formamide at various temperatures are provided. Three bands are identified as potential formamide tracers in observational ice spectra: the overlapping C=O stretch and NH$_2$ scissor bands at 1700.3 and 1630.4 cm$^{-1}$ (5.881 and 6.133 $\mu$m), the CH bend at 1388.1 cm$^{-1}$ (7.204 $\mu$m), and the CN stretch at 1328.1 cm$^{-1}$ (7.529 $\mu$m). The relative apparent band strengths, positions, and FWHM of these features in mixtures at various temperatures are also determined. Finally, the laboratory spectra are compared to observational spectra of low- and high-mass young stellar objects as well as pre-stellar cores observed with the Infrared Space Observatory, the Spitzer Space Telescope, and the JWST. A comparison between the formamide CH bend in laboratory data and the 7.24 $\mu$m band in the observations tentatively indicates that, if formamide ice is contributing significantly to the observed absorption, it is more likely in a polar matrix. Upper limits ranging from 0.35-5.1\% with respect to H$_{2}$O are calculated. These upper limits are in agreement with gas-phase formamide abundances and take into account the effect of a H$_{2}$O matrix on formamide's band strengths.
Introduction
Of the >280 molecules that have been detected in interstellar environments (Endres et al. 2016), formamide (NH₂CHO) has become one of the most widely and deeply investigated in observational, modeling, computational, and laboratory studies in the last decade. Containing all four of the most abundant biological elements (C, H, N, and O), formamide is the simplest molecule that contains the biologically essential amide bond and has been suggested as a plausible prebiotic precursor to various nucleobases (e.g., Saladino et al. 2003; Barks et al. 2010), the chemical building blocks of RNA and DNA. It has also been proposed as an alternative prebiotic solvent to promote condensation reactions, which form many vital biological molecules but are highly endergonic in purely aqueous solutions (e.g., phosphorylation), by lowering water activity (Gull et al. 2017; Pasek 2019; Lago et al. 2020).
Given this potential prebiotic relevance, the fact that formamide has been observed in numerous sources in the interstellar medium as well as on extraterrestrial bodies in our own Solar System has exciting implications for astrobiology. First detected in the interstellar medium in the gas phase by Rubin et al. (1971) in the Sagittarius B2 high-mass star-forming region, formamide has since been observed in over 30 massive young stellar objects (MYSOs) as well as low-mass YSOs (LYSOs) with hot corinos and protostellar shocks (López-Sepulcre et al. 2019 and references therein). Within our Solar System, gas-phase formamide has been found in the comae of the comets Lemmon, Lovejoy, and Hale-Bopp, with abundances ranging around 0.01-0.02% with respect to H₂O (Bockelée-Morvan et al. 2000; Biver et al. 2014). It was also detected in situ by the Rosetta mission on comet 67P Churyumov-Gerasimenko, both on the surface by the Cometary Sampling and Composition experiment (COSAC) instrument on the Philae lander (Goesmann et al. 2015) and in the coma by the Double Focusing Mass Spectrometer (DFMS) on the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument (Altwegg et al. 2017; Rubin et al. 2019), where the formamide abundance was found to be ∼0.004% with respect to H₂O.
Notably, all of the interstellar sources in which gas-phase formamide has been securely detected have hot cores and corinos or shocked regions, where temperatures are high enough for formamide to thermally desorb from icy grains into the gas phase (López-Sepulcre et al. 2019). Additionally, in many of these sources, the formamide abundance correlates almost linearly with the abundance of isocyanic acid (HNCO) (Bisschop et al. 2007b; Mendoza et al. 2014; López-Sepulcre et al. 2015), and, in the case of the low-mass source IRAS 16293-2422, the two species are spatially correlated and have very similar deuteration ratios (Coutens et al. 2016).
These aspects of formamide observations could be considered evidence that formamide is formed in the solid state (i.e., via ice chemistry), possibly in a pathway chemically related to HNCO, and that it is detected in the gas phase following desorption from icy grains. The ice formation and grain sublimation scenario is further supported by recent observational work investigating excitation temperatures of N-bearing complex organic molecules (COMs) in 37 MYSOs from the ALMA Evolutionary study of High Mass Protocluster Formation in the Galaxy (ALMAGAL) survey, where formamide had the highest excitation temperatures of all the studied N-bearing COMs (≳250 K) (Nazari et al. 2022). These temperatures are consistent with thermal desorption experiments, in which formamide ice sublimes at high temperatures (typically >210 K) even when it is mixed with or deposited on top of more volatile species such as H₂O and CO, and at even higher temperatures (>250 K) when the experiments are performed on certain dust grain analog substrates (Dawley et al. 2014; Urso et al. 2017; Chaabouni et al. 2018; Corazzi et al. 2020).
Experimentally, solid-state formamide has been identified as a product of processing via a variety of energetic sources (e.g., electron, UV, X-ray, and ion irradiation) of a myriad of simple ice mixtures, including (but not limited to) CO:NH₃ (Demyk et al. 1998; Hudson & Moore 2000; Jones et al. 2011; Bredehöft et al. 2017; Martín-Doménech et al. 2020); indeed, processing of almost any ice mixture that contains H, N, C, and O is very likely to produce formamide. Such processing experiments mimic the radiation environments experienced by ices in protostellar envelopes and protoplanetary disks. Furthermore, recent experiments by Dulieu et al. (2019) demonstrate that hydrogenation of NO:H₂CO can also produce formamide, providing a plausible non-energetic formation pathway that is relevant to cold, dark clouds.
While a plethora of observational, experimental, and theoretical works (see Section 2) have significantly progressed our understanding of formamide's interstellar presence and its plausible chemical history, whether its formation occurs in the solid state, gas phase, or both remains unclear. A secure detection of formamide in ices would be immensely valuable to resolve this debate regarding its formation mechanism. Such a detection, if well resolved, could provide parameters such as formamide's solid-state abundance and its physico-chemical environment, which are essential to elucidating its formation pathway.
Previously, formamide has been tentatively detected in the solid state in the Infrared Space Observatory Short Wavelength Spectrometer (ISO-SWS) spectra of the MYSOs W33A and NGC 7538 IRS 9. In the case of W33A, an upper limit of 2.1% with respect to H₂O via the CH bend at 7.22 µm/1385 cm⁻¹ was derived, but the authors noted that the peak position in the observation (7.24 µm) was red-shifted relative to the formamide peak in their laboratory spectra (Schutte et al. 1999). For NGC 7538 IRS 9, no upper limit of formamide was provided – a laboratory spectrum of irradiated HNCO that showed IR evidence of formamide formation was qualitatively evaluated as a spectral fit to the observed 6 µm/1700 cm⁻¹ band (Raunier et al. 2004). In both of these cases, the bands attributed to formamide were overlaid on top of or blended with other strong ice features.
Typically, reference laboratory IR spectra are used to assign and fit astronomically observed IR features to specific species, and band strengths acquired via systematic laboratory experiments are used to quantify the column densities of these species. For COMs such as formamide that are expected to be present in the ice in very low concentrations (≲5%), it is important to obtain these spectra and band strengths not only for pure ices, but also in chemical conditions that are more realistic for interstellar ices. Namely, the molecule of interest should be diluted in the more abundant simple ice species (e.g., H₂O, CO, and CO₂), as interactions with other species present in the ice matrix can significantly alter the positions, profiles, and apparent band strengths of a molecule's vibrational features.
Morphological changes in the ice caused by thermal processing, such as transitions from amorphous to crystalline ice or matrix segregation, can also dramatically change an ice's spectral features, so spectra should be collected at a variety of temperatures as well. Considering such factors is not only important to accurately assign and quantify the molecule of interest, but it can also provide valuable information about the molecule's physico-chemical environment and history.
In previous IR characterization work, Brucato et al. (2006) derived the refractive index, density, and several band strengths of pure formamide, but integration ranges and errors were not provided for these band strengths, and no spectra of heated formamide or formamide in mixtures were collected. In order to tentatively assign the 7.24 µm band in W33A's spectrum to formamide, Schutte et al. (1999) collected spectra of formamide at 10 K in H₂O and H₂O:CH₃OH matrices, but only one band was characterized from these spectra, and it is unclear for what phase of formamide the band strength used in the upper limit calculation was derived. Urso et al. (2017) collected IR spectra of formamide in pure, H₂O-dominated, and CO-dominated ice matrices, but the band strengths, peak positions, and full width half maxima (FWHMs) of the formamide features in these mixtures are not given. Sivaraman et al. (2013) presented the peak positions of the bands of pure formamide in the 30 - 210 K temperature range, but no spectra of formamide in mixtures were collected.
Thus, in an effort to enable more secure assignments and accurate abundance and/or upper limit determinations of formamide in observed ice spectra, this work provides a comprehensive set of laboratory transmission IR spectra of pure formamide as well as formamide diluted in nine different astrophysically relevant ice mixtures of varying polarities. These spectra are provided at temperatures ranging from 15 - 212 K. Apparent band strengths were derived for eight integrated regions from the pure formamide spectra, and from these, three bands are evaluated as the most promising for future identification of formamide in observations. These bands are also fully characterized (i.e., peak positions, FWHMs, and relative band strengths are provided). Examples of how these spectra and values can be used in future analyses of ice observations are described, and new upper limits of formamide in a variety of objects (prestellar cores, low-mass protostars, and high-mass protostars) were calculated. Finally, all spectra are made publicly available on the Leiden Ice Database (Rocha et al. 2022; https://www.icedb.strw.leidenuniv.nl) for the community to use in fitting to their ice observations. This work is particularly timely given the recent launch of the James Webb Space Telescope (JWST), which may enable the detection of new COMs in interstellar ices due to its unprecedented sensitivity and spectral resolution.
Formamide formation mechanism debate
A variety of pathways have been suggested to explain the observed solid-state formamide formation in laboratory ice experiments. One initially proposed mechanism was the hydrogenation of HNCO, an attractive premise given that it provided a direct chemical link between HNCO and formamide to explain their correlation in gas-phase observations:

HNCO + 2H → NH₂CHO.

This pathway was first suggested by Charnley (1997) and was stated as a possible formation mechanism of formamide when it was observed in VUV irradiation experiments of pure HNCO (Raunier et al. 2004). However, hydrogenation experiments by Noble et al. (2015) via H bombardment of HNCO at <20 K did not produce detectable amounts of formamide, although the authors suggested that the reaction may be prevented in their experiments by the formation of very stable HNCO dimers or polymers, and that it could possibly proceed if HNCO is diluted in the matrix of an ice like H₂O. Indeed, subsequent experiments by Haupa et al. (2019) showed that, in a 3.3 K para-H₂ matrix, formamide can form from HNCO via a hydrogen addition-abstraction cycling mechanism, but in this reaction scheme, HNCO is still the favored product.
Another proposed formation pathway is the following radical-radical recombination:

NH₂ + CHO → NH₂CHO.

This mechanism is technically barrierless and can proceed at low temperatures (∼10 K) but produces higher yields at higher temperatures (∼20-40 K) due to increased mobility allowing the radicals to orient in the proper reaction geometry (Rimola et al. 2018; Martín-Doménech et al. 2020). In the laboratory, this mechanism requires some form of energetic processing to generate the NH₂ and CHO radicals, and its viability is supported by the presence of the CHO radical in the experimental spectra (Jones et al. 2011; Fedoseev et al. 2016; Ciaravella et al. 2019; Martín-Doménech et al. 2020; Chuang et al. 2022).
Various mechanisms have also been suggested where formamide is produced from the NH₂CO radical, which could form by the radical-molecule association of NH₂ and CO or of CN and H₂O (Hudson & Moore 2000; Bredehöft et al. 2017; Rimola et al. 2018):

NH₂ + CO → NH₂CO,  CN + H₂O → NH₂CO,  NH₂CO + H → NH₂CHO.

However, the formation of the NH₂CO radical via a pathway that does not involve hydrogen abstraction from already existing formamide, as seen in Haupa et al. (2019), has yet to be experimentally confirmed.
While these latter mechanisms do not provide an immediately obvious direct solid-state link between HNCO and NH₂CHO, some experimental studies have suggested alternative links consistent with these mechanisms. For example, once formed, formamide can decompose into HNCO via dehydrogenation and photolysis by H₂ loss (Brucato et al. 2006; Haupa et al. 2019; Chuang et al. 2022), so HNCO may be a product of NH₂CHO rather than the other way around. Fedoseev et al. (2016) proposed that the NH₂ radical can produce either HNCO or NH₂CHO depending on the degree of hydrogenation of the C- and O-containing molecule with which it reacts: the reaction of NH₂ with CO leads to HNCO, while NH₂ with HCO or H₂CO leads to formamide.
Thus, while formamide may not be a direct product of HNCO, the two species may be linked in a solid-state chemical network by common precursors. Astrochemical models using the rate constants from Fedoseev et al. (2016) further corroborate that, indeed, a direct chemical link between HNCO and NH₂CHO is not necessary to reproduce the observed linear correlation between them in models of various interstellar environments, and suggest instead that their correlation could be explained by their similar responses to physical (i.e., thermal) environments (Quénard et al. 2018).
In addition to these solid-state mechanisms, the plausibility of the following gas-phase formation route has been extensively debated in computational and modeling works since its proposal in Garrod et al. (2008):

NH₂ + H₂CO → NH₂CHO + H.

According to its first published electronic structure and kinetic calculations, this reaction is essentially barrierless at low temperatures and thus should proceed readily in interstellar environments (Barone et al. 2015; Vazart et al. 2016). Furthermore, chemical models of the protostar IRAS 16293-2422 and the molecular shocks L1157-B1 and B2 utilizing the calculated rate coefficients of this reaction produce formamide abundances that are consistent with observed values (Barone et al. 2015; Codella et al. 2017), and follow-up studies calculating rate coefficients of deuterated formamide formation via the same reaction show that formamide's observed deuteration ratio does not necessarily exclude the possibility of gas-phase formation (Skouteris et al. 2017).
However, the accuracy of these calculated rate coefficients has been called into question given that they neglect the zero point energy (ZPE) of one of the transition states. When the ZPE of the transition state is included, the reaction barrier becomes large enough that the reaction rate is negligible at low temperatures (Song & Kästner 2016), although some argue that inclusion of the ZPE is not warranted for this transition state and results in overestimation of the reaction barrier (Skouteris et al. 2017). Recent gas-phase experiments attempting to perform this route did not confirm any formamide formation, and their detection upper limits are consistent with the reaction barrier that includes the transition state ZPE (Douglas et al. 2022).
Methodology
All of the measurements were collected in the Laboratory for Astrophysics at Leiden Observatory on the IRASIS (InfraRed Absorption Setup for Ice Spectroscopy) chamber. The setup was described in detail in Rachid et al. (2021) and Rachid et al. (2022), and it has since undergone several upgrades, including a decrease of its base pressure to <1.0×10⁻⁹ mbar by the addition of new pumps, an exchange of the laser used for interference measurements to one with a wavelength of 543 nm (as the formamide ice refractive index was measured by Brucato et al. 2006 at this wavelength), and the implementation of an independent tri-dosing leak valve system that can be calibrated with a quadrupole mass spectrometer (QMS) following the procedure described in Appendix C.
The optical layout of the chamber remains the same as that shown in Figure 1 in Rachid et al. (2021): a Ge substrate sits at the center of the chamber and is cooled by a closed-cycle He cryostat to 15 K. Ices are grown on the substrate via background deposition of gases and vapors dosed into the chamber through leak valves. Infrared transmission spectra are collected through two ZnSe viewports that are parallel to the Ge substrate and normal to the IR light beam. During deposition, laser interference patterns used to determine ice thickness are measured on both sides of the Ge substrate (which is opaque and reflective in the visible light range) via photodiode detectors placed outside of viewports positioned 45° from the substrate normal. The patterns obtained from each side of the substrate during deposition show equal deposition rates on both sides. After deposition, the substrate can be heated to obtain IR spectra at different temperatures. In this work, 256 spectral scans with a 0.5 cm⁻¹ resolution were collected and averaged while the substrate was heated at a rate of 25 K hr⁻¹, resulting in a temperature uncertainty of ±1.5 K in each heated spectrum. Spectra were collected during heating until reaching the temperature at which the major matrix component desorbed. Before their analysis, all spectra were baseline-corrected using a cubic spline function.
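A minimal sketch of such a spline baseline correction (the anchor-point selection and function names are our own illustration, not the IRASIS pipeline):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def baseline_correct(wavenumber, absorbance, anchors):
    """Subtract a cubic-spline continuum anchored in feature-free regions.

    wavenumber, absorbance -- the measured spectrum (1-D arrays)
    anchors -- wavenumber points chosen by eye in regions without ice bands
    """
    order = np.argsort(wavenumber)
    wn, ab = wavenumber[order], absorbance[order]
    anchors = np.sort(np.asarray(anchors, dtype=float))
    # Spline through the spectrum values sampled at the anchor points.
    continuum = CubicSpline(anchors, np.interp(anchors, wn, ab))
    return wn, ab - continuum(wn)
```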
The liquids and gases used in this work were formamide (Sigma Aldrich, ≥99.5%), water (Milli-Q, Type I), carbon dioxide (Linde, ≥99.995%), carbon monoxide (Linde, ≥99.997%), ammonia (PraxAir, ≥99.96%), and methanol (Sigma Aldrich, ≥99.9%). The mixing ratios calculated for all of the spectra via the method outlined in Appendix C are presented in Table 1. Uncertainties in the column densities used to calculate these ratios are estimated to be ∼21% for the formamide column densities and ∼27% for the matrix species column densities (see Appendix C). Prior to deposition, the liquid formamide sample was heated to 60 °C and pumped on directly with a turbomolecular pump in order to remove contaminants (primarily water). The apparent band strengths of pure formamide are determined by depositing formamide onto the substrate held at 15 K while simultaneously collecting the transmission IR spectra and the laser interference pattern. The thickness d of the ice can be derived from the laser interference pattern via the following equation:

d = mλ / (2√(n² − sin²θ)),

where m is an integer number of constructive fringes, λ is the laser wavelength, n is the ice refractive index (1.361 for formamide at 543 nm, from Brucato et al. 2006), and θ is the angle of incidence.
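For concreteness, a few lines of Python implementing this interference relation with the values quoted above (the function name and defaults are ours):

```python
import numpy as np

def ice_thickness(m, wavelength_nm=543.0, n_ice=1.361, theta_deg=45.0):
    """Ice thickness (nm) after m constructive interference fringes:
    d = m * lambda / (2 * sqrt(n^2 - sin^2(theta)))."""
    theta = np.deg2rad(theta_deg)
    return m * wavelength_nm / (2.0 * np.sqrt(n_ice**2 - np.sin(theta)**2))

# e.g., thickness at each of the four fringe maxima used in this work
print([round(ice_thickness(m), 1) for m in range(1, 5)])
```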
Enough formamide is deposited so that four constructive fringes are acquired, the thickness of the ice at each fringe peak is calculated, and the integrated absorbances of eight spectral regions (see Table 2) are calculated from the spectra collected at the time that a fringe peak was reached. Then, the integrated absorbance for each spectral region is plotted as a function of ice thickness, and the slope of this line, Δ∫abs(ν) dν / Δd, is obtained via a least-squares fit. From this value, the apparent band strengths A' can be approximated with an equation based on the Beer-Lambert law (e.g., Hudson et al. 2014; Gerakines et al. 2023):

A' = 2.303 (M / (ρ N_A)) × (Δ∫abs(ν) dν / Δd),

where M is the molar mass of formamide (45.041 g mol⁻¹), ρ is the density of formamide ice (0.937 g cm⁻³, from Brucato et al. 2006), and N_A is Avogadro's number. Using the change in integrated absorbance over the change in thickness in this equation, rather than the absolute values of both variables, ensures that any residue from previous experiments on the substrate does not contribute to the calculated ice thickness. It also does not require a constant ice growth rate. The apparent band strengths reported in Table 2 are the averages of three repeated measurements following this method. The experimental uncertainties derived from the standard deviation of these three measurements range from 3-8% for the eight band strengths. However, simply using the standard deviations from the repeated measurements as the band strength uncertainties neglects potential systematic sources of error, such as uncertainties in the laser alignment geometry and the data analysis procedure. Thus, the uncertainties provided in Table 2 are calculated via error propagation of all of the experimental terms in Equation 2, using the same estimated uncertainties as Rachid et al. (2022) for the ice thickness (4%) and integrated absorbance (10%) as well as the ice density (10%). This calculation yields an uncertainty of 15% for the reported band strength values.
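A sketch of the band-strength determination itself, under the same assumptions (a hypothetical helper, not the authors' analysis code):

```python
import numpy as np
from scipy.constants import N_A  # Avogadro's number, mol^-1

def band_strength(thicknesses_cm, integrated_abs_cm1,
                  molar_mass=45.041, density=0.937):
    """Apparent band strength A' (cm molecule^-1) from the least-squares
    slope of integrated absorbance (cm^-1) versus ice thickness (cm):
    A' = 2.303 * (M / (rho * N_A)) * slope.
    Defaults are the formamide values quoted in the text."""
    slope, _intercept = np.polyfit(thicknesses_cm, integrated_abs_cm1, 1)
    return 2.303 * molar_mass / (density * N_A) * slope
```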
From the pure formamide apparent band strengths, the apparent band strengths of formamide in the investigated mixtures, A'_i, are calculated using the formamide column densities N_mix (obtained from the methods described in Appendix C) via the following equation:

A'_i = 2.303 ∫abs(ν) dν / N_mix,

and the relative apparent band strengths, η, are subsequently found by:

η = A'_i / A'.

Following propagation of error from the pure apparent band strengths, integrated absorbances, and the formamide column densities in the mixtures (see Appendix C), the uncertainties of the relative apparent band strengths presented here are estimated to be ∼28%.
Results
The spectra of pure amorphous and crystalline formamide are presented in Figure 3, and the eight apparent band strengths calculated at 15 K are presented in Table 2. Peak positions and vibrational mode assignments are also provided. Some integrated regions contain multiple overlapping peaks; in these cases, the peak positions and assignments were provided for all peaks within the integrated region, but the peaks were not deconvolved to give an individual band strength for each peak. (In the figure, the integrated regions of Table 2 are indicated via the shaded areas under the peaks.) These band strengths have percent differences ranging from 1-35% compared to those given for the same peak values in Brucato et al. (2006). As integration bounds were not provided by Brucato et al. (2006), any discrepancies in band strengths may be caused by differences in chosen integration regions.
The transition from amorphous to crystalline formamide is observed at 170 K, indicated by its bands becoming sharper and narrower and by some peaks splitting. The amorphous nature of almost all of the pure and mixed ices collected at 15 K can be ascertained from their spectra, which have typical amorphous features that show evidence of matrix crystallization during the warm-up phase of the experiments. This excludes the mixtures containing CO, whose phase at 15 K in these experiments may be crystalline given recent investigations of CO ice structure at ≥10 K (He et al. 2021; Gerakines et al. 2023; Rachid et al., in prep.). Figure 3 presents the spectrum of pure formamide ice along with the spectra of the pure matrix components, all at 15 K. The formamide peaks indicated in the shaded areas were selected for full characterization (i.e., their peak positions, FWHMs, and relative band strengths are determined for mixtures): the overlapping C=O stretch and NH₂ scissor at 1700.3 cm⁻¹/5.881 µm and 1630.4 cm⁻¹/6.133 µm, respectively, and the slightly overlapping CH bend and CN stretch at 1388.2 cm⁻¹/7.204 µm and 1328.1 cm⁻¹/7.529 µm, respectively. These peaks were selected because they are strong, have sharp profiles, and overlap the least with the major peaks of the most common interstellar ices, making them the best candidates for identifying formamide in interstellar ice spectra. There is still some overlap between these formamide peaks and some minor peaks of the matrix components, namely the water OH bend at ∼1600 cm⁻¹/6.25 µm, the methanol CH₃ and OH bends at ∼1460 cm⁻¹/6.85 µm, and the ammonia NH scissoring at 1624 cm⁻¹/6.16 µm. However, with sufficiently high formamide concentrations, it may still be possible to identify formamide in these spectral regions, as these matrix bands are relatively weak and broad.
The matrix- and temperature-dependent changes in these selected formamide ice bands are discussed in the following subsections, and their peak positions, FWHMs, and relative band strengths in different mixtures at various temperatures are reported in Appendices A and B. The NH₂ stretching features at 3371.2 cm⁻¹/2.966 µm and 3176.4 cm⁻¹/3.148 µm and the NH₂ wagging and twisting features at 689.2 cm⁻¹/14.510 µm and 634.0 cm⁻¹/15.773 µm were excluded from further characterization, despite their relatively large band strengths, due to their direct overlap with the two most intense water features, the OH stretch at ∼3279 cm⁻¹/3.05 µm and the H₂O libration at ∼780 cm⁻¹/12.8 µm, respectively (Öberg et al. 2007). The remaining formamide bands, the CH stretch at 2881.9 cm⁻¹/3.470 µm, the CH bend overtone at 2797.7 cm⁻¹/3.574 µm, and the convolved NH₂ rock at 1108.1 cm⁻¹/9.024 µm and CH out-of-plane deformation at 1056.1 cm⁻¹/9.469 µm, have low band strengths and directly overlap with various methanol features: the CH₃ stretches at 2950 cm⁻¹/3.389 µm and 2830 cm⁻¹/3.533 µm, the CH₃ rock at 1126 cm⁻¹/8.881 µm, and the C-O stretch at 1027 cm⁻¹/9.737 µm (Luna et al. 2018). The integrated regions used to calculate these band strengths also include the NH₂ scissoring mode, which presents as a weak, broad feature overlapping with the red shoulder of the C=O stretch (see Figure 4). The FWHM and relative band strengths of the formamide:NH₃ mixture are excluded from the bottom scatter plots in Figure 4 and the tables in Appendix A due to the significant overlap of this band with ammonia's NH scissoring mode at 1624 cm⁻¹/6.16 µm. The NH₃ peak is small enough in the NH₃-containing tertiary mixtures relative to the formamide C=O stretch to extract reliable peak positions and FWHMs, but relative band strengths were not calculated. In pure amorphous formamide (<170 K), the C=O stretch appears as a single broad peak centered at 1704.2 cm⁻¹/5.868 µm. Generally, being in a mixture causes the feature to sharpen, most dramatically so in apolar mixtures in which CO or CO₂ are the dominant species. For example, the FWHM of the feature in formamide:CO₂ at 15 K is 51.1 cm⁻¹, over three times narrower than that in pure formamide. Also, in the CO, CO:CH₃OH, and crystalline CO₂ matrices, some peak splitting occurs before the formamide crystallization temperature is reached. Such sharpening and splitting is typical when a polar molecule is highly diluted in an apolar matrix and is caused by the polar molecule being isolated in the matrix as a monomer or dimer, unable to form the hydrogen bonds with other polar molecules that tend to broaden and blend vibrational features (e.g., Ehrenfreund et al. 1996). Urso et al. (2017) also previously observed the formamide peaks splitting due to monomer and dimer formation in their very dilute 1:40 formamide:CO mixture. In the polar mixtures, however, as hydrogen bonding with the matrix is still possible, the feature remains broad. The feature is the most blue-shifted in the binary CO and CO₂ mixtures, where its peak values are 1717.2 and 1703.7 cm⁻¹, respectively, in the 15 K ices, while in polar mixtures it tends to red-shift, with the most red-shifted peak position being that of the tertiary H₂O:CO₂ mixture, 1694.0 cm⁻¹. Despite containing a high fraction of apolar CO, the tertiary mixtures with CO:CH₃OH and CO:NH₃ have peak positions similar to the polar mixtures.
The relative band strength of this formamide feature is >1 in all of the investigated matrices, with no observable trend related to polarity present in these values.
At formamide's crystalline phase transition temperature (170 K), the C=O peak blue-shifts and splits into multiple blended features. This is only observed in the pure formamide spectrum because all of the matrix molecules investigated here desorb below 170 K. An interesting trend to note is that, as the mixtures increase in temperature, the formamide C=O feature tends to broaden to a FWHM value more similar to that of pure formamide. This trend can be easily identified in the scatter plot in Figure 4, where the scatter points of several of the mixtures move closer to the points of the pure amorphous spectrum as temperature increases. It is also particularly noticeable in Figure 4 in the spectra of mixtures containing H₂O, which have peak position and FWHM values at high temperatures (>150 K) that are close to those of the pure spectrum. Sudden broadening of the FWHM to a value closer to that of pure formamide also tends to occur at the matrix crystallization temperatures (for example, in the binary CO₂ mixture between ∼30 and 40 K and in the H₂O-containing mixtures between ∼130 and 150 K). These spectral changes indicate that formamide segregation is occurring in the matrix as the ice is heated and is particularly promoted when the ice undergoes a dramatic restructuring during matrix crystallization. The conclusion that solid-phase formamide diluted in a matrix is mobilized via heating is consistent with formamide thermal processing studies, in which formamide deposited on top of water ice diffused through the water during heating (Chaabouni et al. 2018).
CH bending and CN stretching features (∼1388 and 1328 cm⁻¹)
The shape and position of the CH bend (1388.1 cm⁻¹/7.204 µm) does not vary much depending on chemical environment or temperature, with peak positions only ranging from 1398.0 - 1387.2 cm⁻¹ and FWHM values ranging from 11.1 - 27.5 cm⁻¹ in the mixtures investigated here (see Figures 5 and 6). As in the C=O stretch band, the binary apolar mixtures with CO and CO₂ have the most blue-shifted and narrow peaks; however, a trend of the mixture band shifting during heating to peak position and FWHM values closer to those of the pure band is not as clear. The band strength of the CH bend increases in all of the mixtures (e.g., η = 1.63 at 15 K in the formamide:H₂O mixture) except for the CO₂ mixture, in which the band strength decreases slightly (η = 0.85 at 15 K). The CN stretching band (1328.1 cm⁻¹/7.529 µm) varies much more dramatically across different mixtures and temperatures (see Figures 5 and 6), particularly in the binary apolar mixtures, in which it red-shifts by up to ∼50 cm⁻¹ and splits into multiple convolved features. In the formamide:CO₂ spectrum, two peaks are present at 15 K, at 1316.8 and 1277.0 cm⁻¹, with the peak at 1277.0 cm⁻¹ having a greater intensity until 40 K, at which point the intensity of the 1316.8 cm⁻¹ peak increases and that of the 1277.0 cm⁻¹ peak decreases. The 1277.0 cm⁻¹ peak intensity then continues to decrease during heating until CO₂ sublimates at 90 K (see Figure 5). This trend is indicative of the 1277.0 cm⁻¹ peak belonging to the formamide monomer and the 1316.8 cm⁻¹ peak belonging to the formamide dimer, as it would be expected for the monomer peak to decrease and the dimer peak to increase if segregation occurs during heating, especially during a major ice structure rearrangement like matrix crystallization, which occurs for CO₂ at 40 K. Such assignments are consistent with the assignments in Mardyukov et al. (2007), who observed the formamide monomer and dimer in a xenon matrix at 1267.2 and 1305.4 cm⁻¹, respectively, and supported their assignments with computations. The peak in the formamide:CO spectrum also has a red component that appears to decrease in intensity during heating, but the monomer and dimer peaks are not as clearly distinguishable, as more than two peaks appear to overlap in that spectrum. In the mixtures containing other polar molecules, the band is generally blue-shifted, broadened, and decreased in intensity relative to the CH bend. The relative strength of the band is close to 1 in most of the characterized polar mixtures, except for the H₂O:CO₂ mixture, which has a relative band strength of 0.75 at 15 K. In contrast, the relative band strength is closer to 2 in all of the primarily apolar mixtures.
While the CN stretch clearly has more potential than the CH bend as a diagnostic of the chemical environment of formamide, it is also much broader and less intense in most of the mixture spectra than in the pure spectra. This diminishes the ability to identify this band in a spectral region where several other astronomically relevant COMs also have features (see Section 5).
Astronomical implications
The ability of formamide to form via both atom addition and energetic processing in a variety of ices containing C, H, N, and O means that its solid-state presence is plausible in many interstellar environments, ranging from dark interstellar clouds to protoplanetary disks. However, in order to securely detect it, an absorption with a clear peak position and profile that is distinguishable from other ice features in the same spectral region must be identified.
The C=O stretch is amorphous formamide's strongest and sharpest feature, but it overlaps with the blue wing of the strong and broad 6.0 µm feature present in most interstellar ice spectra. Water and ammonia, which have been securely identified in ices, as well as formic acid and formaldehyde, which have been tentatively identified, have features in this spectral region (Boogert et al. 2008, 2015). Additionally, many other carbonyl group-containing COMs that have been detected in the gas-phase and may be present in the solid state, like acetaldehyde, acetone, and methyl formate, also have strong absorptions in this wavelength region (van Scheltinga et al. 2018; Rachid et al. 2020; van Scheltinga et al. 2021). While this limits the potential of using formamide's C=O band as its primary means of identification, the band can still be used for performing fits spanning a wider wavelength region in combination with other bands.
The CH bend and the CN stretch are medium-strength features that lie in the "COM-rich region" of interstellar ice spectra between 7-8 µm (Boogert et al. 2008). This region, where many organic functional groups have absorptions, sits on the tail of the strong 6.85 µm band (whose assignment remains uncertain but likely contains absorptions by methanol and the ammonium cation; Boogert et al. 2008, 2015). The methane CH bending band at 7.68 µm is the most clearly and frequently observed ice band in this region (Öberg et al. 2008), but additional weaker features at 7.03, 7.24, 7.41, and 8.01 µm are also consistently observed toward some sources (Figure 7). Candidate carriers suggested for some of these absorptions include species like formic acid, ethanol, acetaldehyde, the formate anion, and, potentially, formamide (Schutte et al. 1999; Boogert et al. 2008; van Scheltinga et al. 2018).
As mentioned previously, Schutte et al. (1999) tentatively assigned formamide as a plausible contributor to the 7.24 µm band in W33A using a formamide:H 2 O spectrum and calculated a formamide ice upper limit of 2.1% with respect to H 2 O, although they pointed out that in their lab data, the formamide peak position was blue-shifted by 0.02 µm relative to the observed band, and that an assignment to the CH bend of formic acid (HCOOH) may be more appropriate. Ethanol (CH 3 CH 2 OH) and the formate anion (HCOO − ) have also been considered candidates for this band (Boogert et al. 2008; Öberg et al. 2011; van Scheltinga et al. 2018; Rocha et al. in prep). No distinct and consistently observed bands are located at the peak position of the formamide CN stretch at ∼7.5 µm. However, in mixtures (particularly those with polar components), the intensity and sharpness of this band weaken (relative to the intensity and sharpness of the CH bend). Such a profile change makes a distinction of the CN stretch from the continuum in this region less feasible if formamide is present at the low ice abundances expected for COMs, especially given that around this wavelength, many sources also show a broad and significant absorption commonly attributed to SO 2 ice (Boogert et al. 1997; Öberg et al. 2008). On the other hand, the CH bend remains strong and sharp in all of the mixtures investigated here. All of the other absorption features of formamide either have profiles that are too broad or weak, or overlap directly with the strongest absorptions of the major ice components (see Figure 3), and will therefore not be utilized in our hunt for formamide ice.

Fig. 7 (caption excerpt): The colored dotted lines correspond to the peak positions of the formamide CH bend in the plotted formamide spectra. The wavelength calibration of the MIRI-LRS spectra, NIR38 and J110621, is still uncertain and was done for these spectra locally via the CH 4 band.
Thus, if formamide is indeed present in interstellar ices, the CH bend is likely its best tracer. We focus our subsequent analysis on the comparison of the formamide CH bend in mixtures to the observed 7.24 µm band in nine spectra collected toward a variety of sources by ISO, Spitzer, and the recently launched JWST (Figure 7). The ISO (SWS) spectra include three massive young stellar objects (MYSOs), W33A, NGC 7538 IRS 9, and AFGL 7009s, and the Spitzer (IRS) spectra include three low-mass young stellar objects (LYSOs), B1c, 2MASS J17112317, and RNO 91. These archival spectra were selected due to their 7-8 µm regions having several deep and distinct features, indicating that they may be COM-rich, and because their profiles in this region slightly differ, demonstrating the variety of spectral features that have been observed here. In addition, three spectra recently collected by the JWST have been included: two pristine, high-extinction dark clouds toward background stars, NIR38 and J110621, observed with the Mid-InfraRed Instrument (MIRI) in its low-resolution (LRS) mode, and the protostar L1527, observed with MIRI in its medium-resolution (MRS) mode.

The 7.24 µm band is present to some extent in all of the sources, usually at an optical depth similar to the 7.41 µm band in the local continuum-subtracted spectra. The position and FWHM of the band were extracted from the spectra that have spectral resolutions high enough to clearly define the shape and position of the peak (that is, the ISO-SWS MYSO spectra and the JWST MIRI-MRS spectrum) by fitting a Gaussian profile to the peak. Figure 8 shows these observed peak positions and FWHMs (indicated with star shapes) in a scatter plot with the peak positions and FWHMs of the CH bend extracted from the laboratory spectra. The peak positions and FWHMs extracted from laboratory spectra of ethanol in a H 2 O mixture (van Scheltinga et al. 2018), formic acid in a H 2 O:CH 3 OH mixture (Bisschop et al. 2007a), and ammonium formate in a H 2 O mixture at 150 K (Galvez et al. 2010) are also included in this figure (indicated with the letters E, F, and H respectively) to enable a comparison between formamide and the other commonly proposed carriers. From this plot, it is evident that, while the polar mixtures have the band position and profile closest to the observations, they are all still too blue-shifted (by ∼7 cm −1 /0.04 µm) from the astronomical values for formamide to be the major carrier of this band. In contrast, ethanol, formic acid, and the formate anion in polar mixtures are much better candidates.
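A Gaussian fit of this kind can be sketched with SciPy; a minimal example assuming a single, continuum-subtracted band (the variable names and initial width guess are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(nu, amp, center, sigma):
    return amp * np.exp(-0.5 * ((nu - center) / sigma) ** 2)

def peak_position_and_fwhm(nu, tau, guess_center, guess_sigma=5.0):
    """Fit a single Gaussian to a continuum-subtracted band and return
    (peak position in cm^-1, FWHM in cm^-1)."""
    p0 = (tau.max(), guess_center, guess_sigma)
    (amp, center, sigma), _ = curve_fit(gaussian, nu, tau, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)  # FWHM of a Gaussian
    return center, fwhm
```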
It is still possible that formamide could be contributing to the blue wing of this band. However, to result in non-negligible upper limits, the formamide must be present in a matrix containing other polar molecules, as the band is far too blue-shifted in the purely apolar mixtures to contribute significantly to the observed absorption. Therefore, we derived upper limits of formamide by fitting the CH bend in the laboratory spectrum of the formamide:H 2 O mixture at 15 K to the 7.24 µm band in the local continuum-subtracted observed spectra (see example fits in Figure 9). The water mixture was chosen for the fit for simplicity's sake and due to the fact that water is by far the most abundant interstellar ice component. The water contribution was subtracted out of the laboratory ice spectrum using a spectrum of pure water ice to ensure that absorption by the broad water bending band did not contribute to the calculated formamide upper limit. The band strength used to perform the upper limit calculation was 1.5×10 −17 cm molec −1 , the band strength of the CH bend in pure formamide at 15 K (from Table 2) multiplied by the relative band strength of formamide in H 2 O at 15 K (1.63, from Appendix B).
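The resulting column density upper limit then follows from the standard relation N ≤ ∫τ dν / A. A minimal sketch using the band strength quoted above (the integration bounds are placeholder assumptions):

```python
import numpy as np

A_CH_BEND_IN_H2O = 1.5e-17  # cm molecule^-1, from the text

def column_density_upper_limit(nu, tau_fit, band=(1360.0, 1420.0),
                               band_strength=A_CH_BEND_IN_H2O):
    """N <= integral(tau dnu) / A, where tau_fit is the scaled laboratory
    band fit to the observed 7.24 um feature (optical depth units)."""
    mask = (nu >= band[0]) & (nu <= band[1])
    order = np.argsort(nu[mask])
    return np.trapz(tau_fit[mask][order], nu[mask][order]) / band_strength
```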
When deriving upper limits, it is prudent to ensure that the laboratory spectrum fits the observed spectrum across a wider wavelength range, as upper limits can be easily overestimated if only one band is considered. Subtracting out the contributions of other ices that absorb in the analyzed spectral region, if their abundances can be unambiguously determined from other spectral regions, also prevents further upper limit overestimations. Therefore, we ensured that the calculated upper limits in Table 3 do not result in a C=O stretch absorption that exceeds the observed optical depth of the ∼6 µm band in our selected objects. Prior to checking the C=O absorption in this region, the spectral contribution of water's OH bend at ∼1655 cm −1 /6.04 µm was removed from the observed spectra by scaling a laboratory water spectrum at 15 K from Öberg et al. (2007), so that the water column density of the scaled spectrum was the same as what was previously determined for these objects, and then performing a subtraction.

Fig. 9: Examples of fits of the CH bending band in the formamide:H 2 O laboratory spectrum (in black, water contribution subtracted out) to the local-continuum-subtracted observed spectra (in color) used to derive solid-state formamide upper limits. The peak position of the CH bend in the laboratory spectrum is marked with a dotted blue vertical line. These selected fits showcase the variation in the profile of the observed 7.24 µm band across the selected sources. In the observed spectra on the left, excess absorption on the blue wing of the observed 7.24 µm band allows for relatively high formamide upper limits (≲1.5-5.1% with respect to H 2 O), while the observed spectra on the right lack such a wing, resulting in lower formamide upper limits (≲0.35-0.68% with respect to H 2 O).
(For the ISO and Spitzer data, the water column densities from Boogert et al. (2008) were used for scaling; for the JWST MIRI-LRS data, the water column densities from McClure et al. (2023) were used. For the JWST MIRI-MRS spectrum (L1527), the water column density was determined by first subtracting the silicate contribution by fitting the GCS3 spectrum to the 10 µm silicate band and then fitting the laboratory water spectrum from Öberg et al. (2007) to the water libration band.) The resulting upper limits of solid-state formamide, presented as column densities as well as with respect to the abundance of water in each source, are given in Table 3. These upper limits (ranging from 0.35-5.1% with respect to H 2 O) are all at least an order of magnitude greater than (but consistent with) the observed gas-phase formamide abundances in three comets (0.016-0.021% with respect to H 2 O) as well as the average beam dilution-corrected abundance of 22 MYSOs from the ALMAGAL survey (∼0.05% with respect to H 2 O, assuming a CH 3 OH/H 2 O ratio of ∼5%). As a beam dilution-corrected gas-phase formamide abundance has also been obtained for the LYSO B1c (∼0.05%), one of the sources investigated here, it can be directly compared to our solid-state formamide upper limit derived from the object's low-resolution Spitzer data. While our upper limit (≤0.93%) is consistent with this gas-phase abundance, it is an order of magnitude greater. We expect the precision of this upper limit to be further refined by future high-resolution observations of B1c, planned to be observed by MIRI-MRS in the JOYS program.
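The water-subtraction step described above amounts to scaling a laboratory water spectrum to a known column density and subtracting it from the observation; a minimal sketch (the band bounds, the choice of water band, and its band strength are placeholders, not the authors' values):

```python
import numpy as np

def subtract_scaled_water(nu_obs, tau_obs, nu_lab, tau_lab,
                          N_water, A_band, band=(1300.0, 1900.0)):
    """Scale a pure laboratory water spectrum so that it corresponds to a
    previously determined water column density N_water (cm^-2), then subtract
    it from the observed optical-depth spectrum. A_band is the band strength
    (cm molecule^-1) of the water band integrated over `band` (cm^-1)."""
    order = np.argsort(nu_lab)
    nu_lab, tau_lab = nu_lab[order], tau_lab[order]
    mask = (nu_lab >= band[0]) & (nu_lab <= band[1])
    N_lab = np.trapz(tau_lab[mask], nu_lab[mask]) / A_band
    scaled = np.interp(nu_obs, nu_lab, tau_lab * (N_water / N_lab))
    return tau_obs - scaled
```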
A formamide upper limit of 2.1% with respect to H 2 O was previously derived for W33A in Schutte et al. (1999) by assuming that the entire 7.24 µm band consisted of formamide and using a band strength of 3.2 × 10 −18 cm molec −1 attributed to Wexler (1967), where it is unclear for what phase of formamide this band strength was derived. Despite our very different approaches, we have fortuitously arrived at nearly the same upper limit value for W33A (2.2%).
In the higher resolution observational data of MYSOs explored here, the lack of a formamide CH bending feature distinct from other COM absorptions prevents a secure formamide ice detection. However, it is clear from the example upper limit fits shown in Figure 9 that the profile of the 7.24 µm feature is not uniform across different sources, and several sources, such as NGC 7538 IRS 9, NIR38, and RNO 91, may have a blue wing on this band that spectrally overlaps with the CH bend of formamide. Therefore, it is possible that a more distinct absorption at the expected 7.20 µm will emerge more clearly in sources targeted by future JWST MIRI-MRS observations. The first ice spectra arriving now from MIRI-MRS illuminate a promising future. In the spectrum of the LYSO IRAS 15398-3359 acquired by the JWST CORINOS program (program 2151, PI: Y.-L. Yang, Yang et al. 2022), the COM features between 7-8 µm previously detected barely above 3σ levels in the spectra in Figure 7 are beautifully resolved (although a distinct absorption centered at 7.20 µm is not present). More sources known to have strong COM absorptions in this spectral region have been specifically targeted by the JOYS program as well as by dedicated JWST proposals.
Conclusions
In an effort to facilitate the hunt for formamide in interstellar ices, laboratory spectra of pure formamide and formamide in various astronomically relevant ice mixtures, at temperatures ranging from 15 - 212 K, have been collected and made freely available to the astronomical community on the Leiden Ice Database for Astrochemistry (LIDA). The band strengths at 15 K for all pure formamide features between 4000 - 500 cm −1 /2.5 - 20 µm are presented, and the peak positions, FWHMs, and relative apparent band strengths of the three bands identified as the most promising for future formamide detection were extracted from the pure and mixed formamide spectra. These spectra and extracted data were used to assess the present and future detectability of formamide ice in various interstellar objects. The primary conclusions drawn from this work are as follows:

1. Out of the eight formamide features in the investigated IR spectral region, the C=O stretch (1700.9 cm −1 /5.881 µm), the CH bend (1388.3 cm −1 /7.203 µm), and the CN stretch (1328.0 cm −1 /7.530 µm) are likely to be the most useful for future formamide identification due to their strength, sharp profile, and low overlap with the strongest features of the major ice components, with the CH bending feature being the most promising. The NH 2 stretching features (3371.2 cm −1 /2.966 µm and 3176.4 cm −1 /3.148 µm) and the NH 2 wagging and twisting features (689.2 cm −1 /14.510 µm and 634.0 cm −1 /15.773 µm) directly overlap with strong water absorptions, while the CH stretch (2881.9 cm −1 /3.470 µm), the CH bend overtone (2797.7 cm −1 /3.574 µm), and the convolved NH 2 rock and CH out-of-plane deformation (1108.1 cm −1 /9.024 µm and 1056.1 cm −1 /9.469 µm) have both low band strengths and direct overlap with methanol absorptions, making them less suitable for formamide identification.

2. In the mixtures investigated here, the CN stretch is the most affected by ice composition: its peak position varies by up to ∼68 cm −1 and its FWHM by up to ∼50 cm −1 across the mixtures, with peak splitting observed in the apolar mixtures. The C=O stretch can also change significantly, depending on the matrix, by up to ∼27 cm −1 in peak position and up to ∼40 cm −1 in FWHM, although peak splitting in the apolar mixtures is not as prominent as in the CN stretch. The CH bend is relatively unaffected by ice composition, with its peak position and FWHM only varying by ∼11 cm −1 and ∼15 cm −1 , respectively, across the different mixtures. Relative to the pure spectrum, the band strength of the C=O stretch increases in all of the investigated mixtures. The CH bend band strength also increases in all of the mixtures except the binary CO 2 mixture, while a significant increase in the band strength of the CN stretch is only observed in the mixtures dominated by an apolar component.

3. Although the polar formamide mixtures provide the closest match to the 7.24 µm band observed toward nine lines of sight (including dense clouds, LYSOs, and MYSOs) with three different space telescopes (ISO, Spitzer, and JWST), none provide a convincing fit, with all having their CH bend peak position approximately 7 cm −1 /0.04 µm too far to the blue of the clearly observed band at 1381 cm −1 /7.24 µm. Instead, formic acid and ethanol mixtures containing H 2 O provide a better fit. However, this does not exclude the possibility of formamide being present in these ices.
4. The calculated formamide upper limits in these objects range from 0.35-5.1% with respect to H 2 O, which are consistent with gas-phase abundances of formamide in several LYSOs, MYSOs, and comets. The upper limit value derived for W33A, 2.2% with respect to H 2 O, is fortuitously in agreement with that derived by Schutte et al. (1999).
5. While a more secure formamide detection is not possible with the telescopic data explored in this work, the first ice observations arriving from JWST demonstrate an unprecedented sensitivity and spectral resolution that will enable us in the near future to broaden the search for formamide ice, both in objects previously observed by Spitzer, whose analysis is limited by low spectral resolution, and in newly observed objects that were too dim to be observed by Spitzer or ISO.
Appendix A: Peak positions and FWHMs of formamide in pure and mixed ices

This appendix contains the peak positions and FWHMs of the formamide features selected for complete IR characterization in this work. The values are listed for the formamide features in pure ice as well as in mixtures containing H 2 O, CO 2 , CO, CH 3 OH, and NH 3 . The peak position is the wavelength at which the absorption reaches its maximum, and the FWHM is the width of the peak between the half-maximum values on each side. A Savitzky-Golay filter with a second-order polynomial was applied to many of the mixture spectra before extraction of the peak position and FWHM to eliminate shifts in these values caused by noise. The smoothing windows used ranged from 10-100 depending on the level of noise present in each spectrum, and care was taken that these smoothing windows did not warp the shape of any features. Values were extracted until the temperatures at which the major matrix component desorbed were reached. For formamide features in mixtures where there is direct overlap with weaker matrix component bands (e.g., the C=O stretch in the NH 2 CHO:H 2 O mixture), the spectrum of the matrix component without formamide, collected using identical experimental parameters at the corresponding temperature, was scaled to the formamide mixture spectrum via a feature without overlap with formamide features and subtracted prior to peak position and FWHM extraction. These cases are denoted with an M. For formamide features that lie on the tails of bands or on very wide bands without sharp features (e.g., the CH bend and CN stretch in the NH 2 CHO:NH 3 mixture), a second-order polynomial was used to perform a local continuum subtraction. These cases are denoted with a P. For formamide features where overlap with a strong matrix component band was very substantial and difficult to reliably subtract (e.g., the C=O stretch in the NH 2 CHO:NH 3 mixture), only peak positions are given. These cases are denoted with an N. For formamide features that contain multiple peaks, all peak positions are given, and the FWHM of the strongest peak is given. However, if a weaker peak maximum occurs within the two half-maximum values of the stronger peak (e.g., the CN stretch in the NH 2 CHO:CO 15 K mixture), it is included in the FWHM. These cases are denoted with a B.
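The smoothing step above can be sketched with SciPy's Savitzky-Golay filter; a minimal example assuming a single well-separated peak (the window length must be odd and is a placeholder within the 10-100 range quoted above):

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_then_measure(nu, absorbance, window_length=51, polyorder=2):
    """Apply a second-order Savitzky-Golay filter, then read off the peak
    position and the FWHM from the half-maximum crossings."""
    smoothed = savgol_filter(absorbance, window_length, polyorder)
    peak = np.argmax(smoothed)
    above = np.where(smoothed >= smoothed[peak] / 2.0)[0]
    fwhm = abs(nu[above[-1]] - nu[above[0]])  # assumes one contiguous peak
    return nu[peak], fwhm
```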
Appendix C: QMS calibration of an independent tri-dosing leak valve system and mixing ratio determination

Appendix C.1: Calibration procedure and mixing ratio determination

The new tri-dosing system mentioned in Section 3 allows for simultaneous but independent deposition of gases and vapors via three leak valves, each connected to a separate gas line. Compared to our previous method, in which gases and vapors were premixed in the desired ice ratio in a gas bulb and then dosed into the chamber through a single valve, the new method allows for codepositing multiple gases and vapors without experimental errors in the ratio caused by mixing gases with different volatilities in a single bulb or dosing gases that may have different flow, pumping, and substrate deposition rates through the same valve. Consequently, it greatly improves the ability to create mixtures with precisely determined ratios of molecules with low volatilities like formamide, which is challenging in traditional premixing procedures. The benefits of independent multidosing systems were also described for similar systems with two leak valves in Gerakines et al. (1995) and Yarnall & Hudson (2022). There are several ways to calibrate such a system to ensure a certain ratio of ice components. One such method is calibrating the deposition rate on the substrate to a specific leak valve position with a specific pressure of the gas or vapor of choice in its manifold line. However, because formamide has a very low vapor pressure compared to liquids like H 2 O and CH 3 OH and tends to stick to and condense in various parts of the line, reproducing a specific line pressure throughout multiple experiments using this method is difficult. Therefore, to conduct a systematic and thorough IR characterization of formamide in a wide variety of ices with precisely constrained mixing ratios, a different method is necessary.
For this purpose, we calibrate molecules' ice deposition rates against the intensity of their mass signals during the deposition with a QMS. In this calibration procedure, a pure molecule is dosed at a constant rate into the chamber, with the substrate cooled to the desired deposition temperature and the IR spectrometer continuously collecting IR spectra, while the QMS continuously collects mass peak intensity values of selected mass-to-charge ratios (m/z) in the selected ion monitoring (SIM) mode. The IR spectrometer is used to measure the ice column density rather than the laser interference because the formamide deposition pressure does not remain stable over the long period of time necessary to generate multiple interference fringes (>18 hours), which is necessary to reliably extract a deposition rate. Conversely, a deposition rate can be extracted from integrated absorbance growth rates (obtained via a least-squares fit to the integrated absorbance over time) in ∼30 mins, during which time the formamide deposition rate remains stable (as indicated by the linearity of the integrated absorbance increase over time). The integrated absorbance growth rate for that molecule can then be correlated to a specific mass peak's signal intensity (typically the molecule's base peak) in the QMS (obtained via averaging the mass peak's signal intensity values collected during the deposition and simultaneous IR data collection). The integrated absorbance growth rate can then be converted to the ice column density growth rate, dN/dt, via the following equation if the band strength of the pure molecule, A, is known:

dN/dt = ln(10) · (d/dt ∫ Abs(ν) dν) / A. (C.1)

Table 1 provides the peak used for the calibration of each pure molecule and its corresponding band strength and reference.
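A minimal sketch of the conversion in (C.1); the ln(10) factor assumes the spectra are recorded in base-10 absorbance units (our reading of the standard convention, not stated explicitly above):

```python
import numpy as np

def column_density_growth_rate(times, integrated_absorbances, band_strength):
    """Least-squares slope of integrated absorbance vs. time, converted to an
    ice column density growth rate: dN/dt = ln(10) * slope / A."""
    slope, _intercept = np.polyfit(times, integrated_absorbances, 1)
    return np.log(10.0) * slope / band_strength
```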
Via this method, a calibration curve relating a mass peak's signal intensity in the QMS to its column density growth rate can be determined, with the slope of this curve referred to here as a molecule's sensitivity (see Figure C.1 for an example of such a calibration). When starting a deposition, the leak valve can then be opened accordingly so that the mass signal of the molecule in the QMS corresponds to the desired column density growth rate. In this work, such calibration curves were completed for all molecules used in these spectra with a Spectra Microvision Plus QMS. The relationship between column density growth rate and QMS signal intensity is linear for all molecules within the deposition pressure ranges used (R 2 values of the linear fits ranged from 0.9699-0.9999 with an average of 0.9936).
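Correlating these growth rates with the averaged QMS signal then yields the calibration line; a minimal sketch that also reports the R² of the fit (array names are ours):

```python
import numpy as np

def qms_sensitivity(mean_qms_signals, growth_rates):
    """Fit dN/dt = sensitivity * signal + offset; the slope is the molecule's
    sensitivity. Returns (sensitivity, offset, R^2)."""
    x = np.asarray(mean_qms_signals, dtype=float)
    y = np.asarray(growth_rates, dtype=float)
    sensitivity, offset = np.polyfit(x, y, 1)
    residuals = y - (sensitivity * x + offset)
    r_squared = 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
    return sensitivity, offset, r_squared
```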
After the experiment, the mass signal data during the deposition can be converted via the equation from the calibration curve to a column density growth rate, which is then integrated over time to give the absolute column density of each species at the end of the deposition. However, in the case that some of the species in a given mixture share their strongest mass peaks and have no alternative strong peaks without overlap with the other mixture components (which is the case for several mixtures in this work), the individual column density growth rates must be extracted from the mass spectra by utilizing ratios of a given molecule's base peak to another mass peak that is not shared with any other molecules in a given mixture. For example, the mass spectrum of formamide contains a peak at 28, the base peak of CO. Thus, during the deposition of the formamide:CO mixture, the 28 m/z signal contains contribution from both formamide and CO. The contribution of formamide to the signal at 28 m/z was calculated by dividing the signal at 45 m/z (which, in this mixture, only formamide contributed to) by the ratio of the 45 and 28 m/z peaks during pure formamide deposition. This calculated contribution was then subtracted from the 28 m/z signal to yield the CO 28 m/z signal.
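The shared-peak correction for the formamide:CO case can be written directly; a minimal sketch (the 45/28 ratio is measured during a pure formamide deposition, as described above):

```python
def co_28_signal(sig28_mix, sig45_mix, ratio_45_to_28_pure_formamide):
    """Remove formamide's contribution from the shared 28 m/z channel:
    formamide's 28 m/z signal is inferred from the formamide-only 45 m/z
    signal divided by the 45/28 ratio of pure formamide, then subtracted."""
    formamide_28 = sig45_mix / ratio_45_to_28_pure_formamide
    return sig28_mix - formamide_28
```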
In order to estimate the error of the calculated column density of each component and, subsequently, the mixing ratios in each ice, multiple sources of error have been considered. These are discussed in the following subsections.
Appendix C.2: Ion interference effect
Ion interactions within the instrument, such as ion-molecule interactions or ions interacting with the QMS filament or rods, during the dosing of multiple species into the chamber can affect a molecule's sensitivity. Such interactions between two different species can cause their sensitivities to deviate from the values determined in the calibration of each species in pure form. This phenomenon is often referred to as the ion interference effect, and it complicates using a mass spectrometer to quantify gases or vapors in a mixture (Basford et al. 1993; Yongjun et al. 2022).

Fig. C.1 (caption excerpt): How a calibration curve is obtained, shown here for CO. The rate of ice growth is found via monitoring the molecule's integrated absorbance of a selected peak, which can be converted to column densities via known pure band strengths, and relating them to the time of the spectrum collected (top left). Then the average value of a given mass signal (28 m/z for CO), measured by the QMS during the same time interval as the collection of the IR spectra, is calculated (bottom right). The calibration curve is obtained by correlating the two values to each other for a range of ice deposition rates (top right).
The magnitude of this effect is highly dependent on the species as well as the instrument. It increases with total pressure and decreases for a given species as its proportion in a mixture increases (Nemanič & Žumer 2018; Sun et al. 2020). Thus, the sensitivities that are most affected by this effect are those of species that are present in the lowest proportions in a mixture. Given that our formamide dosing pressure was in the range of a few 10 −9 mbar and that the intended ratio of formamide to matrix components was ∼5:100 in the case of binary mixtures and 5:100:25 in the case of tertiary mixtures, we treated the interference effect of formamide on the matrix components as negligible and accounted for ion interference only in the formamide signal. While the formamide absolute column densities are necessary to calculate its relative band strengths (see Section 3), the absolute column densities of the matrix components are not needed to find any values other than the mixing ratios.
In order to quantify the ion interference effect on formamide in each mixture, at the start of each deposition, formamide was first dosed alone, and its mass signal was given ∼5 min to stabilize before the other matrix components were introduced into the chamber. Although this meant that each experiment started with a very brief deposition of pure formamide, the deposition rate of formamide was so slow in all of the experiments (on the order of tens of monolayers per hour) that this brief pure deposition was usually not even noticeable above the noise level in the IR spectra. Then, the ratio between formamide's signal before and after the matrix molecules were added to the chamber was used as a correction factor to remove the ion interference effect from formamide's signal. An example of this correction is shown in Figure C.2 for the formamide:CH 3 OH mixture, which had the highest correction factor of all the mixtures (1.11).
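A minimal sketch of the correction factor derived from first dosing formamide alone (we assume the mixed-phase signal is suppressed relative to the pure-phase signal, so the mixture signal is multiplied by the factor; the direction of the correction is our reading of the procedure):

```python
import numpy as np

def interference_correction_factor(signal_pure_phase, signal_mixed_phase):
    """Ratio of formamide's averaged mass signal while dosed alone to its
    signal after the matrix gases are introduced (e.g., 1.11 for the
    formamide:CH3OH mixture)."""
    return np.mean(signal_pure_phase) / np.mean(signal_mixed_phase)

# corrected = raw_mixture_formamide_signal * interference_correction_factor(pre, post)
```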
The ion interference effect on formamide was noticeable in all of the mixtures where the major matrix component was polar, while it was not detected above the noise level in the mixtures in which the major matrix component was apolar.
On the inverse braid and reflection monoids of type $B$
There are well-known relations between braid groups and symmetric groups, and between Artin-Brieskorn braid groups and Coxeter groups. The inverse braid monoid is related in the same way to the symmetric inverse monoid. In this paper we show that similar relations exist between the inverse braid monoid of type $B$ and the inverse reflection monoid of type $B$. This gives a presentation of the latter monoid.
Introduction
Let V be a finite-dimensional real vector space (dim V = n) with a Euclidean structure. Let W be a finite subgroup of GL(V) generated by reflections. We suppose that W is essential, i.e. that the set of vectors fixed by the action of W consists only of zero: V^W = 0. Let M be the set of hyperplanes such that W is generated by the orthogonal reflections with respect to the hyperplanes M ∈ M. We suppose that for every w ∈ W and every hyperplane M ∈ M the hyperplane w(M) belongs to M.
Consider the complexification V_C of the space V and the complexification M_C of each M ∈ M. Let Y_W = V_C − ∪_{M∈M} M_C. The group W acts freely on Y_W. Let X_W = Y_W/W; then Y_W is a covering of X_W corresponding to the group W.
The generalized braid group Br(W) corresponding to the Coxeter group W is defined as the fundamental group of the space X_W of regular orbits of the action of W, and the corresponding pure braid group P(W) is defined as the fundamental group of the space Y_W. So, for the generalized braid groups, Br(W) = π_1(X_W) and P(W) = π_1(Y_W). The groups Br(W) were defined by E. Brieskorn [3] and are also called Artin-Brieskorn groups. E. Brieskorn [3] and P. Deligne [4] proved that the spaces X_W and Y_W are of type K(π, 1).
The covering which corresponds to the action of W on Y_W gives rise to the exact sequence 1 → P(W) → Br(W) → W → 1. So, there is a naturally defined map ρ : Br(W) → W. A geometric braid, regarded as a system of n curves in R^3, leads to the notion of a partial braid, where several among these n curves can be omitted; partial braids form the inverse braid monoid IB_n. This notion was introduced by V. V. Wagner in 1952 [15]. See the books [10] and [9] as general references for inverse semigroups. The multiplication of partial braids is shown in Figure 1.1. At the last stage it is necessary to remove any arc that does not join the upper or lower planes.
So, the classical braid group (which corresponds to W = Σ_n, the symmetric group) is included in the inverse braid monoid IB_n.
The most important example of an inverse monoid is the monoid of partial (defined on a subset) injections of a set into itself. For a finite set this gives us the notion of the symmetric inverse monoid I_n, which generalizes and includes the classical symmetric group Σ_n. A presentation of the symmetric inverse monoid was obtained by L. M. Popova [11]; see also formulas (2.1-2.3) below. Now let W be the Coxeter group of type B_n. The corresponding inverse braid monoid IB(B_n) was studied in [14] and the reflection monoid I(B_n) in [6].
The aim of the present paper is to show that in the case of type B the situation is quite similar: there exists a map ρ_B : IB(B_n) → I(B_n) such that the following diagram (where the vertical arrows denote the inclusion of the group of invertible elements into a monoid) is commutative:

Br(B_n) --ρ--> W(B_n)
   |              |
   v              v
IB(B_n) -ρ_B--> I(B_n)
Inverse braid monoid and type B
Let N be a finite set of cardinality n, say N = {v_1, . . . , v_n}. The inverse symmetric monoid I_n can be interpreted as a monoid of partial monomorphisms of N into itself. Let us equip the elements of N with signs, i.e. let SN = {δ_1 v_1, . . . , δ_n v_n}, where δ_i = ±1. The Weyl group W(B_n) of type B can be interpreted as the group of signed permutations of the set SN. The monoid of partial signed permutations I(B_n) is defined analogously, as the monoid of signed partial monomorphisms of SN, where dom σ means the domain of definition of the monomorphism σ.
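As an illustration (not from the paper), partial signed permutations and their composition can be realized concretely; a minimal Python sketch, representing an element of I(B_n) as a partial injective map with signs:

```python
def compose(sigma, tau):
    """Composition (sigma after tau) of partial signed permutations.
    Each map is a dict {i: (j, s)} sending v_i to s * v_j, injective on j;
    the composite is defined only where the intermediate image lands in
    dom(sigma), and signs multiply."""
    result = {}
    for i, (j, s) in tau.items():
        if j in sigma:
            k, t = sigma[j]
            result[i] = (k, s * t)
    return result

# Example: partial signed permutations on {1, 2, 3}.
sigma = {1: (2, -1), 2: (3, 1)}
tau = {2: (1, -1), 3: (2, 1)}
print(compose(sigma, tau))  # {2: (2, 1), 3: (3, 1)}
```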
We recall that a monoid M is factorisable if M = EG, where E is the set of idempotents of M and G is a subgroup of M. Evidently the monoid I(B_n) is factorisable [6], as every partial signed permutation can be extended to an element of the group of units of I(B_n), i.e. a signed permutation with domain equal to SN.
Usually the braid group Br_n is given by the following Artin presentation [1]. It has the generators σ_i, i = 1, ..., n − 1, and two types of relations:

σ_i σ_{i+1} σ_i = σ_{i+1} σ_i σ_{i+1}, for i = 1, . . . , n − 2,
σ_i σ_j = σ_j σ_i, if |i − j| ≥ 2. (2.1)

The following presentation for the inverse braid monoid was obtained in [5]. It has the generators σ_i, σ_i^{-1}, i = 1, . . . , n − 1, and ǫ, subject to the relations (2.2) together with the braid relations (2.1). Geometrically the generator ǫ means that the first string in the trivial braid is absent. If we replace the first relation in (2.2) by the set of relations (2.3), valid for all i, and delete the superfluous relations ǫ = ǫσ_1^2 = σ_1^2 ǫ, we get a presentation of the symmetric inverse monoid I_n [11]. We also can simply add the relations (2.3) if we do not worry about redundant relations. We get a canonical map [5] ρ_n : IB_n → I_n, which is a natural extension of the corresponding map for the braid and symmetric groups.
More balanced relations for the inverse braid monoid were obtained in [7]. Let ǫ_i denote the trivial braid with the ith string deleted. So, the generators are σ_i, σ_i^{-1}, i = 1, . . . , n − 1, and ǫ_i, i = 1, . . . , n, and the relations (2.4) supplement the braid relations (2.1).
Let EF_n be a monoid of partial isomorphisms of a free group F_n defined as follows. Let a be an element of the symmetric inverse monoid I_n, a ∈ I_n, let J_k = {j_1, . . . , j_k} be the image of a, and let the elements i_1, . . . , i_k belong to the domain of definition of a. The monoid EF_n consists of isomorphisms f_a defined on the generators x_{i_1}, . . . , x_{i_k} and not defined otherwise, where each w_i is a word in x_{j_1}, . . . , x_{j_k}. The composition of f_a and g_b, a, b ∈ I_n, is defined for x_i belonging to the domain of a ∘ b. We put x_{j_m} = 1 in a word w_i if x_{j_m} does not belong to the domain of definition of g. If we put w_i = 1 we get an inclusion of I_n into EF_n. Sending each f_a ∈ EF_n to a ∈ I_n we get a homomorphism EF_n → I_n.
Proposition 2.1. The canonical maps I_n → EF_n and EF_n → I_n give the following splitting: I_n → EF_n → I_n.
We recall that the Artin-Brieskorn braid group of type B is isomorphic to the braid group of a punctured disc [8], [12], [13]. With respect to the classical braid group it has an extra generator τ and the relations of type B:

τ σ_1 τ σ_1 = σ_1 τ σ_1 τ,
τ σ_i = σ_i τ, for i ≥ 2. (2.5)

The monoid IBB_n of partial braids of type B can also be considered as a submonoid of IB_{n+1} consisting of partial braids with the first string fixed. An interpretation as a monoid of isotopy classes of maps is possible as well. As usual, consider a disc D^2 with n + 1 given points. Denote the set of these points by Q_{n+1}. Consider homeomorphisms of D^2 onto a copy of the same disc with the condition that the first point is always mapped into itself and, among the other n points, only k points, k ≤ n (say i_1, . . . , i_k), are mapped bijectively onto k points (say j_1, . . . , j_k) of the set Q_{n+1} (without the first point) of the second copy of D^2. The isotopy classes of such homeomorphisms form the monoid IBB_n.
Theorem 2.1. [14] We get a presentation of the monoid IB(B_n) if we add to the presentation (2.5) of the braid group of type B the generator ǫ and a corresponding list of relations. We get another presentation of the monoid IB(B_n) if we add to the presentation (2.4) of IB_n one generator τ, the type B relations (2.5), and further relations.
It is a factorisable inverse monoid.
We define an action of IB(B_n) on SN = {δ_1 v_1, . . . , δ_n v_n} by partial isomorphisms. Direct checking shows that the relations of the inverse braid monoid of type B are satisfied by the compositions of partial isomorphisms defined by σ_i, τ and ǫ_i.

Proof. Let us temporarily denote by IB_n the monoid with the presentation given in the statement of the Theorem. To see that the homomorphism ρ_B is an epimorphism we use the fact that the monoid I(B_n) is factorisable, so its every element can be written in the form ǫg, where ǫ belongs to the set of idempotents and g is an element of the Weyl group of type B, W(B_n). For the Weyl group the map ρ_B is an epimorphism: W(B_k) = Br(B_k)/P(B_k). The sets of idempotents of the monoids IB(B_n) and I(B_n) coincide, and the map ρ_B restricted to E(IB(B_n)) is the identity. It follows from the definition of the action that τ^2 and σ_i^2 are mapped to the unit by the map ρ_B. So the homomorphism ρ_B factors through a homomorphism ρ̄_B : IB_n → I(B_n). To show that ρ̄_B is an isomorphism we compare the cardinalities of IB_n and I(B_n). It is easy to calculate that the cardinality of I(B_n) is equal to Σ_{k=0}^{n} 2^k C(n,k)^2 k!, where C(n,k) denotes the binomial coefficient.
It was proved in [5] that every partial braid has a representative of the form

σ_{i_1} · · · σ_1 · · · σ_{i_k} · · · σ_k ǫ_{k+1,n} x ǫ_{k+1,n} σ_k · · · σ_{j_k} · · · σ_1 · · · σ_{j_1},

with k ∈ {0, . . . , n}, 0 ≤ i_1 < · · · < i_k ≤ n − 1 and 0 ≤ j_1 < · · · < j_k ≤ n − 1, where x ∈ Br_k. The same is true for IB(B_n), where x ∈ Br(B_k). The elements τ^2 and σ_i^2 are mapped to 1 by ρ_B, so each equivalence class modulo the pure braid group of type B_k is mapped to the same element in I(B_n). These equivalence classes form the Weyl group W(B_k). The order of the Weyl group of type B_k is equal to 2^k k!. We see that a set of cardinality less than or equal to Σ_{k=0}^{n} 2^k C(n,k)^2 k! is mapped epimorphically onto a set of exactly this cardinality. It means that the epimorphism ρ̄_B is an isomorphism.
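The cardinality count used here can be verified by brute force for small n; a short sketch (ours, for illustration):

```python
from itertools import combinations, permutations, product
from math import comb, factorial

def count_partial_signed_permutations(n):
    """Enumerate all partial signed permutations of an n-element set:
    choose a k-element domain, a k-element image, a bijection, and signs."""
    total = 0
    for k in range(n + 1):
        for _dom in combinations(range(n), k):
            for img in combinations(range(n), k):
                for _bijection in permutations(img):
                    for _signs in product((1, -1), repeat=k):
                        total += 1
    return total

def closed_form(n):
    return sum(2**k * comb(n, k)**2 * factorial(k) for k in range(n + 1))

assert all(count_partial_signed_permutations(n) == closed_form(n)
           for n in range(5))
```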
Let E be the monoid generated by one idempotent generator ǫ. The canonical map from Ab(IBB_n) to Ab(I(B_n)) consists of factorising Z^2 modulo 2.
Prophet Inequalities on the Intersection of a Matroid and a Graph
We consider prophet inequalities in a setting where agents correspond to both elements in a matroid and vertices in a graph. A set of agents is feasible if they form both an independent set in the matroid and an independent set in the graph. Our main result is an ex-ante 1/(2d+2)-prophet inequality, where d is a graph parameter upper-bounded by the maximum size of an independent set in the neighborhood of any vertex. We establish this result through a framework that sets both dynamic prices for elements in the matroid (using the method of balanced thresholds), and static but discriminatory prices for vertices in the graph (motivated by recent developments in approximate dynamic programming). The threshold for accepting an agent is then the sum of these two prices. We show that for graphs induced by a certain family of interval-scheduling constraints, the value of d is 1. Our framework thus provides the first constant-factor prophet inequality when there are both matroid-independence constraints and interval-scheduling constraints. It also unifies and improves several results from the literature, leading to a 1/2-prophet inequality when agents have XOS valuation functions over a set of items and use them for a finite interval duration, and more generally, a 1/(d+1)-prophet inequality when these items each require a bundle of d resources to procure.
Introduction
Prophet inequalities analyze the performance of online vs. offline algorithms in sequential selection problems, and have enjoyed a recent surge of uses in posted-price mechanism design. The typical online selection problem can be described as follows.
A set of T agents is denoted by N = {1, . . . , T }. Each agent t has a valuation V t drawn from a known distribution F t . The valuations are realized independently, and revealed sequentially. Each agent must be irrevocably accepted or rejected upon her valuation being revealed, with the feasibility constraint that the set of agents accepted by the end must lie in F , a downward-closed collection of subsets of N . The objective is to maximize the expected sum of valuations of agents accepted. We will refer to this as the welfare.
The algorithm's expected welfare is compared to that of a clairvoyant who can see all the realized valuations beforehand and make "prophetic" accept/reject decisions. All of our results also hold relative to the stronger ex-ante prophet, who can choose the correlation between the marginal distributions F_1, . . . , F_T to maximize his welfare (but for the algorithm, the valuations are still independent). We let OPT denote the prophet's expected welfare, which equals E[max_{S∈F} Σ_{t∈S} V_t].
In this paper, we analyze the structure where F is defined by the intersection of a matroid and a graph. Specifically, there is a matroid M = (N, I) and an undirected graph G = (N, E), both defined on the set of agents N. F then consists of the subsets S ⊆ N that are both independent in the matroid, i.e. S ∈ I, and independent in the graph, i.e. {t, t′} ∉ E for all t, t′ ∈ S. To state our main result, we need the following definitions.
The key graph parameter is

d_2(G) = max_{t∈N} α(G[{t′ ∈ N : t′ < t, {t, t′} ∈ E}]), (1)

where G[·] denotes the subgraph of G induced by a set of vertices, and α(·) denotes the maximum size of an independent set in a graph.
We explain expression (1). {t, t′} ∈ E implies that t cannot be accepted alongside t′, and t′ < t implies that t′ could have been accepted before t to "block" agent t. However, some of these agents t′ may also block each other, in which case they are adjacent in the induced subgraph G[·]. α(·) counts the maximum number of such agents that can be simultaneously accepted, and d_2(G) takes the maximum of these numbers over t ∈ N. We note that d_2(G) is upper-bounded by max_t α(G[{t′ : {t, t′} ∈ E}]), the maximum size of an independent set in the neighborhood of any vertex.
Theorem 1. For any matroid M and graph G, the expected welfare of an online algorithm is at least OPT/((d_1(M) + 1)(d_2(G) + 1)).

Our algorithm is order-aware, in that it needs to assume the agents' valuations will be revealed in the given order 1, . . . , T. Before elaborating on our techniques in Section 1.2, we outline the implications of our Theorem 1 and various generalizations relative to the literature, and describe settings where d_2(G) is small.
Our Results, in relation to Previous Results
In Theorem 1, d_2(G) is small if an agent cannot be blocked by many agents that don't block each other. One setting where this arises is when the agents arrive in order 1, . . . , T, each requesting service for a duration starting with her time of arrival, and need to be served by a single server. Formally, associated with the agents are intervals I_1 = [ℓ_1, u_1], . . . , I_T = [ℓ_T, u_T] satisfying ℓ_1 ≤ . . . ≤ ℓ_T, and a set of agents S can be feasibly served if

I_t ∩ I_{t′} = ∅ for all distinct t, t′ ∈ S. (2)

In the graph G induced by constraints (2), two agents are adjacent if their intervals overlap. For an agent t, any agents t′ < t with I_{t′} ∩ I_t ≠ ∅ must have I_{t′} contain the point ℓ_t, since ℓ_{t′} ≤ ℓ_t and the intervals are contiguous. Therefore, all of these agents t′ are also adjacent to each other in G through the point ℓ_t, which implies that d_2(G) ≤ 1. We contrast this with a different type of interval constraint where the agents request service starting from a common point in time ℓ, and there is a time-dependent service capacity B(z) ∈ Z_{≥0} for all z ≥ ℓ. The agents request intervals I_t = [ℓ, u_t], and a set of agents S can be feasibly served if

|{t ∈ S : z ∈ I_t}| ≤ B(z) for all z ≥ ℓ. (3)

In (3), since the intervals starting from the same point are nested, the constraints can be captured by a laminar matroid M, with d_1(M) ≤ 1. Therefore, Theorem 1 shows that the guarantee relative to the ex-ante prophet is at least 1/((d_1(M) + 1)(d_2(G) + 1)) ≥ 1/4 under the combination of constraints (2) and (3). This could model an online rectangle packing problem, where the horizontal projections have increasing left-boundaries and must satisfy (2), while the vertical projections have identical top-boundaries and must satisfy (3). More generally, Theorem 1 implies a (1/4)-guarantee for any online matroid selection problem under the additional constraint that each agent requires a processing time, during which no other agent can be served even if they are independent in the matroid. To our knowledge, our framework provides the first constant-factor guarantee under the combined families of feasibility constraints. Indeed, the constraints (2) do not correspond to a matroid. Meanwhile, (3) cannot be captured by the pairwise independence constraints of a graph. If E = ∅ and the graph imposes no feasibility constraints, then d_2(G) = 0 and the guarantee from Theorem 1 is 1/2, which is the matroid prophet inequality from [11].
In Section 3, we consider the generalized setting studied in [8, 5], where the agents have XOS valuation functions over a set of items. We impose matroid- and graph-independence constraints on the subset of items allocated to the agents by the end, and show that the guarantee of 1/((d_1(M) + 1)(d_2(G) + 1)) from Theorem 1 still holds (Theorem 2). If the matroid is free, then d_1(M) = 0, and a corollary of Theorem 2 is that the 1/2-guarantee for XOS from [8] still holds if the agents use the items allocated to them for a finite interval duration (instead of keeping the items forever). More generally, we show that if each item requires a bundle of at most d underlying resources (possibly for a finite interval duration) to procure, then d_2(G) ≤ d, leading to a guarantee of 1/(d + 1) (Proposition 6).
Our Techniques, in relation to Previous Techniques
Central to the development of prophet inequalities is the notion of a residual function. In the basic setting with an arbitrary feasible collection F, if Y ∈ F is the set of agents that have already been accepted, then its residual is defined as

R(Y) = E[max_{S : S∩Y=∅, Y∪S∈F} Σ_{t∈S} Ṽ_t], (4)

where Ṽ_1, . . . , Ṽ_T is a freshly sampled set of valuations. The algorithm decides whether to accept an agent t by comparing the actual realization of V_t with the simulated threshold of α(R(Y) − R(Y ∪ {t})), where α is a constant in (0, 1). α is chosen depending on F to balance the thresholds; for example, if F is the independent sets of a matroid, then α = 1/2 ensures that the thresholds are neither too high nor too low [11]. In the simplest XOS-valuation setting with only item capacity constraints, the difference in residuals decomposes very nicely as a sum of item-prices [8]. It is important, however, that these residuals and prices are always computed based on a prophet who "starts over" and considers every agent 1, . . . , T, even when some agents have already come and gone. Unfortunately, this "starting over" does not exploit the temporal aspect of graph-independence constraints, as illustrated by the following example.

Example 1. T agents arrive in order, with agent 1 requesting service for a long interval I_1 = [1, T + 1], and each agent t ≥ 2 requesting service for a short interval I_t = [t, t + 1/2]. There is a single server, so a set of agents S is feasible if and only if S satisfies constraints (2). Agent 1 has valuation (C + Tε)/ε with probability ε, and valuation 0 otherwise, where C, ε > 0 are constants. Agents 2, . . . , T deterministically have valuation 1.
The residual-based approach performed poorly on Example 1 because the first agent's existence continued to inflate the thresholds of agents 2, . . . , T, even after she had already come and gone. To improve upon it, we incorporate dynamic programming, which is particularly designed to account for these temporal dynamics. Motivated by recent developments in approximate dynamic programming [20, 17], we consider the following modification to the residual function.
For each agent t, let x*_t denote the probability that she is accepted by the prophet, and let y*_t denote her expected valuation conditional on being accepted. We then define π_t as follows, using backward induction over t = T, T − 1, . . . , 1:

π_t = Σ_{t′ > t : {t, t′} ∈ E} x*_{t′} [y*_{t′} − π_{t′}]^+. (5)

π_t can be interpreted as the "cost" of accepting agent t with respect to the graph-independence constraints. Indeed, for all agents t′ > t, the summand in (5) is the already-computed "surplus" earned by the prophet on agent t′, and the sum is over all future surpluses t′ which are blocked by accepting agent t.
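A minimal sketch of the backward induction in (5) (the list/dict representation is ours):

```python
def blocking_prices(x_star, y_star, neighbors):
    """Compute the static graph prices pi_t of (5) by backward induction.
    x_star[t]: prophet's acceptance probability of agent t;
    y_star[t]: expected valuation conditional on acceptance;
    neighbors[t]: indices of agents adjacent to t in G."""
    T = len(x_star)
    pi = [0.0] * T
    for t in reversed(range(T)):
        pi[t] = sum(x_star[u] * max(y_star[u] - pi[u], 0.0)
                    for u in neighbors[t] if u > t)
    return pi
```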
Our modified residual function is based on a "restricted" prophet, who sees valuations V̂_1, . . . , V̂_T that have been reduced in two ways. First, the restricted prophet only sees a non-zero valuation for an agent t if the actual prophet would have accepted t on that sample path (this is formalized in Section 2); otherwise, the agent's valuation is zero. Second, the valuation of every agent t is further reduced by π_t, with π_t as defined in (5). Our restricted residual function is then

R̂(Y) = E[max_{S : S∩Y=∅, Y∪S∈I} Σ_{t∈S} V̂_t], (6)

and we define the threshold τ(t|Y) = R̂(Y) − R̂(Y ∪ {t}). Our algorithm accepts an agent t if and only if she is both feasible and satisfies

V_t ≥ τ(t|Y) + π_t. (7)

Returning to Example 1, we would have π_2 = . . . = π_T = 0 (because agents t ≥ 2 do not block any future agents), and π_1 ≈ T − 1. In this case, the matroid is free (because all the constraints are captured by the graph), so τ(t|Y) = 0 and our algorithm ends up accepting every agent t using decision rule (7), which is the optimal control for Example 1.
In general, τ(t|Y) represents our price for the matroid and π_t represents our price for the graph. π_t discriminates based on the agent t, looking at which agents t′ > t get blocked, but is static in that it does not depend on the current state Y. By contrast, τ(t|Y) dynamically considers the addition of element t to the current Y, but otherwise does not discriminate based on the agent t. A further contribution of our work is that we show how both τ(t|Y) and π_t can be computed efficiently when G is induced by an intersection of interval-scheduling constraints of the form (2), by implementing an ex-ante relaxation (Section 2.1).
Finally, we describe our analysis, which consists of two steps. First, we show that the algorithm earns at least R̂(∅)/(d_1(M) + 1), where R̂(∅) represents the welfare of the restricted prophet (Proposition 1). Proposition 1 differs from the original matroid prophet inequality in that the algorithm is further constrained by the graph, but gets to play against a prophet who sees valuations V̂_t which have been reduced by π_t. If the matroid is free and thus d_1(M) = 0, then Proposition 1 is still non-trivial, as it says that the algorithm, constrained by the graph, can match the restricted prophet in welfare. Our analysis concludes by showing that the restricted prophet earns at least 1/(d_2(G) + 1) times the welfare of the actual prophet (Proposition 2).
A general take-away from our paper is that the way in which constraints are modeled can lead to different algorithms and prophet inequalities. For example, simple constraints on item supplies can be modeled either with a partition matroid or by adding edges to our graph, which results in substantially different algorithms. In general, is there a systematic way of dividing up constraints between feasibility structures to yield the best prophet inequality? We leave this open as interesting future work.
Other Related Work
Prophet inequalities originated in [13,14], and the connection to posted-price mechanism design was discovered in [1]. There has since been a surge of literature on prophet inequalities, and we defer a complete literature review to the survey by Lucier [16]. Our work can be classified as having a fixed (adversarial) arrival order, which can be contrasted with random-order prophet inequalities [7]; general but structured feasible sets, which can be contrasted with arbitrary feasible sets [18] or refined results on rank-1 matroids [3]; and additive rewards, a special case of combinatorial rewards [19]. Our paper is most related to the existing work involving matroids [11,6,5] and XOS valuation functions [8,5]. We should mention that interval-scheduling constraints have also been studied in [10,2], where it is shown that with no assumptions on the intervals, the guarantee relative to the prophet is at most O(log log L/ log L), where L is the length of (number of items in) the longest interval. That is, with no assumptions on the intervals, a constant-factor is impossible.
Finally, we discuss two recent developments in approximate dynamic programming (ADP) from which we borrow techniques.
[20] has developed an ADP-based algorithm which is within 1/2 of the optimal DP in an application with reusable resources. This is the motivation behind our interval-scheduling constraints of the form (2). [17] has established a guarantee of 1/(d + 1) in a setting where each item uses up to d resources.
Our work makes further contributions beyond these existing results in three ways. First and most importantly, we show how to include ADP-based thresholds in the matroid residual function and analyze feasible sets defined by the intersection of a matroid and a graph. Second, we unify the two existing ADP results by abstracting them using a graph, which leads to a more general result-we can allow for items to use multiple (up to d) resources, each for a different duration. Finally, we extend their guarantees to be relative to the prophet (instead of the optimal DP), and also show how they can be applied on combinatorial auctions (instead of assortment optimization).
Proof of Theorem 1
We first summarize and formalize the notation and definitions from the Introduction, for the basic setting in Theorem 1.
There is a matroid M = (N, I) defined on the ground set, where I is a collection of subsets of N satisfying: (i) ∅ ∈ I; (ii) if S ∈ I and S′ ⊆ S then S′ ∈ I; and (iii) for S, S′ ∈ I with |S| > |S′|, there exists t ∈ S \ S′ such that S′ ∪ {t} ∈ I (we refer to [12] for more background on matroids and their use in optimization). There is also a graph G = (N, E) defined on N, where E is a collection of size-2 subsets of N. We let F denote the collection of feasible sets, where a set of agents S is feasible if it is both independent in the matroid (i.e. S ∈ I) and independent in the graph (i.e. {t, t′} ∉ E for all t, t′ ∈ S). The goal is to accept a max-value feasible set of agents as compared to a prophet.

Prophet. The prophet chooses a joint valuation distribution over R^T with marginals F_1, . . . , F_T. On every realization, he sees the valuations and then selects a feasible set of agents. Let x*_t denote the probability that agent t is selected, and let y*_t denote her expected valuation conditional on being selected. Let OPT denote the prophet's expected welfare, which equals Σ_{t=1}^T x*_t y*_t. We note that such a backward-induction computation is only possible because we have assumed that the arrival order 1, . . . , T is known in advance.

Restricted Prophet. The restricted prophet sees valuations V̂ = (V̂_1, . . . , V̂_T) drawn according to a joint distribution D̂ defined as follows. First, an independent set Î in the matroid (which need not be independent in the graph) is randomly selected in a way such that Pr[t ∈ Î] = x*_t for all t ∈ N (this is possible because x* lies in the matroid polytope defined by (8); we elaborate in Section 2.1). The restricted prophet then sees V̂_t = y*_t − π_t if t ∈ Î, and V̂_t = −π_t otherwise. The residual function (6) is based on this restricted prophet. We note that R̂(∅) = Σ_{t=1}^T x*_t [y*_t − π_t]^+. This is because on every realization of V̂, the optimal S to take is the set of agents t with V̂_t > 0, which is guaranteed to be independent in the matroid (since all such agents must have had V̂_t = y*_t − π_t). A corollary is that if the graph is empty and π_t = 0 for all t, then the restricted prophet earns Σ_{t=1}^T y*_t x*_t, matching the welfare of the actual prophet despite seeing "binarized" valuations V̂_t. This reduction was introduced in [15].

Algorithm. The algorithm, having already accepted agents in Y, accepts an agent t if and only if Y ∪ {t} is independent in the graph and V_t ≥ τ(t|Y) + π_t, as defined in (7). Note that τ(t|Y) = R̂(Y) − R̂(Y ∪ {t}).

Proposition 1. The algorithm's expected welfare is at least R̂(∅)/(d_1(M) + 1).

Proof. Let Y denote the random set of agents accepted at the end of the algorithm, and for all t = 1, . . . , T, let Y_t denote Y ∩ {1, . . . , t}, the set of agents accepted up to and including agent t. The algorithm's expected welfare equals

E[Σ_{t∈Y} V_t] = E[Σ_{t∈Y} (V_t − τ(t|Y_{t−1}) − π_t)] + E[Σ_{t∈Y} π_t] + E[Σ_{t∈Y} τ(t|Y_{t−1})]
             = E[Σ_{t∈Y} (V_t − τ(t|Y_{t−1}) − π_t)] + E[Σ_{t∈Y} π_t] + E[Σ_{t∈Y} (R̂(Y_{t−1}) − R̂(Y_{t−1} ∪ {t}))]
             = E[Σ_{t∈Y} (V_t − τ(t|Y_{t−1}) − π_t)] + E[Σ_{t∈Y} π_t] + R̂(∅) − E[R̂(Y)], (10)

where the second equality follows from the definition of τ, and the third equality follows from the fact that Y_{t−1} ∪ {t} = Y_t for all t ∈ Y, causing the latter sum to telescope.
Lemma 1 places an upper bound on the negative term from (10). It mostly follows from existing results [11,15], so its proof is deferred to the appendix. It relies on the submodularity of the matroid residual function: since our restricted residual function considers a prophet who is only constrained by the matroid, our function is also submodular.
Lemma 2 lower-bounds the "surplus" earned by the algorithm beyond the thresholds $\tau(t|Y_{t-1})$, and needs to account for the fact that the algorithm is constrained by both matroid and graph independence. It is novel and crucial to our analysis.
Proof (of Lemma 2). We decompose the LHS as $\mathbb{E}[\sum_{t \in Y} (V_t - \tau(t|Y_{t-1}) - \pi_t)] + \mathbb{E}[\sum_{t \in Y} \pi_t]$ and analyze the two expectations separately. Using both the linearity of expectation and the tower property of conditional expectation, the first expectation can be re-written as
$$\sum_{t=1}^T \mathbb{E}\big[\mathbb{E}[(V_t - \tau(t|Y_{t-1}) - \pi_t) \cdot \mathbb{1}(t \in Y) \mid Y_{t-1}]\big]. \quad (11)$$
Now, recall that as agent $t$ arrives, she is accepted if and only if she is feasible (in both the matroid and graph), and $V_t - \tau(t|Y_{t-1}) - \pi_t \geq 0$. If $Y_{t-1} \cup \{t\}$ does not form an independent set in the matroid, then $\tau(t|Y_{t-1}) = \infty$, since $\hat{R}(Y_{t-1} \cup \{t\})$ is understood to equal $-\infty$ when the maximization problem in the residual is infeasible. Therefore, we can write
$$\mathbb{1}(t \in Y) = \mathrm{Feas}_G(Y_{t-1} \cup \{t\}) \cdot \mathbb{1}\big(V_t \geq \tau(t|Y_{t-1}) + \pi_t\big),$$
where $\mathrm{Feas}_G(Y_{t-1} \cup \{t\})$ is the indicator random variable for $Y_{t-1} \cup \{t\}$ forming an independent set in the graph. Making this substitution for every agent $t$ on the RHS of (11), we get that it equals
$$\sum_{t=1}^T \mathbb{E}\big[\mathrm{Feas}_G(Y_{t-1} \cup \{t\}) \cdot \mathbb{E}_{V_t}[(V_t - \tau(t|Y_{t-1}) - \pi_t)^+]\big], \quad (12)$$
where we have used the fact that $V_t$ is independent from $Y_{t-1}$. Meanwhile, the second expectation can be re-written as
$$\sum_{t'=1}^T x^*_{t'} [y^*_{t'} - \pi_{t'}]^+ \cdot \mathbb{E}\Big[\Big(\sum_{t < t' : \{t, t'\} \in E} \mathbb{1}(t \in Y)\Big)\Big] \quad (13)$$
after applying the definition of $\pi_t$ from (5) and switching sums. Now, agent $t'$ forms an independent set with $Y_{t'-1}$ in the graph if and only if none of its neighbors $t < t'$ have been accepted into $Y$. Therefore, the sum in parentheses in (13) is at least $1 - \mathrm{Feas}_G(Y_{t'-1} \cup \{t'\})$, for all $t' = 1, \ldots, T$. Adding (12) and (13), we get that the LHS is at least
$$\sum_{t=1}^T \mathbb{E}\Big[\min\Big\{\mathbb{E}_{V_t}[(V_t - \tau(t|Y_{t-1}) - \pi_t)^+],\; x^*_t [y^*_t - \pi_t]^+\Big\}\Big].$$
We argue that both of the terms inside the $\min\{\cdot\}$ operator are at least $x^*_t [y^*_t - \tau(t|Y_{t-1}) - \pi_t]^+$. For the second term, this is obvious, since the thresholds $\tau(t|Y_{t-1})$ are non-negative. For the first term, note that $V_t$ takes an average value of $y^*_t$ on an $x^*_t$-fraction of sample paths. Hence by Jensen's inequality, the expectation over $V_t$ is at least $x^*_t [y^*_t - \tau(t|Y_{t-1}) - \pi_t]^+$ (the $[\cdot]^+$ operator is convex). This completes the proof of Lemma 2.
Equipped with Lemmas 1-2, the proof of Proposition 1 now follows from (10). Indeed, taking an expectation over $Y$ on both sides in the result of Lemma 1, (10) implies the lower bound on ALG stated in Proposition 1.

Proof (of Proposition 2). Recall that $\hat{R}(\emptyset) = \sum_{t=1}^T x^*_t [y^*_t - \pi_t]^+$. We apply the definition of $\pi_t$ from (5) and switch sums to derive the claimed comparison between the value of the restricted prophet and that of the prophet.
Computing and Constructing the Prophet's Distribution via an Ex-ante Relaxation
In this section we establish computational efficiency assuming that the graph $G$ is induced by $d$-dimensional interval-scheduling constraints. We use an ex-ante relaxation defined by an LP, and establish three facts:
1. Theorem 1 still holds if we replace the prophet with this ex-ante relaxation, resulting in a guarantee of $\frac{1}{(d_1(M)+1)(d+1)}$;
2. Our algorithm based on this ex-ante relaxation is computationally efficient;
3. The ex-ante relaxation upper-bounds the welfare of any prophet.
These facts together show that a computationally-efficient algorithm can earn at least $\frac{1}{(d_1(M)+1)(d+1)}$ times the welfare of any prophet.
Definition 3 (d-dimensional Interval-scheduling Constraints).
The agents arrive at times $1, \ldots, T$ to be served by $J$ different resources. Each agent $t$ requests the attention of up to $d$ resources, for different durations of time starting from $t$. Formally, associated with agent $t$ are intervals $\{I^j_t = [t, u^j_t] : j \in U_t\}$, where $U_t$ is a set of at most $d$ resources, with $u^j_t \geq t$ for all $j \in U_t$. An agent $t$ can be served only if all of the resources in $U_t$ are available. Thus, a set of agents is feasible only if their requested intervals are disjoint for every resource. Agents $t, t'$ are adjacent in the graph if $I^j_t \cap I^j_{t'} \neq \emptyset$ for some $j$ (a sketch of this construction follows Definition 4).

Definition 4 (Discrete Valuations). We assume that the marginal valuations are input as discrete distributions. That is, they are supported over a finite set of $K$ values $v^1, \ldots, v^K \in \mathbb{R}$, and for each agent $t$, we let $p^k_t \geq 0$ denote the probability that $V_t = v^k$ for every $k = 1, \ldots, K$, with $\sum_k p^k_t = 1$.
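A small sketch of how the conflict graph of Definition 3 can be built from the requested intervals (the data layout is our assumption):

```python
from itertools import combinations

def build_conflict_graph(intervals):
    """intervals[t] maps resource j -> (start, end) with end >= start.
    Agents t, t' are adjacent iff their intervals overlap on some
    shared resource j, i.e. I_t^j and I_t'^j intersect."""
    T = len(intervals)
    edges = set()
    for t, s in combinations(range(T), 2):
        for j in set(intervals[t]) & set(intervals[s]):
            a0, a1 = intervals[t][j]
            b0, b1 = intervals[s][j]
            if a0 <= b1 and b0 <= a1:  # closed intervals intersect
                edges.add((t, s))
                break
    return edges
```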
Definition 5 (Ex-ante Relaxation). The ex-ante relaxation is defined by an LP over variables $x_{tk}$; its constraints include the matroid constraints (14), the interval constraints (15), and $x_{tk} \leq p^k_t$, and its objective is $\sum_{t,k} x_{tk} v^k$.
We then consider the values of $x_{tk}$ from an optimal LP solution and define $x^*_t = \sum_{k=1}^K x_{tk}$ and $y^*_t = \frac{1}{x^*_t} \sum_{k=1}^K x_{tk} v^k$ (with $y^*_t$ arbitrary when $x^*_t = 0$). In the LP, variable $x_{tk}$ can be interpreted as the probability that agent $t$ has valuation $v^k$ and is accepted into the feasible set. Note that in an optimal solution, $y^*_t$ will equal the average value of $V_t$ on its top $x^*_t$ quantile. We now formalize the three facts stated above, with the proofs deferred to the appendix. The second fact references classical results in combinatorial optimization about separation [9] and rounding [4] for the matroid polytope.
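As an illustration, the quantities $x^*_t$ and $y^*_t$ can be read off an LP solution as follows (a minimal sketch under the definitions above; solving the LP itself is a separate step):

```python
def ex_ante_marginals(x, v):
    """x[t][k]: optimal LP values; v[k]: the K support values.
    Returns per-agent acceptance probabilities and conditional means."""
    x_star, y_star = [], []
    for xt in x:
        xs = sum(xt)
        x_star.append(xs)
        y_star.append(sum(xk * vk for xk, vk in zip(xt, v)) / xs if xs > 0 else 0.0)
    return x_star, y_star
```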
Proposition 3 (Fact 1). Our algorithm, when defined based on the values of $x^*_t, y^*_t$ from the ex-ante relaxation, has welfare at least $\frac{1}{(d_1(M)+1)(d+1)} \sum_{t=1}^T x^*_t y^*_t$.

Proposition 4 (Fact 2). Assuming oracle access to the matroid rank function, the values of $x^*_t, y^*_t$ from the ex-ante relaxation can be efficiently computed. Furthermore, the restricted prophet's correlated distribution $\hat{D}$ has a compact representation which can be efficiently computed.
Proposition 5 (Fact 3).
The expected welfare of any prophet, who can choose the correlation between $V_1, \ldots, V_T$ and select a feasible $S \in F$ maximizing $\sum_{t \in S} V_t$ on every realization, is upper-bounded by the optimal LP value of $\sum_{t=1}^T y^*_t x^*_t$.
Generalization to XOS Combinatorial Auctions
We generalize our result to a setting where each agent has a random valuation function over a set of items, and the graph and matroid constraints are defined on the items. An agent's valuation function is realized upon arrival, and the set of items allocated to the agent must then be decided. Specifically, there are $T$ agents and a set of items $N$. There is a matroid $(N, I)$ and a graph $(N, E)$ defined over these items, and the total set of items allocated must be independent in both the matroid and the graph. We assume without loss of generality that $N$ is partitioned into $N_1, \ldots, N_T$ such that agent $t$ can only be given items in $N_t$. An allocation is denoted by $(Y_1, \ldots, Y_T) \in \prod_{t=1}^T 2^{N_t}$, and we let $\mathbf{Y}_t = (Y_1, \ldots, Y_t)$; sometimes we abuse notation to assume $Y_t = Y_1 \cup \cdots \cup Y_t$. The set of feasible allocations is
$$F = \big\{(Y_1, \ldots, Y_T) : Y_t \subseteq N_t \text{ for all } t, \text{ and } Y_1 \cup \cdots \cup Y_T \text{ is independent in both the matroid and the graph}\big\}.$$
We have similar definitions of $d_1(M)$ and $d_2(G)$ as before.
Here $G[\cdot]$ denotes the subgraph of $G$ induced by a set of vertices, and $\alpha(\cdot)$ denotes the maximum size of an independent set in a graph.
Each agent $t$ has a random valuation function $v_t : 2^{N_t} \to \mathbb{R}_{\geq 0}$ drawn from a known distribution. At time $t$, agent $t$'s valuation function realizes independently to a valuation function $v^k_t$ with probability $p^k_t$ for $k = 1, \ldots, K$. Then, the set of items $Y_t \subseteq N_t$ allocated to the agent is decided. We require all valuation functions to be fractionally subadditive, i.e. XOS (see [8] for a definition).
We now formalize the prophet and algorithm.
Prophet. As before, we compare our online algorithm to a prophet who is able to choose an arbitrary correlated distribution $D^*$ over $v_1, \ldots, v_T$. Furthermore, for every realization of the valuation functions $v = (v_1, \ldots, v_T)$, we define the prophet's allocation to be $\mathrm{Alloc}^*(v) = (\mathrm{Alloc}^*(v)_1, \ldots, \mathrm{Alloc}^*(v)_T)$, which always satisfies $\mathrm{Alloc}^*(v) \in F$. Let $q^k_t(S)$ be the conditional probability that the prophet chooses to allocate $S \subseteq N_t$ to agent $t$ given that $v_t$ realizes to $v^k_t$. Then, by linearity of expectation, the prophet's value is
$$\mathrm{OPT} = \sum_{t=1}^T \sum_{k=1}^K p^k_t \sum_{S \subseteq N_t} q^k_t(S)\, v^k_t(S).$$
Since the prophet's allocation must be independent in the graph on every realization, for every $S = S_1 \cup \cdots \cup S_T \subseteq N$, the following must be satisfied:
$$|\mathrm{Alloc}^*(v) \cap S| \leq \alpha(G[S]).$$
That is, the number of items in $S$ that is taken must not be greater than the size of the biggest independent set in the graph induced by $S$. By taking expectations, we get $\mathbb{E}_{v \sim D^*}[|\mathrm{Alloc}^*(v) \cap S|] \leq \alpha(G[S])$.

XOS Decomposition. For $S \subseteq N_t$, consider the valuation $v^k_t(S)$. Since $v^k_t$ is XOS, it can be written as $v^k_t(S) = \max_{\ell \in [L]} w_\ell(S)$, where each $w_\ell : 2^{N_t} \to \mathbb{R}$ is an additive function. Let $u^k_t(i, S)$ be the value of item $i$ for the additive function that supports $v^k_t(S)$. That is, if $v^k_t(S) = w_\ell(S)$, then let $u^k_t(i, S) = w_\ell(\{i\})$ for all $i \in S$.
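A minimal sketch of the XOS representation and the supporting additive function (the clause-based data layout and class name are ours):

```python
class XOSValuation:
    """XOS valuation: v(S) = max over additive 'clauses' w of sum_{i in S} w[i]."""

    def __init__(self, clauses):
        self.clauses = clauses  # list of dicts: item -> nonnegative weight

    def value(self, S):
        return max(sum(w.get(i, 0.0) for i in S) for w in self.clauses)

    def supporting_u(self, S):
        """u(i, S): item i's weight under the clause that attains v(S)."""
        best = max(self.clauses, key=lambda w: sum(w.get(i, 0.0) for i in S))
        return {i: best.get(i, 0.0) for i in S}

# Property used in the text: for any J subset of S, v(J) >= sum_{i in J} u(i, S),
# since the clause supporting S is one of the candidates in the max defining v(J).
```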
The following property holds for any XOS function $v^k_t$: for all $J \subseteq S$,
$$v^k_t(J) \geq \sum_{i \in J} u^k_t(i, S),$$
since the additive function that supports $S$ has exactly the value $\sum_{i \in J} u^k_t(i, S)$ on $J$, so $v^k_t(J)$ can only be higher. This property is also used in [8].

Dynamic Programming Coefficients. Define $\pi_t(i)$ and $\hat{u}^k_t(i, S)$ recursively using backwards induction over all $t = T, T-1, \ldots, 1$, $S \subseteq N_t$, and $i \in N_t$. Analogous to (5), $\pi_t(i)$ is the "cost" of allocating item $i$ at time $t$. We sum over all future surpluses $\hat{u}^k_{t'}(i', S')$ that are "blocked" by $G$ if item $i$ is taken, where the future surpluses are pre-computed and depend not only on $i'$ but also on $S'$. This is because we need to separately account for the sets $S'$ which could be allocated to agent $t'$, for each possible realization of valuation function $v_{t'}$.

Restricted Prophet. For $S \subseteq N$, we define the restricted residual $\hat{R}(S)$ as the expected value, over $v \sim D^*$, of the maximum restricted reward of any $J = (J_1, \ldots, J_T)$ with $J \subseteq \mathrm{Alloc}^*(v)$ and $J \cup S \in I$, where $J \subseteq \mathrm{Alloc}^*(v)$ means $J_t \subseteq \mathrm{Alloc}^*(v)_t$ for all $t$. That is, from the prophet solution $\mathrm{Alloc}^*(v)$, we take the subset of it which is feasible together with $S$ and maximizes the restricted rewards. If $S \notin I$, $\hat{R}(S)$ is defined as $-\infty$. If $S = \emptyset$, then for every $v$, setting $J = \mathrm{Alloc}^*(v)$ is a maximizer (because $\mathrm{Alloc}^*(v) \in I$), and hence $\hat{R}(\emptyset)$ equals the total restricted reward of the prophet's allocation, which we refer to as the value of the restricted prophet.
Algorithm. Let $\tau(i \mid \mathbf{Y}_{t-1})$ denote the threshold for the matroid at time $t$, when the set of items taken so far is $\mathbf{Y}_{t-1}$. If the agent's valuation realizes to $v^k_t$, the algorithm allocates the subset $Y_t \subseteq N_t$ maximizing $v^k_t(Y_t) - \sum_{i \in Y_t} \big(\tau(i \mid \mathbf{Y}_{t-1}) + \pi_t(i)\big)$ among feasible subsets. That is, the algorithm allocates the feasible subset which has the largest "surplus" over the sum of the two thresholds.
We are now ready to state our generalization of Theorem 1.
Theorem 2. The algorithm's expected welfare is at least $\frac{1}{(d_1(M)+1)(d_2(G)+1)} \cdot \mathrm{OPT}$, where $\mathrm{OPT}$ is the expected welfare of a prophet who can choose the correlation between $v_1, \ldots, v_T$ and see their realizations beforehand.
The proof of Theorem 2 is deferred to Appendix C. Below, we also generalize Definition 3, and bound $d_2(G)$ in the setting where every item $i \in N$ requests the attention of up to $d$ resources for a duration of time. When combined, these results imply a $\frac{1}{(d_1(M)+1)(d+1)}$-guarantee.

Proposition 6. Let $J$ be a set of resources, and let $U_i \subseteq J$ with $|U_i| \leq d$ for every $i \in N$. Then the conflict graph $G$ induced by the corresponding interval-scheduling constraints satisfies $d_2(G) \leq d$.

A Why weakly balanced prices do not exist for Example 1

We explain why even the most general framework from [5], which seeks weakly balanced prices in a deterministic setting, does not appear to yield a constant-factor guarantee for Example 1. The reasoning is similar to that described in Section 1.2, with the issue caused by the optimum "starting over". Consider Theorem 3.2 from [5] and any constants $\alpha$, $\beta_1$, and $\beta_2$. We construct values of $T$, $C$, and $\varepsilon$ for which their algorithm based on balanced prices extracts an arbitrarily small fraction of the welfare. We will follow the notation from Section 3 of [5].
For every agent $t$, the corresponding outcome space is $\{\emptyset, \mathrm{acc}\}$, where $\mathrm{acc}$ refers to agent $t$ being accepted while $\emptyset$ refers to agent $t$ being rejected. Let $x$ be the allocation in which only the second agent is accepted. Suppose $v$ is the valuation profile where the first agent has valuation $\frac{C + T\varepsilon}{\varepsilon}$, which occurs with probability $\varepsilon$. In this case, $v(\mathrm{ALG}(v)) = \frac{C + T\varepsilon}{\varepsilon}$, since the prophet only allocates the item to the first agent. Then, $F_x$ (the exchange-compatible set) cannot contain any allocation $y$ that accepts the first agent, because if it did, then it would not satisfy $(y_1, x_{-1}) \in F$. Therefore, it must be that $v(\mathrm{OPT}(v, F_x)) \leq T - 1$. Then, for $x$ and $v$ to satisfy the first constraint in weakly balanced prices, the prices must sum to at least $\frac{1}{\alpha}\big(\frac{C + T\varepsilon}{\varepsilon} - (T - 1)\big)$. Their posted price mechanism uses prices $\delta \cdot \mathbb{E}_v[p^v_i(x_i|y)]$ for $\delta = \frac{1}{\beta_1 + \max\{2\beta_2, 1/\alpha\}}$, and the aforementioned valuation function realizes with probability $\varepsilon$. Therefore, the price for agent 2 in the posted price mechanism is greater than 1. Similarly, the price for agents $i = 3, \ldots, T$ will also be greater than 1, so those agents will never be accepted. Then, the posted-price mechanism will achieve welfare $C + T\varepsilon$, whereas the prophet will achieve $C + T - 1 + \varepsilon$. As $T \to \infty$ and $\varepsilon = o(1/T)$, the fraction of welfare achieved by the posted-price mechanism goes to 0.
B Deferred Proofs
Proof (of Lemma 1). The residual function $\hat{R}$, which involves the prophet selecting a max-value independent set in a matroid, is submodular by [11]. That is, for all subsets $S$ and $S'$, $\hat{R}(S \cup S') + \hat{R}(S \cap S') \leq \hat{R}(S) + \hat{R}(S')$. Applying this inequality repeatedly yields inequality (21), which can be rearranged to bound the term $\hat{R}(Y)$ through the single-element residuals $\hat{R}(\{t\})$ for $t \in Y$. Now, recall that each $\hat{V}_t$ takes value $y^*_t - \pi_t$ with probability $x^*_t$ and $-\pi_t$ otherwise; bounding the single-element residuals using this two-point distribution yields the statement of the lemma, following [11,15].

Proof (of Proposition 4). The LP is polynomially-sized except for the exponential family of constraints (14). This family of constraints defines the matroid polytope and can be efficiently separated over, assuming oracle access to the matroid rank function (note that this is a submodular function minimization problem). Furthermore, using the GLS ellipsoid method, separation implies that the LP can be solved to optimality and hence the vectors $x^*, y^*$ can be computed. For further background on these results, we refer to [12, Sec. 14.3] and [9]. Since $x^*$ lies in the matroid polytope defined by (8), which is integral [12, Sec. 13.4], $x^*$ can indeed be represented by a distribution over independent sets. Furthermore, there are explicit rounding procedures for doing so, which can compute a small convex combination of independent sets equaling $x^*$ assuming an oracle to the matroid [4] (note that a convex representation with at most $T+1$ sets exists, by Carathéodory's theorem [12, Sec. 3]). Therefore, the restricted prophet's distribution $\hat{D}$ has small support and can be explicitly constructed in polynomial time, which allows us to efficiently evaluate the expectation over $\hat{V}$ in the definition of the residual $\hat{R}$, and hence efficiently run our algorithm.
Proof (of Proposition 5). Consider any correlated distribution for $V_1, \ldots, V_T$ and any prophet selection rule. Set $x_{tk}$ to be the probability that the prophet accepts agent $t$ when her valuation realizes to $v^k$, for all $t$ and $k$. We argue that this forms a feasible solution to the LP. The constraints (14)-(15) follow from the linearity of expectation, since the prophet must select a set that is independent in both the matroid and the graph on every realization (note that the edges in the graph are defined so that the interval constraints (15) are indeed satisfied by the $\{0,1\}$-incidence vector of any independent set in the graph). Meanwhile, $x_{tk} \leq p^k_t$ because the marginal probability that the valuation of agent $t$ is $v^k$ is at most $p^k_t$. Finally, the objective value of the LP equals the expected welfare of the prophet. Since the prophet corresponds to a feasible LP solution, the optimal LP value can be no less, completing the proof.
Proof (of Proposition 6). Let $G = (N, E)$ be the graph where $\{i, i'\} \in E$ if $I^j_i \cap I^j_{i'} \neq \emptyset$ for some resource $j$. Then, an item $i \in N_t$ can be allocated if and only if none of its neighbors are allocated. Therefore, a set of items is independent in the graph if and only if it is a feasible allocation.
We show $d_2(G) \leq d$. Consider any $i \in N_t$, and let $S = \{i' \in N_{t'} : \{i, i'\} \in E, t' < t\}$ be the neighbors of $i$ that come before time $t$ and "block" $i$. We must show that the largest independent set in the graph $G[S]$ contains at most $d$ nodes. Since every neighbor of $i$ uses at least one resource in common with $i$, we can partition $S$ into $|U_i|$ sets based on which resource they have in common. (If a neighbor has more than one resource in common with $i$, then choose one of the resources arbitrarily.) Each set in the partition forms a clique in the graph, since all of its items share a resource in common and their intervals overlap at time $t$. Therefore, an independent set in the graph $G[S]$ can contain at most one item from each of the sets in the partition. Thus, $\alpha(G[S]) \leq |U_i| \leq d$.
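For intuition, the quantity $\alpha(G[S])$ bounded in this proof can be checked by brute force on the small induced subgraphs that arise (an illustrative sketch only, not part of the algorithm):

```python
from itertools import combinations

def max_independent_set_size(nodes, edges):
    """Brute-force alpha(G[S]) for a small vertex set S.
    edges: set of unordered node pairs stored as tuples."""
    for size in range(len(nodes), 0, -1):
        for cand in combinations(nodes, size):
            if all((a, b) not in edges and (b, a) not in edges
                   for a, b in combinations(cand, 2)):
                return size
    return 0
```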
C Proof of Theorem 2
The proof structure is similar to that of Theorem 1. Propositions 7 and 8 are analogous to Propositions 1 and 2, respectively. Lemmas 3 and 4 correspond to Lemmas 1 and 2.

Proof (of Proposition 7). Let $Y_t \subseteq N_t$ be the random variable corresponding to the set of items allocated to agent $t$ by the algorithm, and let $\mathbf{Y}_t = Y_1 \cup \cdots \cup Y_t$. The algorithm's expected welfare decomposes as in the proof of Proposition 1, where the second equality follows from the definition of $\tau$, and the third equality follows from the fact that $\mathbf{Y}_{t-1} \cup Y_t = \mathbf{Y}_t$ for all $t$, causing the latter sum to telescope.

Proof (of Lemma 3). Recall that $\hat{R}(\mathbf{Y}_T) = \mathbb{E}_{v \sim D^*}[\max\{\cdot\}]$, where the maximum ranges over $J = (J_1, \ldots, J_T)$ with $J \subseteq \mathrm{Alloc}^*(v)$ and $J \cup \mathbf{Y}_T \in I$. Fix any such $J$. The residual function $\hat{R}$ is submodular by [11]: for all subsets $S$ and $S'$, $\hat{R}(S \cup S') + \hat{R}(S \cap S') \leq \hat{R}(S) + \hat{R}(S')$. Applying this inequality repeatedly yields the analogue of (21), taking, for each realization, the subset of $\phi^k_{t,\mathbf{Y}_{t-1}}(S)$ that is independent in $G$ and whose value is above the threshold $\pi_t(i)$.

Proof (of Lemma 4). We decompose the LHS as in the proof of Lemma 2.
A study to compare the efficacy of epidural bupivacaine with buprenorphine and bupivacaine with fentanyl in lower limb surgeries
Background and Objectives: Pain is a complex subjective experience which has proved difficult to measure in a reproducible way. Operative pain is most severe immediately after surgery and thereafter gradually diminishes over the next 24 hours. Providing effective analgesia for patients undergoing major surgery is a daily challenge for most anaesthetists. Methods: 60 patients in the age group 20-60 years belonging to ASA I-II posted for elective lower limb surgeries were studied. The patients were divided into two groups of 30 each. Group A: 0.5% bupivacaine 15 ml (75 mg) with 0.5 ml (150 µg) buprenorphine (preservative free). Group B: 0.5% bupivacaine 15 ml (75 mg) with 1 ml (50 µg) fentanyl (preservative free). Intraoperatively, sensory and motor blockade, quality and duration of postoperative analgesia, hemodynamic and respiratory parameters, and side effects such as nausea, vomiting, respiratory depression, urinary retention and pruritus were studied. Patients were monitored for 48 hours postoperatively to look for any delayed complications. Results: Addition of 50 µg fentanyl to 0.5% bupivacaine (group B) resulted in a faster onset of sensory and motor blockade, which was statistically insignificant compared to 150 µg buprenorphine with 0.5% bupivacaine (group A). Duration of analgesia was significantly longer in Group A, with a mean duration of 766.6 minutes as compared to 471 min in Group B. Both groups provided good hemodynamic stability. There was no significant respiratory depression in either group. The incidence of nausea and vomiting was higher in group A (40%) compared to group B (10%), and mild pruritus, which did not require any treatment, was more frequent in group B (10%) compared to none in group A. Conclusion: In this comparative study an effort was made to study the perioperative analgesic efficacy of Inj. Buprenorphine and Inj. Fentanyl with 0.5% Bupivacaine given epidurally for lower limb surgeries. There were no significant hemodynamic or respiratory side effects in either of the groups. Both buprenorphine and fentanyl along with bupivacaine 0.5% can be given epidurally as a single-shot injection for perioperative analgesia, obviating the need for an epidural catheter.
Introduction
The word pain is derived from the Greek term poine ("penalty") [1]. Pain is not just a sensory modality but an experience. The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage". Intrathecal anaesthesia and epidural anaesthesia (EA) are the most popular regional anaesthesia techniques used for lower limb orthopaedic surgeries. Intrathecal anaesthesia, also called spinal anaesthesia, has limitations such as a short duration of anaesthesia; extension of anaesthesia is possible for prolonged surgeries, but the chances of life-threatening complications are higher; a shorter duration of post-operative analgesia; and the troublesome complication of post-dural puncture headache (PDPH) [2]. EA is becoming one of the most useful and versatile procedures in modern anesthesiology. It is unique in that it can be placed at virtually any level of the spine, allowing more flexibility in its application to clinical practice. It is more versatile than spinal anesthesia, giving the clinician the opportunity to provide anesthesia and analgesia, as well as treatment of chronic disease syndromes. The present study was designed to compare epidural bupivacaine with buprenorphine against bupivacaine with fentanyl in lower limb surgeries.
Materials & Methods
This is a prospective study conducted at Pratima Medical College and Hospital, Karimnagar. After Ethical Committee clearance and informed consent, a total of 60 patients of either sex, aged between 20-60 years, belonging to ASA Grade I & II and scheduled for elective lower limb surgeries were randomly selected.

Methodology: 60 patients posted for elective lower limb surgeries were randomly selected for the study. All patients underwent a thorough pre-anaesthetic evaluation a day before surgery, and the anaesthetic procedure was explained to them in detail. Routine investigations were done. The drugs used were explained to the patients, who were also educated about the verbal numerical scale for assessment of pain.
Grading of Post-Operative Pain Using the VNS (Verbal Numerical Scale)
The patient was asked to quantify their pain using VNS pain scores, with 0 corresponding to no pain and 10 to the worst imaginable pain. For the purpose of assessing the pain:
• 0-2.5 taken as no pain
• 2.5-5 taken as mild pain
• 5-7.5 taken as moderate pain
• 7.5-10 taken as severe pain
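For reference, this grading can be expressed as a small function (a sketch; the handling of the shared bin endpoints is our assumption, as the source ranges overlap at 2.5, 5 and 7.5):

```python
def vns_category(score: float) -> str:
    """Map a 0-10 VNS pain score to the study's pain categories.
    Lower bounds are treated as inclusive (our assumption)."""
    if not 0 <= score <= 10:
        raise ValueError("VNS score must be within 0-10")
    if score < 2.5:
        return "no pain"
    if score < 5:
        return "mild pain"
    if score < 7.5:
        return "moderate pain"
    return "severe pain"
```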
Written informed consent was obtained. All patients received Tab. Alprazolam 0.25 mg orally on the night before surgery as pre-medication. Patients were advised nil orally for a period of 6 hours prior to surgery. A test dose of 3 ml of 2% lignocaine with adrenaline (1:200,000) was given to rule out intravascular or intrathecal placement. 5 minutes after the test dose, in the absence of any adverse sequelae, 16 ml of the study drug was injected through the epidural catheter depending on the patient's study group, and patients were made to lie supine. After adequate blockade (T10), the patient was repositioned based on surgical requirements.
Patients were divided into two groups
Group A (Buprenorphine with Bupivacaine group): 0.5% bupivacaine 15 ml (75 mg) with 0.5 ml (150 µg) buprenorphine (preservative free) and 0.5 ml sterile normal saline, made up to a total of 16 ml.

Group B (Fentanyl with Bupivacaine group): 0.5% bupivacaine 15 ml (75 mg) with 1 ml (50 µg) fentanyl (preservative free), for a total of 16 ml.
Results
A total of 60 patients of either sex were randomly selected for the study. Statistical data were analysed using the SPSS package (SD: standard deviation). It was observed that the onset of analgesia in Group A (0.5% bupivacaine + 150 µg buprenorphine) was 7.56 min, compared to 6.6 min in Group B (0.5% bupivacaine + 50 µg fentanyl); this difference was statistically insignificant (P > 0.05), showing that there was no difference in the onset of action. The onset of motor blockade, its degree, and the time required to achieve complete blockade were recorded. The degree of motor blockade was graded according to the modified Bromage scale.
Demographic Data Analysis
The mean time to achieve complete motor blockade was 18.9 min in group A and 18.63 min in group B, which was statistically insignificant.
Discussion
Pain is a more terrible lord of mankind than death itself. Pain is a complex subjective experience which has proved difficult to measure in a reproducible way [3]. Pain perception has a sensory-discriminative aspect that describes the location and quality of the stimulus, called fast pain, and a motivational-affective portion that leads to the aversive aspect of pain, also known as slow pain. Satisfactory pain relief has always been a difficult problem in clinical practice [4]. Pain in the postoperative period demands relief not only on humanitarian grounds but also to reduce physical morbidity following the operation. In the postoperative period, when the effect of the anaesthetic disappears, the tissue injury persists, and pain-producing substances liberated during the operation greatly reduce the normally high threshold of the nociceptors, so that innocuous stimulation produces pain. Moreover, the cut ends of axons further contribute to nociception. A wide range of options exists to combat pain, both pharmacologically and non-pharmacologically. However, despite the increasingly complex armamentarium at our disposal, satisfactory alleviation of pain remains a difficult goal. Thus, the extent of our pharmacological alternatives is rather a reflection of our constant efforts to obtain more effective and safer analgesics. Epidural anaesthesia is superior to spinal anaesthesia, as the desired block levels can be achieved without significant haemodynamic disturbances, and top-up doses of anaesthetics and analgesics can be given. In modern anaesthesia practice, epidural anaesthesia is widely used, especially in patients undergoing surgical procedures involving the lower parts of the body. To fulfil this demand, there is a need for a local anaesthetic with desirable properties such as a longer duration of sensory blockade and a shorter duration of motor blockade [3]. Traditionally, epidural bupivacaine was used for postoperative analgesia. Epidural bupivacaine 0.5% causes motor, sensory and sympathetic blockade; 0.25% causes sensory and autonomic blockade; and 0.125% causes autonomic blockade only. Epidural and intrathecal opioids are today being used for intraoperative and postoperative analgesia.
A study entitled "A comparative study between epidural bupivacaine with buprenorphine and bupivacaine with fentanyl in lower limb surgeries" was undertaken at Pratima Medical College and Hospital, Karimnagar, Telangana, India to evaluate sensory and motor blocking properties, quality and duration of analgesia, and side effects, if any. After informed consent, 60 patients of ASA class I and II posted for various elective lower limb surgeries were randomly assigned to either the buprenorphine with bupivacaine (A) group or the fentanyl with bupivacaine (B) group. The epidural space was identified with the loss-of-resistance-to-air technique. The epidural catheter was inserted and secured 3 cm inside the epidural space, and a 3 ml test dose of lignocaine 2% with adrenaline was given, observing for 3 min for any intravascular or intrathecal placement of the catheter. Later, 16 ml of the study drug was injected and various parameters were studied. In our study, all patients were given the epidural block in the sitting position, because patients with lower limb fractures found the sitting position more comfortable.

Demographic data: Demographic data comparing age, sex, weight and height showed no statistically significant difference between the two groups.
Sensory Characteristics: Onset of Sensory Blockade
Onset of sensory blockade is taken as the time from the completion of the injection of the study drug until the patient does not feel a pin prick at the T12 level on the dependent side. The mean onset of analgesia in our study was 7.53 min in Group A and 6.60 min in Group B; there was no significant difference in the onset of analgesia between Group A and Group B. Zenz M and Pipenbrocks S did a double-blind comparison of epidural buprenorphine and epidural morphine for postoperative pain relief. Morphine 4 mg and buprenorphine 0.15 mg were given through the epidural route. Buprenorphine produced analgesia with a short latency of 6.8 min, which is close to our observation of 7.53 min [5]. High lipid solubility and high potency may explain the faster onset of pain relief in the buprenorphine group. Suraj Dhale and Vaishali Shelgaonkar, in 2000, studied different doses of epidural fentanyl (25 µg, 50 µg, 75 µg) with 0.5% bupivacaine for perioperative analgesia and found that 50 µg had a quicker onset of analgesia, within 9.53 min, which is close to our observation [6].
Duration of Analgesia
Duration of analgesia is taken from the time of injection until the patient complains of pain at the site of surgery. The time at which patients complained of pain of 5 or above on the verbal numerical scale was noted. That point was taken as the end of fair analgesia, and at that point top-up doses were given based on requirement. In our study, the mean duration of analgesia in group A was 766 min, which was significantly longer compared to the mean duration of 471 min in group B. In their comparative study between epidural buprenorphine and epidural ketamine for postoperative pain relief, D. Kumar, N. Dev and N. Gupta found that 0.15 mg buprenorphine with 10 ml of 0.9% saline had a longer duration of action of 13.1 hours (range 8-12 hours) compared to 10 mg of ketamine with 10 ml of 0.9% saline, which had a mean duration of 5.2 hours. In our study, the mean duration of analgesia in Group A was 766 min (about 12 hours) [7].
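The between-group comparison of mean durations corresponds to a two-sample t-test of the kind run in SPSS; a sketch with hypothetical per-patient values, since only the group means are reported in the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient durations (min), centered on the reported
# group means of 766 and 471 min; the spread is illustrative only.
group_a = np.random.default_rng(0).normal(766, 60, 30)
group_b = np.random.default_rng(1).normal(471, 60, 30)
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```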
Motor Blockade
The mean time to achieve complete motor blockade was 18.9 min in group A and 18.63 min in group B, which was statistically insignificant. Suraj Dhale and Vaishali Shelgaonkar, in 2000, studied different doses of epidural fentanyl (25 µg, 50 µg, 75 µg) with 0.5% bupivacaine for perioperative analgesia, where the mean onset of motor blockade was 26.13 ± 1.80 min [6]. In 1981, Zenz M, Pipenbrock S, Hubner S and Glocke M did a double-blind comparison of epidural buprenorphine and epidural morphine in post-operative pain. Morphine 5 mg and buprenorphine 0.15 mg given by the epidural route were compared in fifty patients recovering from abdominal surgery. They observed a decreased respiratory rate and increased tidal volume; however, there was no severe respiratory depression [8].
Side Effects
The four classic side effects of neuraxial opioids are pruritus, nausea and vomiting, urinary retention, and depression of ventilation. Side effects are caused by the presence of the drug either in the CSF or in the systemic circulation. Most side effects are dose dependent.
Conclusion
In this comparative study, an effort was made to study the perioperative analgesic efficacy of Inj. Buprenorphine and Inj. Fentanyl with 0.5% Bupivacaine given epidurally for lower limb surgeries. There were no significant hemodynamic or respiratory side effects in either of the groups. The postoperative analgesia was of a definitely longer duration in the buprenorphine group. It is therefore concluded that epidural buprenorphine is better at providing prolonged, satisfactory postoperative analgesia than Inj. Fentanyl. Regarding side effects, the incidence of nausea and vomiting was higher with buprenorphine than with fentanyl, but this is easily treated with antiemetics such as ondansetron. Both buprenorphine and fentanyl along with bupivacaine 0.5% can be given epidurally as a single-shot injection for perioperative analgesia, obviating the need for an epidural catheter.
The Interaction of TRAF6 With Neuroplastin Promotes Spinogenesis During Early Neuronal Development
Correct brain wiring depends on reliable synapse formation. Nevertheless, signaling codes promoting synaptogenesis are not fully understood. Here, we report a spinogenic mechanism that operates during neuronal development and is based on the interaction of tumor necrosis factor receptor-associated factor 6 (TRAF6) with the synaptic cell adhesion molecule neuroplastin. The interaction between these proteins was predicted in silico and verified by co-immunoprecipitation in extracts from rat brain and co-transfected HEK cells. Binding assays show physical interaction between neuroplastin's C-terminus and the TRAF-C domain of TRAF6 with a Kd value of 88 μM. As the two proteins co-localize in primordial dendritic protrusions, we used young cultures of rat and mouse neurons, as well as neuroplastin-deficient mouse neurons, and showed with mutagenesis, knock-down, and pharmacological blockade that TRAF6 is required by neuroplastin to promote early spinogenesis during in vitro days 6-9, but not later. Time-framed TRAF6 blockade during days 6-9 reduced mEPSC amplitude, the number of postsynaptic sites, synapse density and neuronal activity as neurons matured. Our data unravel a new molecular liaison that may emerge during a specific window of neuronal development to determine excitatory synapse density in the rodent brain.
INTRODUCTION
Synaptogenesis is a timely coordinated cellular process, which sets up the neuronal connectivity essential for information flow and processing in healthy brains (McAllister, 2007; Sudhof, 2008, 2017). Indeed, inaccuracy in synaptogenesis, occurring massively during neuronal development in childhood, is proposed as a critical factor in neuropsychiatric disorders including intellectual disability, autism spectrum disorders, and schizophrenia (Sudhof, 2008, 2017; Zhang et al., 2009; Boda et al., 2010; Caldeira et al., 2019). One key step in synaptogenesis is the massive appearance of spinogenic structures, named dendritic protrusions, in young dendrites, which differentiate into mature excitatory spine synapses. Protrusion formation seems to be controlled by molecules able to trigger spinogenic signaling mechanisms during a critical period in neuronal development (Okawa et al., 2014; Jiang et al., 2017; Sudhof, 2017). Currently, there is limited knowledge on how such molecules organize the formation of primordial glutamatergic synapses, and thus it has not been fully appreciated how signaling events occurring during the development of neurons contribute to the establishment of future connectivity yielding correct synapse density in the brain (Yoshihara et al., 2009; Sudhof, 2017).
Neuroplastin is a type-1 transmembrane glycoprotein of the immunoglobulin superfamily of cell adhesion molecules (CAMs) (Langnaese et al., 1997; Beesley et al., 2014) shown to mediate the formation of a fraction of excitatory synapses in the hippocampus in vivo (Amuti et al., 2016; Bhattacharya et al., 2017) and to belong to a group of highly expressed CAMs which define a "connectivity code" in the hippocampus during early postnatal development (Földy et al., 2016). Furthermore, neuroplastin has been identified as a candidate to mediate the formation of synapses in the inner ear in vivo (Carrott et al., 2016). In mice, we have shown that constitutive elimination of neuroplastin expression goes along with autistic- and schizophrenic-like behaviors, altered brain activities, reduced synaptic plasticity, and unbalanced synaptic transmission (Bhattacharya et al., 2017; Herrera-Molina et al., 2017). Constitutive deficiency of neuroplastin expression results in lower numbers of excitatory synapses or abnormal synapse morphology in the mouse hippocampus (Amuti et al., 2016). In contrast, inducible elimination of neuroplastin expression in fully developed adult mice does not modify the number of hippocampal excitatory synapses (Bhattacharya et al., 2017). However, it remains unknown how and when neuroplastin participates in the synaptogenesis necessary for the proper establishment of synapse density, synaptic transmission, and neuronal activity.
The tumor necrosis factor (TNF) receptor-associated factor 6 (TRAF6) is essential for brain development, as reduced programmed cell death in the diencephalon and mesencephalon resulted in lethal exencephaly in KO embryos (Lomaga et al., 1999). Moreover, TRAF6 has been closely related to pathologies of the central nervous system including traumatic brain injury, stroke and neurodegenerative diseases (for review see Dou et al., 2018). Furthermore, TRAF6 knockdown destabilizes PSD-95 and facilitates the plasticity of excitatory spines in mature neurons (Ma et al., 2017). Nevertheless, the functions of TRAF6 in young postnatal neurons, i.e., during the major period of excitatory synapse formation, are unknown. TRAF6 is a prominent adaptor protein with E3 ligase activity. It harbors an N-terminal RING domain followed by four zinc fingers and a C-terminal region that comprises a coiled coil domain and a TRAF-C domain (Chung et al., 2002; Yin et al., 2009). To initiate cell signaling in processes like neuroinflammation (Dou et al., 2018) as well as cell differentiation, activation and tolerance of immune cells, and migration of cancer cells (Lomaga et al., 1999; Kobayashi et al., 2001; Xie, 2013; Walsh et al., 2015), the TRAF-C domain docks the factor to a specific motif in cytoplasmic domains of transmembrane proteins, allowing lateral homo-oligomerization of TRAF6 RING domains and assembly of a three-dimensional lattice-like structure (Yin et al., 2009; Ferrao et al., 2012; Wu, 2013). These TRAF6 structures are reported as plasma membrane-associated "fluorescent spots" on the micrometer scale where hundreds of cell signaling intermediaries would nest (Ferrao et al., 2012; Wu, 2013).
As we identified a TRAF6 binding motif in neuroplastin, but not in other known synaptogenic CAMs, we tested the hypothesis that TRAF6 interaction is required by neuroplastin for its capability to promote formation of excitatory synapses. This study uncovered a hitherto unanticipated function for TRAF6 in synaptogenesis during early neuronal development ultimately required for the adequate functioning of mature neurons.
In silico Modeling
We performed local peptide docking based on interaction similarity and energy optimization as implemented in the GalaxyPepDock docking tool. The protein-peptide complex structure of the hTRANCE-R peptide bound to the TRAF6 protein as provided by Ye et al. (2002) was used as input (PDB: 1LB5). The docking employs constraints of local regions of the TRAF6 surface based on the interaction template. The energy-based optimization algorithm of the docking tool allows efficient sampling of the backbone and side-chains in the conformational space, thus dealing with the structural differences between the template and target complexes. Models were sorted according to protein structure similarity, interaction similarity, and estimated accuracy. The fraction of correctly predicted binding motif residues and the template-target similarity were used in a linear model to estimate the prediction accuracy. The model using target-template interactions based on the QMPTEDEY motif of the hTRANCE-R template was selected (TM score: 0.991; interaction similarity score: 108.0; estimated accuracy: 0.868).
Surface Plasmon Resonance
Protein-protein interaction measurements were carried out on a BIACORE X100 (GE Healthcare Life Sciences). Sensorgrams were obtained as single-cycle kinetics runs. Increasing concentrations of neuroplastin peptide (2.5, 5, 100, 200, and 400 µM) or running buffer alone (startup) were sequentially injected on a GST-TRAF6-coated CM5 sensor chip (GE). Non-specific binding was calculated using a GST-coated sensor as the reference response. Immobilization of these proteins was done using the amine coupling kit as we described in Reddy et al. (2014). All runs were performed in HBS-P buffer. Analysis of affinity was performed using the BIACORE X100 Evaluation Software 2.0.1 (Reddy et al., 2014).
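For illustration, a steady-state 1:1 binding fit of the kind performed by the evaluation software can be sketched as follows (the response values are hypothetical; the study reports a Kd of 88 µM obtained with the BIACORE evaluation software):

```python
import numpy as np
from scipy.optimize import curve_fit

def steady_state_1to1(conc, rmax, kd):
    """Equilibrium response of a 1:1 Langmuir binding model."""
    return rmax * conc / (kd + conc)

# Analyte concentrations from the single-cycle run (µM) and hypothetical
# equilibrium responses (RU), chosen only to illustrate the fit.
conc = np.array([2.5, 5.0, 100.0, 200.0, 400.0])
resp = np.array([2.6, 5.1, 53.0, 70.0, 82.0])
(rmax, kd), _ = curve_fit(steady_state_1to1, conc, resp, p0=(100.0, 80.0))
print(f"Estimated Kd ≈ {kd:.1f} µM")
```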
GST Pull-Down Assay
GST, GST-TRAF6 and GST-TRAF6 cc−c were transformed into Escherichia coli BL21 (DE3) bacterial strain and induced by 0.5 mM of isopropyl-1-thio-b-D-galactopyranoside (IPTG) for 6 h at 25 • C. The cells were lysed in resuspension buffer [50 mM Tris-HCl pH 8.0, 150 mM NaCl and protease inhibitor cocktail (Roche)] with sonication on ice. The purifications of these proteins from transformed bacterial cell extract were performed according to manufacturer instructions (GST bulk kit, GE Healthcare Life Sciences). The purified soluble GST proteins were immobilized on glutathione sepharose 4B beads (GE Healthcare Life Sciences). The beads were washed with binding buffer at least four times, and the pull-down samples were subsequently subjected to immunoblot analyses. The 5 µg of fusion protein coupled beads (GST, GST-TRAF6 and GST-TRAF6 cc−c ) were incubated with lysate from HEK cells transfected with Np65-GFP for 1 h at 4 • C in 500 µl RIPA lysis buffer. The beads were washed and eluted with pre-warmed SDS sample buffer. The eluted proteins were resolved by SDS-PAGE.
Co-immunoprecipitation

Hippocampi from 2 week-old Nptn+/+ and Nptn−/− mice or forebrains of 3 week-old rats were stored at −80 °C until use. After homogenization in ice-cold RIPA buffer, which preserves strong protein-protein interactions (Müller et al., 1996; Lin et al., 1998), supplemented with a protease inhibitor cocktail (Roche) at 4 °C, total homogenates were precleared by 30 min incubation with Protein G Sepharose 4 Fast Flow (GE Healthcare) and then incubated overnight with a rabbit anti-neuroplastin antibody that recognizes the Ig-like domains 2 and 3, which are common to Np65 and Np55 (1 µg/ml; Smalla et al., 2000; Bhattacharya et al., 2017). Precipitation was performed by adding Protein G Sepharose beads for 2 h at 4 °C. Beads were washed once in RIPA buffer, two times in 20 mM Tris, 150 mM NaCl, 0.5% digitonin, pH 7.5, followed by a short rinse in 20 mM Tris/150 mM NaCl. For SDS-PAGE, bound proteins were eluted with 1x Rotiload (Roth). Eluted proteins were subjected to SDS-PAGE.
Image Acquisition, Processing, and Co-localization Analysis
Images were acquired using HCX APO 63×/1.40 NA or 100×/1.4 NA objectives coupled to a TCS SP5 confocal microscope in sequential scanning mode with 4.0- to 6.0-fold digital magnification. Z-stacks with 41.01 × 41.01 × 5 µm physical lengths were digitized in a 512 × 512 pixel format file, or with 61.51 × 15.33 × 2 µm in a 1024 × 256 pixel format file. To correct optical aberrations, z-stack images were deconvolved using the Huygens Professional software v. 19.10 (Scientific Volume Imaging B.V., Netherlands). Pearson's co-localization index from single z-planes was obtained from dendritic segments containing dendritic protrusions using Imaris software v. x64 9.5.1 (Bitplane Scientific Software, Oxford Instruments plc).
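Pearson's co-localization index computed by Imaris corresponds to the standard Pearson correlation between the two channel intensities; a minimal sketch:

```python
import numpy as np

def pearson_colocalization(ch1, ch2, mask=None):
    """Pearson's correlation between two fluorescence channels, optionally
    restricted to a region of interest (e.g. a dendritic segment)."""
    a = ch1[mask] if mask is not None else ch1.ravel()
    b = ch2[mask] if mask is not None else ch2.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```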
Quantification of Filopodia and Dendritic Protrusions
In HEK cells, filopodia number and length were quantified using a MATLAB-based algorithm, FiloDetect, with some modifications (Nilufar et al., 2013). The algorithm was run for every single image, and the image threshold was adjusted to avoid false filopodia detection and to quantify precise filopodia length and number. The filopodia number per µm was calculated from the perimeter of the cell using ImageJ. In neurons, dendritic protrusions were quantified manually using the maximum-intensity Z-projection method of ImageJ. Dendritic protrusions between 0.25 and 20 µm in length were considered. Shank2 clusters were quantified from manually cropped images using the brightness-enhanced original GFP fluorescence as a reference to identify puncta of interest. For this, Shank2 clusters overlapping with GFP fluorescence were obtained using the "image calculator" command in ImageJ. Regions of interest (the dendritic protrusions) were defined according to GFP fluorescence with the polygon selection tool. Images were processed with watershed segmentation to refine the shapes of Shank2-positive objects in binary images. The area, intensity and number of Shank2 clusters in dendritic protrusions were measured by filtering the cluster size (minimum 0.02 µm²) using ImageJ, as further detailed in Herrera-Molina et al.
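The size-filtered cluster quantification can be sketched as follows (a minimal Python analogue of the ImageJ workflow; function names and data layout are ours):

```python
import numpy as np
from scipy import ndimage

def quantify_clusters(binary_img, px_area_um2, min_area_um2=0.02):
    """Label objects in a binary mask and keep clusters above the
    minimum area used in the study (0.02 square micrometers)."""
    labels, n = ndimage.label(binary_img)
    areas = ndimage.sum_labels(np.ones_like(labels), labels, range(1, n + 1))
    areas_um2 = np.asarray(areas) * px_area_um2
    kept = areas_um2[areas_um2 >= min_area_um2]
    return len(kept), (kept.mean() if len(kept) else 0.0)
```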
Synaptotagmin Uptake Assay
Presynaptic activity driven by endogenous network activity was monitored as described before. Hippocampal neurons were washed once with prewarmed Tyrode's solution (119 mM NaCl, 2.5 mM KCl, 25 mM HEPES, pH 7.4, 30 mM glucose, 2 mM MgCl₂, 2 mM CaCl₂) and immediately incubated with an Oyster 550-labeled anti-synaptotagmin-1 rabbit antibody (Synaptic Systems, #105 103C3; 1:500) for 20 min at 37 °C. After the antibody uptake, neurons were washed, fixed, and stained with anti-VGAT guinea pig (Synaptic Systems, #131 004; 1:1,000) and anti-synaptophysin mouse (company, catalog number; 1:1,100) primary antibodies overnight at 4 °C. Subsequently, samples were incubated with anti-rabbit Cy3-, anti-guinea pig Cy5- and anti-mouse Alexa 488-conjugated donkey secondary antibodies (1:1,000) for 1 h. Z-stack images of the soma and of secondary/tertiary dendrites were acquired using an oil-immersion (HCX APO 63×/1.40 NA) objective coupled to a TCS SP5 confocal microscope in sequential scanning mode with a 4.0-fold digital magnification, and digitized in a 512 × 512 pixel format file (61.51 × 61.51 µm physical lengths). All parameters were rigorously maintained during image acquisition. For quantification, z-stacks were projected using the "sum slices" Z-projection method of ImageJ. We quantified the synaptotagmin-associated fluorescence co-localizing with 1-bit masks derived from VGAT-positive (inhibitory presynapses) or VGAT-negative, synaptophysin-positive (excitatory presynapses) puncta using the "image calculator" in ImageJ. During image processing, the original settings of the synaptotagmin channel were carefully maintained. 1-bit masks were generated using the "analyze particles" function in ImageJ for a segmented image of each presynaptic marker (range of particle size 0.15-2.25 µm² for inhibitory presynapses and 0.15-1.50 µm² for excitatory presynapses).
Electrophysiology
Whole-cell patch clamp recordings were performed under visual control using phase contrast and an sCMOS camera (PCO panda 4.2). Borosilicate glass pipettes (Sutter Instrument BF100-58-10) with resistances ranging from 3 to 7 MΩ were pulled using a laser micropipette puller (Sutter Instrument Model P-2000). Electrophysiological recordings from neurons were obtained in Tyrode's solution ([mM] 150 NaCl, 4 KCl, 2 CaCl₂, 2 MgCl₂, 10 D-glucose, 10 HEPES; pH adjusted to 7.35 with NaOH; osmolarity 320 mOsm) + 0.5 µM TTX (Tocris). Pipettes were filled using a standard intracellular solution ([mM] 135 K-gluconate, 4 KCl, 2 NaCl, 10 HEPES, 4 EGTA, 4 MgATP, 0.3 NaGTP; 280 mOsm; pH adjusted to 7.3 with KOH). The whole-cell configuration was confirmed via an increase of cell capacitance. During voltage clamp experiments, neurons were clamped at −70 mV. Whole-cell voltage clamp recordings were performed using a MultiClamp 700B amplifier, filtered at 8 kHz and digitized at 20 kHz using a Digidata 1550A digitizer (Molecular Devices). Data were acquired and stored using Clampfit 10.4 software (Molecular Devices) and analyzed with Mini-Analysis (Synaptosoft Inc., Decatur, GA, United States). The neuronal activity from 200,000 hippocampal cells was sampled extracellularly at 10 kHz using MC_Rack software and a MEA1060INV-BC system (MultiChannel Systems, Reutlingen, Germany) placed inside a cell culture incubator in order to provide properly controlled temperature, humidity, and gas composition, as described (Bikbaev et al., 2015). The recordings were initiated after a resting period of 30 min following the physical translocation of each individual MEA to the recording system. The off-line analysis was carried out on 600-s-long sessions per MEA at each experimental condition. The detection of spikes was performed after high-pass (300 Hz) filtering of the signals, and processing and analyses of neuronal activity were carried out using Spike2 software (Cambridge Electronic Design, Cambridge, United Kingdom).
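Spike detection after high-pass filtering can be sketched as follows (the 300 Hz high-pass cutoff and 10 kHz sampling rate are from the text; the threshold rule is our assumption, as the paper does not state one):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_spikes(raw, fs=10_000, hp_cutoff=300, thresh_sd=5):
    """High-pass filter an extracellular trace and detect negative
    threshold crossings; returns spike times in seconds."""
    sos = butter(4, hp_cutoff, btype="highpass", fs=fs, output="sos")
    filt = sosfiltfilt(sos, raw)
    # Robust SD estimate from the median absolute deviation.
    thresh = thresh_sd * np.median(np.abs(filt)) / 0.6745
    crossings = np.flatnonzero((filt[1:] < -thresh) & (filt[:-1] >= -thresh))
    return crossings / fs
```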
Statistical Analysis
For statistical analysis, Prism 5 software (GraphPad) was used. Results are presented as mean ± SEM (standard error of the mean). The number of cells (n) or of individual experiments or samples (N), as well as the statistical tests used to evaluate significant differences, are given in the figure legends.
A TRAF6 Binding Motif Is Present in Neuroplastin but Not in Other Synaptogenic CAMs
Using the ELM database (http://elm.eu.org/), we identified a single TRAF6 binding motif in the cytoplasmic tail of all neuroplastins from human, rat, and mouse (Figure 1A and Supplementary Figure S1A), matching the well-characterized TRAF6 binding motif (Sorrentino et al., 2008; Yin et al., 2009). Due to alternative splicing, the two neuroplastin isoforms Np55 and Np65 differ in an additional Ig domain in the extracellular part, and another alternative splicing event concerns a mini-exon encoding four additional amino acids Asp-Asp-Glu-Pro (DDEP) in the C-terminal part (Langnaese et al., 1997). This DDEP sequence is close to the identified TRAF6 binding motif (Figure 1A). Based on crystallographic studies on the interaction of the TRAF6 TRAF-C domain with the TRANCE receptor, in silico modeling was applied to the TRAF6 TRAF-C domain-neuroplastin interaction (Figure 1B and Supplementary Figure S1C). A strikingly similar three-dimensional structure was predicted for the TRAF6 binding motif of neuroplastin when compared to the TRANCE receptor TRAF6 binding motif (Figure 1C and Supplementary Figure S1C). In particular, the coordinates and stereospecificity of key amino acids (Figure 1B; P−2 = Pro, P0 = Glu, and P3 = aromatic/acidic) involved in docking of the TRANCE receptor to the TRAF6 TRAF-C domain (TRAF-C) were conserved in the TRAF6 binding motif of neuroplastin (Figure 1C and Supplementary Figure S1C). Thus, we conclude that the cytoplasmic tail of neuroplastin displays a proper TRAF6 binding site.
The data indicate that direct TRAF6 binding is not a generalized feature among spinogenic CAMs, but rather highlight the potential specificity and importance of the association of TRAF6 to neuroplastin.
We sought to confirm that there is a direct physical interaction between TRAF6 and neuroplastin. To this end, we characterized the binding of the purified neuroplastin intracellular peptide containing the TRAF6 binding motif to immobilized recombinant TRAF6 by surface plasmon resonance (Figures 1D,E and Supplementary Figures S1D,E). Binding to TRAF6 was found to be dependent on neuroplastin peptide concentration, saturable, and displayed a 1:1 stoichiometry. We calculated a Kd value of 88 µM for the neuroplastin-TRAF6 interaction (Figures 1D,E), which is very similar to the Kd of 84 µM for the TRANCE receptor-TRAF6 binding (Yin et al., 2009). To establish whether the TRAF6 motif in neuroplastin binds TRAF6 in living cells, we performed co-immunoprecipitation assays from HEK cells transfected with different GFP-tagged constructs of neuroplastins and flag-tagged TRAF6. HEK cells have been successfully used before to evaluate the protein interactions of other spinogenic CAMs at the molecular level (Sarto-Jackson et al., 2012; Jiang et al., 2017). Due to alternative splicing of the primary transcript, both major neuroplastin isoforms Np65 and Np55 can contain the alternative DDEP insert close to their TRAF6 binding motif. To consider potential differences in binding, splicing variants with and without DDEP were tested. DDEP splice variants of Np65-GFP co-precipitated flag-TRAF6, suggesting that the mini-exon-encoded insertion is not critical for the binding (Figures 1F,G). Similarly, Np55 with and without the DDEP insertion co-precipitated with TRAF6 (Supplementary Figure S1F). In contrast, co-precipitation was largely decreased when GFP-tagged versions of Np65 either with a deleted TRAF6 binding motif (Np65Δ-GFP) or with triple (Np65PED-GFP) or single (Np65P-GFP) amino acid substitutions in the binding motif were used (Figures 1F,G), as confirmed by densitometric analysis (Supplementary Figure S1G). Additionally, pull-down assays demonstrated that Np65-GFP isolated from HEK cells binds similarly well to purified recombinant GST-TRAF6 or to the GST-TRAF6 C-domain (coiled coil + TRAF-C domain, GST-TRAF6cc−c) (Supplementary Figures S1D,E). The data support the idea that the TRAF6 binding motif in the cytoplasmic tail of neuroplastin is fully capable of binding the TRAF-C domain of TRAF6. Using highly specific neuroplastin antibodies (Bhattacharya et al., 2017; Korthals et al., 2017), we could also show that TRAF6 co-immunoprecipitated with neuroplastin isoforms from brain extracts of 3 week-old rats or 2 week-old Nptn+/+ mice, but not from 2 week-old Nptn−/− mice (Figure 1I).

FIGURE 1 | Characterization of the binding of TRAF6 to neuroplastin. (A) A potential TRAF6 binding motif in the intracellular tail of neuroplastin 65 (identical in neuroplastin 55) fits the canonical and specific motif recognized by TRAF6. The alternatively spliced DDEP sequence is underlined. Bs, Ac, and Ar stand for basic, acidic, and aromatic amino acids, respectively. (B,C) Neuroplastin-TRAF6 binding in silico. (B) Three-dimensional model of the TRAF6 binding motif in the intracellular tail of neuroplastin (cyan); key amino acids responsible for the binding to TRAF6 fit the well-known TRAF6 binding motif present in the TRANCE receptor (green). (C) Docking of the TRAF6 binding motif of neuroplastin into the TRAF6 C-domain. Similar to the binding of the TRANCE receptor to TRAF6 documented by crystallographic data (Yin et al., 2009), interaction of neuroplastin with TRAF6 would be mediated by the proline (P) in position P−2, glutamic acid (E) in P0, and aspartic acid (D) in P3. (D,E) Direct binding of the neuroplastin-derived intracellular peptide comprising the TRAF6 binding motif to purified recombinant TRAF6. Time-dependent (D) and concentration-dependent (E) binding curves for the neuroplastin-TRAF6 binding were obtained using surface plasmon resonance. (F-H) Neuroplastin-TRAF6 co-precipitation is drastically decreased by deletion or mutation of key amino acids in the TRAF6 binding motif of neuroplastin. (F) Neuroplastin constructs included in the experiments are listed. (G) HEK cells were co-transfected with constructs encoding either GFP, Np65-GFP or Np65DDEP(−)-GFP and with TRAF6-flag or flag alone for 24 h. Alternatively, (H) HEK cells were co-transfected with GFP, Np65-GFP, Np65Δ-GFP (TRAF6 binding motif-deficient construct), Np65PED-GFP (containing a TRAF6 binding motif with a triple substitution to alanine) or Np65P-GFP (with a single substitution to alanine) and with TRAF6-flag or flag constructs for 24 h. After homogenization, anti-GFP antibody-coupled beads were used to precipitate GFP-tagged complexes. We used anti-Flag or anti-GFP antibodies to detect the proteins as indicated. Representative images from 4-6 independent experiments. (I) 3 week-old rat forebrains (left panel) and hippocampi from 2 week-old Nptn+/+ and Nptn−/− mice (right panel) were lysed and homogenized with RIPA lysis buffer and incubated with a KO-controlled antibody recognizing all neuroplastin isoforms raised in rabbit, or pre-immune rabbit IgG, for 24 h at 4 °C. Precipitated proteins were resolved by SDS-PAGE and immunoblotted with a KO-controlled pan anti-Np65/55 antibody from sheep or an anti-TRAF6 antibody from mouse (see section Materials and Methods).
TRAF6 Mediates the Formation of Filopodial Structures by Neuroplastin
We have reported disorganization of polymerized actin in dendrites of Nptn−/− primary hippocampal neurons. Coincidently, TRAF6 is known to increase actin polymerization (Armstrong et al., 2002; Wang et al., 2006; Yamashita et al., 2008). Therefore, we performed experiments to explore whether and how TRAF6 and neuroplastin interact to increase actin-based filopodia formation in HEK cells. Overexpression of either of the two neuroplastin isoforms Np55 and Np65 in HEK cells was sufficient to induce a massive increase in filopodia number and length as compared to control cells transfected with either soluble or membrane-attached GFP (Figures 2A-C). DDEP-lacking variants of Np55 or Np65 were as effective as the ones carrying the insert in promoting filopodial structures (Supplementary Figures S2A-D). However, the capacity of neuroplastin to promote filopodia was abolished by mutation or elimination of the TRAF6 binding site (i.e., Np65Δ-GFP, Np65PED-GFP, Np65P-GFP) (Figures 2A-C). Furthermore, after decreasing protein levels of endogenous TRAF6 by ∼80% using a specific siRNA (Supplementary Figures S2E,F), neither expression of Np65-GFP nor of Np55-GFP increased the number or length of filopodia in HEK cells (Figures 2A-C). Thus, Np55 and Np65 (±DDEP) are similarly effective in promoting the formation of filopodial structures, and they seem to require endogenous TRAF6 and binding to their TRAF6 motifs to do so.
TRAF6 translocates from the cytoplasm to the membrane by recruitment to integral membrane proteins with TRAF6 binding domains (Yin et al., 2009; Wu, 2013). Therefore, we tested whether neuroplastins, via their C-terminal TRAF6 binding motif, have the capacity to recruit endogenous TRAF6 to the plasma membrane. In HEK cells transfected with GPI-anchored GFP or with Np65Δ-GFP, TRAF6 immunoreactivity was primarily located in the cytoplasm (Figure 2D). In contrast, TRAF6 immunoreactivity was abundantly associated with the plasma membrane in cells expressing recombinant Np65-GFP (Figure 2D) or other variants of neuroplastin (Supplementary Figure S2A). Analyses of co-distribution (Figure 2E) and co-localization (Figure 2F) confirmed that plasma membrane-associated TRAF6 co-localizes with Np65. Thus, neuroplastin recruits TRAF6 to the plasma membrane and thereby changes its subcellular localization. This capacity is independent of the presence or absence of the DDEP insert. These experiments favorably complement the binding assays in co-transfected HEK cells (Figure 1).
Next, we asked whether the recruitment and binding of TRAF6 by neuroplastin mediate filopodia formation. To test this prediction, we co-expressed GFP-tagged TRAF6 (TRAF6-GFP) with Np55-RFP. Clearly, co-expression of TRAF6-GFP fostered the increase in filopodia number by Np55-RFP (Figures 2G-I). Intriguingly, endogenous TRAF6 and TRAF6-GFP co-localized with Np55-RFP in filopodia-associated microscopic spots (Figure 2G). Indeed, analyses of fluorescence intensity and distribution revealed high co-localization of TRAF6-GFP with Np55-RFP in single spots of filopodia (Figure 2J). The potential involvement of the N-terminal RING domain of TRAF6 was tested using TRAF6cc−c-GFP, which contains the coiled coil and TRAF-C domains and lacks the N-terminal domain (Supplementary Figures S1D,E). Despite being recruited to the plasma membrane and co-localizing with Np55-RFP (Figures 2G,J), TRAF6cc−c-GFP blocked neuroplastin-induced filopodia formation (Figures 2G-J). Accordingly, the recruitment and binding of TRAF6cc−c by neuroplastin is insufficient to promote filopodial structures. Because the RING domain is well known to be responsible for the three-dimensional assembly of functional TRAF6 lattice-like structures (Yin et al., 2009; Ferrao et al., 2012; Wu, 2013), we conclude that only the recruitment and binding of fully functional TRAF6 increases the formation of filopodial structures by neuroplastin.
Neuroplastin Promotes the Formation of Spinogenic Dendritic Protrusions
Neuroplastin has been related to synapse formation in vitro and in vivo (Herrera-Molina et al., 2014; Amuti et al., 2016; Carrott et al., 2016; Zeng et al., 2016), but the underlying molecular mechanism is unknown. As TRAF6 mediated filopodia formation by neuroplastin in HEK cells, we studied the involvement of the two proteins in the formation of dendritic protrusions, which act as precursors of the spines of mature neurons (Ziv and Smith, 1996; McClelland et al., 2010). By confocal microscopy, we quantified the number of protrusions per 10 µm length expanding from MAP2-stained dendrites of GFP-filled pyramidal neurons in primary hippocampal cultures from wildtype and neuroplastin-(Nptn-)deficient mice (Figures 3A,B). Absence of neuroplastin gene expression resulted in a reduced density of dendritic protrusions in Nptn−/− compared to Nptn+/+ hippocampal neurons at 9 days in vitro (DIV). This phenotype was rescued by transfection of mutant neurons with the recombinant neuroplastin isoforms Np55-GFP or Np65-GFP, evaluated at 9 DIV (Figure 3C). In parallel experiments with rat primary hippocampal neurons, we observed that the over-expression of either neuroplastin isoform significantly promotes dendritic protrusion density (Figures 3D,E). Rat neurons transfected with either Np55-GFP or Np65-GFP at 7 DIV displayed a higher density of dendritic protrusions than control GFP-transfected neurons when evaluated at 8 DIV (Figures 3D,E).
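For readers who want to reproduce this kind of quantification, the analysis reduces to a density (protrusions per 10 µm of dendrite) compared between groups with an unpaired Student's t-test, as stated in the figure legends. A minimal Python sketch with invented per-neuron counts and dendrite lengths:

```python
# Protrusion density = counts per 10 µm of dendrite; groups compared with
# an unpaired two-sample Student's t-test, as in Figures 3B,C,E.
import numpy as np
from scipy import stats

def density_per_10um(counts, lengths_um):
    return 10.0 * np.asarray(counts, float) / np.asarray(lengths_um, float)

rng = np.random.default_rng(0)
# Hypothetical per-neuron data (protrusion counts, measured dendrite length).
wt = density_per_10um(rng.poisson(40, 30), rng.normal(100, 10, 30))
ko = density_per_10um(rng.poisson(17, 30), rng.normal(100, 10, 30))

t, p = stats.ttest_ind(wt, ko)  # unpaired Student's t-test
print(f"Nptn+/+: {wt.mean():.2f} ± {stats.sem(wt):.2f} per 10 µm")
print(f"Nptn-/-: {ko.mean():.2f} ± {stats.sem(ko):.2f} per 10 µm")
print(f"t = {t:.2f}, p = {p:.2e}")
```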
Neuroplastin Promotes Dendritic Protrusions in a Restricted Developmental Time Period and Requires TRAF6
We tested whether and when neuroplastin requires TRAF6 to raise dendritic protrusion density in neurons. Using confocal microscopy and image deconvolution procedures in single z-planes, we assessed the co-localization/co-distribution of endogenous TRAF6 and endogenous neuroplastin in young neurons at 7 and 9 DIV (Figures 4A-D). At 7 DIV, ∼95% of neuroplastin spots displayed high or medium degrees of co-localization with TRAF6 spots (Figures 4A,B), indicating that both proteins are in close proximity and may interact in dendritic protrusions during this earlier stage of neuronal development. The degree of neuroplastin-TRAF6 co-localization was lower at 9 DIV, as only ∼15% of neuroplastin spots showed some co-localization with TRAF6 (Figures 4C,D). Then, we evaluated further the timing of the neuroplastin-mediated increase in dendritic protrusions by transfecting rat hippocampal neurons with either Np55-GFP or Np65-GFP at 6, 7, 8, or 9 DIV. The density of dendritic protrusions was evaluated 24 or 48 h after transfection (Figure 4E). Neurons transfected with either Np55-GFP or Np65-GFP at 6 or 7 DIV displayed a higher density of dendritic protrusions than control GFP-transfected neurons when evaluated at 8 or 9 DIV (green blocks, Figure 4E). Later transfections of Np65- or Np55-GFP performed at 9 DIV were ineffective in raising the protrusion density in rat neurons analyzed at 10 or 11 DIV (red blocks, Figure 4E). Therefore, we can conclude that neuroplastin increases the density of dendritic protrusions during a time period of major synapse formation in neuronal development. Following this observation, we elucidated whether neuroplastin requires its intracellular TRAF6 binding site to promote dendritic protrusions at 8 DIV. While Np65-GFP fostered the density of dendritic protrusions as expected, Np65Δ-GFP failed to do so (Figures 4F,G).

FIGURE 3 | (A) Nptn+/+ and Nptn−/− neurons were transfected with GFP-encoding plasmids at 6-7 DIV using Lipofectamine. At 9 DIV, neurons were fixed and stained with an anti-GFP antibody followed by an Alexa 488-conjugated antibody to enhance their intrinsic fluorescence (green) and with anti-MAP2 antibodies followed by a proper secondary antibody to detect dendrites (magenta). Images were obtained using a confocal microscope. Scale bar = 100 µm. (B) Protrusion density (number of dendritic protrusions per 10 µm) of GFP-filled Nptn−/− and Nptn+/+ neurons (circles) is expressed as mean ± SEM from three independent cultures. ***p < 0.001 between genotypes using Student's t-test (Nptn+/+ GFP = 4.12 ± 0.18, n = 33; Nptn−/− GFP = 1.72 ± 0.19, n = 36). (C) Protrusion density of GFP-, Np65-GFP-, or Np55-GFP-expressing Nptn−/− neurons from two independent cultures. ***p < 0.001 or **p < 0.01 vs. Nptn−/− GFP using Student's t-test (Nptn−/− GFP = 1.92 ± 0.22, n = 26; Nptn−/− Np65-GFP = 3.67 ± 0.18, n = 20; Nptn−/− Np55-GFP = 3.77 ± 0.19, n = 26). (D,E) Both neuroplastin isoforms increase dendritic protrusion density in rat neurons at 8 DIV. (D) Confocal images show rat neurons transfected with plasmids encoding GFP, Np65-GFP or Np55-GFP at 7 DIV. At 8 DIV, neurons were fixed and stained with an anti-GFP antibody followed by an Alexa 488-conjugated antibody (white). Scale bar = 10 µm. (E) Protrusion densities of 40-50 neurons per group (circles) from 3 to 4 independent cultures. ***p < 0.001 vs. GFP-transfected cells using Student's t-test (GFP = 1.95 ± 0.19, n = 39; Np65-GFP = 3.23 ± 0.14, n = 56; Np55-GFP = 3.58 ± 0.16, n = 38).

FIGURE 4 | To avoid interference with the intracellular interaction of TRAF6 with the tail of neuroplastin, methanol-fixed rat hippocampal neurons at 7 and 9 DIV were stained with a sheep pan-neuroplastin antibody recognizing the common extracellular Ig2-like domain of Np55 and Np65 and with a rabbit anti-TRAF6 antibody recognizing amino acids 1-274 located at the intracellular N-terminus of TRAF6. These primary antibodies were followed by proper fluorophore-tagged secondary antibodies; samples were mounted and imaged using a 100x objective and confocal microscopy. Images were deconvolved to eliminate optic aberrations and improve resolution (see section "Materials and Methods"). (A,C) Confocal pictures and digital magnifications of dendritic protrusions are shown. Green arrows show co-localized TRAF6 spots, red arrows point to co-localized neuroplastin spots, and blue arrows show neuroplastin spots with a lower co-localization degree. Scale bars = 5 µm. (B,D) Quantification of the fraction of neuroplastin spots co-localized with TRAF6 spots was performed using single z-planes. The degree of co-localization of neuroplastin was scored as high (Pearson's coefficient from 0.5 up to the observed maximum of 0.884 or 0.811), medium (0.3-0.499) or low/none (0.1-0.299). 51 (B) or 81 (D) dendritic protrusions from 2 independent cultures were analyzed. (E) Neuroplastin promotes dendritic protrusions during a distinct time window in development. The panel summarizes the results from transfections of Np55-GFP or Np65-GFP at 6, 7, 8, and 9 DIV as indicated. After 24 or 48 h, evaluation of dendritic protrusion density was performed at the end of each experimental series. In green, time periods in which transfection effectively promoted protrusion density; in red, periods in which neuroplastin did not promote protrusion density. Data from 3 independent cultures. (F,G) Np65 requires its TRAF6 binding motif to foster the formation of dendritic protrusions. (F) Dendritic segments of 8 DIV rat neurons expressing the indicated proteins upon transfection are shown. (G) Protrusion densities from three independent cultures are expressed as mean ± SEM (GFP = 1.72 ± 0.15, n = 52; Np65-GFP = 3.63 ± 0.11, n = 43; Np65Δ-GFP = 1.64 ± 0.16, n = 28). ***p < 0.001 vs. GFP using Student's t-test.
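The per-protrusion co-localization scoring used here (Pearson's coefficient between the two channels, binned into high/medium/low) can be expressed compactly in Python. The two-channel arrays below are simulated stand-ins for the deconvolved single z-plane ROIs:

```python
# Per-protrusion co-localization: Pearson's r between the neuroplastin and
# TRAF6 channel intensities inside an ROI, binned as in Figures 4B,D.
import numpy as np

def pearson_score(ch_np, ch_traf6):
    r = np.corrcoef(ch_np.ravel(), ch_traf6.ravel())[0, 1]
    if r >= 0.5:
        return r, "high"
    if r >= 0.3:
        return r, "medium"
    return r, "low/none"

rng = np.random.default_rng(1)
base = rng.random((15, 15))                   # shared underlying structure
np_ch = base + 0.2 * rng.random((15, 15))     # neuroplastin channel
traf6_ch = base + 0.8 * rng.random((15, 15))  # TRAF6 channel (noisier)

r, label = pearson_score(np_ch, traf6_ch)
print(f"Pearson r = {r:.2f} -> {label}")
```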
TRAF6 Confers the Spinogenesis-Promoting Capacity to Neuroplastin
We evaluated whether TRAF6 is essential for neuroplastin to raise the density of dendritic protrusions during the defined critical time period of neuronal development. Consistently, the protrusion density in Nptn−/− dendrites expressing Np65-GFP was higher than in control Nptn−/− dendrites expressing GFP at 9 DIV (Figures 5A,B). Np65Δ-GFP failed to rescue the dendritic protrusion density in Nptn−/− neurons (Figures 5A,B). We also confirmed the specificity of the TRAF6-neuroplastin interaction in increasing the density of dendritic protrusions in rat neurons co-transfected with TRAF6-specific siRNA (characterized in Supplementary Figures S2E,F) and with GFP, Np65-GFP or Np65Δ-GFP at 6 DIV. When TRAF6 levels were knocked down by 60% or more at 9 DIV, the dendritic protrusion density was reduced in GFP-, Np65-GFP-, and Np65Δ-GFP-expressing neurons (Figures 5C,D). Additionally, we evaluated whether Np65Δ-GFP affects the number of protrusions and interferes with the normal enrichment of Shank2 in dendritic protrusions in rat neurons at 9 DIV. The density of dendritic protrusions and the distribution of Shank2-positive vs. Shank2-negative protrusions were similar between GFP- and Np65Δ-GFP-expressing rat neurons at 9 DIV (Figures 5E-G). These data show that, in contrast to Np65-GFP, Np65Δ-GFP neither rescued impaired spinogenesis in Nptn−/− neurons nor increased the number of dendritic protrusions in rat neurons. Independently of the rodent model from which the neurons were derived, Np65 depends on its TRAF6 motif and on TRAF6 expression to increase the density of spinogenic protrusions in hippocampal neurons.
To corroborate this, we applied the small molecule inhibitor 6860766 (SMI TRAF6), which reversibly binds the TRAF-C domain of TRAF6, blocking its capacity to interact with its binding partners (Chatzigeorgiou et al., 2014; van den Berg et al., 2015), to rat hippocampal neurons at 9 DIV. SMI TRAF6 (2 µM) reduced the density of protrusions in GFP- and in Np65-GFP-expressing neurons compared to vehicle treatment (0.01% DMSO) (Figures 5H,I). Treatment with SMI TRAF6 decreased the fraction of Shank2-positive protrusions in Np65-GFP-expressing neurons slightly but significantly (Figure 5J). SMI TRAF6 also decreased the area, but not the intensity, of Shank2 clusters, and it reduced the number of Shank2 clusters per protrusion in Np65-GFP-expressing neurons to the level of controls (Figures 5K-M). Moreover, the SMI TRAF6 treatment showed that the size of Shank2 clusters depends on TRAF6 (Figure 5K). Thus, neuroplastin strictly requires its TRAF6 binding motif and TRAF6 expression to increase dendritic protrusion density in hippocampal neurons. Deficiency of either prerequisite abrogates the spinogenic capacity of neuroplastin (Figure 5N).

Neuroplastin interacts through its transmembrane domain (Schmidt et al., 2017; Gong et al., 2018) with all four plasma membrane Ca2+ ATPases (PMCA1-4) in mature neurons and immune cells (Korthals et al., 2017). Thus, we addressed the question of whether neuroplastin requires PMCA to promote dendritic protrusion density. Consistent with our previous report (Herrera-Molina et al., 2017), Np65-GFP and Np65Δ-GFP were similarly effective in increasing protein levels of PMCA2 compared to GFP when co-transfected in HEK cells (Supplementary Figures S3A,B). In rat hippocampal neurons at 9 DIV, confocal microscopy revealed that Np65-GFP and Np65Δ-GFP were effective in increasing endogenous PMCA protein levels (Supplementary Figures S3C,D). Although PMCA inhibition seemed to slightly enlarge protrusions, the density of protrusions was affected neither in GFP-filled (Supplementary Figures S3E,F) nor in Np65-GFP-expressing neurons at 9 DIV (not shown). Thus, the spinogenic function of neuroplastin is not critically dependent on PMCA levels or activity.
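The Shank2 cluster readouts reported above (clusters per protrusion, cluster area, mean intensity) are typically obtained by thresholding and connected-component labeling. The following Python sketch illustrates that idea on a synthetic image; the threshold and pixel size are assumptions, and this is not the authors' actual pipeline:

```python
# Count Shank2-like clusters in a protrusion ROI and measure their area and
# mean intensity via thresholding + connected-component labeling.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.random((40, 40)) * 0.3          # background
img[5:10, 5:10] += 0.8                    # synthetic cluster 1
img[25:32, 20:26] += 0.6                  # synthetic cluster 2

mask = img > 0.5                          # illustrative fixed threshold
labels, n_clusters = ndimage.label(mask)  # connected components

pixel_area_um2 = 0.01                     # hypothetical pixel size (0.1 µm)^2
for k in range(1, n_clusters + 1):
    sel = labels == k
    print(f"cluster {k}: area = {sel.sum() * pixel_area_um2:.2f} um^2, "
          f"mean intensity = {img[sel].mean():.2f}")
print(f"clusters per protrusion ROI: {n_clusters}")
```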
TRAF6 Effect on Synaptogenesis Impacts Neuronal Activity
To evaluate the long-term implications of TRAF6 blockage during the critical time window in which neuroplastin requires this factor to foster spinogenesis (Figure 4), we treated young rat hippocampal neurons with SMI TRAF6 (2 µM) or with vehicle (0.01% DMSO) during various time periods and then analyzed the number of excitatory synapses (homer-positive puncta matching synapsin-positive puncta; Herrera-Molina et al., 2014) per 10 µm of dendrite (Figures 6A,B). Treatment with SMI TRAF6 from 7 to 9 DIV was sufficient to significantly reduce the number of excitatory synapses at 12 DIV. In contrast, neurons treated with SMI TRAF6 from 10 to 12 DIV displayed a similar number of synapses as vehicle-treated neurons at 12 DIV (Figures 6A,B). These data confirm that TRAF6 plays a critical developmental role in the formation of ∼25% of hippocampal excitatory synapses in vitro.
TRAF6 blockage only slightly affected some characteristics of the excitatory synapses formed in the absence of TRAF6 function. Evaluation of the area and fluorescence intensity of homer- and synapsin-positive puncta showed that treatment with SMI TRAF6 from 7 to 9 DIV, but not from 10 to 12 DIV, resulted only in a minor change in the area of postsynaptic homer-positive puncta (Supplementary Figures S4A,B). On the other hand, the area and fluorescence intensity of presynaptic synapsin-positive puncta were unaltered in all cases (Supplementary Figures S4A,B), indicating that synapses formed in the presence of SMI TRAF6 display an almost normal expression and distribution of the synaptic markers. Then, we tested whether the activity of the formed synapses is altered by TRAF6 blockade. Presynaptic uptake of synaptotagmin-1 antibody (reporting vesicle release and recycling driven by intrinsic network activity) showed slightly decreased activity in mature excitatory (VGAT-negative) and inhibitory (VGAT-positive) presynapses after treatment with SMI TRAF6 from 7 to 9 DIV, but not from 10 to 12 DIV (Supplementary Figures S4C,D). To interpret the physiological significance of these results, we calculated the area of vesicular release (mean area of puncta) and the activity level (mean intensity per pixel) for each presynapse type. From these data (Supplementary Figure S4E), we conclude that inhibitory synapses formed rather normally in the presence of SMI TRAF6 and that their decreased activity results from adaptation to the reduced formation of spinogenic dendritic protrusions (Figures 4, 5), which results in a lower density of excitatory synapses (Figures 6A,B).

FIGURE 6 | TRAF6 blockage during neuronal development reduces synapse formation, affecting neuronal activity. (A,B) Treatment with SMI TRAF6 reduces the number of excitatory synapses. (A) Representative confocal images of dendritic segments stained with antibodies against synaptic markers (red, postsynaptic Homer; cyan, presynaptic Synapsin-1) at 12 DIV. As indicated, rat hippocampal neurons were previously treated with SMI TRAF6 or with the solvent only for 48 h between days 7-9 or 10-12. Scale bar = 10 µm. (B) Quantification of the number of excitatory synapses per 10 µm of dendritic segment from N = 3 independent cultures (control = 7.36 ± 0.58, n = 19; 7-9 = 4.84 ± 0.35, n = 19; 10-12 = 7.36 ± 0.45, n = 9). **p < 0.01 vs. control and ###p < 0.001 vs. 10-12 using Student's t-test.
To confirm that reduced synaptogenesis caused by TRAF6 blockage impacts the synaptic transmission of matured neurons, primary hippocampal neurons were treated with SMI TRAF6 or vehicle from 6 to 9 DIV, allowed to mature, and then recorded at 18-23 DIV using the patch-clamp technique to measure miniature excitatory postsynaptic currents (mEPSCs) in the presence of 1 µM TTX (Figure 6C). In SMI TRAF6-treated neurons, both the amplitude and the decay time of mEPSCs were altered, whereas the rise time remained practically unchanged compared to vehicle-treated neurons (Figure 6D), indicating physiological alterations at the postsynaptic level. To further confirm the functional relevance of our findings for neuronal physiology, we evaluated the effect of TRAF6 blockade on the network-driven activity of hippocampal neurons cultured on multi-electrode arrays (Figure 6E). Consistent with the cell biological and electrophysiological evidence described above, neurons treated with SMI TRAF6 from 6 to 9 DIV displayed lower numbers of extracellular spikes at 12 and 18 DIV compared to neurons in control arrays (Figure 6F). This long-lasting impact on neuronal activity highlights the relevance of TRAF6 signaling in spinogenesis during a particular time window of neuronal development, i.e., 6 to 9 DIV.
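As a hedged illustration of how the mEPSC decay time compared above can be extracted, one common approach (assumed here, not necessarily the authors' procedure) is to fit a single exponential to the post-peak phase of each detected event:

```python
# Fit a single exponential I(t) = A * exp(-t / tau) to the decay phase of a
# detected mEPSC to estimate its decay time constant (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amp, tau):
    return amp * np.exp(-t / tau)

fs = 10_000.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 0.03, 1 / fs)                 # 30 ms window after the event peak
true_amp, true_tau = -25.0, 0.004              # -25 pA, 4 ms (synthetic event)
noise = np.random.default_rng(3).normal(0, 0.5, t.size)
trace = exp_decay(t, true_amp, true_tau) + noise

(amp, tau), _ = curve_fit(exp_decay, t, trace, p0=(trace[0], 0.002))
print(f"amplitude = {amp:.1f} pA, decay tau = {tau * 1e3:.2f} ms")
```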
DISCUSSION
Our study addresses the question of how timely orchestrated signaling mechanisms allow neurons to form the synapses through which they communicate with each other. Here, we identified a specific signaling mechanism that, during a critical time window of neuronal development in primary neuronal cultures, regulates the capacity of neurons to form an adequate density of excitatory synapses. In particular, our findings not only uncover a novel function for TRAF6 in neuronal development but also link it to neuroplastin, a molecule shown to be relevant in vivo for defining the number of excitatory synapses and for balancing excitation and inhibition in the brain.
TRAF6-Neuroplastin Binding and Spinogenic Cell Signaling
An important finding is that neuroplastin harbors a single intracellular motif to bind TRAF6. The intracellular sequence RKRPDEVPD of neuroplastin fulfilled the structural and three-dimensional criteria, as well as the binding affinity, of a proper TRAF6 binding motif (Sorrentino et al., 2008; Yin et al., 2009). TRAF6 was only effectively co-precipitated by neuroplastin with an intact TRAF6 binding motif, regardless of the presence or absence of the mini-exon-encoded DDEP insert. Not surprisingly (Schultheiss et al., 2001; Yin et al., 2009; Ferrao et al., 2012; Wu, 2013), endogenous TRAF6 and GFP-tagged TRAF6 were recruited into regularly spaced, cell membrane-associated puncta by neuroplastin only when the TRAF6 binding motif was intact. Elimination of the lattice-forming RING domain did not prevent TRAF6 recruitment by neuroplastin but abrogated the capacity of the transmembrane glycoprotein to promote the formation of filopodial structures. After translocation from the cytosol, TRAF6 forms micrometric, geometrically organized lattice-like supramolecular structures that host downstream cell signaling elements beneath the cell membrane (Schultheiss et al., 2001; Yin et al., 2009; Ferrao et al., 2012; Wu, 2013). Thus, it is realistic to conclude that upon TRAF6 binding and higher-order oligomerization of the factor, neuroplastin might become part of such supramolecular complexes to initiate downstream cell signaling events. Despite their morphological similarities and their shared general purpose of sensing the environment and facilitating cell-to-cell contact, HEK cell filopodia and neuronal dendritic protrusions serve different specialized functions. While HEK cell filopodia represent more temporary structures engaged also in cell spreading, dendritic protrusions can become highly specialized structures, as they are formed and filled with neuron-specific, membrane-associated, and cytosolic proteins which interact with other partners to organize the molecular machinery of the mature spine. As the extracellular engagement of neuroplastin in neurons activates p38 MAPK (Empson et al., 2006), ERK1/2 and PI3 kinase (Owczarek et al., 2010, 2011), these signaling pathways could also be related to the homophilic trans-synaptic engagement of Np65 that promotes stabilization of the actin cytoskeleton and/or maturation of Shank2-containing protrusions (Boeckers et al., 1999; Sarowar and Grabrucker, 2016). Additionally, the literature recognizes TRAF6 as a main upstream activator of the transcription factor NFκB pathway (Darnay et al., 1999; Xie, 2013). In young neurons, NFκB activity is not changed by neuronal activity; however, it is necessary for the formation of excitatory synapses during neuronal development (Boersma et al., 2011). Also, the constitutively high NFκB activity of young neurons maintains glutamatergic synapse formation, contributing in turn to the establishment of the future synapse density of mature neurons (Boersma et al., 2011; Dresselhaus et al., 2018). Future experiments will have to test whether TRAF6 binding to neuroplastin activates NFκB, conferring gene expression regulation of synaptic proteins as part of the specialized program of neuronal development.
Although recent studies have identified neuroplastin as an essential subunit of all four plasma membrane Ca2+ ATPases (PMCA1-4) in mature neurons (Schmidt et al., 2017; Gong et al., 2018), we found that neither elimination of the TRAF6 binding motif of neuroplastin nor TRAF6 blockage affects the capacity of neuroplastin to interact with, or to promote the expression of, PMCA in young neurons or HEK cells (Supplementary Figure S3). These results argue against relating the TRAF6-neuroplastin spinogenic function to PMCA in young neurons. Also, PMCA immunoreactivity is rather low in P1-P14 postnatal brains (Kip et al., 2006; Schmidt et al., 2017) and mostly intracellular in young hippocampal neurons (Kip et al., 2006), indicating that PMCA function may not be prominent at early developmental stages of neurons. Furthermore, it has been shown that the formation of dendritic protrusions does not seem to be triggered by neuronal activity (Verhage et al., 2000; Sando et al., 2017; Sigler et al., 2017), global intracellular calcium transients (Lohmann et al., 2005; Lohmann and Bonhoeffer, 2008) or calcium-dependent signaling in young neurons (Zhang and Murphy, 2004). Certainly, in mature neurons, calcium-dependent signaling plays a critical role in the dynamics and morphology of synaptic spines, where one would expect a significant participation of neuroplastin-PMCA complexes.
TRAF6 Partners Neuroplastin During Synapse Formation
We discovered that neuroplastin and TRAF6 have a spinogenic function operating during a time window of the neuronal development of cultured hippocampal neurons, around 6-9 DIV, which is equivalent to the postnatal developmental state of the 2-3-week-old hippocampus in vivo (Dabrowski et al., 2003; Földy et al., 2016). TRAF6 co-localized with, and was strictly required by, neuroplastin to promote the number of spinogenic dendritic protrusions, a critical step in the formation of excitatory synapses. As demonstrated using pharmacological, knock-down, and co-localization approaches, TRAF6 operated to promote the density of postsynaptic protrusions during the same time period in which expression of either Np55 or Np65 was effective in rescuing the reduced number of dendritic protrusions in Nptn−/− neurons. This shows that the TRAF6-dependent mechanism is not essentially dependent on Np65, which, in contrast to Np55, can interact homophilically via its specific trans-adhesive extracellular Ig1 domain (Smalla et al., 2000). This does not rule out a later participation of TRAF6 in the trans-stabilization of pre- and post-synapses by extracellular engagement of Np65 (Smalla et al., 2000; Herrera-Molina et al., 2014). Indeed, constitutive elimination of Np65 specifically is not sufficient to alter synapse density but was rather reported to cause morphological alterations of hippocampal spines (Amuti et al., 2016). Important in the context of this study is the finding that constitutive elimination of all neuroplastin isoforms, i.e., the absence of both Np55 and Np65, reduces the density of excitatory synapses in the hippocampus, whereas synapse density is not altered in the hippocampus upon induced neuroplastin gene elimination in adult conditional mutant mice (Bhattacharya et al., 2017).
After this time window of development, neither TRAF6 nor neuroplastin promotes dendritic protrusion formation or synapse density. This could be explained by a switch of TRAF6 from binding neuroplastin during synaptogenesis in young neurons to binding PSD-95 in mature neurons to promote synaptic plasticity (Ma et al., 2017). Indeed, PSD-95 levels are lower in early synapses than in mature spines (Buckby et al., 2004; Sheng and Hoogenraad, 2007). In mature neurons, TRAF6 binds to PSD-95 and stabilizes the structure of mature synapses and synaptic plasticity (El-Husseini et al., 2000; Ma et al., 2017). The function of TRAF6 in synapse formation is also different from the one reported for the molecule in the embryonic brain, where homozygous deficiency of TRAF6 suppresses programmed cell death promoted via p75 neurotrophin receptors (Lomaga et al., 1999; Yeiser et al., 2004). Accordingly, our report unravels a new function of TRAF6, operating in a time window between its functions in the survival of embryonic neurons and in the plasticity of mature neurons. Admittedly, the novel spinogenic function of neuroplastin-TRAF6 was identified in primary neuronal cultures, which in general reproduce essential and critical molecular events related to CAMs and synapse formation and maturation (Henderson et al., 2001; Varoqueaux et al., 2006; Williams et al., 2011; Sarto-Jackson et al., 2012; Herrera-Molina et al., 2014; Jiang et al., 2017), and this finding needs to be critically evaluated during the major period of synaptogenesis in the hippocampus in vivo. However, the observation that lack of neuroplastin during development leads to a reduced number of excitatory synapses in the hippocampus, a phenotype that cannot be induced by switching off the Nptn gene at adult stages (Bhattacharya et al., 2017), can be taken as an indication that neuroplastin function is required during brain development to determine synapse numbers in this area. Clearly, verification of the involvement of TRAF6 in this process in vivo needs to be tackled in the future.
An interesting finding is that the TRAF6 binding motif is not present in other synaptogenic CAMs, suggesting that the recruitment of TRAF6 to neuroplastin is a very specific mechanism. CAMs have been proposed as key participants in the regulation of synapse formation and maturation (Henderson et al., 2001; Missler et al., 2003; Varoqueaux et al., 2006; Chubykin et al., 2007; Linhoff et al., 2009; Bozdagi et al., 2010; Robbins et al., 2010). Indeed, CAMs can form specific transmembrane complexes in cis that in turn recruit intracellular proteins and activate different spinogenic signaling mechanisms (Yoshihara et al., 2009; Cavallaro and Dejana, 2011; Jang et al., 2017). How can neuroplastin coordinate with other CAM-dependent mechanisms during synapse formation? A compelling study by Földy et al. (2016) used single-neuron mRNA sequencing and showed that neuroplastin is highly expressed in excitatory pyramidal neurons in the hippocampus at P7-P14, when massive synapse formation is ongoing (Földy et al., 2016). We interpret these high levels of neuroplastin relative to other CAMs during neuronal development as potentially required by excitatory neurons to initiate unique and/or distinctive spinogenic mechanisms that other CAMs do not provide. Our results fit with this idea and with the possibility that neuroplastin may engage cell-autonomously expressed molecular machineries to promote the formation of a specific group of synapses via regulation of TRAF6-dependent spinogenic signaling (Yoshihara et al., 2009; Jang et al., 2017; Jiang et al., 2017; Sudhof, 2017).
Another interesting finding is that the TRAF6-neuroplastin-dependent spinogenic mechanism induces the formation of only a fraction of hippocampal excitatory synapses. Here, we revealed that TRAF6 is critically necessary for the formation of some ∼20-25% of excitatory synapses, while other synapses, including already formed excitatory and inhibitory synapses, were unaffected. This was found to be important for neurons to develop proper excitatory synaptic transmission and neuronal activity. Coincidently, constitutive elimination of neuroplastin gene expression results in a similar reduction in the number of excitatory synapses, accompanied also by decreased synaptic transmission, in cultured hippocampal neurons. As neuroplastin depended completely on TRAF6 to promote spinogenesis, it is very likely that the two proteins promote the formation of a specific group of excitatory synapses in the hippocampal circuit. Currently, we do not know the specific nature of these particular synapses, but we suspect that they could be located on CA1 and/or DG pyramidal neurons, as identified in the neuroplastin-deficient hippocampus (Bhattacharya et al., 2017).
Could TRAF6 and Neuroplastin Be Players in Neurological Disorders With Altered Synapse Density?
It has been proposed in the field of schizophrenia research that alterations in the molecular mechanisms responsible for synapse architecture and/or density impact the pathogenesis of this disorder (Boda et al., 2010; Caldeira et al., 2019). As reduced synapse density (Caldeira et al., 2019) and increased TRAF6 mRNA expression were demonstrated in the hippocampus and striatum of schizophrenic patients (see footnote 2), it is tempting to speculate that altered timing or levels of TRAF6 expression could contribute to an impairment in synapse formation, a hypothesis that needs to be tested. Elucidation of this matter may also contribute to the understanding of the association of neuroplastin expression with schizophrenia risk (Saito et al., 2007, see footnote 2).
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
FUNDING
This study was supported by funding from the DFG GRK 1167 and the ABINEP graduate school funded by the federal state of Saxony-Anhalt and the European Structural and Investment Funds (ESF, 2014-2020), project number ZS/2016/08/80645, to EG, MN, and CS. RM received funding from FONDECYT Grant No. 1181260. MP was supported by the Leibniz Association (SheLi J28/2017). RH-M is an LSA fellow of the Center for Behavioral Brain Sciences (CBBS) and received DAAD grant no. 57514679. EG was supported by the DFG SFB 854, and CS and EG received funding from BMBF 01DN17002.
New Bioactive Peptides Identified from a Tilapia Byproduct Hydrolysate Exerting Effects on DPP-IV Activity and Intestinal Hormones Regulation after Canine Gastrointestinal Simulated Digestion
Like their owners, dogs and cats are increasingly affected by overweight- and obesity-related problems, and interest in functional pet foods is growing sharply. Numerous studies have shown the value of fish protein hydrolysates in preventing and managing obesity-related comorbidities such as diabetes. In this work, a human in vitro static simulated gastrointestinal digestion model was adapted to the dog, which allowed us to demonstrate the promising effects of a tilapia byproduct hydrolysate on the regulation of food intake and glucose metabolism. Promising effects on intestinal hormone secretion and dipeptidyl peptidase IV (DPP-IV) inhibitory activity were evidenced. We identified new bioactive peptides able to stimulate cholecystokinin (CCK) and glucagon-like peptide 1 (GLP-1) secretion, and to inhibit DPP-IV activity after a transport study through a Caco-2 cell monolayer.
Introduction
The world population is projected to rise by 2 billion in the next 30 years, reaching 9.7 billion in 2050 [1]. This growth implies an increase in food consumption, and the protein demand will grow significantly as a result of socio-economic changes, increased urbanization, rising incomes, and the recognition of the role of protein in healthy diets. Dietary protein production exerts a high environmental impact, particularly for animal-derived protein, which causes high greenhouse gas emissions, land-use changes linked to an important loss of terrestrial biodiversity, and a high water demand [2]. In this context, there is an important need to valorize and better characterize dietary protein-derived byproducts to optimize their use and answer the worldwide growing protein demand. For instance, in 2018, about 25% of the 178 million tons of global fish and shellfish production were lost or wasted [3]. Valorization processes for fish high-quality protein byproducts will partially address these issues by offering a renewable alternative, whilst creating added value in numerous domains such as functional food or the pet food industry. The global pet food market was valued at USD 103.5 billion in 2016, of which the segment of healthcare and nutritional supplements accounted for 5%. In parallel, overweight, obesity, and their associated chronic diseases such as type 2 diabetes mellitus (T2DM) are growing at a worrying rate around the globe. In 2016, 650 million adults were obese, amongst 1.9 billion overweight persons. In 2019, the number of people with T2DM was estimated at 417 million, and the projection for 2045 is about 630 million [4].
Like humans, companion dogs and cats are affected by overweight and obesity comorbidities such as diabetes and cancers, leading to impaired health and a reduced life span. Depending on breeds and the methodology used to evaluate health status, overweight and obesity prevalence has been estimated between 19.7% and 59.3% in dogs and between 7% and 63% in cats. This situation is mainly due to an excessive food offer and the related calorie intake, as pet owners do not follow nutritional guidance, and to a loss of physical exercise, leading not only to the overweight-derived problems mentioned above but also to skin disorders and respiratory and locomotor diseases [5,6].
Dietary protein digestion releases peptides and free amino acids, which regulate short-term food intake. Protein-digested products stimulate the secretion of satiety signals via the "intestinal sensing" phenomenon, a nutrient recognition on the apical side of the enteroendocrine cells (EECs) [7,8]. Nevertheless, the mechanisms that lead to gut hormone secretion by EECs after peptide and amino acid intestinal sensing remain unclear [9]. The two well-known intestinal anorexigenic hormones, cholecystokinin (CCK) and glucagon-like peptide-1 (GLP-1), exert their satiating effect via different pathways. GLP-1 also plays a significant role in glucose metabolism by regulating blood glucose via its incretin action [10]. After its secretion by EECs following a meal, the circulating level of GLP-1 increases but has a short half-life because it is inactivated by the dipeptidyl peptidase 4 enzyme (DPP-IV), a serine protease present in a soluble form (plasma, urine, amniotic fluid) as well as in a membranous form in a wide range of cell types and organs such as the intestine, kidney, or liver [11]. Hence, GLP-1 agonists and DPP-IV inhibitors have been targeted to treat the insulin resistance occurring in T2DM [12]. These past few years, numerous dietary protein-derived peptides have been identified as DPP-IV inhibitors. Although less potent than drugs, they have attracted rising interest as a natural alternative to chemical DPP-IV inhibitors, which harbor important side effects [13].
Fish protein- and hydrolysate-derived bioactive peptides have been identified to exert many in vitro and in vivo bioactivities, suggesting promising health benefits via several pathways involved in hypertension, obesity, inflammation, or the regulation of glucose homeostasis in metabolic disorders [14]. Indeed, numerous studies have shown, in vitro as well as in vivo, the beneficial effects of fish hydrolysates on food intake regulation through the stimulation of gut hormone secretion, in particular CCK and GLP-1 [15-18]. Moreover, fish hydrolysates could improve glucose homeostasis by increasing plasma GLP-1 and gastric inhibitory peptide, also known as glucose-dependent insulinotropic polypeptide (GIP), increasing insulin secretion and lowering blood glucose [19,20], but also by reducing DPP-IV activity in vitro and in vivo [21-23].
The Nile tilapia (Oreochromis niloticus) is the third most produced fish species in the world, with a production of more than 4500 thousand tons, representing 8.3% of world aquaculture production in 2018. The growth of fish processing has resulted in increasing quantities of byproducts, which can represent more than 70 percent of the processed fish [3].
In this work, we investigated whether a tilapia fish byproduct protein hydrolysate (FBPH), compared to its raw material (FBP), both submitted to an in vitro simulated gastrointestinal digestion (SGID), could stimulate gut hormone secretion in EECs and inhibit intestinal DPP-IV activity. This work aimed at future pet food applications, so the consensual human INFOGEST SGID protocol was adapted to dog digestion [24].
Peptide Profile Modifications during SGID
To characterize and compare the impact of SGID on the elution profiles and apparent molecular weight (MW) distributions of the peptides, the FBPH and FBP gastric and intestinal digests were submitted to SEC-FPLC. The oral digest, both for FBPH and FBP, served as the reference peptide elution profile. The gastric and intestinal SGID phases did not extensively modify the shape of the peptide elution profiles of FBPH. In contrast, for FBP, significant modifications occurred during the SGID, as illustrated by the curve shift towards lower MW between the oral and the intestinal phases (Figure 1A). Besides, the MW distribution showed that the impact of the SGID was less significant for FBPH than for FBP. Thus, the proportions of high-MW peptides (above 3 kDa) in the oral, gastric and intestinal digests represented 72.8%, 66.6% and 48.1% for FBP and 52.4%, 50.9% and 45.4% for FBPH, respectively (Figure 1B). Moreover, in the intestinal phase, high-MW peptides (above 6 kDa) disappeared entirely for both FBP and FBPH digests. The same phenomenon was observed for small-MW peptides (below 1 kDa), whose proportion increased during the SGID to a lesser extent for FBPH than for FBP. Indeed, their proportions in the oral, gastric and intestinal digests were 27.5%, 27.5% and 32.0% for FBPH and 19.0%, 19.0% and 32.9% for FBP, respectively. Despite slight differences, at the end of the SGID, the MW distribution profiles of FBP and FBPH were similar.
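As an illustration of how such MW distribution percentages can be derived from a SEC-FPLC trace, the sketch below assumes the standard SEC calibration in which log10(MW) is linear in elution volume and computes UV-area fractions per MW bin; the calibration constants and the trace itself are invented:

```python
# Apparent MW distribution from a SEC trace: log10(MW) is assumed linear in
# elution volume (standard SEC calibration); percentages are UV-signal fractions.
import numpy as np

# Hypothetical calibration: log10(MW) = a + b * Ve (fit on MW standards).
a, b = 5.0, -0.15

ve = np.linspace(5, 25, 400)              # elution volume (mL)
mw = 10 ** (a + b * ve)                   # apparent MW at each point (Da)
uv = np.exp(-((ve - 14) ** 2) / 8)        # synthetic A214 trace

# MW bins roughly matching the classes discussed around Figure 1B.
for label, lo, hi in [(">3 kDa", 3000, np.inf),
                      ("1-3 kDa", 1000, 3000),
                      ("<1 kDa", 0, 1000)]:
    sel = (mw >= lo) & (mw < hi)
    print(f"{label}: {100 * uv[sel].sum() / uv.sum():.1f}% of UV signal")
```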
CCK and GLP-1 Secretion Induced by FBPH and FBP Digests
Exposure of STC-1 cells to increasing concentrations (2, 5 and 10 mg mL−1, w/v) of the oral and gastric digests of FBP and FBPH induced a dose-dependent increase in CCK release (Figure 2A). At the highest concentration tested (10 mg mL−1, w/v), FBPH led to a stronger stimulating effect, with 7.2 ± 0.6 and 7.8 ± 0.2-fold of control (FOC), while the amounts of CCK obtained after FBP contact were 4.2 ± 0.4 and 5.7 ± 0.3 FOC for the oral and gastric samples, respectively. Conversely, the intestinal digest of FBP highly stimulated the secretion of CCK (7.9 ± 0.7 FOC): the digestive process enhanced the ability of FBP to stimulate CCK secretion. For FBPH, in contrast, the intestinal phase of the SGID led to a slight diminution of CCK secretion in STC-1 cells at 10 mg mL−1 (w/v).
The effects of the oral and gastric digests of FBP and FBPH on the stimulation of GLP-1 secretion were equivalent (Figure 2B). Thus, only the gastric digests at the 10 mg mL−1 (w/v) dose induced a significant increase in GLP-1 secretion, with 14.6 ± 1.4 and 10.2 ± 2.3 FOC for FBP and FBPH, respectively. After the intestinal phase, the stimulating effect of the FBPH digest on GLP-1 secretion was highly enhanced and led to 20.6 ± 2.1, 43.3 ± 2.7 and 45 ± 3.0 FOC at 2, 5 and 10 mg mL−1 (w/v), respectively. The effect of the SGID intestinal phase on the ability of FBP to enhance the stimulation of GLP-1 secretion was weaker. Indeed, the results were significant only for the 5 and 10 mg mL−1 (w/v) doses, with recovered GLP-1 secretions of 9.3 ± 1.3 and 29.4 ± 6.0 FOC, respectively. Means without a common letter within the same graph are significantly different (p < 0.05) using one-way ANOVA followed by a Tukey post-hoc test for pairwise comparisons.
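The statistics quoted in these figure legends (fold of control, one-way ANOVA with Tukey's post-hoc test) can be reproduced with standard scientific Python tools; the replicate values below are invented for illustration:

```python
# Fold-of-control (FOC) = secretion in treated wells / mean of buffer control,
# compared across conditions with one-way ANOVA + Tukey HSD.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
control = rng.normal(1.0, 0.1, 6)      # buffer wells (hypothetical)
dose_2 = rng.normal(3.1, 0.3, 6)       # 2 mg/mL digest (hypothetical)
dose_10 = rng.normal(7.5, 0.6, 6)      # 10 mg/mL digest (hypothetical)

def foc(x):
    return x / control.mean()          # express everything as FOC

groups = [foc(control), foc(dose_2), foc(dose_10)]
F, p = f_oneway(*groups)
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.2e}")

values = np.concatenate(groups)
labels = np.repeat(["buffer", "2 mg/mL", "10 mg/mL"], 6)
print(pairwise_tukeyhsd(values, labels))  # pairwise post-hoc comparisons
```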
Intestinal DPP-IV Inhibitory Activity of FBPH and FBP Digests
Exposure of Caco-2 cells to increasing concentrations (from 0.5 to 1.98 mg mL−1, w/v) of the oral, gastric and intestinal digests of FBP and FBPH induced a dose-dependent inhibition of Caco-2 DPP-IV activity. The DPP-IV inhibitory potential of FBP increased through the different phases of the SGID. Indeed, the DPP-IV inhibitory activity observed with the FBP digests assayed at 1.98 mg mL−1 (w/v) was 1.9-fold higher for the intestinal digest than for the oral sample. Moreover, the calculated IC50 for the intestinal digest (IC50 = 3.70 mg mL−1) is about 23-fold lower than for the oral sample (IC50 = 86.08 mg mL−1) (Figure 3). For FBPH, the percentage of DPP-IV activity inhibition of the samples collected in the three SGID compartments reached approximately 80% at 1.98 mg mL−1 (w/v). The calculated IC50 values were very close for the oral, gastric and intestinal digests (insert of Figure 3), highlighting the small effect of the SGID on the DPP-IV inhibitory activity of FBPH. The results also showed that the intestinal digest of FBPH was much more potent than that of FBP. Thus, at a concentration of 1.98 mg mL−1 (w/v), the DPP-IV activity inhibition was about 2.3-fold higher for FBPH than for FBP, with a 5.5-fold lower calculated IC50 (Figure 3).

To identify active peptides able to stimulate the secretion of intestinal hormones, we first performed SEC fractionation of the FBPH intestinal digest (Figure 4A). Four fractions were recovered and put in contact with STC-1 cells at 5 mg mL−1 (w/v) for 2 h. The results showed that all of them were able to stimulate CCK secretion, with the F2 and F4 fractions displaying the highest potential, at 3.5- and 4.5-fold of the control CCK secretion level, respectively (Figure 4B). Regarding GLP-1, the F2, F3 and F4 fractions were able to stimulate its secretion in STC-1 cells. The F2 fraction exerted a broadly higher potential than the other fractions, with 37 FOC and 2.6-fold of the FBPH digest (Figure 4C).
The F2 fraction was thus selected for fractionation by RP-HPLC on a C18 column, and 7 subfractions were designed (Figure 5A). The FE subfraction presented the highest potential to stimulate both CCK (Figure 5B) and GLP-1 (Figure 5C) secretion in STC-1 cells compared with the other subfractions. Indeed, the CCK secretion stimulation by the FE subfraction was 31.9-, 8.4- and 7.2-fold higher than that obtained with the buffer, the F2 fraction and the FBPH intestinal digest, respectively. In the same way, the GLP-1 secretion stimulation by the FE subfraction was 32.0-, 2.5- and 17-fold higher than that obtained with the buffer, the F2 fraction and the FBPH intestinal digest, respectively.

Figure 4. The SEC fractionation of the FBPH intestinal digest was performed using a HiLoad 16/600 Superdex prep-grade column with an isocratic gradient of 30% acetonitrile, 0.1% TFA (A). The amounts of intestinal hormones in the supernatants, after 2 h of contact with the fractions or the FBPH digest (0.5% w/v), were determined by radioimmunoassay for CCK (B) and active GLP-1 (C). Values are the means of three repeated measurements and are expressed in fold of control (buffer) ± SD. Means without a common letter within the same graph are significantly different (p < 0.05) using one-way ANOVA followed by a Tukey post-hoc test for pairwise comparisons.
RP-HPLC-MS/MS Peptide Identification in the FE Subfraction
The FE subfraction was then subjected to RP-HPLC-MS/MS analysis to identify the peptides present in this fraction. Figure 6 shows the obtained 3D map of mass signals.
A total of 1739 peptide sequences were identified (database + de novo with ALC > 80%, data not shown). Among all the identified peptides, 20 were selected on the basis of (i) their presence in the most intense peaks of the UV chromatogram (λ = 214 nm), (ii) their ion intensity and (iii) their ion fragmentation quality (Table 1). These peptides were then chemically synthesized, and their ability to stimulate CCK and GLP-1 secretion in STC-1 cells was further assayed.

Figure 6. Grey signals represent all ions detected, blue squares represent peptides identified by database confrontation (false discovery rate (FDR) < 1%), and orange squares represent peptides sequenced in de novo mode (ALC score > 80%).

Table 1. List of the 20 peptides selected for chemical synthesis, following their identification by database confrontation or de novo sequencing. Peptides are listed according to their RP-UPLC-MS/MS characteristics (retention time (RT), mass-to-charge ratio (m/z)), with their identification score displayed (i) as the average local confidence (ALC) score for de novo sequencing or (ii) as the −10 logP score for database confrontation, with the mass error (in ppm) for both identification modes (ID). nd: the identification of the parent protein was not possible.

The synthetized peptides were put in contact with STC-1 cells for 2 h at a final concentration of 1 mM, and the amounts of secreted CCK and GLP-1 were further determined by radioimmunoassay. As shown in Figure 7A, among the 20 peptides assayed, only two of them, DLVDK and PSLVH, were able to significantly stimulate CCK secretion (p < 0.0001). Regarding GLP-1, only the LKPT peptide was able to significantly stimulate active GLP-1 release (p < 0.001) (Figure 7B). Values shown are means of three repeated measurements, expressed as fold of control (buffer) ± SD; means were compared to the control mean using one-way ANOVA followed by a Dunnett post-hoc test (**** p < 0.0001; *** p < 0.001).
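The selection logic described above (database hits retained at FDR < 1%, de novo hits at ALC > 80%, candidates then ranked by ion intensity) is straightforward to express in code. The records below are hypothetical examples, not the actual 1739 identifications:

```python
# Filter MS/MS peptide identifications: database hits pass an FDR cut-off,
# de novo hits pass an ALC cut-off; survivors are ranked by ion intensity.
records = [  # hypothetical identifications for illustration
    {"seq": "DLVDK",  "mode": "de novo",  "alc": 93,  "intensity": 8.1e6},
    {"seq": "PSLVH",  "mode": "de novo",  "alc": 88,  "intensity": 5.4e6},
    {"seq": "LKPT",   "mode": "database", "fdr": 0.4, "intensity": 6.9e6},
    {"seq": "AAAAAA", "mode": "de novo",  "alc": 62,  "intensity": 9.9e6},  # fails ALC
]

def keep(rec):
    if rec["mode"] == "de novo":
        return rec["alc"] > 80      # ALC > 80% for de novo sequencing
    return rec["fdr"] < 1.0         # FDR < 1% for database confrontation

selected = sorted((r for r in records if keep(r)),
                  key=lambda r: r["intensity"], reverse=True)
for r in selected:
    print(r["seq"], f"{r['intensity']:.1e}")
```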
Identification of Peptides on the Basolateral Side of the Intestinal Barrier Able to Inhibit DPP-IV Activity In Vitro and In Situ
After 2 h of contact of the FBPH intestinal digest with the apical side of the Caco-2 cell monolayer in vitro IB model, 17 peptide sequences were identified by RP-UPLC-MS/MS on the basolateral side. Among these peptides, 13 were chemically synthesized, based on their presence in the most intense peaks of the UV chromatogram monitored at a wavelength of 214 nm, and their DPP-IV inhibitory activity was assayed in vitro and in situ (Table 2). The results showed that five peptides (GPFPLLV, VAPEEHPT, VADTMEVV, DPLV and FAMD) were able to inhibit DPP-IV activity in vitro, with IC50 values ranging from 263 to 775 µM. Seven peptides (GPFPLLV, MDLP, DLDL, FAMD, VADTMEVV, CSSGGY and VAPEEHPT) were able to inhibit the in situ DPP-IV activity, with IC50 values ranging from 456 to 2268 µM. Four peptides (GPFPLLV, VAPEEHPT, VADTMEVV and FAMD) were able to inhibit DPP-IV activity both in vitro and in situ.

Table 2. In vitro and in situ DPP-IV inhibitory activity of the 13 selected chemically synthesized peptides, following their identification by database confrontation or de novo sequencing (ID mode). Peptides are listed according to their RP-UPLC-MS/MS characteristics (retention time (RT) and mass-to-charge ratio (m/z)), with their identification score displayed (i) as the average local confidence (ALC) score for de novo sequencing or (ii) as the −10 logP score for database confrontation, with the mass error (in ppm) for both identification modes. Values of the in vitro and in situ DPP-IV inhibitory activity (IC50) were determined by linear regression correlating the DPP-IV activity inhibition percentage and the Ln of the peptide concentration. nd: the identification of the parent protein was not possible, or the IC50 value was above 2500 µM or undeterminable.
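Since Table 2 states that the IC50 values were obtained by linear regression of the inhibition percentage against the natural logarithm of peptide concentration, the calculation can be sketched as follows; the dose-response points are invented:

```python
# IC50 from the linear fit: %inhibition = slope * ln(C) + intercept,
# solved at 50% inhibition: IC50 = exp((50 - intercept) / slope).
import numpy as np
from scipy.stats import linregress

conc_uM = np.array([50, 100, 250, 500, 1000, 2000], dtype=float)
inhibition = np.array([12, 22, 35, 48, 61, 74])  # %, hypothetical points

fit = linregress(np.log(conc_uM), inhibition)
ic50 = np.exp((50 - fit.intercept) / fit.slope)
print(f"slope = {fit.slope:.1f}, R^2 = {fit.rvalue ** 2:.3f}, IC50 = {ic50:.0f} uM")
```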
Discussion
The first goal of this work was to study and compare the effects of dog gastrointestinal digestion of a tilapia byproduct protein hydrolysate and its raw material on in vitro cellular markers related to food intake and glucose homeostasis. Consequently, we first developed a static in vitro simulated dog gastrointestinal digestion based on the consensual INFOGEST protocol, on a protocol previously developed to study protein digestion, and on previous works performed to investigate drug behavior [25,26] and nutrient digestibility [27] in dogs. As expected, the digestive enzymes (pepsin followed by pancreatin) exerted a more significant impact on the FBP peptide profiles than on those of FBPH, because industrial enzymes had previously digested the raw material. Although the peptide profiles and the apparent MW distributions of FBPH and FBP were quite similar at the end of the SGID, the results obtained on intestinal bioactivities highlighted the added benefit of the raw material pre-hydrolysis. Indeed, the FBPH intestinal digest led to a better stimulation of active GLP-1 secretion (44.9 against 29.4 FOC) and a better inhibition of the in situ Caco-2 DPP-IV activity (5.5-fold lower IC50 value). Previous results obtained after the SGID of cuttlefish viscera byproduct hydrolysates and their raw material had already shown the added value of pre-hydrolysis on the DPP-IV inhibitory activity and GLP-1 secretion stimulation of the recovered intestinal digest [17]. However, this was not the case for CCK secretion stimulation, for which the FBPH intestinal digest was slightly less potent than the FBP one (6.2 against 7.9 FOC). The results also highlighted the crucial role of pancreatic enzymes in the appearance of protein-derived peptide bioactivities related to food intake and glucose metabolism regulation. This corroborates results obtained in previous works dealing with the SGID of bovine hemoglobin and sepia byproducts regarding CCK and GLP-1 secretion in STC-1 cells and DPP-IV activity inhibition [17,28]. In the same way, previous works showed that intestinal digests of casein or bovine hemoglobin induced a higher stimulation of GLP-1 secretion than the hydrolysates before SGID [29]. In contrast, hydrolysates may also lose their bioactivities during gastrointestinal digestion, as evidenced for a salmon skin gelatin hydrolysate which lost its GLP-1 stimulatory activity and had a significantly lower DPP-IV inhibitory activity after the SGID [30].
The FBPH intestinal digest exerted a DPP-IV inhibitory potential characterized by an IC50 value equal to 1.52 mg·mL−1 when obtained with the in vitro biochemical test. This is in line with numerous studies showing IC50 values for marine byproduct hydrolysates ranging from 1 to 5 mg·mL−1 [17,19,22,31,32], and even less than 1 mg·mL−1, as for Gadus chalcogrammus gelatin [33] and Salmo salar hydrolysates [30]. Here, we also used an in situ DPP-IV activity test with live Caco-2 cells, which mimics the intestinal environment and, in particular, the enzymatic action of the peptidases produced by the epithelial cells of the intestinal brush border [34]. The IC50 value obtained for the FBPH intestinal digest was 0.67 mg·mL−1. This value is obviously not comparable with those obtained with the classical in vitro test. Nevertheless, this DPP-IV inhibitory activity appears very promising when compared with the IC50 (1.57 mg·mL−1) obtained for an intestinal cuttlefish byproduct hydrolysate digest in a previous work using the same in situ Caco-2 test [34].
The in vitro results obtained here are consistent with those previously obtained in vivo with other fish protein hydrolysates on food intake and glycemic management in healthy mice [20] and rats [16], in high-fat-diet-induced obese mice [35], and in diabetic and obese rats, as well as in several clinical studies [18,21,33,36-38].
To identify, from the FBPH intestinal digest, active peptides able to stimulate intestinal hormone secretion by EECs, a methodology built on two successive separation techniques was adopted, as previously described by Caron et al. [23,25]. Using a first SEC purification step, the F2 fraction, composed mostly of peptides with apparent MW ranging from 400 to 1000 Da, was selected on the basis of its intestinal hormone-releasing activity and submitted to RP-HPLC. The FE subfraction obtained after this RP-HPLC separation unambiguously exerted the best release-stimulating effect for both intestinal hormones.
Among the 1739 peptide sequences identified by bioinformatic processing of the mass data, 20 were selected (based on their presence in the most intense peaks of the UV chromatogram (λ = 214 nm), their ion intensity and their fragmentation quality), chemically synthesized and assayed for their capacity to stimulate CCK and GLP-1 release. The results allowed us to identify two new peptides, PSLVH and DLVDK, able to enhance CCK release by EECs, and one tetrapeptide, LKPT, able to stimulate GLP-1 release. Today, few peptide sequences are reported in the literature to stimulate CCK and GLP-1 secretion by EECs, and the relationship between the CCK- and GLP-1-releasing bioactivity of food-derived peptides and their structure and amino acid sequence is not well established [9,38]. Nevertheless, different signaling pathways, involving G protein-coupled receptors (GPCRs) like GPR93, GPRC6A and the calcium-sensing receptor (CaSR), but also the cotransporter PepT-1, have been evidenced in the intestinal sensing of food protein-derived peptides leading to CCK and GLP-1 secretion [39-41]. CCK-releasing food-derived peptides were identified from soybean β-conglycinin, bovine hemoglobin, lactoglobulin, bovine whey, casein, and egg white protein [42-48]. To our knowledge, this is the first time they have been identified from a fish source. The motif and the structure of the peptides appear crucial in the intestinal sensing leading to CCK secretion. CaSR was described to sense the aromatic amino acids W and F [49,50], and the presence of aromatic residues in the peptide sequence seems to favor the bioactivity. In previous works, we evidenced two CCK-releasing fractions of a bovine hemoglobin SGID intestinal digest. They were able to highly stimulate CCK secretion and were composed of more than 50% of peptides containing at least one aromatic amino acid residue in their sequence. The four hemorphins (LLVVYPWT, LVVYPWT, VVYPWT and VVYPWTQRF), released during bovine hemoglobin digestion, were synthesized and proved to be CCK and GLP-1 secretion-stimulating peptides [42,44]. However, in the present work, the two identified CCK release-stimulating peptides, PSLVH and DLVDK, do not possess aromatic residues, whereas DVSGGYDE did not stimulate CCK secretion. These two active peptides are both composed of five amino acid residues, some of which have aliphatic chains. These findings are in accordance with a preceding work which hypothesized that five amino acid residues are the minimal size and that the presence of aliphatic chains could be crucial for CCK secretion in STC-1 cells [47]. Accordingly, in a recent work, two peptides able to stimulate CCK secretion in STC-1 cells, VLLPDEVSGL and VLLPD, were identified from an egg white SGID intestinal digest. Neither contains aromatic amino acid residues, but both are characterized by a high proportion of aliphatic ones [48].
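Because the structure-activity discussion here turns on aromatic versus aliphatic residue content, a small helper computing those composition features for the peptides named in this work may be useful; the residue groupings are the standard ones, and only the activity labels come from this study:

```python
# Composition features used in the structure-activity discussion:
# presence of aromatic (F, W, Y) and fraction of aliphatic (A, V, L, I) residues.
AROMATIC, ALIPHATIC = set("FWY"), set("AVLI")

def features(peptide):
    return {
        "length": len(peptide),
        "has_aromatic": any(r in AROMATIC for r in peptide),
        "aliphatic_frac": sum(r in ALIPHATIC for r in peptide) / len(peptide),
    }

# CCK-releasing (PSLVH, DLVDK) vs. inactive (DVSGGYDE) peptides from this work.
for pep in ["PSLVH", "DLVDK", "DVSGGYDE"]:
    print(pep, features(pep))
```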
As for CCK, only a few food-derived peptides able to stimulate the secretion of GLP-1 have been identified. We previously identified four sequences (KAAVT, TKAVEH, ANVST and YGAE) from a bovine hemoglobin intestinal digest and proposed that the presence of a basic amino acid residue (L-lysine) on the N-terminal side of the peptide, as well as the presence of a T residue at the C- or N-terminus, are common features that could be implicated in the peptide sensing leading to GLP-1 secretion [42]. LKPT, evidenced in the present work, also possesses a lysine residue in the N-terminal position, as does the minimal sequence from α-actinin-2 (KPYIL) able to stimulate GLP-1 secretion in murine GLUTag cells. However, the position of the K residue in the sequence does not seem crucial for the bioactivity, as ASDKPYIL is also active [51]. The RVASMASEKM peptide, recently identified from an egg white protein digest as a GLP-1 secretagogue, also possesses a K residue, but in the C-terminal position [48]. Nevertheless, the results obtained here showed that ELLK and EAPLNPK did not lead to GLP-1 secretion, and other works identified peptides able to stimulate GLP-1 secretion without K or T residues in their sequences, such as GGGG, AAAA, GWGG [52], GPVRGPFPIIV [53], LGG and GF [54], and PFL [48].
Taken together, these findings confirm the presence of multiple pathways involved in the intestinal peptide sensing leading to CCK and GLP-1 secretion by EECs. Identifying which pathway is used by each peptide will be necessary to elucidate the relationships between the physicochemical properties, the structure and the sequence of a peptide and its secretagogue activity.
Among the 13 synthesized peptides identified on the basolateral side of the intestinal barrier model, 5 exerted an in vitro DPP-IV inhibitory activity, with IC 50 values ranging from 263 to 775 µM (Table 2). These peptides are promising compared with the food-derived DPP-IV inhibitory peptides identified between 2016 and 2018, as recently reviewed by Liu et al. [55]. Indeed, when we analyze the 74 peptides from the Liu et al. list, identified and characterized by IC 50 values ranging from 43 to 2000 µM, the IC 50 mean and median values are 596 and 226 µM, respectively. The large majority of the studies which identified DPP-IV inhibitory dietary protein-derived peptides used controlled in vitro methods to calculate IC 50 values and did not assay the ability of the peptides to cross the intestinal barrier. In the recent work of Harnedy et al., the authors evaluated the DPP-IV inhibitory potential of peptides identified from two RP-HPLC fractions of a boarfish hydrolysate submitted to SGID. Several peptides were then synthesized and assayed for their in vitro DPP-IV inhibitory activity. The most promising peptides (IC 50 < 200 µM) were further assayed for their ability to inhibit human DPP-IV activity in situ using cultured Caco-2 cells, in order to better mimic intestinal physiological conditions. They identified 18 peptides with in situ IC 50 values ranging from 44 to 307 µM [32]. Despite these very interesting findings, the ability of the identified peptides to cross the IB still needs to be studied. Indeed, several studies showed that many DPP-IV inhibitory peptides identified in the intestinal tract cannot cross the IB without being cleaved and losing their bioactivities, while other studies showed that certain DPP-IV inhibitory peptides are able to cross the IB in vitro. Domenger et al. showed that among five DPP-IV inhibitory peptides identified in a bovine hemoglobin intestinal digest, only three were recovered intact after passage through a Caco-2 cell monolayer [44,56]. Lacroix et al. also evidenced the susceptibility of certain milk protein-derived DPP-IV inhibitory peptides to cleavage by brush border peptidases [57]. Indeed, differentiated Caco-2 cells express mainly two peptidases, DPP-IV and transmembrane protease serine 4 (TMPRSS4), which have been shown to hydrolyze peptides during passage through the simulated IB [58].
In the present study, we adopted the strategy of first incubating the whole digested hydrolysate on the apical side of the IB model for 2 h, in order to then identify the peptides in the basolateral compartment. This strategy also permitted us to mimic the interaction of the whole digest with the IB, which may modify its permeability. There is some evidence that food-derived peptides can alter intestinal barrier permeability via their actions on tight junction proteins [59,60]. Indeed, we evidenced in a previous work four hemorphins harboring DPP-IV inhibitory activity that significantly decreased the mRNA expression of claudin 4, a protein present in tight junctions and involved in paracellular permeability [56]. Finally, the eight new DPP-IV inhibitory peptides evidenced in this study might be able, in vivo, to reach the plasmatic compartment in sufficient concentration and to inhibit the circulating form of DPP-IV, enhancing the half-life of GLP-1 and therefore its incretin and satiating actions. Moreover, it is crucial to keep in mind that a substantial number of potentially bioactive peptides, in particular small ones, remain unidentifiable given the current state of peptidomics [61]. Further in vivo studies are needed to demonstrate the glucose and/or food intake regulatory effects of this FBPH; nevertheless, the present tilapia byproduct hydrolysate appears very promising as a functional ingredient for preventing or managing overweight and impaired glucose tolerance.
In Vitro Simulated Canine Gastrointestinal Digestion of FBP and FBPH
The simulated gastrointestinal digestion (SGID) was adapted from the static in vitro consensus protocol of the INFOGEST COST action (http://www.cost-infogest.eu), as well as from Caron et al., in order to mimic canine gastrointestinal digestion [24,28]. Briefly, the first three steps of the digestive tract (oral, gastric and intestinal) were simulated using a static mono-compartmental process under constant magnetic stirring in a reactor at 39 °C. Two grams of FBPH or FBP were solubilized in 16 mL of salivary fluid at pH 7.0 without salivary enzyme. A 4 mL aliquot (oral aliquot) was withdrawn after 2 min. Twenty-four mL of gastric fluid were then added, followed by porcine pepsin at a 1:40 (w/v) E/S ratio (enzymatic activity > 2000 U mg−1 of dry weight). Gastric digestion was performed over 2 h, the pH being monitored and maintained at 2.0 with NaOH (5 M) and HCl (5 M) solutions. Hydrolysate aliquots (gastric aliquots) were withdrawn after 2 h and directly heated at 95 °C for 10 min. Thirty-six mL of intestinal fluid and 4 mL of 1 M NaHCO3 solution were added to bring the pH to 6.8. Pancreatin was added at a 1:50 (w/v) E/S ratio (enzymatic activity 100 U mg−1 of dry weight) and intestinal digestion was carried out over 4 h. Aliquots (intestinal aliquots) were withdrawn and heated as above. All aliquots were then centrifuged at 13,000× g for 10 min and the supernatants were collected and stored at −20 °C for further analysis.
Size Exclusion Chromatography by Fast Protein Liquid Chromatography (SEC-FPLC)
The peptide apparent molecular weight (MW) distributions of the oral, gastric and intestinal aliquots were obtained by SEC using a Superdex Peptide 10/300 GL column (GE Healthcare, Uppsala, Sweden) on an AKTA Purifier system (GE Healthcare). SEC was carried out under isocratic conditions with an elution solution of 30% acetonitrile, 69.9% ultrapure water and 0.1% TFA at a flow rate of 0.5 mL·min−1. Oral, gastric and intestinal aliquots were first diluted in ultrapure water (18.5 g L−1, w/v) and subjected to magnetic stirring for 15 min. The diluted samples were then centrifuged at 15,000× g for 15 min and the supernatants were filtered through a 0.22 µm membrane filter before injection. The absorbance was monitored at 214 nm for 70 min. The column was calibrated with the following standard peptides: cytochrome C (12,327 Da), aprotinin (6511 Da), insulin beta-chain (3496 Da), neurotensin (1673 Da), substance P (1348 Da), substance P fragment 1-7 (900 Da) and leupeptin (463 Da).
Cell Culture Conditions
The Caco-2 cell line was purchased from Sigma-Aldrich (Villefranche-sur-Saône, France) and the STC-1 cell line was a kind gift from Corinne Grangette (Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, U1019-UMR 8204-CIIL, France). Cells were grown in 75 cm² flasks at 37 °C in a 5% CO2 atmosphere in DMEM supplemented with 4.5 g L−1 of glucose, 10% fetal bovine serum, 100 U mL−1 of penicillin, 100 µg mL−1 of streptomycin and 2 mM L-glutamine. Caco-2 and STC-1 cells were subcultured weekly and twice a week, respectively. All cells used in this study were between passages 40 and 50 for Caco-2 cells and between passages 10 and 30 for STC-1 cells.
DPP-IV Activity Assay
The in situ method using confluent Caco-2 cells described by Caron et al. was slightly modified and used to study DPP-IV activity [34]. A 1 mM Gly-Pro-AMC substrate solution and the dilutions of the digests and synthetic peptides were prepared in phosphate-buffered saline, pH 7.4 (PBS). Briefly, after 7 days of growth, Caco-2 cells were trypsinized and seeded at a density of 8000 cells/well in 96-well optical black plates (Nunc, ThermoFisher Scientific, Rochester, NY, USA). After 7 days, the culture media were removed from the wells and the cells were washed with 100 µL of PBS buffer (pH 7.4). Then, 100 µL of PBS was added to the wells, followed by 25 µL of digests diluted in PBS at increasing concentrations (3.47, 6.95 and 13.89 mg mL−1), 25 µL of synthetic peptides diluted in PBS at increasing concentrations (0.2, 0.6, 1 and 1.5 mM), or PBS buffer (control wells). After 5 min of incubation at 37 °C, 50 µL of the Gly-Pro-AMC substrate solution were added to each well. Fluorescence was recorded every 2 min for 1 h at 37 °C using a Xenius XC spectrofluorometer (Safas Monaco, Monaco). The excitation wavelength was set to 260 nm and the emission wavelength to 480 nm. The percentage of DPP-IV activity inhibition was defined as the percentage of DPP-IV activity inhibited by a given concentration of digest, or of diprotin A (a commercial DPP-IV peptide inhibitor) as positive control, compared with the control buffer response. The concentration of digest or synthetic peptide solution required to obtain 50% inhibition of the DPP-IV activity (IC 50) was determined by plotting the percentage of DPP-IV activity inhibition as a function of the natural logarithm of the final digest or peptide concentration. IC 50 values were expressed in mg mL−1 or in mM.
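For readers who wish to reproduce the IC 50 calculation numerically, the sketch below illustrates the log-linear fitting procedure described above; the concentrations and inhibition percentages are hypothetical placeholders, not measured values.

```python
# Minimal sketch (assumed data): estimating IC50 by fitting % inhibition
# against the natural logarithm of the final peptide concentration and
# locating the 50% inhibition point, as described in the text.
import numpy as np

# Hypothetical example data: final peptide concentrations (mM) and the
# measured percentage of DPP-IV activity inhibition at each concentration.
conc_mM = np.array([0.2, 0.6, 1.0, 1.5])
inhibition_pct = np.array([22.0, 41.0, 55.0, 68.0])

# Linear fit of % inhibition versus ln(concentration).
slope, intercept = np.polyfit(np.log(conc_mM), inhibition_pct, deg=1)

# IC50 is the concentration at which the fitted line crosses 50% inhibition.
ic50_mM = np.exp((50.0 - intercept) / slope)
print(f"Estimated IC50: {ic50_mM:.3f} mM")
```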
HPLC Fractionation
The FPLC fraction displaying the strongest bioactivity was fractionated on a semipreparative C18 Gemini column (150 × 10 mm, particle size 5 µm, 110 Å, Phenomenex, Le Pecq, France) using a 4250 Puriflash system (Interchim, Montluçon, France). Peptide elution was performed at a flow rate of 5 mL·min−1 with two solvents: eluent A, composed of 99.9% ultrapure water and 0.1% TFA, and eluent B, composed of 99.9% acetonitrile and 0.1% TFA. The following hydrophobic gradient was used: an isocratic step at 98% eluent A for 20 min, followed by a linear gradient from 2% to 15% eluent B in 35 min, then a linear gradient from 15% to 90% eluent B in 10 min; finally, the column was washed with 90% eluent B for 5 min and re-equilibrated at 98% eluent A for 10 min. The collected subfractions were then dried by centrifugal evaporation (MiVac Quattro Concentrator, Biopharma Process Systems).
RP-HPLC-MS/MS Analysis of HPLC Fractions
Selected dried HPLC subfractions were re-solubilized in 50 µL of ultrapure water containing 0.1% formic acid (FA), vortexed, sonicated three times in an ultrasonic bath and finally centrifuged for 5 min at 12,000× g. The peptides in these fractions (10 µL injection volume) were then chromatographed by reverse-phase ultra-high-performance liquid chromatography (RP-UPLC) using an ACQUITY biocompatible chromatography system (Waters, Manchester, UK) equipped with an analytical C18 Uptisphere column (250 × 3 mm, particle size 5 µm, 300 Å, Interchim). Peptide elution was performed at 30 °C with a flow rate of 0.6 mL·min−1 using two solvents: eluent A, composed of 99.9% ultrapure water and 0.1% FA, and eluent B, composed of 99.9% acetonitrile and 0.1% FA. The apolar elution gradient was: 100% eluent A for 2 min, followed by a linear gradient from 0 to 15% eluent B in 45 min, then a linear gradient from 15% to 35% eluent B in 20 min and from 35% to 90% eluent B in 15 min. The column was finally washed with 90% eluent B for 10 min and equilibrated with 100% eluent A for 7 min.
The chromatographed peptides were then ionized in the electrospray ionization source of a qTOF Synapt G2-Si™ (Waters). MS analysis was performed in sensitivity, positive ion and data-dependent analysis (DDA) modes. The source temperature was set at 150 °C, and the capillary and cone voltages were set at 3000 and 60 V, respectively. MS and MS/MS measurements were performed over a mass-to-charge range of 100 to 2000 m/z with a scan time of 0.2 s. A maximum of 15 precursor ions with an intensity threshold of 10,000 were selected for fragmentation by collision-induced dissociation (CID), with voltages ranging from 8 to 9 V for the lower-molecular-mass ions and from 40 to 90 V for those with a higher molecular mass. Leucine enkephalin ([M + H]+ of 556.632) was injected into the system every 2 min for 0.5 s to monitor and correct the mass measurement error throughout the analysis.
Mass Spectrometry Data Processing
Mass spectrometry data processing and the protein database search were performed with Peaks Studio version 8.5 software (Bioinformatics Solutions, Waterloo, ON, Canada) using the UniProt database restricted to the complete proteome of the Cichlidae family (updated 2018/08/28, 44,684 entries). The tolerance thresholds for precursor ion masses and fragments were set at 35 ppm and 0.2 Da, respectively. The in-database identification search was performed with consideration of oxidized methionine but without specifying an enzyme. Peptide sequences identified by Peaks Studio 8.5 were filtered with a false discovery rate (FDR) strictly lower than 1%, while peptide sequences identified by de novo processing were filtered according to an average local confidence (ALC) score of at least 80%.

Transport Study through the Intestinal Barrier Model

On the day of the experiment, a HEPES-Hanks balanced salt solution (HBSS) transport medium was freshly prepared and filtered through a 0.22 µm PVDF filter. Samples were diluted to 4 g L−1 with the transport medium. The apical and basolateral sides of each well were washed with 500 µL and 1 mL of transport medium (heated to 37 °C), respectively. Then, 1 mL of transport medium at 37 °C was added to the apical side and 2.5 mL to the basolateral side. The plate was incubated at 37 °C, 5% CO2 for 30 min; the supernatant was then discarded and replaced with 1 mL of pre-heated sample or pre-heated transport medium (for the control). Kinetic studies were performed by sampling 100 µL from the apical and basolateral sides at 15 min, 250 µL from the apical side and 1 mL from the basolateral side at 60 min, and the rest of the supernatant from the apical (650 µL) and basolateral (1.4 mL) sides at 120 min of incubation at 37 °C, 5% CO2. The peptides were identified after the 120 min incubation time.
Peptide Sequences Identification in Apical and Basolateral Supernatant by Mass Spectrometry
Apical and basolateral supernatants at 120 min were prepared and analyzed by mass spectrometry with the same protocol described above for the HPLC fractions, with minor changes. The UPLC column used was a C18-AQ (150 × 3 mm, particle size 2.6 µm, 83 Å, Interchim) and peptide chromatography was performed at a flow rate of 0.5 mL·min−1 and 30 °C. The apolar elution gradient was as follows: 5 min at 99% eluent A/1% eluent B, then a linear gradient from 1% to 30% eluent B in 40 min, followed by a linear gradient from 30% to 70% eluent B in 8 min; finally, after 2 min at 95% eluent B, the column was equilibrated with 99% eluent A/1% eluent B for 3 min. The ionization mode and the MS and MS/MS measurements were performed exactly as described previously.
Statistical Analysis
Data presented are means ± SD. To compare the GI hormone secretion levels induced by the digests, a one-way ANOVA using a general linear model and pairwise comparisons with Tukey's or Dunnett's tests were performed using GraphPad Prism (GraphPad Software, San Diego, CA, USA). Values were considered significantly different for p < 0.05.
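As a complement, the following sketch shows how such a one-way ANOVA with Tukey's pairwise comparisons could be reproduced in Python (scipy/statsmodels) rather than GraphPad Prism; the secretion values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch (assumed data): one-way ANOVA followed by Tukey's pairwise
# comparisons, mirroring the statistical analysis described above.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical hormone-secretion measurements for three conditions.
control = np.array([10.1, 9.8, 10.5, 10.2])
fbp = np.array([14.3, 15.1, 13.8, 14.6])
fbph = np.array([18.2, 19.0, 18.7, 17.9])

f_stat, p_value = f_oneway(control, fbp, fbph)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05

values = np.concatenate([control, fbp, fbph])
groups = ["control"] * 4 + ["FBP"] * 4 + ["FBPH"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```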
Conclusions
An in vitro static simulated canine gastrointestinal digestion model was developed, permitting us to evaluate in vitro the potential effects of a tilapia byproduct hydrolysate on the regulation of food intake and glucose metabolism. Promising effects on intestinal hormone secretion and dipeptidyl peptidase IV (DPP-IV) inhibitory activity were evidenced, and the added value of the pre-hydrolysis was highlighted. New bioactive peptides able to stimulate CCK (DLVDK and PSLVH) and GLP-1 (LKPT) secretion, and to inhibit DPP-IV activity after a transport study through an intestinal barrier model (VAPEEHPT, DLDL, MDLP, VADTMEVV, DPLV, FAMD, CSSGGY and GPFPLLV), were identified. This tilapia byproduct hydrolysate appears promising for managing overweight.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because this study is industrial work and some results are confidential.
Nonlinear dimensionality reduction then and now: AIMs for dissipative PDEs in the ML era
This study presents a collection of purely data-driven workflows for constructing reduced-order models (ROMs) for distributed dynamical systems. The ROMs we focus on are data-assisted models inspired by, and templated upon, the theory of Approximate Inertial Manifolds (AIMs); the particular motivation is the so-called post-processing Galerkin method of García-Archilla, Novo and Titi. Its applicability can be extended: the need for accurate truncated Galerkin projections and for deriving closed-form corrections can be circumvented using machine learning tools. When the right latent variables are not a priori known, we illustrate how autoencoders as well as Diffusion Maps (a manifold learning scheme) can be used to discover good sets of latent variables and test their explainability. The proposed methodology can express the ROMs in terms of (a) theoretical (Fourier coefficients), (b) linear data-driven (POD modes) and/or (c) nonlinear data-driven (Diffusion Maps) coordinates. Both Black-Box and (theoretically informed and data-corrected) Gray-Box models are described; the necessity for the latter arises when truncated Galerkin projections are so inaccurate as to not be amenable to post-processing. We use the Chafee-Infante reaction-diffusion and the Kuramoto-Sivashinsky dissipative partial differential equations to illustrate and successfully test the overall framework.
Introduction
Separation of time-scales in dynamical systems is crucial toward the development of Reduced Order Models (ROMs). For a certain class of dissipative evolution equations, the long term dynamics are attracted exponentially fast to smooth invariant objects known as inertial manifolds (IMs), facilitating the construction of ROMs on those. The dynamics on the IM can be described by the Inertial Form (a finite ODE system), which accurately captures the long-term behavior of the original infinite-dimensional system [Shvartsman and Kevrekidis, 1998, Jolly et al., 1990, Titi, 1990, Akram et al., 2020]. The purpose of this paper is to (somewhat systematically) outline (and demonstrate) links between "traditional" AIM technology and contemporary data-driven reduction tools, giving rise to "mathematics-assisted" algorithmic ROM workflows. Such connections had initially been experimentally attempted in the 1990s (e.g. Krischer et al. [1993], Theodoropoulos et al. [2000]); they are currently experiencing a strong revival due to the explosion in machine-learning-assisted modelling [Linot and Graham, 2020, Anirudh et al., 2020, Lee and Carlberg, 2020, Bar-Sinai et al., 2019, Benner et al., 2015].
IMs have been proven to exist for only a few systems, and even then, they have not been constructed explicitly [Jolly et al., 1990]. It is, nevertheless, still possible to find approximations of either the global attractor or the IM itself, i.e. Approximate Inertial Manifolds (AIMs), or the dynamics on it, i.e. the Approximate Inertial Form (AIF), and then track the dynamics in this reduced space. The key ansatz is that the attracting dynamics in the complement space to the AIM are quickly slaved to, and embodied in, the AIF. Along these lines, the Galerkin projection, as well as nonlinear Galerkin projections on approximate inertial manifolds, are also popular choices for reduced order modeling [Marion and Temam, 1990, Jolly et al., 1991, Shen, 1990, Benner et al., 2015]. AIM-based ROMs have been proposed for reaction-diffusion systems [Foias et al., 1988b, Adrover et al., 2002], the Kuramoto-Sivashinsky equation [Foias et al., 1988b, 1989, Jolly et al., 1990], the two-dimensional Navier-Stokes equations [Temam, 1989b,a, Jauberteau et al., 1990], and the three-dimensional Navier-Stokes equations [Guermond and Prudhomme, 2008].
In the late 90s the post-processing Galerkin method was proposed [García-Archilla et al., 1998, García-Archilla and Titi, 1999], initially in the context of dissipative equations. Post-processing Galerkin takes into account the observation that the error between the result of integrating a truncated Galerkin model on the one hand, and the projection of the true solution onto the finite-dimensional Galerkin space on the other, is significantly smaller than the error between the truncated Galerkin solution and the full solution (superconvergence [García-Archilla et al., 1999, Wahlbin, 2006]). We will return to this below and illustrate it in Sec. 4 and Fig. 6. Given this observation, one uses the dynamics expressed only in terms of the leading low modes (a truncated version of the equations) to integrate. Once the time integration is finished, one can post-process the obtained solution by approximating the high modes as a function of the solution in the leading modes. Since, in the post-processing Galerkin framework, the correction is computed only at the end of time integration, it is much cheaper to implement computationally than true nonlinear Galerkin [García-Archilla et al., 1998, García-Archilla and Titi, 1999]. Moreover, a truncation analysis derivation of the spectral method for dissipative evolution equations, such as the Navier-Stokes equations, gives rise to the post-processing Galerkin as the leading order numerical scheme, and not the Galerkin scheme itself, as commonly believed [Margolin et al., 2003].
Model identification assisted by machine learning emerged in the 90s and is now experiencing a rebirth as a tool to discover minimal parametrizations of an IM, which can subsequently be used to evolve the dynamical system in a reduced space [Lu et al., 2017, Chorin and Lu, 2015, Zeng and Graham, 2023, Zeng et al., 2022, De Jesús and Graham, 2023, Linot et al., 2023]. Some efforts implemented linear methods like POD [Krischer et al., 1993, Kang et al., 2015, Theodoropoulos et al., 2000] to identify a suitable subspace that contains the majority of the variance of the system and parametrizes the long term dynamics. More recently, operator inference with quadratic manifolds has been proposed for model reduction [Geelen et al., 2023, Zastrow et al., 2023, Qian et al., 2022, McQuarrie et al., 2021]. Nonlinear dimensionality reduction methods, such as autoencoders [Kramer, 1991] or Diffusion Maps (DMAPs) [Coifman et al., 2008], have also been used to discover latent variables of data that originally live in a high-dimensional space. Learning a dynamical system in the latent space of an autoencoder (even as a collection of local charts), or in Diffusion Maps space, also provides a systematic approach to ROM construction (e.g. Rico-Martinez et al. [1992], Sonday et al. [2010], Evangelou et al. [2023], Linot and Graham [2022], Lee and Carlberg [2020], Bar-Sinai et al. [2019]). Needless to say, nonlinear system identification assisted by machine learning remains a very active current research endeavor, encompassing a plethora of directions from symbolic methods, e.g. [Brunton et al., 2016], to physics-informed methods, e.g. [Raissi et al., 2019], to numerics-informed methods [Bar-Sinai et al., 2019].
In our view, the "1980s" IM and AIM efforts towards useful reduced order models of dissipative PDEs can be succinctly summarized as follows. Given the functional form of the PDE for which we know (or believe) an IM exists, and having an estimate of the dimensionality of said manifold: (A) start by finding the (leading) eigenmodes, say k of them, of the (dissipative part of the) operator that "determines" (parametrizes) the IM. In that sense, the components of the solution in the remaining "higher order" eigenmodes can be expressed as functions of the components in the lower, determining, ones; (B) guided by separation of time scales ideas, construct the AIM approximating this function, by writing the components of the higher eigenmodes as (approximate) functions of the components of the lower, determining ones. Several implementable such approximations have been proposed and analysed: e.g. the "steady" manifold, the "Euler-Galerkin", and the Foias-Manley-Temam (FMT) manifold, among others. We already have a practical result: if somebody provides as observations the lower mode amplitudes, we can meaningfully and analytically improve the full spatiotemporal solution, complementing it with the recovered higher mode components. We will return to this theme when discussing post-processing Galerkin. Let it be noted here that even though the original motivation of AIMs was to find an approximation to the IM whenever the latter exists, this idea was later generalized and implemented by finding a manifold which approximates the global attractor as a set, observing that a global attractor always exists for a genuinely dissipative dynamical system; (C) beyond just correcting such observations, these functions can be used to correct approximations of the dynamics through their low-order Galerkin truncation: from an accurate, high-order Galerkin truncation, we keep only the low, "determining" Galerkin ODEs; instead of omitting the higher order terms as negligible, we now substitute the AIM function into the low-order terms. We now have the "steady", "Euler-Galerkin" or FMT inertial forms.
This original program is complemented by the "post-processing Galerkin" protocol: here we actually keep the low order Galerkin truncation, ignoring the contribution of the higher order, slaved modes to it, expecting/believing that, in its low-dimensional space, these few ODEs are accurate enough to approximate the projection of the exact solution onto the Galerkin space. The authors of [García-Archilla et al., 1998, García-Archilla and Titi, 1999, García-Archilla et al., 1999] took into account the observation that the total error of the solutions predicted by the truncated low-order Galerkin is appreciably larger than the error after adding to them (in a sense, "reinjecting") the AIM-approximated higher order solution components. This reinjection is performed after the truncated low-order Galerkin equations have been integrated until each time instance of interest (we remind the reader that this is revisited in Sec. 4 and Figure 6).
They named the approach "post-processing Galerkin" since the post-processing takes place after the truncated low-order Galerkin model has been obtained and integrated: it is these concrete, available solutions of the model that are being improved, not the model itself.
Explicit AIMs have been obtained in the context of spectral Galerkin approximation by writing approximations of the evolution equation of the high modes in terms of the low modes, a closure relation. In the context of spectral Galerkin approximation based on Fourier modes or eigenfunctions of the Stokes operator, one can naturally decouple the phase space into low Fourier (eigenfunction) modes and their complement, high Fourier (eigenfunction) modes. Therefore, the above-described strategy of obtaining an AIM can be executed explicitly, and leads to an analytical closure.
For the examples we present in this work, the spectral Galerkin approximation could indeed provide a desirable closure.
However, we would like to note that in the context of the Finite Element Galerkin method, the above decomposition into coarse spatial scales and their complement is not a straightforward task. Therefore, the above strategy cannot be followed to obtain an explicit (paper and pencil) closure form that expresses the fine spatial scales of the solution in terms of the coarse finite element spatial scales. For this case, a more general framework for implementing the post-processing Galerkin can be used [García-Archilla and Titi, 1999]. In this more general case, an explicit form of an AIM is not required in order to implement post-processing Galerkin. We briefly present this more general scheme in Sec. A.0.1.
Today, beyond symbolic model (AIM) or solution (post-processing Galerkin) improvement, data-driven techniques allow us (given accurate simulation data or observations) to: (a) Estimate the AIM dimensionality in a data-driven way (either through autoencoders or through manifold learning).
(b) Learn good reduced AIFs (the "correct", nonlinear Galerkin, right hand-side of the reduced, low order, components of the PDE) in a data-driven way.
(c) Learn the AIM functions (high order mode components as a function of low order model components) in a data driven way.
(d) Given the learned AIM in (c), correct the solutions of a low-order Galerkin truncation (a "data-driven" post-processing Galerkin). Beyond steps (b-d) above, which more or less correspond to the traditional (A-C) analytical steps, there are now a couple of very useful data-driven "twists".
(e) Circumvent the assumption of accuracy of the low-order (linear) Galerkin truncation; the low-order AIF is learned from observations of the low-order components of accurate full PDE dynamics; and now the "post-processing" that follows can be done (1) with the same "old" analytical AIMs, or, interestingly, (2) with data-driven AIMs learned from the same accurate full PDE dynamics.
(f) Gray-Box (in some sense "physics-assisted") learning: instead of a fully black-box learning of the AIF using PDE observations, we now learn the correction of the not-so-quantitative low-order linear Galerkin truncation. This correction can be learned as an additive (residual) term, or even as a functional correction, hoping for easier training, since what is learned is a perturbation of the identity [Martin-Linares et al., 2023].
(g) (This is not so much a step in our list as a branching towards new capabilities.) Up to now, everything but the eigenfunctions parametrizing the manifold was data-driven; the eigenfunctions themselves were still analytical. If we allow ourselves to find the parametrization of the manifold in a data-driven way, two individually significant new options arise: (a) Use linear data-driven eigenfunctions: the leading Principal Components (PODs) of the full accurate PDE simulations. Now the low-order PODs parametrize the manifold, and the higher order POD components embody the AIM. POD-Galerkin takes the place of traditional Galerkin.
(b) Use a nonlinear, data-driven AIM parametrization: one can here either (g2a) use the latent variables of an autoencoder to parametrize the AIM, learn the corresponding accurate AIF, and post-process it to a more accurate spatiotemporal PDE solution reconstruction; or (g2b) use the leading POD components to parametrize the AIM, learn the accurate corresponding AIF, and post-process it for a more accurate spatiotemporal PDE solution reconstruction. At the risk of making this list ridiculously long, we also add, and illustrate below, the possibility (g2c) of using spectral (Diffusion Map) data mining to parametrize the AIM, along with the associated Geometric Harmonics for the post-processing.
A schematic overview of the different options proposed in each case is presented in Fig. 1, with references to the subsequent sections where they are discussed in detail. The remainder of the paper is organized as follows: after listing the illustrative examples used in this study (Sec. 2), we proceed with describing the methodology (Sec. 3). We start with briefly reviewing the "traditional" approximations of IMs and IFs (and AIMs and AIFs) (Sec. 3.1). We then discuss neural network-based alternatives to approximating IMs and IFs (Sec. 3.1.2), followed by nonlinear manifold learning methods for determining the dimensionality and parametrization of the latent space (Secs. 3.2.1 and 3.2.2). After presenting our results, we conclude by pointing out that the technology can be easily "transferred" to POD parametrizations of the IM (Sec. 3.2.3).
2 Illustrative examples: The Chafee-Infante and the Kuramoto-Sivashinsky equations

Our first example is the reaction-diffusion Chafee-Infante partial differential equation (PDE), for which the dimensionality of the Inertial Manifold (IM), for the parameter range of interest, is known; it reads:

u_t = ν u_xx + u − u³.    (1)

The parameter ν was chosen as ν = 0.16 and Dirichlet boundary conditions, u(0, t) = u(π, t) = 0, were used. The Chafee-Infante PDE, for ν = 0.16, has been shown to have a two-dimensional inertial manifold [Sonday, 2011, Gear et al., 2011, Jolly, 1989, Evangelou et al., 2022]. To simulate the dynamics on/near this two-dimensional manifold, the Galerkin projection was used [Gear et al., 2011, Jolly, 1989, Evangelou et al., 2022]. The first two leading sine coefficients α_1(t), α_2(t) are sufficient to parameterize this two-dimensional manifold, and Galerkin equations based only on the first two modes provide a qualitatively correct approximation of the dynamics in these two modes. We will consider the solution of the Chafee-Infante equation with three modes as the ground truth (cf. Fig. 7a). For the post-processing step, the truncated equations for the first two sine coefficients ᾱ = {α_1, α_2} are the ones used for integration up to time t = T. Then, their solution is post-processed to recover α̃ = α_3 and reconstruct the full solution u(x, T). For this first example, the truncated dynamics governed by the first two sine coefficients are (considered to be) qualitatively, but not quantitatively, accurate; the post-processing step aims to correct the solution obtained from these truncated dynamics.
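As a minimal computational illustration of the sine-Galerkin truncation described above (assuming the standard Chafee-Infante form reconstructed in Equation (1)), the following sketch integrates a three-mode Galerkin system; the grid resolution and initial condition are illustrative choices, not the ones used in the paper.

```python
# Minimal sketch of a 3-mode sine-Galerkin truncation of the Chafee-Infante
# PDE, assuming the standard form u_t = nu*u_xx + u - u^3 on [0, pi] with
# Dirichlet boundary conditions. The nonlinearity is projected by quadrature.
import numpy as np
from scipy.integrate import solve_ivp

nu, n_modes = 0.16, 3
x = np.linspace(0.0, np.pi, 201)
basis = np.array([np.sin((k + 1) * x) for k in range(n_modes)])  # sin(kx)

def rhs(t, a):
    u = a @ basis                          # reconstruct u(x, t) on the grid
    reaction = u - u**3                    # reaction term of the PDE
    k = np.arange(1, n_modes + 1)
    # Project: (2/pi) * integral of (u - u^3) * sin(kx) dx, trapezoid rule
    proj = (2.0 / np.pi) * np.trapz(reaction * basis, x, axis=1)
    return -nu * k**2 * a + proj           # diffusion + projected reaction

sol = solve_ivp(rhs, (0.0, 5.0), y0=[0.4, -0.1, 0.05], rtol=1e-8)
print("alpha(T) =", sol.y[:, -1])          # sine coefficients at t = T
```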
To demonstrate the potential of the proposed methodology in a case with more complex dynamics, we select the Kuramoto-Sivashinsky (KS) PDE:

u_t + 4 u_xxxx + ν (u_xx + u u_x) = 0.    (3)

The KS (Equation (3)) is a prototypical equation with dynamics that include chaos, derived in the context of a diverse range of physical systems such as, but not limited to, thin film flow on inclined planes and instabilities in a laminar flame front [Kuramoto and Tsuzuki, 1976, Sivashinsky, 1977, Alekseenko et al., 1985, Chang, 1986a,b, Jolly et al., 1990, Kevrekidis et al., 1990]. The parameter ν in our case is set to ν = 33 and periodic boundary conditions are used, u(0, t) = u(2π, t). In this example, a Fourier series expansion with 8 terms is used to approximate the ground truth u(x, t):

u(x, t) ≈ Σ_{k=1}^{8} [α_k(t) sin(kx) + β_k(t) cos(kx)],    (4)

which results in 8 ODEs for the sine coefficients ({α_k}, k = 1, ..., 8) and 8 for the cosine coefficients ({β_k}, k = 1, ..., 8). Restriction to the space of odd functions leads to retaining only the sine terms, resulting in a system of 8 ODEs for the sine coefficients, which is considered, in this work, as the exact solution of the KS. We use the truncation to the leading three sine coefficients ᾱ = {α_1, α_2, α_3} to study the dynamics for ν = 33; however, even though it has been shown that a 3D manifold exists, the truncated equations based on the leading coefficients do not provide an accurate approximation of the dynamics of these coefficients. In this case the traditional post-processing Galerkin methodology does not apply (we do not have a good base solution to correct). We circumvent this issue by constructing Gray-Box models, as we show below in Sec. 4.3.1.
Methodology
3.1 Approximating the IM and the IF (known latent space)
Euler-Galerkin
As a preamble to traditional post-processing Galerkin, here we discuss nonlinear Galerkin schemes, in particular the "Euler-Galerkin" algorithm, which provides a closed-form approximation of inertial manifolds [Foias et al., 1988a]. Consider the evolution equation

du/dt + Au = F(u),    (5)

where H is an appropriate Hilbert space, A is a self-adjoint positive-definite linear operator with compact inverse, and F is a nonlinear operator such that Equation (5) is globally well-posed in time for all initial data in H. Denoting the projection onto the span of the first n eigenvectors of A by P, and setting Q = I − P, we can split Equation (5) into

dp/dt + Ap = PF(p + q),    dq/dt + Aq = QF(p + q),    (6)

where p = Pu, q = Qu and p + q = u. Assuming that the long-term dynamics of Equation (5) live on an n-dimensional inertial manifold described as the graph of a function Φ : PH → QH, we can write the projection of the inertial manifold dynamics onto PH as

dp/dt + Ap = PF(p + Φ(p)).    (7)

An approximation of Φ is achieved through a Galerkin truncation of m modes in Equation (7), where m > n. The projection onto the space of the higher modes n + 1, ..., m defines Q_m, and the equation for the higher modes becomes

dq/dt + Aq = Q_m F(p + q).    (8)

Since the higher modes are attracted exponentially fast to the IM and become functions of the lower modes, we perform an implicit Euler step for Equation (8) with a step size τ. Assuming an initial condition q_0 = 0, we get

(I + τA) q = τ Q_m F(p + q).    (9)

Instead of completely solving Equation (9), we perform a single fixed-point iteration starting from q = 0 and holding the lower modes 1, ..., n (the components of p) constant. This gives the approximation

Φ̃_m(p) = τ (I + τA)⁻¹ Q_m F(p),    (10)

an algebraic expression that estimates the higher modes {n + 1, ..., m} as a function of the lower n modes, and thus an approximation of the IM itself.

Substituting Φ̃_m(p) for the m − n higher modes gives an Euler-Galerkin approximation consisting of n differential equations:

dp/dt + Ap = PF(p + Φ̃_m(p)).    (11)

In this work, the (nonlinear) Euler-Galerkin algorithm was applied to the Chafee-Infante partial differential equation, as detailed in Sec. A.0.2.
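A minimal sketch of the Euler-Galerkin AIM of Equation (10) is given below; the nonlinear term F, the eigenvalues of A and the step size τ are assumed inputs, and the implementation is illustrative rather than the authors' code.

```python
# Minimal sketch of the Euler-Galerkin approximate inertial manifold for a
# spectral discretization with a diagonal linear operator A: the higher modes
# q are approximated by one implicit Euler step (from q0 = 0) of the truncated
# high-mode equation, with the lower modes p held fixed.
import numpy as np

def euler_galerkin_aim(p, F, eig, n, m, tau=1.0):
    """Approximate the slaved higher modes q = Phi_m(p), Eq. (10).

    p   : array (n,)  lower-mode coefficients
    F   : callable mapping full coefficients (m,) -> nonlinear term (m,)
    eig : array (m,)  eigenvalues of A (e.g. nu*k**2 for Chafee-Infante)
    """
    a = np.concatenate([p, np.zeros(m - n)])   # fixed-point start: q = 0
    # One implicit Euler step: (I + tau*A) q = tau * Q_m F(p + q) at q = 0.
    # With diagonal A this is an elementwise division over the high modes.
    return tau * F(a)[n:] / (1.0 + tau * eig[n:])
```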
Neural network derived AIM and AIF
The higher sine mode coefficients α̃, which are necessary for accurate reconstruction of the solution in physical space, can be obtained in a data-driven manner. Specifically, here we use deep neural networks, schematically shown in Fig. 2, to learn the coefficients α̃ given the values of the leading (lower) sine mode coefficients at a specific point in time, t = T:

α̃(t) = f_NN(ᾱ(t)),

where ᾱ stands for the leading sine coefficients (low modes) and α̃ for the higher sine mode coefficients. The leading coefficients ᾱ(T) have been obtained as a result of the time integration of the truncated dynamics.
Alternatively, when the result of time integration of the two truncated lower sine coefficient equations is inaccurate, we can correct it by learning a data-driven truncated ODE in the lower sine coefficients, of the general form

dᾱ/dt = f(ᾱ),    (13)

where ᾱ ∈ R^m (here m = 2) are the variables in which we observe the evolution of the dynamics. Observe that since m = 2 here, the Poincaré-Bendixson theorem applies; hence the dynamics of the low modes either go to a limit cycle or to a steady state. This data-driven AIF was first explicitly described and implemented in [Theodoropoulos et al., 2000] (see also [Krischer et al., 1993]).

Figure 2: Illustrative example of a feed-forward neural network for prediction of higher coefficients. In this example the lower harmonics, α_1(t) and α_2(t), are used as inputs to the network that predicts α_3(t).
The function f is approximated by a fully connected neural network, schematically represented in Fig. 3. The goal is to predict the time derivatives of the lower sine coefficients from their values. Once this is done, the right-hand-side of the ODEs in Eq. (13) can be used in conjunction with any method of time integration, such as Runge-Kutta, to accurately approximate ᾱ(T), and then proceed as above to post-process ᾱ(T).
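The following sketch illustrates this Black-Box workflow (assumed, illustrative layer sizes; placeholder arrays standing in for observed coefficient/derivative pairs): a network is fitted to the reduced right-hand-side and then integrated with a Runge-Kutta scheme.

```python
# Minimal sketch: learn the reduced right-hand-side f in Eq. (13) from
# (alpha, dalpha_dt) training pairs, then integrate it with Runge-Kutta.
import numpy as np
import tensorflow as tf
from scipy.integrate import solve_ivp

alpha = np.random.rand(1000, 2).astype("float32")      # placeholder inputs
dalpha_dt = np.random.rand(1000, 2).astype("float32")  # placeholder targets

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")
model.fit(alpha, dalpha_dt, epochs=200, verbose=0)

def f_nn(t, a):
    # learned reduced vector field, evaluated pointwise during integration
    return model.predict(np.asarray(a, dtype="float32")[None, :], verbose=0)[0]

sol = solve_ivp(f_nn, (0.0, 5.0), y0=[0.4, -0.1], method="RK45")
alpha_T = sol.y[:, -1]                     # to be post-processed as above
```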
Figure 3: Schematic of the fully connected neural network with inputs α_1(t), α_2(t) and outputs their time derivatives α̇_1(t), α̇_2(t).

For parameter values of the KS equation for which the long-term truncated dynamics may not be accurate, an appealing alternative to the Black-Box approach discussed above arises.
One can remedy the situation by first correcting the reduced dynamics, before deriving the missing terms for reconstruction. This can be achieved by constructing a "Gray-Box" data-driven dynamic model. This Gray-Box model describes the evolution of a reduced system by adding to the truncated dynamics a learned correction term, which can be thought of as a closure. This correction is approximated by a neural network that takes as inputs the lower order sine coefficients and delivers the difference between their true time derivatives and the truncated Galerkin time derivatives:

g_NN(ᾱ) = dᾱ_p/dt − dᾱ_t/dt,

where dᾱ_p/dt is the true vector field projected onto the leading sine coefficients ᾱ and dᾱ_t/dt is the vector field of the corresponding truncated Galerkin projection.
Here, g_NN is approximated using a neural network implemented in tensorflow [Abadi et al., 2015] with 6 hidden layers of 95 neurons each and a tanh activation function. The loss function used is the mean squared error (MSE), and the Adam optimizer is employed.
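Schematically, and assuming a truncated Galerkin right-hand-side and a trained correction network are available (e.g. from the sketches above), the Gray-Box vector field can be assembled as follows.

```python
# Minimal sketch of the Gray-Box right-hand-side: the analytical truncated
# Galerkin vector field plus the learned correction g_NN. `truncated_rhs`
# and `g_nn_model` are assumed to exist (see the sketches above).
import numpy as np

def gray_box_rhs(t, alpha, truncated_rhs, g_nn_model):
    correction = g_nn_model.predict(
        np.asarray(alpha, dtype="float32")[None, :], verbose=0)[0]
    return truncated_rhs(t, alpha) + correction
```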
Finally, it is worth noting that the proposed workflow works equally well when considering the evolution equations of the leading POD mode coefficients as parametrizing the IM. An illustrative example, based on the Chafee-Infante POD-based equations, can be found in Sec. A.0.2.
Learning the dimensionality of the latent space
In most cases, a minimal parametrization of the IM of a dynamical system is not known a priori. It is possible to discover it using different data mining approaches, such as Diffusion Maps and autoencoders. Both methods are discussed in the following paragraphs and summarized in Fig. 4.
Diffusion Maps
Diffusion Maps [Coifman and Lafon, 2006b, Nadler et al., 2006, Coifman et al., 2008] is a manifold learning framework that can (based upon diffusion processes) facilitate discovering low-dimensional intrinsic geometric descriptions of data sets, even when the data is high-dimensional, nonlinear and/or corrupted by (relatively small) noise. It is used here to discover the dimensionality of the IM and provide a data-driven parametrization of it.
The parametrization of the manifold is obtained through a few eigenvectors, ϕ_i, of a scaled affinity matrix, which is built from the Euclidean distances between all pairs of available data points. A detailed description of the Diffusion Maps algorithm is provided in Sec. A.1 of the Appendix, and of the Double Diffusion Maps in Sec. A.2.
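For concreteness, a bare-bones version of this construction (Gaussian kernel, row normalization, leading nontrivial eigenvectors) might look as follows; the density normalization and ε-selection heuristics detailed in the Appendix are omitted.

```python
# Minimal sketch of the basic Diffusion Maps construction: a Gaussian kernel
# on pairwise Euclidean distances, normalized into a Markov matrix whose
# leading nontrivial eigenvectors parametrize the manifold.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(X, eps, n_coords=3):
    D = cdist(X, X)                        # pairwise Euclidean distances
    K = np.exp(-D**2 / eps)                # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)   # row-normalized Markov matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)        # sort by decreasing eigenvalue
    # Skip the trivial constant eigenvector (eigenvalue 1)
    return evecs.real[:, order[1:n_coords + 1]], evals.real[order]
```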
Autoencoders
Autoencoders [Kramer, 1991] are neural networks that are trained (a) to encode high-dimensional data into a low-dimensional representation and (b) to reconstruct the original high-dimensional data from this lower-dimensional representation (cf. Fig. 4b). In this context, the input layer is the same as the output, and the low-dimensional encoding is parametrized by the weights of the bottleneck layer. The loss function commonly used to train an autoencoder is

L = (1/N) Σ_k ||α^(k) − α̂^(k)||²,    (14)

where α^(k) represents a data point in the ambient space and α̂^(k) the reconstruction of data point k by the autoencoder.
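A minimal sketch of such an autoencoder, with an (illustrative) three-dimensional bottleneck and the reconstruction loss above, is given below.

```python
# Minimal sketch of an autoencoder with a 3-dimensional bottleneck; the
# layer sizes are illustrative, not the architecture used in the paper.
import tensorflow as tf

n_ambient, n_latent = 8, 3                 # e.g. 8 sine coefficients, 3 latent
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(n_ambient,)),
    tf.keras.layers.Dense(n_latent),       # bottleneck: latent variables L
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(n_latent,)),
    tf.keras.layers.Dense(n_ambient),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")  # reconstruction loss
# autoencoder.fit(alpha_data, alpha_data, epochs=500)  # inputs == targets
```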
In this work, we use autoencoders for a second use case as well, which relies on the observation that the discovered autoencoder latent coordinates are one-to-one with the leading sine coefficients ᾱ, as discussed in detail in the following paragraph.
Theoretical and data-driven latent variables: transformations and AIMs
The local one-to-one relation between the autoencoder's latent variables L and the leading sine coefficients ᾱ is tested by checking the conditions of the Inverse Function Theorem across the training data. The Inverse Function Theorem guarantees local invertibility in a neighborhood of any point L_i ∈ L if the determinant of the Jacobian, det(J_f(L)), is bounded away from zero. We provide a more detailed description of the Inverse Function Theorem in Sec. A.3 of the Appendix. The Jacobian computation in our case is performed using automatic differentiation with tensorflow.
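A sketch of this invertibility check, using tensorflow's automatic differentiation to evaluate det(J_f(L)) over a batch of training points, is given below; `f_model` stands for an assumed trained network with equal input and output dimensions.

```python
# Minimal sketch of the local invertibility check: compute the Jacobian of
# the learned map f: L -> leading coefficients with tf.GradientTape and
# verify that its determinant is bounded away from zero.
import numpy as np
import tensorflow as tf

def jacobian_determinants(f_model, L_points):
    L = tf.convert_to_tensor(L_points, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(L)
        out = f_model(L)
    J = tape.batch_jacobian(out, L)        # shape: (n_points, dim, dim)
    return np.linalg.det(J.numpy())

# dets = jacobian_determinants(f_model, L_train)
# print(np.abs(dets).min())                # should be bounded away from zero
```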
The fact that the latent variables are one-to-one with the leading sine coefficients allows us to recover the full set of sine coefficients in two distinct steps, schematically shown in Fig. 5. The first step is training the autoencoder. In the second step, we learn to infer the latent variables L from the leading sine coefficients, using either a feedforward neural network or Double DMAPs.
Alternatively, the decoder part of the autoencoder can be used to compute an inverse map. This inverse map utilizes the leading Fourier modes ᾱ, in which the dynamics have evolved, and the trained decoder, to find the latent autoencoder variables that minimize the algebraic optimization constraint

min_L ||ᾱ − P f_dec(L)||²,    (15)

where f_dec denotes the decoder and P selects the leading components of its output. In Equation (15) the latent autoencoder variables are denoted as L and the leading Fourier modes as ᾱ. After solving the optimization problem in Equation (15), the decoder can be used to recover all the Fourier modes given L.
This second use case of the autoencoder allows us to map from the leading Fourier modes to the latent space, and back to the full Fourier modes, without the need of constructing an additional regression scheme. Once the latent variables are predicted, the decoder of the autoencoder, or the inverse transformation from the DMAP coordinates to the ambient coordinates, is used to approximate the full set of reconstructed coefficients.
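A minimal sketch of this decoder-based inverse map (Equation (15)) is given below; the optimizer choice and the helper name are illustrative.

```python
# Minimal sketch of the inverse map in Eq. (15): given leading coefficients
# alpha_bar, find latent variables L whose decoded output matches them in
# the first components, then use the decoder to recover all coefficients.
# `decoder` is an assumed trained decoder (e.g. from the sketch above).
import numpy as np
from scipy.optimize import minimize

def invert_decoder(decoder, alpha_bar, L0, n_leading=3):
    def objective(L):
        decoded = decoder.predict(
            L[None, :].astype("float32"), verbose=0)[0]
        return float(np.sum((decoded[:n_leading] - alpha_bar) ** 2))
    res = minimize(objective, L0, method="Nelder-Mead")
    # Decode the optimal latent point to obtain the full coefficient set.
    return decoder.predict(res.x[None, :].astype("float32"), verbose=0)[0]
```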
Results
Before presenting our results, we remind the reader, through the illustration in Fig. 6, of the basic premise and the various errors associated with the post-processing Galerkin concept. The main premise is that the distance ∆_1 between the projection of the true solution (point 5) and the truncated Galerkin solution (point 3) is much smaller than the distance ∆_3 between point 3 and the true solution (point 1), the total error of the truncated Galerkin approximation [García-Archilla et al., 1999]. This motivates the need for post-processing, which establishes that the distance ∆_4 between point 1 and the post-processed Galerkin solution (point 2) is also much smaller than the total error (and comparable to ∆_1), as shown in Fig. 6.
Figure 6: (a) A schematic illustrating the benefits of the post-processing Galerkin methodology. A trajectory of the exact solution is shown on the manifold in (a_1, a_2, a_3) as a red solid curve, its final state denoted with an x marker and (1). The projection of the exact solution onto (a_1, a_2) is shown with a red dashed line, its final state denoted as (5). The trajectory integrated by using the approximate inertial form is shown with a black dashed line, its final state shown with a black square and denoted as (4). The trajectory integrated by using the truncated Galerkin is shown with a blue dashed line, its final state denoted as (3) and a blue square, and its post-processing (mapping) onto the manifold denoted as (2) and indicated by a blue x marker. The dotted line ∆_1 shows the distance between (1) and (5), the dotted line ∆_2 the distance between (2) and (3), and the dotted line ∆_3 the distance between (1) and (4). The main premise of post-processing Galerkin is that ∆_1 is much smaller than ∆_3.
Euler-Galerkin vs. neural-network AIMs: Chafee-Infante
For the Chafee-Infante equation we start by providing a comparison between the solution obtained with the three sine coefficients, here considered as the ground truth (cf. Fig. 7a), and the truncated equations with the first two modes. The different post-processing schemes are applied to the solution of the truncated equations at the end of the desired integration. The comparison between the two is shown in Fig. 7, where the reconstructed solution is shown with a blue dashed line and the ground truth simulation with a red line. The percent error along each step of the time integration, until time T = 5, is shown in Fig. 7b.
The solution of the 2D truncated dynamics is then corrected using the value of α_3(T) as predicted by a neural network (described in Sec. 3.1.1), using as inputs the values of α_1(T) and α_2(T) at the final time step t = T. The results are shown in Fig. 7c with a dashed blue line; included in the same figure, with a solid blue line, is the solution corrected with the theoretically (Euler-Galerkin AIM) derived value of α_3(T). The percent error along the integration time until T = 5 for the ML-derived α_3 is shown in Fig. 7d. Both the ML-derived and the theoretical corrections help recover the accuracy, and both lead to a mean absolute percent error (MAPE) of less than 1%. The MAPE is also computed at the same time instance (T = 5) but for 100 randomly selected initial conditions, for the 2D and the ML-corrected 2D model. This is shown in Fig. 7e, where the favorable effect of the correction on the mean absolute percentage error is clearly visualized. In these and in subsequent results, the MAPE refers to the point-wise average of the absolute percentage error in each sample.
As an alternative, it is also possible to correct the learned ODE in two dimensions, derived as described in Sec. 3.1.2. The accuracy achieved is similar to the accuracy of the true truncated 2D model.
Kuramoto-Sivashinsky: Data-driven latent spaces and their AIMs
The KS equation is selected in order to explore the application of the proposed methodology in cases where the minimum dimension of the Approximate Inertial Form (AIF) is not known a priori, although it has been argued to be three-dimensional [Jolly et al., 1990]. Nevertheless, the truncated dynamics are not always quantitatively close to the actual behavior. The latter will be addressed with "Gray-Box" modeling, whereas the former is an important challenge in the implementation of post-processing Galerkin methods and will be addressed here with two different approaches: nonlinear manifold learning, in particular Diffusion Maps, discussed in Sec. 3.2.1, and autoencoders, discussed in Sec. 3.2.2.
Learning the dimensionality of the latent space

Autoencoders
A collection of data is sampled for the KS parameter value ν = 33, at various time instances of the time integration, sufficiently close to or on the global attractor. The data are used as inputs to an autoencoder and are reduced by the encoder into a low-dimensional bottleneck layer which parametrizes an approximation of the inertial manifold. It is then possible to map back to an approximation of the high-dimensional variables with the decoder. The encoder/decoder components of the network can be used independently, as will be demonstrated in a subsequent section, to improve the accuracy of the reduced order model.
The three latent variables of the bottleneck layer are one-to-one functions of the first three sine coefficients, α_1, α_2 and α_3. This is shown in Fig. 8, where the three bottleneck variables are plotted and colored according to the three sine coefficients. The smooth color variation suggests a one-to-one correspondence between the latent and the ambient variables. It so happens that each one of the sine coefficients is also one-to-one with each of the Diffusion Maps coordinates (the comparison is shown in the SI). The one-to-one relationship between the leading sine coefficients and the autoencoder's latent variables L facilitates the second use case of the autoencoder we discussed earlier. This second use case utilizes the decoder to solve an inverse problem and map the leading sine coefficients ᾱ to the autoencoder's latent space. Since we showed that f : L → ᾱ is a locally invertible map, we can use the trained decoder and estimate L given ᾱ by solving the optimization problem described in Equation (15). Randomly sampled points from the training set were used as initial conditions for the optimization. After optimization, the decoder can be used to reconstruct the remaining sine coefficients, and from those the solution in u(x, t) space, from the values obtained in the autoencoder's latent space.
In Figure 10a, we contrast, for one reconstructed trajectory, (i) the true solution u(x, T) obtained from the full equations, (ii) the reconstructed solution based on the first three learned sine coefficients, and (iii) the reconstructed solution obtained by solving the inverse map and using the decoder to reconstruct the full solution. In Figure 10b we contrast (i) the three leading sine coefficients and (ii) the solution obtained after implementing the optimization step.
Diffusion Maps and their data-driven AIMs
Diffusion Maps is implemented to encode the high-dimensional data to a low-dimensional manifold parametrized by three Diffusion Maps coordinates, shown in Fig. 20 of the Appendix. The Diffusion Maps coordinates ϕ_1, ϕ_2 and ϕ_3 are one-to-one with the coefficients of the first three sine terms. This is shown in Fig. 20 of the Appendix by the smooth color transition in the Diffusion Maps plot when colored by α_1, α_2 and α_3.
The sine coefficients α_i are reconstructed with the help of Double DMAPs and by the decoder of the autoencoder, as discussed in Secs. 3.2.1 and 3.2.2, respectively. The MSE for the Double DMAPs approach is 0.00492, whereas for the autoencoder it is 0.0155. The precision of the autoencoder decreases for higher harmonics, which leads to the overall drop in accuracy of the reconstruction (this comparison is shown in Sec. A.3). Double DMAPs accurately predicts all the coefficients (this comparison is also shown in Sec. A.3).
Data-driven post-processing Galerkin
Having established that the first three sine coefficients are one-to-one with the data-driven latent variables, the next step is to learn a data-driven ODE for the time evolution of the first three reconstructed sine coefficients, as described in Sec. 3.1.2.
The feedforward neural network is trained using as inputs the values of α_1, α_2 and α_3 that are reconstructed by the latent space learning methods, i.e. autoencoders and DMAPs (the results of latent space identification and reconstruction of the high-dimensional variables are presented in the SI). The predicted time derivatives, i.e. the right-hand-side of the learned ODE for each of the sine coefficients, are pictured in Fig. 22 (of the SI) versus the actual values of each component's time derivative, α̇. The top row shows the right-hand-side predicted from the three sine coefficients resulting from the autoencoder, with MSE = 9.5. The respective predictions from the Double DMAPs reconstruction are shown on the bottom row, with MSE = 2.2.
The neural network-derived approximation is then used in conjunction with an ODE solver, such as Runge-Kutta, in order to integrate in time. The outcome of the integration is reconstructed in physical space and compared to the outcome of the ground truth integration (in 8D), and also to the reconstructed solution using only the first three modes of the ground truth. This is shown in Fig. 11, alongside the error between the learned 3D ODE and the actual 3 modes of the Galerkin approximation, which demonstrates that the learned ODE accurately predicts the low-dimensional time evolution of the first three modes.
When post-processing Galerkin works, when it does not work, and how to fix it
It is worth looking into the time evolution of the first three modes, α_1, α_2 and α_3, that results from the truncated 3D dynamics and comparing it to the evolution of the first three terms of the full 8D dynamics and of the learned AIF ODE. This is shown in Fig. 12, where it becomes evident that the first three terms of the learned 3D ODE stay close to the trajectory of the first three terms of the 8D Galerkin. In contrast, the truncated 3D dynamics deviate significantly, past a certain point in time, from the ground truth dynamics. This observation suggests that using a post-processing scheme directly on the truncated equations will not be able to correct the dynamics. This motivates us to use an ML-corrected, so-called "Gray-Box" model, discussed further in the next section.
In essence, the post-processing Galerkin method relies on the premise that the solution of the truncated problem is reasonably close to the projection of the ground-truth solution. Here it is demonstrated that, even though the AIM is indeed three-dimensional in the range of physical parameters examined, the truncated long-term dynamics in the three-dimensional space are not accurate. This is demonstrated further in Fig. 13, where the solution in physical space is reconstructed from the 8D Galerkin, the 3D Galerkin and the learned 3D AIF ODEs, at different instances along the same trajectory. At the initial stages of the trajectory, the solutions of the three methods are reasonably close, as shown in Fig. 13a. Later in time, the truncated 3D solution grows quantitatively further apart from the ground truth, whereas the learned 3D AIF ODE follows the 8D dynamics closely (Fig. 13b). This is made clear in Fig. 13c, where the percent error between the learned and the truncated 3D solutions is plotted along the trajectory. It can also be observed in the phase space shown in Fig. 13d, on the right, where the red point corresponds to the initial condition: the values of the sine coefficients initially evolve in a similar manner, but eventually the truncated 3D dynamics deviate.
Data-driven post-processing Galerkin
To recover the values of all the α_i's, necessary for accurate reconstruction, the first step involves predicting the latent variables: either the bottleneck variables from the autoencoder or, alternatively, the DMAPs latent coordinates. One way to achieve this is with a feedforward neural network with three inputs (the first three sine coefficients) and three outputs (the latent variables). In this implementation, the neural network consists of 5 hidden layers with 80 neurons each and a tanh activation function, implemented in TensorFlow [Abadi et al., 2015]. The mean squared error is used as the loss function, along with the Adam optimizer.
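A sketch mirroring the architecture just described (five hidden layers of 80 tanh neurons, MSE loss, Adam); the training arrays `coeffs3` and `latents` are placeholders standing in for the actual data.

```python
# 3 sine coefficients in -> 3 latent variables out, as described above.
import numpy as np
import tensorflow as tf

coeffs3 = np.random.rand(1000, 3)   # placeholder inputs
latents = np.random.rand(1000, 3)   # placeholder targets

latent_net = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(3,))]
    + [tf.keras.layers.Dense(80, activation="tanh") for _ in range(5)]
    + [tf.keras.layers.Dense(3)]
)
latent_net.compile(optimizer="adam", loss="mse")
latent_net.fit(coeffs3, latents, epochs=10, batch_size=64, verbose=0)
```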
It is then possible to employ either Double DMAPs (in the DMAPs case) or the decoder of the autoencoder, and predict the corresponding α_i's, with MSE = 0.09 in both cases. The reconstructed solution in physical space is compared to the ground truth in Fig. 14. If we have an accurate low-dimensional observation, we can correct it, in principle theoretically with Euler-Galerkin, or in practice with the machine learning approaches described above. If this is not available, then we proceed to improve the AIF itself through the Gray-Box approach.
The method's performance is demonstrated in Fig. 15 for two cases. In the first case (cf. Fig. 15a), the 3D, 8D and corrected Gray-Box dynamics are shown for the reconstructed physical-space solution at T1 = 0.02, when the truncated 3D dynamics are close to the ground truth. At a later time step, T2 = 0.05 (cf. Fig. 15b), the truncated dynamics have deviated far from the truth. The Gray-Box model corrects the deviation in both cases and accurately captures the ground truth with the addition of the post-processing terms, as seen in Fig. 15e.
Using POD coefficients to parametrize the IM/AIM
Here, the implementation of the proposed workflow is presented for the case where the manifold is parametrized by data-driven POD coefficients, rather than sine coefficients, for the Chafee-Infante equation. To start with, the POD modes that contain the greatest percentage of the variance of an ensemble of solutions in physical space are identified. Three POD modes represent 99.99% of the energy of the dataset (cf. Fig. 16a), defined as the percentage of the cumulative sum of the leading three eigenvalues over the sum of all the eigenvalues.
The original dataset is then projected onto the first three modes, so that each solution vector is represented by three coefficients. The mean absolute percent error of the dataset, projected on a basis consisting of 3 POD vectors, is 0.06% (cf. Fig. 16b). We use this collection of POD coefficients to discover the latent variables, here with an autoencoder with a 2-neuron bottleneck layer. The mean absolute percentage error achieved for the autoencoder-reconstructed POD coefficients is 1.2%. The latent variables are one-to-one with the two leading POD coefficients, as is evident in Fig. 17, where they are plotted and colored according to the values of the coefficients. The smooth color transition is indicative of the one-to-one relationship.
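A minimal numpy sketch of the POD step just described, via the SVD of the snapshot matrix; `snapshots` is a placeholder stand-in for the actual ensemble of solutions.

```python
# POD of the snapshot ensemble via the SVD; energy fraction and
# projection/lift as described above. All data here is placeholder.
import numpy as np

snapshots = np.random.rand(500, 64)          # placeholder snapshot matrix

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)      # cumulative "energy" fraction
k = int(np.searchsorted(energy, 0.9999)) + 1 # modes needed for 99.99%
modes = Vt[:k]                               # leading POD modes (rows)
coeffs = snapshots @ modes.T                 # project: k coefficients each
recon = coeffs @ modes                       # lift back to physical space
mape = np.mean(np.abs(recon - snapshots) / (np.abs(snapshots) + 1e-12)) * 100
```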
The time-evolution law (an ODE) for the two leading POD coefficients is then learned from data. This is achieved using a feedforward neural network consisting of two hidden layers with 20 neurons each, with the tanh activation function and the mean squared error as the loss function. The learned ODE is integrated with a Runge-Kutta solver over time T = 5. From the values of the two POD coefficients at the final time step, the latent variables are then inferred using an appropriately trained neural network. Then, the decoder of the autoencoder is used to recover the entire set of POD coefficients. It is then possible to "lift" from POD space to the sine coefficients and reconstruct the solution: the solution reconstructed using 3 ML-derived terms compares very favorably to the ground truth (cf. Fig. 18); a sketch of this evolve-and-lift pipeline is given below.

Figure 19: (a) A data set sampled from the singularly perturbed system of ODEs is shown with a black solid line. The span of the first POD mode (POD_1) is shown with a red vector and the span of the second POD mode (POD_2) with a blue vector. The projection of a data point (black solid circle) onto POD_1 and POD_2 is depicted. (b) The components of the first POD vector (POD_1) versus the components of the second POD vector (POD_2). POD_2 can be seen as a quadratic function of POD_1.
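An end-to-end sketch of the pipeline described above, under the assumption of pre-trained models: `pod_rhs_net` (the learned 2D POD dynamics), `latent_net2` (2 POD coefficients to 2 latent variables) and `decoder` (2 latent variables to the full POD coefficient set); all model names are hypothetical.

```python
# Evolve the 2D POD dynamics, infer latents, decode all POD
# coefficients, and lift back to physical space.
import numpy as np
from scipy.integrate import solve_ivp

def evolve_and_lift(c0, modes, pod_rhs_net, latent_net2, decoder, T=5.0):
    # 1. integrate the learned 2D POD dynamics up to time T
    rhs = lambda t, c: pod_rhs_net.predict(c.reshape(1, 2), verbose=0).ravel()
    c_T = solve_ivp(rhs, (0.0, T), c0, method="RK45").y[:, -1]
    # 2. infer the latent variables from the two leading POD coefficients
    z = latent_net2.predict(c_T.reshape(1, 2), verbose=0)
    # 3. decode the full set of POD coefficients and lift to physical space
    all_coeffs = decoder.predict(z, verbose=0).ravel()
    return all_coeffs @ modes   # reconstruct u(x, T) in the POD basis
```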
Conclusions
In conclusion, this study has attempted to bridge theoretical approaches to reduced-order modeling of dynamical systems (theoretical, closed-form approximations of AIMs and AIFs) with appropriately derived data-driven workflows. The data in question may consist of either (a) theoretical parametrizations of the IM (here, sine coefficients) or (b), equally possibly, data-driven parametrizations (POD coefficients, autoencoder latent variables, manifold-learning Diffusion Maps coordinates). The use of machine learning techniques, specifically autoencoders and Diffusion Maps, allows for accurate and efficient modeling of high-dimensional systems while overcoming the limitations of traditional post-processing Galerkin methods.
Moreover, the proposed approach has demonstrated promising results in scenarios where the low-dimensional ROM deviates significantly from the correct long-term dynamics, which was previously challenging to address with post-processing Galerkin techniques. The introduction of a "Gray-Box" model that adds a correction to the truncated Galerkin helps it regain its accuracy; it then allows for post-processing steps to recover even higher levels of accuracy in ambient space.
Overall, this work contributes to the growing body of literature on data-driven reduced order modeling techniques for dynamical systems and provides a valuable alternative to traditional post-processing Galerkin methods.The proposed workflows have the potential to significantly improve the accuracy and efficiency of reduced order models, which has important implications for a wide range of applications, including but not limited to, aerospace engineering, biomedical engineering, and climate modeling.
A promising future direction of our current work for the construction of reduced-order models is the combination of data-driven techniques with physics-based techniques. The work of Geelen et al. [2023], in which the parameterization of the data is achieved by combining linear subspaces (spanned by the first few POD vectors) and quadratic components, is the most pertinent to this direction. One could express the dynamics in terms of the first few POD vectors and use the quadratic correction only as a post-processing step, to obtain a more accurate reconstruction at the end of the integration. The ability to find a quadratic correction could provide improved explainability to the post-processing step, which we lose by learning a black-box post-processing step in our current work. A visualizable example is shown in Figure 19, where the 2-dimensional singularly perturbed system (ẋ = 2 − x − y; ẏ = (1/ε)(x − y)) was used to sample data. For this example one could write the dynamics in terms of POD_1 and express the correction from POD_2 = f(POD_1) through a quadratic correction, since POD_2 can be seen as a quadratic function of POD_1 (Figure 19(b)).

A.0.2 Euler-Galerkin algorithm applied to the Chafee-Infante PDE

The implementation of the Euler-Galerkin algorithm described in Sec. 3.1.1 is shown here for the Chafee-Infante reaction-diffusion equation. For this PDE, as discussed in Sec. 4.1, a two-dimensional inertial manifold exists (n = 2), parameterized by the first two sine Fourier modes α1, α2. By using the Galerkin projection u(x, t) ≈ Σ_{i=1}^{3} α_i(t) sin(ix), a system of three coupled ordinary differential equations is derived (Equation (18)). The term −9να3 on the right-hand side of Equation (18) corresponds to the diffusion term, and all the other terms of the right-hand side, denoted collectively here by R3(α1, α2, α3), to the reaction terms. We take an implicit Euler step of Equation (18) of length τ, using as initial condition α3(t = 0) = 0. This gives us the expression α3(τ) = α3(0) + τ α̇3(τ). By moving the diffusive term to the left-hand side and solving in terms of α3 we get the expression α3(τ) = (α3(0) + τ R3(α1, α2, α3(τ))) / (1 + 9ντ). We then perform one fixed-point iteration, setting α3 = 0 on the right-hand side and τ = 1. This leads to the Euler-Galerkin approximation α3 ≈ R3(α1, α2, 0) / (1 + 9ν) (Equation (21)). In our case, the Euler-Galerkin approximation in Equation (21) was used as one of the post-processing schemes to correct the solution û(x, T) computed from the truncated dynamics.
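A direct transcription of the algebra above into a tiny function; the reaction term `R3` (the non-diffusive part of Equation (18)) is left as a user-supplied callable, since its explicit Galerkin coefficients are not reproduced here.

```python
# One implicit Euler step of length tau for alpha_3, with a single
# fixed-point iteration started at alpha_3 = 0:
#   alpha_3(tau) * (1 + 9*nu*tau) = alpha_3(0) + tau * R3(a1, a2, 0)
def euler_galerkin_alpha3(a1, a2, R3, nu, tau=1.0, a3_init=0.0):
    return (a3_init + tau * R3(a1, a2, 0.0)) / (1.0 + 9.0 * nu * tau)
```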
basis in which we project and subsequently extend the function f. The projection of f onto this truncated basis is given as f ≈ Σ_i ⟨f, ψ_i⟩ ψ_i, where ⟨·, ·⟩ denotes the inner product. For φ_new ∉ Φ we obtain (Ef)(φ_new) by first extending each eigenvector ψ_i ∈ Ψ as ψ_i(φ_new) = (1/σ_i) Σ_j k(φ_new, φ_j) ψ_i(φ_j), where σ_i is the i-th eigenvalue, ψ_i(φ_j) is the j-th component of the eigenvector ψ_i, and k(·, ·) is the kernel used in the construction. The extended eigenvectors can then be used to estimate (Ef)(φ_new) as (Ef)(φ_new) = Σ_i ⟨f, ψ_i⟩ ψ_i(φ_new).

A.3 Inverse Function Theorem

Consider the vector function F(x) = y, and assume that x ∈ R^n is a solution of F and that F : R^n → R^n is differentiable. The Inverse Function Theorem [Marsden et al., 1993] states that, if the Jacobian matrix J_F(x) is nonsingular, then in a neighborhood of x and y the function F^{-1} exists. This suggests a unique local solution close to any y. The Jacobian matrix is invertible if and only if its determinant is nonzero; therefore, showing that det(J_F(x)) takes values of a single sign guarantees that the mapping is locally invertible and thus one-to-one.
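A sketch of the determinant check behind Fig. 9: computing the sign of det(J) of the square (3 to 3) decoder map from the latent variables to the three leading sine coefficients, via TensorFlow automatic differentiation; `decoder3` and `latents` below are placeholder stand-ins for the trained model and the data.

```python
# Check local invertibility of a 3 -> 3 map along the data: the
# determinant of its Jacobian should keep a single sign.
import numpy as np
import tensorflow as tf

decoder3 = tf.keras.Sequential([tf.keras.layers.Input(shape=(3,)),
                                tf.keras.layers.Dense(3)])  # placeholder model
latents = tf.constant(np.random.rand(100, 3), dtype=tf.float32)

with tf.GradientTape() as tape:
    tape.watch(latents)
    out = decoder3(latents)              # shape (n_samples, 3)
J = tape.batch_jacobian(out, latents)    # shape (n_samples, 3, 3)
dets = np.linalg.det(J.numpy())
single_signed = bool(np.all(dets > 0) or np.all(dets < 0))
print("locally invertible everywhere (single-signed det):", single_signed)
```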
Figure 1: Flowchart of the proposed workflow.
Figure 3: An example of a feed-forward neural network architecture for the approximation of the right-hand side of an evolution ODE.
Figure 4: Learning a low-dimensional embedding of data: (a) manifold learning with Diffusion Maps and the inverse transformation with Double Diffusion Maps [Evangelou et al., 2022]; (b) representative autoencoder structure, including the encoder/decoder and the bottleneck layer.
Figure 5: Schematic representation of the computational workflow. The first step involves learning a minimal representation of the Approximate Inertial Manifold, either with DMAPs or with an autoencoder, as well as the inverse transformation, i.e., from the latent variables to the sine coefficients. Secondly, the latent variables are learned as a function of the leading three sine or POD coefficients; finally, the full set of coefficients is predicted either with the decoder or with Double DMAPs.
Figure 6: (a) A schematic illustrating the benefits of the post-processing Galerkin methodology. A trajectory of the exact solution is shown on the manifold in (α1, α2, α3) as a red solid line, its final state denoted with an x marker and (1). The projection of the exact solution onto (α1, α2) is shown with a red dashed line, its final state denoted as (5). The trajectory integrated using the approximate inertial form is shown with a black dashed line, its final state shown with a black square and denoted as (4). The trajectory integrated using the truncated Galerkin is shown with a blue dashed line, its final state denoted as (3) and a blue square, and its post-processing (mapping) onto the manifold denoted as (2) and indicated by a blue x marker. The dotted line ∆1 shows the distance between (1) and (5), the dotted line ∆2 the distance between (2) and (3), the dotted line ∆3 the distance between (1) and (4), and the dotted line ∆4 the distance between (1) and (2). The main premise of post-processing Galerkin is that ∆1 and ∆4 are much smaller than ∆2. (b) The same components used in (a), shown for the Chafee-Infante PDE. (c) The reconstructed solution u(x, T) for all possible options. (d) A blow-up of the reconstruction of u(x, T).
Figure 7: Solution of the Chafee-Infante equation reconstructed in physical space. (a) The results of the 3D and the 2D Galerkin are shown as red and dashed black lines, respectively. (b) Percent error of the reconstructed 2D Galerkin solution at each time step. (c) Comparison of the 3D Galerkin (red line) to the 2D Galerkin corrected with the neural network-derived term (dashed blue line) and to the 2D Galerkin corrected with the theoretically derived α3. (d) Percent error of the reconstructed solution of the 2D ODE, corrected with the ML-derived α3, at each time step. (e) Histogram of the mean absolute percent error of the 2D and ML-corrected 2D models at time T = 5, for 100 randomly selected initial conditions.
Figure 8: Latent variables of the autoencoder bottleneck layer: the three latent variables colored by the value of the first three sine coefficients, α1 (left), α2 (center) and α3 (right). Smoothness of the color gradation suggests a one-to-one relation.
Figure 9: The histogram of the determinant of the Jacobian, det(J_f(L)), computed along the training and test sets with automatic differentiation of the decoder.
Figure 11: (a) Reconstructed solution of the learned 3D equation (broken line) and the actual 3D dynamics (solid line), showing almost perfect agreement. (b) Percent error of the learned and true 3D solutions along the integration time. (c) Histogram of the mean absolute percent error of the learned 3D model at time T = 0.06, for 100 randomly selected initial conditions.
Figure 12: Comparison of the time evolution of the first three sine coefficients of the Galerkin discretization: 8-dimensional (red), learned 3-dimensional (black), and truncated 3-dimensional (blue).
Figure 13: Comparison of the solution, in physical space, at two time instances: (a) T1, where the 3D and 8D dynamics are sufficiently close, and (b) T2, when they are quite far apart. (c) Percent error between the truncated and the learned 3D AIF dynamics. (d) Comparison of the time evolution of the first three sine coefficients of the Galerkin discretization: 8-dimensional (solid line), learned 3-dimensional (broken line), and truncated 3-dimensional (dotted line).

Figure 14: (a) Comparison, at α = 33 and T = 0.5, of the reconstructed Kuramoto-Sivashinsky solution between the 8-dimensional Galerkin (solid red line) and the 3-dimensional learned AIF ODE corrected with the decoder-derived higher-harmonic terms (broken blue line). (b) Percent error over the time integration interval. (c) Comparison, at α = 33 and T = 0.5, of the reconstructed Kuramoto-Sivashinsky solution between the 8-dimensional Galerkin (solid red line) and the 3-dimensional learned AIF ODE corrected with the DMAPs-derived higher-harmonic terms (broken blue line). (d) Percent error along the time span of integration. (e) Histograms of the MAPE of the autoencoder-corrected and DMAPs-corrected solutions at time T = 0.5, for 100 randomly selected initial conditions.
Figure 15: Gray-Box correction of the 3D Galerkin dynamics. Comparison of the solution, in physical space, at two time instances: (a) T1 = 0.02, where the 3D and 8D Galerkin dynamics are sufficiently close, and (b) T2 = 0.05, when they are quite far apart. (c) Percent error between the truncated and the Gray-Box 3D dynamics. (d) Comparison of the time evolution of the first three sine coefficients of the Galerkin discretization: 8-dimensional (solid red line), Gray-Box 3-dimensional (broken blue line), and truncated 3-dimensional (solid blue line). (e) Comparison of the solution, in physical space, of the corrected Gray-Box with the 8D Galerkin at T2 = 0.05.

Figure 16: (a) Percent energy of the data contained by a progressively increasing POD basis size; 3 POD modes represent 99.99% of the variance. (b) MAPE of the dataset projected on a progressively increasing POD basis, with respect to the original; when considering 3 POD modes, the MAPE drops to 0.06%.
Figure 17: Latent variables discovered by the autoencoder, whose inputs are the POD coefficient values. The smooth color transition implies that the latent variables are one-to-one with the leading POD coefficients.
Figure 18: Reconstructed solution of the Chafee-Infante equation at time T = 5. The red line represents the ground-truth 8D Galerkin solution, which corresponds to 3 POD coefficients. The black broken line corresponds to the data-driven post-processed solution of the evolution of 2 POD coefficients. The uncorrected solution derived from 2 POD coefficients is also depicted, with a blue broken line.
Figure 22: Performance of the neural network predicting the right-hand side of the learned ODE for the first three coefficients: actual versus learned time derivatives of α1 (left), α2 (center) and α3 (right), from the autoencoder- (top) and Double DMAPs- (bottom) derived values of the sine coefficients.
Post-Processing Galerkin for the finite element method

Let X be the phase space of a nonlinear dissipative evolution equation of the form du/dt + νAu = F(u). Let X_H be a finite-dimensional (e.g., finite element) space of spatial scale H, with P_H : X → X_H an orthogonal projection. The Galerkin approximate solution u_H ∈ X_H satisfies the equation du_H/dt + νP_H A u_H = P_H F(u_H), for t ∈ [0, T]. Therefore, for a given Galerkin solution u_H and its time derivative du_H/dt over the interval [0, T], the post-processed Galerkin solution is a function v ∈ X (notice it is not in the complement of X_H, i.e., not in X ⊖ X_H, but in X, so v involves both coarse as well as fine spatial scales), such that v satisfies νAv = F(u_H(T)) − du_H/dt(T). The right-hand side is a given function at time t = T, and v solves a linear elliptic equation. However, in practice we solve for an approximation of v, say ṽ ∈ X_h, where h ≪ H and X_h is a finer finite element space.
Endometrial osseous metaplasia complicated by secondary infertility: a case report
Endometrial osseous metaplasia is a rare condition in which bone is abnormally present in the endometrium. There is a paucity of reported cases of this pathological condition in Africa, and it is usually overlooked as a cause of infertility. Its pathogenesis is not well understood, but it mostly occurs following pregnancy. The pathology may be suspected on ultrasound scan, where a linear echogenic structure is seen in the endometrium, but the diagnosis is confirmed, and the condition treated, by hysteroscopy. We present the case of a 43-year-old woman with 2 previous miscarriages who presented with secondary infertility. She had an ultrasound scan which revealed features suggestive of an intra-uterine copper device. She subsequently had hysteroscopy, and a bone-like foreign body was found in the endometrium, which was removed with the aid of a grasper and later sent for histopathological evaluation, on which a diagnosis of endometrial osseous metaplasia was made. Indeed, endometrial osseous metaplasia should be considered an important differential cause of secondary infertility, especially in patients with a history of previous miscarriage. A confirmatory diagnosis can be made through hysteroscopy and/or histopathology, although the former is now preferred.
Introduction
Endometrial osseous metaplasia is an infrequent entity involving the presence of mature or immature bone in the endometrium [1]. The incidence is about 3 in 10,000, and most affected women are of reproductive age [2,3]. In the majority of reported cases, the women presented with infertility; however, they may present with other gynecological symptoms. There seems to be an association between previous spontaneous or induced abortion and this disease entity [4-6]. Different theories have been postulated to describe the pathogenesis of endometrial osseous metaplasia, but metaplasia of the endometrial stromal cells, particularly fibroblasts, into bone-forming osteoblasts is the most accepted [4,7]. Although pelvic ultrasound scan can aid in the diagnosis, the gold standard for treatment is hysteroscopy [6,8-10]. In most reported cases, fertility is restored after treatment. Here, we present the case of a 43-year-old woman with secondary infertility who was managed hysteroscopically.
Patient and observation
Patient information: a 43-year-old woman presented to our gynecological clinic for hysteroscopy in March 2021, on a referral basis, as part of her evaluation for secondary infertility. She was being managed for infertility of 2 years' duration. She had had 2 previous miscarriages. The first was a voluntary termination of pregnancy 25 years ago, at 12 weeks of gestation, using manual vacuum aspiration. She also had a spontaneous termination of pregnancy a year ago, at 8 weeks' gestation; retained products of conception were evacuated using manual vacuum aspiration. In both cases, there were no immediate post-abortal complications. She had regular menstrual periods. The family and psychosocial history revealed that she is married in a monogamous setting; her husband is 45 years old, and they both have a tertiary level of education. She does not smoke, take alcohol, or use recreational drugs. There was no family history of infertility known to her. However, there was a history of galactorrhea, for which she was being treated with bromocriptine.
Clinical findings: at presentation to the obstetrics and gynecology clinic, physical examination showed a young woman, not ill-looking or pale, anicteric, not dehydrated, with no palpable peripheral lymph nodes and no pedal oedema. On speculum examination, a healthy-looking cervix and vaginal walls were seen. Furthermore, on bimanual examination, a normal-sized anteverted uterus with no adnexal mass or tenderness was found. Additionally, there was no cervical excitation tenderness.
Diagnostic assessment: she had transvaginal ultrasonography and hysterosalpingography sequentially, 3 days after her first day of presentation, while her husband's seminal fluid was collected for analysis and culture 1 week later. The transvaginal ultrasound scan revealed a linear echogenic structure, with the median aspect of the mass in the posterior endometrium. The hysterosalpingography showed bilateral tubal patency. In addition, the husband's seminal fluid analysis and culture results, which became available 3 days following collection, showed normal findings. Indeed, there were no diagnostic challenges.
Diagnosis:
A presumptive diagnosis of a malpositioned intrauterine device was made following the transvaginal ultrasonography findings.
Therapeutic interventions: 7 days following hysterosalpingography, hysteroscopy was performed using saline as the distension medium. It revealed a spicule of bone-like foreign material, about 2 cm long, in the posterior wall of the uterus. The other parts of the endometrial cavity looked grossly normal.
Follow-up and outcome of interventions: the patient was clinically stable post-procedure, with no adverse effects. The specimen discovered on hysteroscopy was sent to the laboratory for histology on the same day as the procedure. The histology result, which became available 14 days from the day of request, revealed a hard, cord-like, greyish-white tissue measuring 1.5 x 0.2 x 0.2 cm in the antero-posterior diameter. Microscopically, sections showed mature bony trabeculae with non-hemopoietic bone marrow within a bland fibrocollagenous stroma; occasional chronic inflammatory cells were also seen (Figure 1A, B, C). The diagnosis of endometrial osseous metaplasia was made. She was subsequently referred back to her primary place of care 24 days from her first day of presentation. Patient perspective: "I had a reassuring experience receiving treatments at Babcock University Teaching Hospital. Treatments and laboratory tests were prompt, and my condition was explained to me in language I understood. I understood that although my condition is associated with inability to conceive, there may be other reasons why there is delay. I am hopeful of getting pregnant soon. Thank you, Babcock."
Informed consent: written consent was obtained from the patient to publish images and clinical information relating to the case in any medical publication of choice for the purpose of knowledge sharing and educating the public. The patient was made to understand that no identifying information will be collected or published.
Discussion
Endometrial osseous metaplasia is an endogenous, non-neoplastic pathological disorder [11]. It has also been termed endometrial ossification, ectopic intrauterine bone and heterotopic intrauterine bone, and it can also be found in the ovaries, cervix and vagina [9]. It is a rare clinical entity, with an incidence of 3/10,000, and fewer than 100 cases have been reported in the literature [12]. Okohue et al. reported an incidence of 0.26% in 1002 hysteroscopies in a study done in Nigeria [13]. Even though women in the reproductive age group are mostly affected, a case has been reported in a post-menopausal woman [7,9]. The presence of bony tissue has been linked to abortion since 1923, with more than 80% of affected women reported to have an antecedent history of first-trimester abortion, either spontaneous or induced [8]. However, it has also been reported in women with no history of prior pregnancy [14]. In line with the literature, the index patient was in the reproductive age group and had a bone-like foreign body, discovered at the posterior aspect of her uterus during hysteroscopy, which was confirmed histologically to be bone tissue. Additionally, prior to presentation, she had had two previous miscarriages, one induced and the other spontaneous, both occurring within the first trimester. The time gap between a previous abortion and the diagnosis of endometrial osseous metaplasia varies between 8 weeks and 14 years in most cases, though an interval of 37 years has been reported [3]. The time differences in our patient were 25 years and one year, respectively, for the two previous miscarriages. However, delayed diagnosis may be due to late presentation for the management of infertility.
Patients typically present with secondary infertility in the reported cases, but may also present with oligomenorrhea, menorrhagia, dyspareunia, pelvic pain and vaginal discharge [6,10]; they may also be asymptomatic [5]. In our case, the patient presented with infertility. Indeed, the presence of bony tissue in the endometrium may act as an intrauterine contraceptive device, elevating the levels of prostaglandins and thus preventing implantation of the blastocyst [15]. However, our patient's infertility may not be due only to endometrial osseous metaplasia, as her age and galactorrhea may also be contributing factors. Theories for the development of heterotopic bone in the endometrium are controversial, but the most favored mechanism is osseous metaplasia from multipotential stromal cells, usually fibroblasts, which subsequently become osteoblasts [3]. Other suggested mechanisms include: retention of fetal bones that secondarily promote osteogenesis in the surrounding endometrium [8], which may be the mechanism in the index case, considering she had an induced abortion at 13 weeks of gestation, when osteogenesis would have started; continuous and strong endometrial estrogenic stimulation; implantation of embryonic parts without preexisting bone after abortions at an early stage; and dystrophic calcification of retained and necrotic tissues, usually after an abortion [8]. Additionally, chronic endometrial inflammation, such as endometritis or pyometra, can also play a role by stimulating mesenchymal cells, which have the capacity to undergo metaplasia and differentiate into chondroblasts or osteoblasts [8,9]. Furthermore, chronic inflammation may stimulate mononuclear phagocytes to release tumor necrosis factor and superoxide radicals, leading to a long-lasting insult to the pluripotent stromal cells and transforming them into osteoblasts in endometrium deficient in superoxide dismutase [8]. In addition, a case of endometrial osseous metaplasia due to metabolic disorders, such as hypercalcemia, hypervitaminosis D or hyperphosphatemia, has been reported [2].
The differential diagnosis of endometrial osseous metaplasia includes an intra-uterine contraceptive device, malignant mixed Müllerian tumour, endometrial tuberculosis and retained fetal bone in the uterus [1]. The incidence of retained fetal bone in Nigeria is 0.15% [16]. Although the presence of retained bone in the uterus gives a similar history and symptoms, the absence of endochondral ossification and of surrounding tissue reaction may differentiate it from endometrial osseous metaplasia [8]. Even though in our case there was no endochondral ossification, there was non-hemopoietic bone marrow, which has been previously reported [17]. Additionally, ultrasound scan plays an important role in the diagnosis of endometrial osseous metaplasia; here, the presence of a hyperechogenic pattern in the endometrium is suggestive [8]. In our case, a linear echogenic structure was seen in the endometrium. Modalities of treatment include dilatation and curettage, hysterectomy and hysteroscopy, with hysteroscopy considered the gold standard [5,9,10]. Furthermore, in cases of widespread ossification of the endometrium, hysteroscopy can be performed under ultrasound or laparoscopic guidance to prevent perforation [18]. Fertility usually returns after treatment in most reported cases [3,8], but this has not yet been confirmed in the index patient, as she has been referred back to her primary place of care.
Conclusion
Endometrial osseous metaplasia is an uncommon but treatable cause of secondary infertility. Physicians must have a high index of suspicion for diagnosis to be made early. Hysteroscopy is now the method of choice for diagnosis and treatment.
The Relationship between Phonology and Inflectional Morphology in an Agrammatic Aphasic
The interaction between phonological and morphological breakdown in an agrammatic aphasic was investigated. Three linguistic tasks were constructed which were presented via two modes, reading and repetition. Results revealed that purely phonological consonant clusters were easier than clusters which contain a morphological component, and that these categories could be differentiated in terms of phonological error type. Inflectional omission was conditioned by phonological characteristics of the preceding segment. There was an interaction between the phonological and morphological hierarchies of difficulty in inflections which are homonyms phonologically. Findings suggest an interdependence between phonological and morphological breakdown in the agrammatic aphasic examined. Results were discussed with reference to clinical implications.

OPSOMMING (translated from the Afrikaans): The interaction between phonological and morphological breakdown in an agrammatic aphasia patient was investigated. Three linguistic tasks were constructed. The patient had to carry out the tasks auditorily (by means of repetition) and visually (by means of reading). Results indicate that purely phonological consonant clusters were easier to produce than clusters that contained a morphological component, and that these categories could be differentiated in terms of the type of phonological errors. The phonological characteristics of the preceding segment determined inflectional omissions. There was interaction between the phonological and morphological hierarchies of difficulty for inflections that are phonological homonyms. The findings point to an interdependence between phonological and morphological breakdown in this patient. The results are discussed with reference to clinical implications.
To date, the trend within psycholinguistic aphasia research has been to focus on the components of language (syntax, semantics and phonology) in isolation, rather than to investigate interrelationships between these levels of linguistic breakdown. The symptomatology of agrammatic aphasics, particularly their tendency to delete inflectional morphemes and their high proportion of phonemic errors, provides a unique opportunity to examine the mutual influence of phonologically and morphologically impaired systems.
Independent research into phonology and inflectional morphology has been well documented. Articulatory investigations have resulted in conflicting opinions as regards the nature of aphasic error performance. Johns and Darley (1970) and Shankweiler and Harris (1973), for example, support the notion that phonemic substitutions are primarily random, variable and unrelated to the target sound. Other investigators suggest that aphasic articulatory errors reflect systematic, rule-governed variations from the target phonemes (Blumstein, 1973; Marquardt, Reinhart and Peterson, 1979).
Studies exploring the performance of agrammatic aphasics on inflectional endings reflect a consistent hierarchy of difficulty for the various morphemes (Goodglass and Berko, 1960; Goodglass, 1976). De Villiers (1974) contends that explanations such as transformational complexity, semantic complexity, stress, redundancy and frequency of occurrence of each morpheme in normal adult speech are insufficient to explain the hierarchical morphemic impairment. This suggests that alternative explanations should be sought.
A number of theories have been proposed to account for the underlying deficits in agrammatism. Kean (1977) contends that agrammatism is an "... interaction between an impaired phonological capacity and otherwise intact linguistic capacities" (p. 10). This controversial phonological explanation has subsequently been criticized. Garman (1981) suggests that a number of Kean's arguments are based on misinterpretations of the existing literature. Kolk (1978) argues that although a phonological approach may have value with respect to the 'articulation' impairment in agrammatism, it does not provide a convincing argument to explain the syntactic omissions characteristic of these patients. Goodglass and Berko (1960) take an opposing view to Kean (1977) and suggest that grammatical function is more important than phonological structure in determining the difficulty of an inflectional ending. This theory is based on their finding that the plural, possessive and third person singular inflectional morphemes (all of which are homonyms phonologically) are omitted with differential frequency in agrammatic aphasics (Goodglass and Berko, 1960). Martin, Wasserman, Gilden, Gerstman and West (1975) suggest that neither a purely phonological nor a purely morphological breakdown is sufficient to explain aphasic error performance. They propose that "... it is the interaction of processes which is affected in aphasia rather than a specific impairment of a particular process or component" (p. 449). This interactional model between phonological and morphological impairment has not been confirmed in the aphasia literature. However, several studies in child language have shown an interaction between syntax and phonology (Menyuk and Looney, 1972; Paul and Shriberg, 1982).
The paucity of research into the relationship between linguistic components in aphasia provided a strong motivation for this study. The broad goal was thus to investigate the inter-relationship between phonological and morphological impairment in the expressive language of an agrammatic aphasic. The specific aims were: 1. To compare the subject's error performance on consonant clusters which are purely phonological constructions (PC); clusters which are phonological constructions but with morphological possibilities (PCM); and clusters which are morphological combinations (MC). 2. To establish whether the subject's omission of inflectional morphemes is conditioned by the sonorance hierarchy of the preceding segment, as suggested by Kean (1977). 3. To examine the subject's production of three grammatical morphemes which are homonyms phonologically, namely the plural marker, the possessive marker and the third person singular, all of which are realized morphophonemically by the allophones /s,z,az/.
SUBJECT
The subject used in this study was R.P., a white, South African, English-speaking female, aged thirty-eight years. In December 1978, she presented with a sudden onset of expressive aphasia. Computerized tomography revealed a left middle cerebral artery infarct, the etiology of which was unknown. No further neurological details were available. Pre-morbidly, she was right-handed. R.P. fulfilled the following criteria: 1. She was a moderately impaired agrammatic aphasic, as assessed on the Boston Diagnostic Aphasia Examination (BDAE) (Goodglass and Kaplan, 1972). 2. She demonstrated phonemic errors, particularly on consonant clusters. 3. Her expressive language was characterized by omission of inflectional morphemes. 4. Dysarthria and oro-facial apraxia were excluded as being causally related to the phonemic errors. 5. Phonemic discrimination abilities were excluded as being etiologically related to the phonemic errors. 6. She demonstrated a competence for the tasks on which she would be expected to perform; more specifically, reading and auditory comprehension abilities, as assessed on the BDAE, were sufficiently intact to enable these modalities to be utilized in testing. 7. Her mother tongue was English. 8. Peripheral hearing and vision were within normal limits. 9. She was neurologically stable during the test period.
A. Preliminary Investigations
On the BDAE, R.P. obtained a profile representing Broca's (agrammatic) aphasia. Results served to satisfy some of the criteria for subject selection, specifically her relatively intact receptive language and reading abilities and the presence of phonemic and morphological errors.
On the Goldman Fristoe Test of Articulation (Goldman and Fristoe, 1969) R.P. showed several articulation errors on both single phonemes and phonemic sequences, verifying the presence of phonemic errors in meaningful words as elicited on a naming task.
On a test of Ten English Inflectional Morphemes, designed by the authors, R.P. demonstrated inflectional omission. In accordance with the format proposed by Goodglass and Berko (1960), a sentence completion test was constructed to assess the following morphemes: plural /s,z/; plural /az/; past /t,d/; past /ad/; present singular /s,z/; present singular /az/; possessive /s,z/; possessive /az/; comparative /a/; superlative /ast/. The test included six opportunities for the use of each morpheme selected. The following is an example of an item (plural): "I bought a large pot, a medium-sized pot and a small pot. Altogether I bought three -?".
On the Goldman Fristoe Test of Auditory Discrimination (Goldman, Fristoe and Woodcock, 1970), administered in order to verify the subject's competence for discriminating between single consonants, R.P. scored 100%, indicating no errors on this standardized test of auditory discrimination. R.P. responded adequately at all frequencies on a screening pure-tone audiometric test, indicating that hearing was within normal limits.
B. Tasks
All tasks designed for the purpose of this study were evaluated by means of a pilot study on three normal adults.
CCVCC word list
A list of 150 CCVCC words (Appendix I) was devised in accordance with the format proposed by Martin et al. (1975). The stimuli were divided into three groups of fifty words each. In the first group, the final cluster was a purely phonological construction (PC), such as /mp/ in 'cramp'. In the second group, the final consonant cluster was a phonological construction, but the final segment belonged to the group /s,z,t,d/ and therefore suggested the possibility of a morpheme (Martin et al., 1975); an example of a phonological construction with morphological possibilities (PCM) is /st/ in 'breast'. The third group contained final consonant clusters which were morphological combinations (MC), such as /st/ in 'dressed'. The inflections included in the MC list in the present study were limited to the plural /s,z/ and past /t,d/, and in order to maintain uniformity, words in the PCM group were limited to the phonemes /s,z,t,d/ in final consonant position.
Sonorance-Inflection word list
A list of 150 words was composed (Appendix II). Each word was a combination of a stem morpheme (CV or CVC) and an inflectional morpheme (past /d/ or plural /z/), for example 'bees', 'called'. The stem morphemes were divided into five groups of thirty words each, according to the sonorance hierarchy of the final segment of the stem. Sonorance was used to refer to the extent to which the airflow is impeded during the articulation of a segment (Kean, 1977). The five categories of final stem segments, arranged hierarchically from the most sonorant (least impeded airflow) to the least sonorant (most impeded airflow), were: vowels and diphthongs, liquids, nasals, fricatives and stops, respectively. Within each group, fifteen words were combined with the plural inflectional allophone /z/ and fifteen words with the past inflectional allophone /d/.

Rationale for selecting the allophones /z/ and /d/

Since stems ending in a vowel are constrained by morphophonemic rules to take a voiced allophone, voiced allophones were used throughout. Several studies in the aphasic literature have shown that the plural is a relatively well-retained morpheme, whereas the past regular is a frequently omitted morpheme (Goodglass and Berko, 1960; de Villiers, 1974). These two morphemes were assessed in an attempt to control for the possibility of obtaining too few omissions (exclusive use of the plural) or too many omissions (exclusive use of the past) for between-group comparison. Martin et al. (1975) contend that the number of phonemes within a syllable is not significant in aphasic error performance on a given phoneme. Johns and Darley (1970) suggest that the number of syllables is an important factor in error performance. For these reasons, all stem and morpheme combinations were restricted to monosyllabic words of the structure CVC or CVCC. No initial clusters were included, and an attempt was made to randomly vary the consonants utilized in initial position.
Phrase/Sentence list of plural, possessive and third person singular
In order to compare R.P.'s production of the plural marker, the possessive marker and the third person singular morpheme, a list of 135 sentences/phrases was compiled (Appendix III). The stimuli were divided into nine groups, so that each allophone /s,z,az/ of each morpheme was tested fifteen times. Phrases were constructed since the possessive nature of a stimulus cannot be inferred from a single word; for example, 'horse's' in a repetition task would be interpreted as a plural. It was felt that a minimum of four syllables was necessary to convey the possessive nature of a stimulus, for example 'the horse's mouth'. All stimuli therefore comprised four syllables.
C. Administration of Tasks
Each list was administered using two modes of presentation.
1. An auditory mode: repetition. 2. A visual mode: reading. Two modes of presentation were selected because the stringent criteria adopted in test construction limited the number of stimuli available in certain groups. Due to the specific nature of the areas being investigated, a spontaneous sample, which may be considered an ideal medium for linguistic investigation, would not have enabled sufficient sampling of all aspects under study.
For repetition tasks, R.P. was instructed to repeat each item after the experimenter. If no response was given, the item was repeated.

For reading tasks, each item was printed clearly and individually in 10 mm capital letters. Word items were printed on 7 cm by 9 cm cards and phrase/sentence items on 14 cm by 9 cm cards. Each card was presented singly to R.P. and she was instructed to read it aloud.
Testing was carried out on two different days for approximately forty-five minute periods in order to control for fatigue.
D. Analysis Procedure and Scoring
All responses were recorded on a Revox tape recorder (model 375, Dolby version) and subsequently transcribed in broad phonetic script by three independent transcribers. A two-out-of-three consensus was accepted for each word.

Analysis procedures specific to particular tasks

I. CCVCC word list: a) A frequency count of correct versus incorrect initial and final clusters in the three categories was carried out. b) Phonological errors occurring in final clusters were differentiated according to type, on the basis of two broad categories, namely sequencing and substitution errors. Sequencing errors, for the purposes of this study, included additions, omissions and metatheses. In instances where a number of phonological errors occurred in one cluster, each was tabulated separately; for example, /st/ → /tz/ was scored as both a sequencing and a substitution error.
Sonorance-Inflection word list
A frequency count of morphemes omitted, retained and incorrectly produced was carried out. The incorrect category included instances where R.P. retained a morpheme, but not the particular morpheme under stimulation, for example the allophone /z/ instead of /d/. Results were expressed as percentages.
Phrase/Sentence list of plural, possessive and third person singular
A frequency count of retained morphemes was carried out. A morpheme was considered as retained even if R.P.'s allophonic realization was not entirely accurate. For example, 'wishes' (third person singular) was realized by R.P. as /wijs/, and this was scored as a retained inflection.
RESULTS AND DISCUSSION
Results of R.P.'s performance on each task will be presented individually, and overall trends will be discussed in relation to the stated aims of this study.
A comparison of R.P.'s error performance on PC, PCM and MC consonant clusters

a) Frequency count of correct versus incorrect consonant clusters, irrespective of the number of phonemic errors occurring in each.
Results of the present study were not entirely consistent with Martin et al.'s (1975) prediction of increased difficulty in the progression PC → PCM → MC. However, the fact that incorrect final clusters increased in the direction PC → MC supports the contention that, due to the added cognitive decision component, a CCVCC word with two morphemes (e.g. 'dressed') would be more difficult for an aphasic to process than a CCVCC word which has one morpheme (e.g. 'trump') (Martin et al., 1975).
The fact that the PCM category reflected the highest frequency of incorrect clusters is difficult to explain. It is felt that the PCM category, as proposed by Martin et al. (1975), needs careful consideration. Whether in fact the /st/ cluster in a word such as 'breast', for example, is interpreted as a possible morphological combination by the aphasic is open to speculation. However, the results of the present study suggest that further research into whether the PCM category is conceptualized as a phonological or a morphological construction, and whether such a category is in fact valid, could be of value in providing insight into the interaction between these two linguistic components.
The finding that initial clusters are more likely to be correctly produced than final clusters is consistent with that of Martin et al. (1975), who contend that the final cluster position may suggest the possibility of a morphological component, which would thus pose a more difficult processing task for the aphasic. The distribution of error types supports the contention that "... the substitution error is more indicative of a basic phonological impairment, while sequencing errors are more indicative of interactions between the phonological and morphological components" (Martin et al., 1975, p. 446). The approximately equal error distribution in the PCM category seems to suggest the need for further research into the aphasic's conceptualization of this group, as discussed above.
Frequency count of omitted inflections as a function of the sonorance hierarchy of the preceding segment
Table 4 represents a summary of morphemes retained, omitted and incorrectly produced, expressed in relation to N. Omission of the morpheme increased in the order V (least omitted) → S → L → N → F (most omitted), where (V), (S), (L), (N) and (F) represent the sonorance category of the final segment of the stem. More inflections were retained following vowels than consonants. Within the consonantal group, the morpheme was most likely to be omitted when preceded by a fricative and least likely to be omitted when preceded by a stop. Kean (1977) hypothesized that omission of the morpheme would increase as the airflow in the articulation of a segment became more impeded, that is, in the order V (least omitted) → L → N → F → S (most omitted). This contention was not supported by the present results. A possible explanation for the finding that the morpheme is more likely to be retained following a vowel than a consonant may be related to the syllable structure of the words included in this task. Stem morphemes ending in vowels were of the construction CV (e.g. 'bee'), while those ending in consonants were of the construction CVC (e.g. 'dog'). Addition of the morpheme resulted in CVC stimuli for the vowel category (e.g. 'bees') and CVCC stimuli for the consonant category (e.g. 'dogs'). Therefore, retention of the inflection when the stem ends in a vowel, and omission when it ends in a consonant, may reflect a strategy to maintain the CVC syllable structure form. There is thus clear evidence to suggest that this subject has a tendency to employ simplification processes. Shankweiler and Harris (1973) suggest that vowels are easier for aphasics to produce than consonants, and that within the consonantal group, fricatives and affricates are more susceptible to error than other phonemes. The present findings suggest that the omission of inflections may be conditioned by the susceptibility to error or 'complexity of articulation' of the preceding segment.
It appears that although R.P.'s inflectional omission was not conditioned strictly by the sonorance hierarchy of the preceding segment, omission and retention were influenced by certain phonological characteristics of this segment, as well as by the overall syllable structure of the word. If her inflectional deletions were solely attributable to a syntactic impairment, an equal percentage of omissions would have been expected across all groups. Verification of the present trends on a large group of agrammatic aphasics, assessing a variety of inflectional morphemes, may provide strong evidence for an interaction between phonological and morphological breakdown.
Frequency count of retained plural, possessive and third person singular morphemes as a function of their stimulus allophonic realization
Morphological Complexity

The hierarchy of grammatical difficulty exhibited by R.P. is consistent with reports in the literature (de Villiers, 1974; Goodglass, 1976).

Phonological Complexity

For the purposes of the present study, any realization of the allophone was tabulated as a retention of the stimulus allophone. This phonological scoring procedure precluded strict comparison with other writers, who considered the allophone as either correct or incorrect. However, the fact that R.P. retained the syllabic allophone /əz/ with greater frequency than the non-syllabic forms /s, z/ is consistent with the findings of Goodglass (1976) and in opposition to those of de Villiers (1974). Goodglass (1976, p. 250) ascribes the greater retention of the syllabic form /əz/ to the added 'saliency' of the extra syllable. He states that "there is no basis at present for anything but a first order intuitive definition of saliency as the resultant of information load, affective tone, increased amplitude and intonational stress" (Goodglass, 1976, p. 253). It is clear that this definition of saliency includes both receptive and expressive components. Therefore, if saliency, as delineated above by Goodglass (1976), were the sole explanation for the present findings, greater retention of the voiced /z/ as opposed to the unvoiced /s/ would have been expected, particularly on repetition tasks. However, the fact that R.P. showed greater retention of /s/ as opposed to /z/ suggests that alternative explanations, possibly with phonological implications, should be sought. Wolk (1978) reported that voiced fricatives may be more susceptible to error in aphasics than their voiceless cognates, which may explain R.P.'s greater retention of the stimulus allophone /s/ as opposed to /z/.

Table 5 clearly illustrates that the frequency of morpheme retention increases in the progression: possessive (least retained) -> third person singular -> plural (most retained). The frequency of allophonic retention increases in the progression /z/ (least retained) -> /s/ -> /əz/ (most retained). This pattern is maintained for each individual inflection, with the exception of plurals, where /s/ = /z/.

Interactional Analysis

An interactional analysis reveals that:
- Third person singular /əz/ is better retained than plurals /s/ and /z/.
- Possessive /əz/ is better retained than third person singular /s/ and /z/.
- Possessive /s/ is better retained than third person singular /z/.

The finding that syllabic forms of more complex morphemes are more likely to be retained than non-syllabic forms of less complex morphemes provides strong evidence for an interaction between apparent phonological and morphological hierarchies of difficulty. Whilst some explanations have been provided, a more complete account of the above findings would involve detailed consideration of receptive language and perceptual factors, which is felt to go beyond the scope of this study. However, R.P.'s differential retention of the stimulus allophones /s/ and /z/ suggests that further research into receptive language and phonemic perception in agrammatic aphasics may provide valuable information.
MAJOR TRENDS
Overall, the following trends exhibited by R.P. in this study suggest an interdependence between the phonological and morphological levels of breakdown for this case: 1a. Consonant clusters of purely phonological construction were more likely to be correctly produced than clusters containing a morphological component or suggesting the possibility thereof.
1b. The cluster categories PC, PCM and MC were clearly differentiated in terms of the proportion of sequencing versus substitution errors. PC reflected a greater proportion of substitution errors, MC a greater proportion of sequencing errors, and PCM an approximately equal distribution of both.
2. Inflectional deletion appeared to be conditioned by phonological characteristics of the preceding segment as well as the syllabic structure of the word.
3. There was an apparent interaction between the grammatical and phonological hierarchies of difficulty in three morphemes which are homonyms phonologically.
CONCLUSIONS
Results of this study reflect a mutual interdependence between the phonologically and morphologically impaired systems of this agrammatic aphasic patient. Such findings contradict the notions that agrammatism is a uniquely phonological deficit (Kean, 1977) or that it is a disruption of the syntactic component of language co-occurring with an independent disorder of articulation (Berndt and Caramazza, 1981, p. 171). An interactional model between phonology and morphology, suggesting a unitary linguistic representation, is strongly indicated. Verification of the present trends on a large group of agrammatic aphasics may support the contention that there is no single impairment at a specific level in agrammatism. Rather, a complex interaction of linguistic processes, all of which are operating at a reduced level of efficiency, would be indicated (Martin et al., 1975). Such a model highlights the inherent limitations of fragmenting the linguistic components in the treatment of agrammatism and suggests a number of clinical implications for the aphasiologist. Firstly, diagnostic procedures could possibly include a description of morphological breakdown in the context of phonological breakdown, rather than two detailed but separate analyses. Secondly, phonological environments conditioning the omission of inflectional morphemes should be evaluated for each patient, and therapy could proceed from phonologically simpler to more complex contexts.
Further research into the relationship between linguistic components in both aphasia and child language disorders is indicated. This would not only facilitate a more holistic approach to the management of these patients, but would provide greater insight into the organization of language components in a linguistically intact system.
Table 2: Breakdown of correct initial and final clusters by category. (Only the caption and column headings, i.e. category and the numbers of correct initial and final clusters, are recoverable; the table body is not reproduced here.)
Martin et al. (1975), in a similar study, conceptualized difficulty as the number of phonemic errors in a particular category. For example, 'drink' / 'glink' contains two phonemic errors, whereas 'drink' / 'grink' contains one phonemic error. Martin et al. contend that the former production reflects greater difficulty than the latter. In this study, any two incorrect clusters were considered as being equally 'difficult'.
Table 4: Distribution of morphemes retained, omitted or incorrectly produced, expressed as a function of the sonorance hierarchy of the stem final segment.
N - the number of words in which the final segment produced by R.P. corresponded to the category under investigation.
APPENDIX II: SONORANCE-INFLECTION WORD LIST
Charge Injection and Transport in Organic Nanofibers
We investigate the carrier injection and transport in individual para-hexaphenylene nanofibers by electrical transport measurements at different temperatures. The injected current shows much weaker temperature dependence than what would be anticipated from a simplistic model that considers the injection barrier height equal to the difference between the metal electrode work function and the HOMO energy level of the organic semiconductor. Semiquantitative modeling suggests that the weak temperature dependence is due to injection into a distribution of states rather than into a single energy level. This disorder induced energy level broadening could be caused by the electrode deposition process.
Introduction
Self-assembled para-hexaphenylene (p6P) nanofibers display a range of appealing features that point towards their potential application in future nanophotonic and optoelectronic systems. Their photoluminescence output is in the blue range of the spectrum and is highly polarized [1], thus indicating a high degree of crystallinity. A similar nanofiber electroluminescence spectrum is expected from thin-film measurements [2]. In addition, these sub-wavelength nanofiber structures show waveguiding capabilities [3]. We have recently shown how electrical contacts to nanofibers can be made based on a shadow masked evaporation technique [4], and how mechanical manipulation allows the generation of artificial structures [5].
However, the details of carrier injection and transport in these nanofibers are not completely understood. Recently we showed that the current-voltage (I-V) characteristics of gold contacted nanofiber devices appear to be limited by hole injection from the gold electrode to the p6P nanofiber [6].
Experimental
The crystalline p6P nanofibers are made by hetero-epitaxial growth on a heated muscovite mica substrate under high vacuum conditions, where the oligomers self-assemble into fiber-like structures [7]. These nanofibers have a typical height of a few tens of nanometers, a width of a few hundred nanometers, and an as-grown length of several hundred micrometers. The molecular structure and the nanofiber crystal structure are shown in figure 1; the molecules are stacked in a layered herring-bone crystal structure at an angle of 77° with respect to the long nanofiber axis. Figure 1c is a scanning electron microscope (SEM) image of a nanofiber device on which electrical characterization can be performed after a single nanofiber has been electrically contacted with two metal electrodes by the shadow mask method [4].
The nanofiber devices are fabricated by dispersing nanofibers in a suitable liquid and spreading a small amount of the dispersion on an elevated silicon dioxide structure. After evaporation of the liquid, contacts are attached to each end of an individual nanofiber by electron-beam evaporation of the required electrode material using a special shadow mask method as described in [4]. Figure 1c shows an SEM image of an individual nanofiber contacted with two metal electrodes.
After sample preparation, the sample is mounted in a small compartment with electrical feedthroughs where the temperature can be controlled either by lowering the sample into a nitrogen cryostat or by heating it with an attached resistor through Joule heating. The electrical characterization is carried out by slowly sweeping the bias voltage while simultaneously recording the resulting current via a custom-made, Labview-based measurement set-up employing a National Instruments data acquisition card and a Stanford Research current preamplifier.

Results and Discussion

Figure 2a shows I-V characteristics of an individual nanofiber measured at three different temperatures. As expected for the injection-limited case, the current decreases with decreasing temperature. Figure 2b shows the current at a bias voltage of 3 volts versus reciprocal temperature. The injected current has a clear temperature dependence at higher temperatures and stabilizes below ~200 K. Several theoretical approaches have been used in the interpretation of carrier injection into organic semiconductors [8], including thermionic emission across the interface barrier or a tunneling process. In contrast to conventional inorganic semiconductors, injection in this case occurs into a localized state, which must be taken into account. A model based on thermally assisted tunneling into localized states has been developed [9]. This model considers carrier injection from a metal electrode into a localized state, which requires a consideration of the energetic structure of the metal/semiconductor interface. Here, we look at the injection of holes from a gold contact into the HOMO level of p6P. The externally applied field $F$ together with the Coulomb interaction with the image charge give rise to an energy barrier

$$U(x) = \Delta - qFx - \frac{q^2}{16\pi\varepsilon_0\varepsilon_r x},$$

measured with respect to the metal Fermi level, where $x$ is the distance from the electrode, $\Delta$ is the energy difference between the Fermi level of the gold electrode and the HOMO level of p6P (neglecting a possible interface dipole layer), $q$ is the elementary charge, and $\varepsilon_0\varepsilon_r$ is the dielectric constant. In the ideal case, the interface should be sharp and the energy levels in the organic semiconductor should remain well-defined all the way towards the interface. In a practical device, however, the deposition of metal contacts on top of the organic material can create some disorder in the region close to the interface [10]. A possible model for the density of states close to the interface is a Gaussian energy distribution $g(E)$ characterized by an energy width $\sigma$ [9],

$$g(E) = \frac{N_0}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{E^2}{2\sigma^2}\right),$$

with $N_0$ being the density of traps. A schematic of the energy levels associated with the injection process is shown in figure 3. In the model by Arkhipov and co-workers [9], the injected current is found by considering injection into the distribution of states in the organic semiconductor followed by either the return of the carrier to the electrode or its diffusive escape into the bulk. The injected current $I_{\mathrm{inj}}$ is therefore found as a product of the tunneling probability $\exp(-2\gamma x_0)$ (i.e. the probability of the carrier reaching the position $x_0$ in the first jump) and the escape probability $w_{\mathrm{esc}}(x_0)$, where $\gamma$ is the inverse localization radius and $\mathrm{Bol}(E)$ is a Boltzmann factor penalizing jumps upward in energy. The escape probability $w_{\mathrm{esc}}(x_0)$ is determined by

$$w_{\mathrm{esc}}(x_0) = \frac{\int_a^{x_0}\exp\!\left(U(x)/k_B T\right)\,dx}{\int_a^{\infty}\exp\!\left(U(x)/k_B T\right)\,dx},$$

with the thermal energy $k_B T$ and the distance $a$ from the electrode to the first site in the semiconductor. This model has been used to investigate the temperature dependence of the carrier injection. Considering $\Delta$ to be equal to the difference between the gold Fermi level (~5.1 eV) and the HOMO level of p6P (~6.0 eV) [11] and considering injection into a single energy level, the current will be strongly dependent on temperature, as shown with the full squares in figure 2b. Apparently the actual temperature dependence is weaker than this prediction. A possible explanation for this deviation could be a barrier lowering due to an interface dipole layer [12]. Photoelectron spectroscopy studies of the interface between gold and p6P have shown a significant vacuum level shift [13]. However, these studies were carried out at ultra-high vacuum conditions. Recently it has been argued that more realistic device fabrication conditions can cause water molecules to be integrated in the interface and cancel the interface dipole [14]. Thus, the results from [13] presumably do not apply here. Since the barrier is unknown, we have tried to model the injection current assuming different values of barrier height $\Delta$: 0.9 eV, 0.7 eV, and 0.5 eV. We have then modified the width $\sigma$ of the energy distribution until a reasonable agreement between injected current and temperature (in the interval above ~200 K) was observed. These three model predictions are shown in figure 2b (with $\varepsilon_r = 1.9$ [3], $N_0 = 1.7 \times 10^{21}\ \mathrm{cm}^{-3}$, $\gamma = (3\ \text{Å})^{-1}$, and $a = 5.6\ \text{Å}$). The model describes a dependence of the injected current on temperature that is weaker when the carriers are injected into a distribution of states rather than a single energy level: the wider the distribution, the weaker the temperature dependence. Qualitatively, this could explain part of the observed temperature dependence: the organic layer close to the interface has a disorder-induced level broadening that causes more carriers to be injected at lower temperatures.
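To make the temperature dependence of this injection model concrete, the following Python sketch evaluates it numerically. It is our illustration, not the authors' fitting code: prefactors are dropped (only relative currents are meaningful), and the electric field is crudely approximated as the 3 V bias divided by an assumed 200 nm electrode gap, since the actual gap length is not stated in this excerpt.

```python
import numpy as np

# Physical constants (SI units)
q = 1.602e-19      # elementary charge [C]
eps0 = 8.854e-12   # vacuum permittivity [F/m]
kB = 1.381e-23     # Boltzmann constant [J/K]

# Parameters from the text; the 200 nm gap is an assumption
eps_r = 1.9            # relative dielectric constant
Delta = 0.7 * q        # injection barrier [J] (the 0.7 eV case)
sigma = 0.13 * q       # Gaussian DOS width [J]
gamma = 1.0 / 0.3e-9   # inverse localization radius [1/m]
a = 0.56e-9            # distance to the first hopping site [m]
F = 3.0 / 200e-9       # field: 3 V over an assumed 200 nm gap [V/m]

def trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def U(x):
    """Barrier profile: Delta - q*F*x - image-charge attraction."""
    return Delta - q * F * x - q**2 / (16 * np.pi * eps0 * eps_r * x)

def w_esc(x0, T, xmax=100e-9, n=4000):
    """Diffusive escape probability: ratio of exp(U/kT) integrals."""
    x = np.linspace(a, xmax, n)
    w = np.exp(np.clip(U(x) / (kB * T), -500, 500))
    return trapz(np.where(x <= x0, w, 0.0), x) / trapz(w, x)

def bol(E, T):
    """Boltzmann factor: exp(-E/kT) for E > 0, and 1 otherwise."""
    return np.where(E > 0, np.exp(-np.clip(E, 0.0, None) / (kB * T)), 1.0)

def injected_current(T):
    """Relative injected current: sum over first-jump distances x0 of
    tunneling * escape * thermally weighted Gaussian DOS occupation."""
    x0s = np.linspace(a, 15e-9, 200)
    E = np.linspace(-8 * sigma, 8 * sigma, 400)
    g = np.exp(-E**2 / (2 * sigma**2))  # unnormalized Gaussian DOS
    total = 0.0
    for x0 in x0s:
        # states sit at energy U(x0) + E relative to the Fermi level
        dos_term = trapz(bol(E + U(x0), T) * g, E)
        total += np.exp(-2 * gamma * x0) * w_esc(x0, T) * dos_term
    return total

if __name__ == "__main__":
    for T in (300, 250, 200, 150):
        print(f"T = {T:3d} K: I ~ {injected_current(T):.3e} (a.u.)")
```

Running this for a narrow versus wide sigma reproduces the qualitative trend discussed above: the wider the distribution, the weaker the temperature dependence of the relative current.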
This disorder could be created during the electrode deposition where hot gold atoms or clusters collide against the organic material. It has been shown that this process can cause a less 'sharp' interface [13]. However, since neither the exact barrier height nor the width of the energy distribution is known, we cannot make an exact estimate. All three model calculations give very similar results that only deviate slightly at low temperatures, where the model is not appropriate at all. Figure 2a includes the calculated I-V characteristics at three different temperatures together with the measured I-V curves. In these model calculations $(\Delta, \sigma) = (0.7\ \mathrm{eV}, 0.13\ \mathrm{eV})$ has been used. A decent agreement is observed at high temperatures, although the model predicts the current to increase more steeply than observed. At low temperatures the model significantly underestimates the injected current, as is also evident from figure 2b. A similar temperature dependence has also been observed experimentally on p6P films [15], even with the temperature dependence reversing sign at low temperatures. It thus appears that some additional factor is in play, causing a high current injection even at low temperatures.
It should be noted that in our device geometry the exact electric field distribution is not obvious at all. For simplicity we have assumed here that the electric field was equal to the applied voltage divided by the gap distance. However, strictly speaking this only applies to a geometry resembling that of a parallel-plate capacitor. For a more accurate determination of the field, a simulation of the field distribution in this device architecture is necessary, and this may influence the agreement between the model and the measured data.
Conclusion and Outlook
We have measured the temperature dependence of the injected current from a gold electrode into a p6P nanofiber. The injection current shows a weaker temperature dependence than what would be anticipated from a simple estimation of barrier height based on the electrode work function and p6P energy levels. A theoretical model [9] that considers charge injection as carrier hopping into a localized state followed by a diffusive escape into the bulk shows that this weak temperature dependence can be explained if one considers injection into a distribution of states rather than a single energy level. This rather simple model is partly in agreement with the measured data, in particular at temperatures above ~200 K, and suggests that some disorder related level broadening is present in the organic material near the electrode interface. Modeling this by a Gaussian distribution of states suggests an energy width of ~0.1 eV.
The result implies that although the organic nanofiber is near-perfectly crystalline, a region of increased disorder exists near the electrode interface. This could for instance be due to the load during electrode deposition, where hot gold atoms or clusters collide with the organic material. In that case, creating a good contact to organic nanofibers may be far from straightforward, requiring a high degree of control of the electrode deposition process. Decreasing the metal evaporation rate could be one method of obtaining a better interface. Another option could be to attach the electrodes by some other means than evaporation, e.g. by prefabricating the electrodes on a carrier substrate and placing the organic crystal on top, as demonstrated with macroscopic tetracene crystals [16]. This method should also be possible for the p6P nanofibers, either by dispersing these on a prefabricated electrode array or by a more controlled positioning through mechanical manipulation [17]. It should be pointed out that a well-defined interface with 'good' contacts may for some applications not be the optimal situation, since level broadening increases injection [10].
Outcome of Multimodality Therapy for Elderly Colorectal Cancer Patients
The aim of this study was to analyze patterns of multimodality therapy in elderly patients with advanced colorectal cancer. We enrolled 272 patients with colorectal cancer. All patients received chemotherapy and some patients received secondary cytoreductive surgery and/or radiofrequency ablation. We compared differences between elderly patients (age ≥75 years) and non-elderly patients (age <75 years), especially in relation to multimodality therapy. There were no significant differences in cancer-specific survival between elderly (n = 37) and non-elderly patients (n = 235). Twenty-seven percent of elderly and 35% of non-elderly patients received multimodality therapy, which resulted in prolonged survival. Although the main chemotherapy regimen was the same in both groups who received multimodality therapy, elderly patients who received chemotherapy alone seemed to be under-treated. For elderly patients, prognostic factors were host-related, such as comorbidities, whereas for non-elderly patients prognostic factors were tumor-related. Comorbidities and modified Glasgow Prognostic Score may be prognostic indicators in elderly patients receiving multimodality therapy. In conclusion, chronological age alone should not contraindicate multimodality therapy of colorectal cancer in elderly patients. Appropriate selection criteria for multimodality therapy in elderly patients should include not only tumor characteristics, but also host- and treatment-related factors.
Introduction
Colorectal cancer (CRC), the commonest malignancy worldwide, mainly affects the elderly. The mean age at diagnosis is under 72 years, with 40% of cases occurring in patients aged over 75 years (Köhne et al., 2008; Yang et al., 2004; Boyle et al., 2005; Christensen et al., 2009). The geriatric CRC population is a very heterogeneous group that includes patients with excellent health status and those with comorbid conditions, functional dependency, and limited life expectancy (Sanoff et al., 2007), all of which may considerably influence the outcome of CRC treatments.
The mainstay of CRC treatment is surgery; however, the role of chemotherapy has expanded considerably over the past 10 years. Modern chemotherapy, including molecular-targeted agents, has increased the survival time of patients with metastatic CRC to more than 2 years (Hurwitz et al., 2004; van Cutsem et al., 2009; Fuchs et al., 2007; Douillard et al., 2013). In addition, cytoreductive surgery for liver, lung, and other metastases has been widely used to achieve cure; however, metastasectomy is appropriate for only a few patients. Multimodality approaches, including surgery, radiotherapy, and chemotherapy, alone or in combination, have been proposed to further prolong survival of patients with recurrent or metastatic CRC. There have been efforts to increase the small proportion of patients receiving multimodality therapy by expanding the indications for it. Many studies have confirmed that chemotherapy can render some originally inoperable liver metastases resectable (Kusunoki et al., 1997; Kopetz et al., 2009; Kozloff et al., 2009; Folprecht et al., 2010; Wong et al., 2011). However, the indications for multimodality therapy in elderly patients with CRC have not been well defined. Elderly patients are more likely to have comorbidities and age-specific deteriorating organ function, which can reduce their tolerance of multimodality therapy, including surgery and modern chemotherapy. Published results for surgical morbidity and mortality rates are conflicting. Some studies show a correlation between age and postoperative complications (Colorectal Cancer Collaborative Group, 2000; van Leeuwen et al., 2008; Lee et al., 2007; Grosso et al., 2012), whereas others do not (Schiffmann et al., 2008; She et al., 2013). Whether elderly patients can tolerate multimodality cancer treatment and benefit from it in the same way as younger patients is controversial. The aim of this study was to describe patterns of multimodality therapy in patients with CRC aged ≥75 years, and to compare the outcomes of elderly (≥75 years) and non-elderly (<75 years) groups.
Material and Methods
This was a retrospective study of all patients (n = 272) who received therapeutic chemotherapy for advanced or recurrent CRC in our Department of Gastrointestinal Surgery from March 2000 to December 2012. This study included patients with histologically proven unresectable primary CRC, synchronous metastatic CRC, and metachronous metastatic or recurrent CRC. Patients who underwent initial simultaneous primary tumor resection and metastasectomy (e.g., lung and liver) were excluded. Exclusion criteria were any serious major organ dysfunction, a survival expectation of less than 3 months, and any other contraindication to enrollment in the study in the view of the patient's physician.
Multimodality Therapy
According to our institutional policy for the treatment of metastatic CRC with an unresectable primary tumor, all patients enrolled in the study received 4-5 months of initial chemotherapy. All patients were informed about the possibility of secondary multimodality therapy using cytoreductive surgery and/or radiofrequency ablation (RFA) before their initial chemotherapy. Cytoreductive therapy was defined as surgery and/or RFA aimed at reducing tumor volume. Whether to proceed with multimodality therapy was determined by the response to chemotherapy. Cytoreductive surgery and/or RFA were considered for patients with partial responses or stable disease after systemic chemotherapy. Multidisciplinary discussions during chemotherapy determined the nature and timing of cytoreductive therapy for each patient. Our institutional Ethics Committee approved the study, and written informed consent was obtained from all patients who entered the study.
Chemotherapy
Approval for drugs takes much longer in Japan than in the West. Because Japanese national insurance did not allow treatment of CRC with oxaliplatin between 2000 and 2005, first-line chemotherapy for advanced CRC was 5-FU with or without irinotecan in the first five years of the study. For the remaining seven years, the study patients received what was then first-line chemotherapy for advanced or recurrent CRC: triple-drug chemotherapy, namely 5-fluorouracil (5-FU), folinic acid, and oxaliplatin or irinotecan (FOLFOX or FOLFIRI), with or without bevacizumab or cetuximab. The molecular-targeted agents bevacizumab, cetuximab, and panitumumab were approved for use in 2007, 2008, and 2010, respectively. From 2007, bevacizumab with FOLFOX or FOLFIRI was used as first-line chemotherapy for advanced or recurrent CRC. From 2008, cetuximab with or without irinotecan was used as second- or third-line chemotherapy. From 2010, cetuximab or panitumumab with FOLFIRI or FOLFOX have been available in Japan as first-line chemotherapy in patients with wild-type KRAS. In patients with incomplete cytoreduction, chemotherapy was reintroduced depending on their performance status (PS). Patients with complete secondary cytoreduction received 5-FU-based adjuvant chemotherapy. Those with no extrahepatic metastases but with unresectable hepatic metastases underwent hepatic arterial infusion chemotherapy with 5-FU followed by secondary surgery (Kusunoki et al., 1997). Radiotherapy with concurrent 5-FU-based chemotherapy was used to improve the resectability of locally inoperable rectal cancer.
Study Variables
Patients were categorized into two groups based on age: an elderly group, aged ≥75 years, and a non-elderly group, aged <75 years. The age of 75 years was selected to divide the sample because approximately 40% of CRC cases occur in patients aged over 75 years and the incidence increases with advancing age (Köhne et al., 2008; Yang et al., 2004; Boyle et al., 2005; Christensen et al., 2009). The comorbidity status was assessed by the Charlson index (Birim et al., 2003), which is a partially modified score including neither cancer nor age (Charlson et al., 1987). No patients had clinical evidence of infection or inflammatory conditions such as obstructive colitis or obstructive jaundice at that time. Routine laboratory tests, including serum C-reactive protein (CRP) and albumin concentrations and tumor markers such as carcinoembryonic antigen (CEA) (cut-off value, 6 ng/mL), were performed on the day of the first medical examination. Serum CRP concentrations were measured by turbidimetric immunoassay using an N-Assay TIA CRP-S kit (Nittobo Medical, Tokyo, Japan). Because this CRP assay has a lower detection limit than assays used in other studies (0.2 mg/dL vs >0.5 mg/dL) (McMillan et al., 2003; Crozier et al., 2006), the cut-off value for abnormal serum CRP was set at 0.5 mg/dL. As previously described, the original Glasgow Prognostic Score (GPS) (Forrest et al., 2003) was modified by the present authors according to the best predictive values calculated by receiver operating characteristic analysis to create the instrument used in this study: the mGPS (Toiyama et al., 2011; Inoue et al., 2013). Briefly, patients with high CRP concentrations (>0.5 mg/dL) plus hypoalbuminemia (<3.5 g/dL) were allocated a score of 2, patients with only one of these factors a score of 1, and patients with neither of these factors a score of 0.
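Because the mGPS is a simple two-variable rule, it can be expressed compactly in code. The sketch below (Python; the function name is ours, for illustration only) implements the scoring exactly as described above, with the study's cut-offs of 0.5 mg/dL for CRP and 3.5 g/dL for albumin.

```python
def modified_gps(crp_mg_dl: float, albumin_g_dl: float) -> int:
    """Modified Glasgow Prognostic Score (mGPS) as used in this study.

    Score 2: elevated CRP (>0.5 mg/dL) AND hypoalbuminemia (<3.5 g/dL)
    Score 1: exactly one of the two abnormalities
    Score 0: neither abnormality
    """
    # Each abnormal factor contributes one point, so the count is the score.
    return int(crp_mg_dl > 0.5) + int(albumin_g_dl < 3.5)

print(modified_gps(1.2, 3.1))  # CRP high, albumin low -> 2
print(modified_gps(1.2, 4.0))  # only CRP high          -> 1
print(modified_gps(0.3, 4.0))  # both normal            -> 0
```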
Statistical Analysis
JMP version 7 software (SAS Institute, Cary, NC, USA) was used to perform statistical analyses. Data are presented as the mean ± standard deviation. Contingency tables were analyzed using Fisher's exact test or the χ2 test with Yates' correction. Correlations between continuous and categorical variables were evaluated by the Mann-Whitney U test. Survival curves were constructed according to the Kaplan-Meier method and differences analyzed using the log-rank test. Each significant predictor identified was assessed by multivariate analysis using Cox's proportional hazards model. A P value of <0.05 was considered significant.
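The study used JMP; for readers who want to reproduce the same survival workflow, the following Python sketch uses the open-source lifelines library with made-up example data (durations in months, event = cancer-specific death, "multi" = multimodality therapy). It is an illustration of the analysis pipeline, not the study's actual code or data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Made-up example data: survival months, death event, group flag, covariate
df = pd.DataFrame({
    "months":   [12, 30, 44, 20, 53, 8, 27, 60, 15, 33],
    "event":    [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "multi":    [0, 0, 1, 0, 1, 0, 0, 1, 0, 1],
    "cea_high": [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
})

# Kaplan-Meier curves per treatment group
kmf = KaplanMeierFitter()
for label, grp in df.groupby("multi"):
    kmf.fit(grp["months"], grp["event"], label=f"multimodality={label}")
    print(label, "median survival:", kmf.median_survival_time_)

# Log-rank test between the two groups
a, b = df[df["multi"] == 1], df[df["multi"] == 0]
res = logrank_test(a["months"], b["months"], a["event"], b["event"])
print(f"log-rank P = {res.p_value:.4f}")

# Cox proportional hazards model; remaining columns act as covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```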
Relevant Patient Variables and Comorbidities
We performed a retrospective review of 272 patients treated in our department for unresectable primary, synchronous metastatic, and metachronous metastatic or recurrent CRC. There were 163 men (60%) and 109 women (40%), with a mean age of 64 years (range 29-85 years). Of these, we analyzed 37 elderly (age ≥75 years: 79 ± 3 years; range 75-85 years) and 235 non-elderly (age <75 years: 60 ± 10 years; range 29-74 years) patients over the 12 years of the study. Table 1 summarizes the background characteristics of the 272 patients by age group. We found no differences between groups in tumor characteristics such as extent of dissemination. However, there were significant correlations between age and PS (P = 0.0122) and comorbidity (P = 0.0003). To assess comorbid condition, we also measured the Charlson comorbidity index (CCI). Although this index was not significantly different for elderly (mean 0.270) than for non-elderly patients (mean 0.149), hypertension (P = 0.0077) and cardiovascular disease (P = 0.0324) were present significantly more frequently in elderly than in non-elderly patients.
Therapeutic Approach
All patients received initial therapeutic chemotherapy; 92/272 (34%) also received chemotherapy as part of multimodality therapy following secondary cytoreductive surgery and/or RFA. A greater proportion of non-elderly patients than elderly patients received multimodality therapy (82/235 [35%] and 10/37 [27%], respectively); this difference was not significant. We also assessed the proportion of patients receiving each of four main chemotherapy regimens, the categories being 5-FU-based, irinotecan-based, oxaliplatin-based, and combinations including molecular agents. The main regimens of patients who received chemotherapy alone were significantly different: non-elderly patients most commonly received irinotecan-based regimens or combinations including molecular agents, whereas elderly patients most often received oxaliplatin-based or 5-FU-based chemotherapy.
As to the main initial regimen received by patients who subsequently underwent multimodality therapy, there were no significant differences between the two groups.
Cancer-specific Survival
We assessed cancer-related mortality by Kaplan-Meier survival analysis. After a median follow-up of 21 months, there were no significant differences in median survival time (MST) between elderly (32 months, 95% CI 24-39 months) and non-elderly patients (27 months, 95% CI 23-31 months). Multimodality therapy was associated with significantly better survival than chemotherapy alone in both elderly (P = 0.0423) and non-elderly patients (P < 0.0001) (Fig. 1). Multimodality therapy resulted in longer overall survival than did chemotherapy alone for both elderly (median 53 months [95% CI 20-85 months] and 30 months [95% CI 20-40 months], respectively) and non-elderly patients (median 44 months [95% CI 32-55 months] and 20 months [95% CI 17-23 months], respectively). Cox univariate regression analyses identified different factors affecting cancer-specific survival in elderly than in non-elderly patients. In non-elderly patients, timing of metastasis (synchronous vs. metachronous), pathology (undifferentiated vs. differentiated), CEA concentration (≥12 vs. <12 ng/mL), extent of dissemination, mGPS, PS, rate of response to chemotherapy, and multimodality therapy correlated significantly with cancer-specific survival (Table 2a). Conversely, in elderly patients there were no correlations between tumor characteristics and cancer-specific survival; however, patient characteristics, including comorbidity, and therapeutic factors such as response rate did correlate significantly with cancer-specific survival (Table 3a). Multivariate analysis using these characteristics showed that pathology (undifferentiated vs. differentiated), CEA concentration (>12 vs. <12 ng/mL), PS, response rate of chemotherapy, and multimodality therapy all had significant independent correlations with cancer-specific survival time in non-elderly patients, whereas response rate was the only significant independent prognostic factor in elderly patients (Tables 2b and 3b).
Multimodality Therapy in Elderly Patients
Having confirmed that multimodality therapy prolonged survival in both non-elderly and elderly patients, we evaluated the correlation between comorbidities and the use of multimodality therapy. The CCI correlated significantly with the use of multimodality therapy in the overall group of 272 patients (P = 0.0463). Of the patients who underwent multimodality therapy, 85/92 (92%) were in the category CCI 0, 7/92 (8%) in CCI 1, and none in CCI 2. In contrast, of the patients who received chemotherapy alone, 148/180 (82%) were in the category CCI 0, 26/180 (14%) in CCI 1, and 6/180 (4%) in CCI 2. This trend differed between elderly and non-elderly patients. All comorbidities and the CCI category were significantly correlated with contraindication to multimodality therapy in non-elderly patients (P = 0.0234 and P = 0.0166, respectively); however, this was not so in elderly patients. In the latter group, only hypertension tended toward correlating with contraindications to multimodality therapy (P = 0.0513).
To explore useful predictors of indications for multimodality therapy in elderly patients, we also evaluated the prognostic significance of several clinical factors in the 10 elderly patients who underwent multimodality therapy. In these elderly patients, the disease sites were initially unresectable primary tumor, local recurrence, and liver, bladder and peritoneal metastases. We performed reductive surgery in eight patients: three with liver metastases, two with peritoneal metastases, one with a primary tumor, one with local recurrence, and one with bladder involvement. We administered RFA for liver metastases to the other two patients. Kaplan-Meier survival analysis showed that patients with comorbidity (n = 4) had poorer survival than did those without comorbidity (n = 6) (3-year survival 25% vs. 83%, P = 0.0649) (Fig. 2). However, there were no significant differences in survival according to tumor characteristics such as timing of metastasis, pathology, and CEA concentration. Furthermore, we found no difference in survival between elderly and non-elderly patients who received chemotherapy alone. Interestingly, there were significant differences in survival according to mGPS (0 vs. 1-2), with MST of 53 months (95% CI 20-85 months) for mGPS 0 (n = 8) and 12 months (95% CI 7-31 months) for mGPS 1-2 (n = 2) (P = 0.0012) (Fig. 3). In contrast, there was no significant correlation between mGPS and survival in the elderly patients who received chemotherapy alone.
Discussion
Colorectal cancer is a major cause of cancer deaths in developed countries. Because the populations of these nations, including Japan, are rapidly aging, clinicians will be faced with treating many patients with advanced cancer, including those with metastatic or recurrent CRC. Recent advances in CRC treatment have resulted in prolongation of survival even in patients with advanced stages of disease. However, clinical decision-making regarding modern multimodality therapy for advanced CRC in elderly patients is complex, because of the lack of data regarding optimal chemotherapy regimens, timing of cytoreductive therapy, and the use of multimodality therapy in these patients. In a systematic review of surgery for CRC in 34,194 elderly patients, researchers found the incidence of postoperative morbidity and mortality increased progressively with advancing age. However, although overall survival was less in elderly than non-elderly patients, age-related differences in cancer-specific survival were much less pronounced (Colorectal Cancer Collaborative Group, 2000). Because most of the definitive clinical trials have excluded subjects of advanced age or with a poor PS, there is still uncertainty regarding the optimal use of systemic chemotherapy in elderly patients with CRC. Many of the clinical trials that have included elderly patients have documented similar survival rates and toxicity profiles for elderly and younger patients (Köhne et al., 2008). Most researchers agree that age alone should not be a contraindication to the use of standard chemotherapy and that fit elderly patients can receive the same aggressive chemotherapy as younger patients; however, identification of the elderly patients who are most likely to benefit from chemotherapy warrants further investigation (Colorectal Cancer Collaborative Group, 2000). Indications for modern chemotherapy followed by cytoreductive therapy in elderly patients remain largely unknown. In addition, because of the potential for worsening comorbidities to cause poorer outcomes, the utility of metastasectomy, including hepatectomy and pulmonary resection, in elderly patients is also controversial. There have been recent reports of decreased survival and higher postoperative morbidity associated with these procedures in the elderly (Adam et al., 2010; Nagano et al., 2005; Endoh et al., 2013).
In our study, although elderly patients were more likely to have a poor PS and comorbidities, there were no significant differences in cancer-specific survival time between elderly and non-elderly patients. Interestingly, factors affecting cancer-specific survival were different in the elderly and non-elderly groups. Treatment-related prognostic factors were common to both groups. However, elderly patients had various host-related prognostic factors such as comorbidities, whereas tumor-related prognostic factors were characteristic of the non-elderly group. Multivariate analysis revealed that various tumor-related (pathology, serum CEA concentration), host-related (PS) and treatment-related factors (response rate and multimodality therapy) all had significant independent correlations with survival in non-elderly patients, whereas only a treatment-related factor (response rate) was an independent prognostic factor in elderly patients. Although multimodality therapy was an independent prognostic factor only in non-elderly patients, we found that the survival benefit of multimodality therapy over chemotherapy alone was comparable in elderly and non-elderly patients. As a consequence of differences in the main chemotherapy regimen that each group received, elderly patients were less likely than non-elderly patients to receive multimodality therapy (27 vs. 35%).
It seems likely that elderly patients were under-treated compared with non-elderly patients, especially those who received chemotherapy alone. Of the patients who received chemotherapy alone, elderly patients were more likely to receive oxaliplatin-based or 5-FU-based chemotherapy, probably because these types of chemotherapy are less aggressive and more readily tolerated than modern first-line chemotherapy that includes molecular agents. These findings imply that under-treatment occurred because of physician preferences; however, cancer-specific survival of elderly and non-elderly patients who received chemotherapy alone was not significantly different. Elderly and non-elderly groups who went on to undergo multimodality therapy did not differ significantly in the initial main chemotherapy regimen they received. In other words, elderly patients who were able to undergo the same aggressive chemotherapy as non-elderly patients were more likely to continue on to multimodality therapy. In elderly patients, therapeutic decisions concerning palliative chemotherapy versus conversion chemotherapy prior to multimodality therapy must be made on an individual basis; however, the factors that determine the optimal therapeutic approach in elderly patients are not well known. In this regard, one important consideration is overall survival time after secondary cytoreductive therapy. In the current study, the survival of elderly patients with comorbidity was inferior to that of those without comorbidity, even after multimodality therapy. Our findings suggest that both tumor-related factors such as treatment markers and host-related factors may be more reliable prognostic indicators in elderly patients undergoing multimodality therapy than they are in non-elderly patients. Because GPS is a well-known surrogate marker for response to treatment and can be used to predict tumor recurrence in a variety of cancers (McMillan et al., 2003; Crozier et al., 2006; Forrest et al., 2003; Proctor et al., 2011), and because we recently reported the usefulness of a modified GPS in patients undergoing multimodality therapy for advanced CRC (Toiyama et al., 2011; Inoue et al., 2013), we assessed its value in the current study. Our findings suggest that the modified GPS is a potential prognostic marker for elderly patients receiving multimodality therapy.
Limitations of our study included the small number of elderly patients and the large imbalance between the groups (37 versus 235), and the fact that this was a single-site study; consequently, the findings may not be applicable to all elderly patients with CRC, and more patients should be studied to identify useful predictors of indications for multimodality therapy in elderly patients. However, we were able to identify patterns associated with multimodality therapy in patients aged ≥75 years with CRC and to compare the outcomes of elderly and non-elderly patients.
Conclusion
Chronological age alone should not be a contraindication to multimodality therapy of CRC in elderly patients. To improve survival in the elderly, the selection between palliative chemotherapy and active multimodality therapy is very important. Furthermore, appropriate selection criteria for multimodality therapy in elderly patients may include not only tumor characteristics but also host- or treatment-related factors such as comorbidities or surrogate markers, including the modified GPS.
Figure 1. Multimodality therapy resulted in significantly better survival than did chemotherapy alone in both elderly (a) and non-elderly patients (b).
Figure 2. Kaplan-Meier survival curves of elderly patients receiving multimodality therapy, showing that those with comorbidity had inferior survival to those without comorbidity (a). For elderly patients who received chemotherapy alone, there was no comorbidity-related difference in survival (b).
Table 2a. Univariate analysis in relation to cancer-specific mortality in non-elderly patients.

Table 2b. Multivariate analysis in relation to cancer-specific mortality in non-elderly patients.

Table 3a. Univariate analysis in relation to cancer-specific mortality in elderly patients. CEA, carcinoembryonic antigen; mGPS, modified Glasgow Prognostic Score; PS, performance status.

Table 3b. Multivariate analysis in relation to cancer-specific mortality in elderly patients.
Dynamic Programming Approach to the Generalized Minimum Manhattan Network Problem
We study the generalized minimum Manhattan network (GMMN) problem: given a set $P$ of pairs of two points in the Euclidean plane $\mathbb{R}^2$, we are required to find a minimum-length geometric network which consists of axis-aligned segments and contains a shortest path in the $L_1$ metric (a so-called Manhattan path) for each pair in $P$. This problem commonly generalizes several NP-hard network design problems that admit constant-factor approximation algorithms, such as the rectilinear Steiner arborescence (RSA) problem, and it is open whether so does the GMMN problem. As a bottom-up exploration, Schnizler (2015) focused on the intersection graphs of the rectangles defined by the pairs in $P$, and gave a polynomial-time dynamic programming algorithm for the GMMN problem whose input is restricted so that both the treewidth and the maximum degree of its intersection graph are bounded by constants. In this paper, as the first attempt to remove the degree bound, we provide a polynomial-time algorithm for the star case, and extend it to the general tree case based on an improved dynamic programming approach.
Introduction
In this paper, we study a geometric network design problem in the Euclidean plane $\mathbb{R}^2$. For a pair of points $s$ and $t$ in the plane, a path between $s$ and $t$ is called a Manhattan path (or an M-path for short) if it consists of axis-aligned segments whose total length is equal to the Manhattan distance of $s$ and $t$ (in other words, it is a shortest $s$-$t$ path in the $L_1$ metric). The minimum Manhattan network (MMN) problem is to find a minimum-length geometric network that contains an M-path for every pair of points in a given terminal set. In the generalized minimum Manhattan network (GMMN) problem, given a set $P$ of pairs of terminals, we are required to find a minimum-length network that contains an M-path for every pair in $P$. Throughout this paper, let $n = |P|$ denote the number of terminal pairs.
The GMMN problem was introduced by Chepoi, Nouioua, and Vaxès [5], and is known to be NP-hard as so is the MMN problem [6]. The MMN problem and another NP-hard special case named the rectilinear Steiner arborescence (RSA) problem admit polynomial-time constant-factor approximation algorithms, and in [5] they posed a question whether so does the GMMN problem or not, which is still open.
Das, Fleszar, Kobourov, Spoerhase, Veeramoni, and Wolff [8] gave an $O(\log^{d+1} n)$-approximation algorithm for the $d$-dimensional GMMN problem based on a divide-and-conquer approach. They also improved the approximation ratio for $d = 2$ to $O(\log n)$. Funke and Seybold [9] (see also [19]) introduced the scale-diversity measure $\mathcal{D}$ for (2-dimensional) GMMN instances, and gave an $O(\mathcal{D})$-approximation algorithm. Because $\mathcal{D} = O(\log n)$ is guaranteed, this also implies an $O(\log n)$-approximation as with Das et al. [8], which is the current best approximation ratio for the GMMN problem in general.
As another approach to the GMMN problem, Schnizler [18] explored tractable cases by focusing on the intersection graphs of GMMN instances. The intersection graph represents for which terminal pairs M-paths can intersect. He showed that, when both the treewidth and the maximum degree of intersection graphs are bounded by constants, the GMMN problem can be solved in polynomial time by dynamic programming (see Table 1). His algorithm heavily depends on the degree bound, and it is natural to ask whether we can remove it, e.g., whether the GMMN problem is difficult even if the intersection graph is restricted to a tree without any degree bound.
In this paper, we give an answer to this question. Specifically, as the first tractable case without any degree bound on the intersection graphs, we provide a polynomial-time algorithm for the star case by reducing it to the longest path problem in directed acyclic graphs. Theorem 1.1. There exists an $O(n^2)$-time algorithm for the GMMN problem when the intersection graph is restricted to a star.
Then, we extend it to the general tree case based on a dynamic programming (DP) approach inspired by and improving Schnizler's algorithm [18]. Theorem 1.2. There exists an $O(n^5)$-time algorithm for the GMMN problem when the intersection graph is restricted to a tree.
The above algorithm involves two types of DPs, which are nested. We furthermore improve its running time by reducing the computational cost of the inner DPs, and obtain the following result. Theorem 1.3. There exists an $O(n^3)$-time algorithm for the GMMN problem when the intersection graph is restricted to a tree.

Table 1: Exactly solvable cases classified by the class of intersection graphs, whose treewidth and maximum degree are denoted by tw and Δ, respectively. (Only the column headings, Class and Time, survive from this extraction; the table body is not reproduced here.)

Furthermore, we show that the cycle case can be solved by solving the tree case $O(n)$ times. This fact is shown as Proposition 6.2 in a generalized form from cycles to triangle-free pseudotrees, where a triangle is a cycle consisting of three vertices and a pseudotree is a connected graph that contains at most one cycle. Combining this with Theorem 1.3, we obtain the following result.
Corollary 1.4. There exists an $O(n^4)$-time algorithm for the GMMN problem when the intersection graph is restricted to a cycle (or a triangle-free pseudotree).
We also improve the time complexity for the general case as in Table 1. The dependency on the maximum degree is substantially improved, but it is still exponential. In addition, this approach is separate from the above main results and is a straightforward improvement on Schnizler's result for the tree case. For these reasons, we only sketch this result in the appendix.
Related work
The MMN problem was first introduced by Gudmundsson, Levcopoulos, and Narasimhan [10]. They gave 4- and 8-approximation algorithms running in $O(n^3)$ and $O(n \log n)$ time, respectively. The current best approximation ratio is 2, which was obtained independently by Chepoi et al. [5] using an LP-rounding technique, by Nouioua [15] using a primal-dual scheme, and by Guo, Sun, and Zhu [11] using a greedy method.
The RSA problem is another important special case of the GMMN problem. In this problem, given a set of terminals in $\mathbb{R}^2$, we are required to find a minimum-length network that contains an M-path between the origin and every terminal. The RSA problem was first studied by Nastansky, Selkow, and Stewart [14] in 1974. The complexity of the RSA problem had been open for a long time, and Shi and Su [20] showed that the decision version is strongly NP-complete after three decades. Rao, Sadayappan, Hwang, and Shor [16] proposed a 2-approximation algorithm that runs in $O(n \log n)$ time. Lu and Ruan [12] and Zachariasen [21] independently obtained PTASes, which are both based on Arora's technique [3] of building a PTAS for the metric Steiner tree problem.
Organization
The rest of this paper is organized as follows. In Section 2, we describe necessary definitions and notations. In Section 3, we present an algorithm for the star case and prove Theorem 1.1. In Section 4, based on a DP approach, we extend our algorithm to the tree case and prove Theorem 1.2. Then, in Section 5, we improve the algorithm shown in Section 4 by reducing the computational cost of solving subproblems in our DP and prove Theorem 1.3. Finally, in Section 6, we show that any cycle (or triangle-free pseudotree) instance can be reduced to O(n) tree instances, which implies Corollary 1.4. We also discuss an improvement on the general case and another observation in the appendix.
Problem Formulation
For a point $p \in \mathbb{R}^2$, we denote by $p_x$ and $p_y$ its x- and y-coordinates, respectively, i.e., $p = (p_x, p_y)$. Let $p, q \in \mathbb{R}^2$ be two points. We write $p \le q$ if both $p_x \le q_x$ and $p_y \le q_y$ hold. We define two points $p \wedge q = (\min\{p_x, q_x\}, \min\{p_y, q_y\})$ and $p \vee q = (\max\{p_x, q_x\}, \max\{p_y, q_y\})$. We denote by $pq$ the segment whose endpoints are $p$ and $q$, and by $\|pq\|$ its length. We define $d_x(p, q) = |p_x - q_x|$ and $d_y(p, q) = |p_y - q_y|$, and denote by $d(p, q)$ the Manhattan distance between $p$ and $q$, i.e., $d(p, q) = d_x(p, q) + d_y(p, q)$. Note that $\|pq\| = d(p, q)$ if and only if $p_x = q_x$ or $p_y = q_y$, and then the segment $pq$ is said to be vertical or horizontal, respectively, and axis-aligned in either case.
A (geometric) network $N$ in $\mathbb{R}^2$ is a finite simple graph with a vertex set $V(N) \subseteq \mathbb{R}^2$ and an edge set $E(N) \subseteq \binom{V(N)}{2} = \{\{p, q\} \mid p, q \in V(N),\ p \neq q\}$, where we often identify each edge $\{p, q\}$ with the corresponding segment $pq$. The length of $N$ is defined as $\|N\| = \sum_{\{p,q\} \in E(N)} \|pq\|$. For two points $s, t \in \mathbb{R}^2$, a path $\pi$ between $s$ and $t$ (or an $s$-$t$ path) is a network of the form $V(\pi) = \{p_0 = s, p_1, \ldots, p_k = t\}$ and $E(\pi) = \{\{p_{i-1}, p_i\} \mid i \in [k]\}$, where $[k] = \{1, 2, \ldots, k\}$ for a nonnegative integer $k$. An $s$-$t$ path $\pi$ is called a Manhattan path (or an M-path) for a pair $(s, t)$ if every edge $\{p_{i-1}, p_i\} \in E(\pi)$ is axis-aligned and $\|\pi\| = d(s, t)$ holds.
We are now ready to state our problem formally.

Problem (GMMN).
Input: A set $P$ of $n$ pairs of points in $\mathbb{R}^2$.
Goal: Find a minimum-length network $N$ in $\mathbb{R}^2$ that consists of axis-aligned edges and contains a Manhattan path for every pair $(s, t) \in P$.
Throughout this paper, when we write a pair $(p, q) \in \mathbb{R}^2 \times \mathbb{R}^2$, we assume $p_x \le q_x$ (by swapping if necessary). A pair $(p, q)$ is said to be regular if $p_y \le q_y$, and flipped if $p_y \ge q_y$. In addition, if $p_x = q_x$ or $p_y = q_y$, then there exists a unique M-path for $(p, q)$ and we call such a pair degenerate.
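As a concrete companion to these definitions, the short Python sketch below encodes points as (x, y) tuples and implements the Manhattan distance together with the normalization and classification of pairs just described; the helper names are ours, not the paper's.

```python
from typing import Tuple

Point = Tuple[float, float]

def manhattan(p: Point, q: Point) -> float:
    """d(p, q) = d_x(p, q) + d_y(p, q)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def normalize(p: Point, q: Point) -> Tuple[Point, Point]:
    """Swap so that p.x <= q.x, as assumed throughout the paper."""
    return (p, q) if p[0] <= q[0] else (q, p)

def classify(p: Point, q: Point) -> str:
    """Classify a pair as degenerate, regular, or flipped."""
    p, q = normalize(p, q)
    if p[0] == q[0] or p[1] == q[1]:
        return "degenerate"  # unique M-path: one axis-aligned segment
    return "regular" if p[1] <= q[1] else "flipped"

print(manhattan((0, 0), (3, 4)))  # 7
print(classify((0, 0), (3, 4)))   # regular
print(classify((0, 4), (3, 0)))   # flipped
```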
Restricting a Feasible Region to the Hanan Grid
For a GMMN instance $P$, we denote by $H(P)$ the Hanan grid, which is a grid network in $\mathbb{R}^2$ consisting of vertical and horizontal lines through each point appearing in $P$. More formally, its vertex set is $V(H(P)) = \{(p_x, q_y) \mid p, q \text{ are terminals appearing in } P\}$, and its edge set consists of all pairs of horizontally or vertically consecutive vertices (see Figure 1). Note that $H(P)$ is an at most $2n \times 2n$ grid network. It is not difficult to see that, for any GMMN instance $P$, at least one optimal solution is contained in the Hanan grid $H(P)$ as its subgraph (cf. [9]).
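A minimal construction of the Hanan grid from this definition might look as follows (illustrative code, not from the paper); it collects the distinct x- and y-coordinates of all terminals and connects consecutive grid points.

```python
from itertools import product

def hanan_grid(pairs):
    """Build the Hanan grid H(P) of a set of terminal pairs.

    Returns (vertices, edges): vertices are (x, y) tuples; edges join
    horizontally or vertically consecutive grid points.
    """
    terminals = [t for pair in pairs for t in pair]
    xs = sorted({t[0] for t in terminals})
    ys = sorted({t[1] for t in terminals})
    vertices = set(product(xs, ys))
    edges = set()
    for y in ys:                        # horizontal edges
        for i in range(len(xs) - 1):
            edges.add(((xs[i], y), (xs[i + 1], y)))
    for x in xs:                        # vertical edges
        for j in range(len(ys) - 1):
            edges.add(((x, ys[j]), (x, ys[j + 1])))
    return vertices, edges

P = [((0, 0), (2, 3)), ((1, 4), (5, 1))]
V, E = hanan_grid(P)
print(len(V), len(E))  # 16 vertices, 24 edges for this 4x4 grid
```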
For each pair $v = (p, q) \in V(H(P)) \times V(H(P))$, we denote by $\Pi_P(v)$ or $\Pi_P(p, q)$ the set of all M-paths for $v$ that are subgraphs of the Hanan grid $H(P)$. By the problem definition, we associate each $n$-tuple of M-paths, consisting of an M-path $\pi_v \in \Pi_P(v)$ for each $v \in P$, with a feasible solution $N = \bigcup_{v \in P} \pi_v$ on $H(P)$, where the union of networks is defined by the set unions of the vertex sets and of the edge sets. Moreover, each minimal feasible (as well as optimal) solution on $H(P)$ must be represented in this way. Based on this correspondence, we abuse the notation as $N = (\pi_v)_{v \in P} \in \prod_{v \in P} \Pi_P(v)$, and define $\mathrm{Feas}(P)$ and $\mathrm{Opt}(P)$ as the sets of feasible solutions covering all minimal ones and of all optimal solutions, respectively, on $H(P)$, i.e., $\mathrm{Feas}(P) = \prod_{v \in P} \Pi_P(v)$ and $\mathrm{Opt}(P) = \arg\min\{\|N\| \mid N \in \mathrm{Feas}(P)\}$. Thus, we have restricted a feasible region of a GMMN instance $P$ to the Hanan grid $H(P)$. In other words, the GMMN problem reduces to finding a network $N = (\pi_v)_{v \in P} \in \mathrm{Opt}(P)$ as an $n$-tuple of M-paths in $\mathrm{Feas}(P)$.
Specialization Based on Intersection Graphs
The bounding box of a pair $v = (p, q) \in \mathbb{R}^2 \times \mathbb{R}^2$ is the rectangular region $B(v) = B(p, q) = \{z \in \mathbb{R}^2 \mid p \wedge q \le z \le p \vee q\}$. Note that $B(p, q)$ is the region where an M-path for $(p, q)$ can exist. For a GMMN instance $P$ and a pair $v \in P$, we denote by $H(P, v)$ the subgraph of the Hanan grid $H(P)$ induced by $V(H(P)) \cap B(v)$. We define the intersection graph $\mathrm{IG}[P]$ of $P$ as the graph with vertex set $P$ in which two distinct pairs $u, v \in P$ are adjacent if and only if their bounding boxes intersect. The intersection graph $\mathrm{IG}[P]$ intuitively represents how complicated a GMMN instance $P$ is, in the sense that, for each $u, v \in P$, an edge $\{u, v\} \in E(\mathrm{IG}[P])$ exists if and only if two M-paths $\pi_u \in \Pi_P(u)$ and $\pi_v \in \Pi_P(v)$ can share some segments, which saves the total length of a network in $\mathrm{Feas}(P)$. In particular, if $\mathrm{IG}[P]$ contains no triangle, then no segment can be shared by M-paths for three different pairs in $P$, and hence $N \in \mathrm{Feas}(P)$ is optimal (i.e., $\|N\|$ is minimized) if and only if the total length of segments shared by two M-paths in $N$ is maximized.
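Continuing the illustrative code above (function names ours), the intersection graph can be computed directly from the bounding boxes: two axis-aligned boxes intersect exactly when their x-ranges and y-ranges both overlap.

```python
def bounding_box(p, q):
    """B(p, q) as a (lower-left, upper-right) corner pair, i.e. (p^q, pvq)."""
    return ((min(p[0], q[0]), min(p[1], q[1])),
            (max(p[0], q[0]), max(p[1], q[1])))

def boxes_intersect(b1, b2):
    """Axis-aligned boxes intersect iff both coordinate ranges overlap."""
    (lo1, hi1), (lo2, hi2) = b1, b2
    return (lo1[0] <= hi2[0] and lo2[0] <= hi1[0] and
            lo1[1] <= hi2[1] and lo2[1] <= hi1[1])

def intersection_graph(pairs):
    """IG[P] as an adjacency list over pair indices (O(n^2) construction)."""
    boxes = [bounding_box(p, q) for p, q in pairs]
    adj = {i: [] for i in range(len(pairs))}
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            if boxes_intersect(boxes[i], boxes[j]):
                adj[i].append(j)
                adj[j].append(i)
    return adj

P = [((0, 0), (2, 3)), ((1, 4), (5, 1)), ((6, 6), (8, 8))]
print(intersection_graph(P))  # {0: [1], 1: [0], 2: []}
```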
We denote by GMMN[· · · ] the GMMN problem with a restriction on the intersection graph of the input; e.g., IG[P] is restricted to a tree in GMMN[Tree]. Each restricted problem is formally stated in the relevant section.
An O(n 2 )-Time Algorithm for GMMN[Star]
In this section, as a step toward GMMN[Tree], we present an O(n²)-time algorithm for GMMN[Star], which is formally stated as follows.
Problem (GMMN[Star]).
Input: A set P ⊆ R 2 × R 2 of n pairs whose intersection graph IG[P ] is a star, whose center is denoted by r = (s, t) ∈ P .
A crucial observation for GMMN[Star] is that an M-path π_l ∈ Π_P(l) for each leaf pair l ∈ P − r can share some segments only with an M-path π_r ∈ Π_P(r) for the center pair r. Hence, minimizing the length of N = (π_v)_{v∈P} ∈ Feas(P) is equivalent to maximizing the total length of segments shared by the two M-paths π_r and π_l over l ∈ P − r.
In Section 3.1, we observe that, for each leaf pair l ∈ P − r, once we fix where an M-path π_r ∈ Π_P(r) for r enters and leaves the bounding box B(l), the maximum length of segments that can be shared by π_r and π_l ∈ Π_P(l) is easily determined. Thus, GMMN[Star] reduces to finding an optimal M-path π_r ∈ Π_P(r) for the center pair r = (s, t), and in Section 3.2, we formulate this task as the computation of a longest s-t path in an auxiliary directed acyclic graph (DAG), which is constructed from the subgrid H(P, r). As a result, we obtain an exact algorithm that runs in linear time in the size of the auxiliary graphs, which we simplify in Section 3.3 so that their size is always O(n²).

Figure 2: (a) If l = (s_l, t_l) is a regular pair, for any π_v ∈ Π_P(p, q), some π_l ∈ Π_P(l) completely includes π_v. (b) If l = (s_l, t_l) is a flipped pair, while no π_l ∈ Π_P(l) can contain both horizontal and vertical segments of any π_v ∈ Π_P(p, q), one can choose π_v ∈ Π_P(p, q) so that the whole of either the horizontal or the vertical segments of π_v is included in some π_l ∈ Π_P(l).
Observation on Sharable Segments
Without loss of generality, we assume that the center pair r = (s, t) is regular, i.e., s ≤ t. Fix an M-path π_r ∈ Π_P(r) and a leaf pair l = (s_l, t_l) ∈ P − r. Obviously, if π_r is disjoint from the bounding box B(l), then any M-path π_l ∈ Π_P(l) cannot share any segment with π_r. Suppose that π_r intersects B(l), and let π_r[l] denote the intersection π_r ∩ H(P, l). Let v = (p, q) be the pair of two vertices on π_r such that π_r[l] is a p-q path, and we call v the in-out pair of π_r for l. As π_r ∈ Π_P(r), we have π_r[l] ∈ Π_P(v), and v is also regular (recall the assumption p_x ≤ q_x). Moreover, for any M-path π_v ∈ Π_P(v), the network π′_r obtained from π_r by replacing its subpath π_r[l] with π_v is also an M-path for r in Π_P(r). Since B(v) ⊆ B(l) does not intersect B(l′) for any other leaf pair l′ ∈ P \ {r, l}, once v = (p, q) is fixed, we can freely choose an M-path π_v ∈ Π_P(v) instead of π_r[l] for maximizing the length of segments shared with some π_l ∈ Π_P(l). For each possible in-out pair v = (p, q) of M-paths in Π_P(r) (the sets of those vertices p and q are formally defined in Section 3.2 as V_in(r, l) and V_out(r, l), respectively), we denote by γ(l, p, q) the maximum length of segments shared by two M-paths for l and v = (p, q), i.e., γ(l, p, q) = max{‖π_l ∩ π_v‖ | π_l ∈ Π_P(l), π_v ∈ Π_P(p, q)}. (3.1) Then, the following lemma is easily observed (see Figure 2).
Lemma 3.1. For every leaf pair l ∈ P − r and every possible in-out pair v = (p, q), the following properties hold: (i) if l is regular (or v is degenerate), then γ(l, p, q) = d(p, q); (ii) if l is flipped and v is nondegenerate, then γ(l, p, q) = max{d_x(p, q), d_y(p, q)}, where d_x and d_y denote the horizontal and vertical distances, respectively.
Reduction to the Longest Path Problem in DAGs
In this section, we reduce GMMN[Star] to the longest path problem in DAGs. Let P be a GMMN[Star] instance and r = (s, t) ∈ P (s ≤ t) be the center of IG[P], and we construct an auxiliary DAG G from the subgrid H(P, r) as follows (see Figure 3).

Figure 3: Construction of the auxiliary DAG G; the length of each interior arc (p, q) with p ∈ V_in(r, l) and q ∈ V_out(r, l) is γ(l, p, q).
First, for each edge e = {p, q} ∈ E(H(P, r)) with p ≤ q (and p ≠ q), we replace e with an arc (p, q) of length 0. For each leaf pair l ∈ P − r, let s′_l and t′_l denote the lower-left and upper-right corners of B(r) ∩ B(l), respectively, so that (s′_l, t′_l) is a regular pair with B(s′_l, t′_l) = B(r) ∩ B(l). If (s′_l, t′_l) is degenerate, then we change the length of each arc (p, q) with p, q ∈ V(H(P, r) ∩ B(l)) from 0 to ‖pq‖, which clearly reflects the (maximum) sharable length in B(l). Otherwise, the bounding box B(s′_l, t′_l) ⊆ B(l) has a nonempty interior, and we distinguish four subsets of V(H(P, r) ∩ B(l)): the set V_in(r, l) of vertices on the left or bottom boundary of B(s′_l, t′_l) (where an M-path for r can enter B(l)), the set V_out(r, l) of vertices on the right or top boundary (where it can leave B(l)), the set V_int(r, l) of interior vertices, and the set V_cor(r, l) of the upper-left and lower-right corners of B(s′_l, t′_l), which lie on both an entering and a leaving boundary. As r is regular, any M-path π_r ∈ Π_P(r) intersecting B(l) enters it at some p ∈ V_in(r, l) and leaves it at some q ∈ V_out(r, l), and the maximum sharable length γ(l, p, q) in B(l) is determined by Lemma 3.1. We remove all the interior vertices in V_int(r, l) (with all the incident arcs) and all the boundary arcs (p, q) with p, q ∈ V_in(r, l) ∪ V_out(r, l). Instead, for each pair (p, q) of p ∈ V_in(r, l) and q ∈ V_out(r, l) with p ≤ q and p ≠ q, we add an interior arc (p, q) of length γ(l, p, q). Let E_int(l) denote the set of such interior arcs for each nondegenerate pair l ∈ P − r.
Finally, we take care of the corner vertices in V_cor(r, l), which could be used for cheating if l is flipped, as follows. Suppose that p ∈ V_cor(r, l) is the upper-left corner of B(l), and consider the situation when the in-out pair (p′, q′) of π_r ∈ Π_P(r) for l satisfies p′_x = p_x < q′_x and p′_y < p_y = q′_y. Then, (p′, q′) is not degenerate, and by Lemma 3.1, the maximum sharable length in B(l) is max{d_x(p′, q′), d_y(p′, q′)}, as represented by an interior arc (p′, q′); but one can take another directed p′-q′ path that consists of two arcs (p′, p) and (p, q′) in the current graph, whose length is d_y(p′, q′) + d_x(p′, q′), exceeding the correct value. To avoid such cheating, for each p ∈ V_cor(r, l), we divide it into two distinct copies p_hor and p_vert (which are often identified with the original p unless we need to distinguish them), and replace the endpoint p of each incident arc e with p_hor if e is horizontal and with p_vert if vertical (see Figure 3 (d)). In addition, when p is not shared by any other leaf pair, we add an arc (p_hor, p_vert) of length 0 if p is the upper-left corner of B(s′_l, t′_l) and an arc (p_vert, p_hor) of length 0 if it is the lower-right corner, which represents the situation when π_r ∈ Π_P(r) intersects B(l) only at p.
Let G be the constructed directed graph, and denote by ℓ(e) the length of each arc e ∈ E(G). The following two lemmas complete our reduction (see Figure 3 again).

Lemma 3.2. The directed graph G is acyclic.
Proof. Almost all arcs are of the form (p, q) with p ≤ q and p ≠ q. The only exception is of the form (p_vert, p_hor) or (p_hor, p_vert) for some p ∈ V_cor(r, l) with some l ∈ P − r, and at most one direction exists for each p by definition. Thus, G contains no directed cycle.
Lemma 3.3. The maximum total length of segments shared in a feasible solution, i.e., the maximum of Σ_{l∈P−r} ‖π_r ∩ π_l‖ over N = (π_v)_{v∈P} ∈ Feas(P), equals the length of a longest directed s-t path in G.

Proof. Fix a directed s-t path π_G in G. By the definition of G and Lemma 3.2, for each nondegenerate pair l ∈ P − r, the path π_G uses at most one interior arc in E_int(l), and any other arc has a trivially corresponding edge in H(P, r) (including edges in a degenerate pair). For each l with E_int(l) ∩ E(π_G) = {e_l}, write e_l = (p, q). By the definitions of ℓ and γ, we have ℓ(e_l) = γ(l, p, q) = max{‖π_l ∩ π_v‖ | π_l ∈ Π_P(l), π_v ∈ Π_P(p, q)}, (3.2) and hence one can construct an M-path π_r ∈ Π_P(r) by replacing each e_l with some M-path π_{e_l} ∈ Π_P(p, q) attaining (3.2), such that Σ_{e∈E(π_G)} ℓ(e) = Σ_{l∈P−r} max_{π_l∈Π_P(l)} ‖π_r ∩ π_l‖. (3.3) Conversely, for any M-path π_r ∈ Π_P(r), by the definitions of γ and ℓ, one can construct a directed s-t path π_G in G of length at least the right-hand side of (3.3), and we are done.
Computational Time Analysis with Simplified DAGs
A longest path in a DAG G is computed in O(|V(G)| + |E(G)|) time by dynamic programming. Although the subgrid H(P, r) has O(n²) vertices and edges, the auxiliary DAG G constructed in Section 3.2 may have many more arcs due to E_int(l), whose size is Θ(|V_in(r, l)| · |V_out(r, l)|) and can be Ω(n²). This, however, can always be reduced to linear by modifying the boundary vertices and the incident arcs appropriately so as to avoid creating diagonal arcs in B(l). In this section, we simplify G to G′ with O(n²) vertices and edges, which completes the proof of Theorem 1.1.
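For concreteness, the standard linear-time longest-path routine on a DAG that the algorithm relies on can be sketched as follows (illustrative names; `graph` maps each vertex to its outgoing (successor, arc length) pairs, and every vertex is assumed to appear as a key).

```python
# Longest path in a DAG in O(|V| + |E|) time via topological order.
from collections import deque

def longest_path_dag(graph, s, t):
    indeg = {v: 0 for v in graph}
    for v in graph:
        for w, _ in graph[v]:
            indeg[w] += 1
    order, queue = [], deque(v for v in graph if indeg[v] == 0)
    while queue:
        v = queue.popleft()
        order.append(v)
        for w, _ in graph[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    NEG = float("-inf")
    dist = {v: NEG for v in graph}
    dist[s] = 0
    for v in order:                  # relax all arcs in topological order
        if dist[v] == NEG:
            continue                 # v is unreachable from s
        for w, length in graph[v]:
            dist[w] = max(dist[w], dist[v] + length)
    return dist[t]                   # -inf if t is unreachable
```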
Fix a nondegenerate leaf pair l ∈ P − r, and we modify the relevant part as follows (see Figure 4). We first remove (more precisely, avoid creating) the arcs (p, q) ∈ E_int(l) for p ∈ V_in(r, l) and q ∈ V_out(r, l) with either p_x < q_x and p_y < q_y (diagonal) or p ∈ V_cor(r, l).
If l is a regular pair, then we keep the boundary vertices as they are. Instead of the removed arcs, we add a boundary arc (q_1, q_2) of length ‖q_1 q_2‖ for each q_1, q_2 ∈ V_out(r, l) with {q_1, q_2} ∈ E(H(P, r)) and q_1 ≤ q_2. Then, for any removed arc e = (p, q) ∈ E_int(l), there exists a p-q path in G′, whose length is always equal to ℓ(e) = γ(l, p, q) = d(p, q) (cf. Lemma 3.1).
If l is a flipped pair, then we need to keep track of which direction of segments (horizontal or vertical) is shared in B(l). For this purpose, we add two copies q_hor and q_vert of each boundary vertex q ∈ V_out(r, l) \ V_cor(r, l), with two arcs (q_hor, q) and (q_vert, q) of length 0 (recall that, for each p ∈ V_cor(r, l), we have already added p_hor and p_vert, and removed p itself in G). We also replace each remaining axis-aligned arc (p, q) ∈ E_int(l) with two arcs (p, q_hor) of length d_x(p, q) and (p, q_vert) of length d_y(p, q). Instead of the removed diagonal arcs, we add two boundary arcs (q_1^hor, q_2^hor) of length d_x(q_1, q_2) and (q_1^vert, q_2^vert) of length d_y(q_1, q_2) for each q_1, q_2 ∈ V_out(r, l) with {q_1, q_2} ∈ E(H(P, r)) and q_1 ≤ q_2. Then, for any removed arc e = (p, q) ∈ E_int(l), there exist two p-q paths in G′, whose lengths are equal to d_x(p, q) and d_y(p, q). As ℓ(e) = γ(l, p, q) = max{d_x(p, q), d_y(p, q)} (cf. Lemma 3.1), the longest paths are preserved by this simplification.
As with Lemma 3.2, we can easily confirm that G ′ is acyclic. Thus, we have obtained a simplified auxiliary DAG G ′ , and the following lemma completes the proof of Theorem 1.1.
Proof. For the vertex set, by definition, we see |V(G′)| ≤ 3|V(G)| ≤ 6|V(H(P, r))| = O(n²). For the arc set, since all the arcs outside of ⋃_{l∈P−r} B(l) directly come from the subgrid H(P, r), it suffices to show that the number of axis-aligned interior arcs and additional boundary arcs is bounded by O(n²) in total. By definition, if H(P, r) ∩ H(P, l) is an a × b grid, then the number of such arcs is at most 3(a + b) in the regular case and at most 6(a + b) in the flipped case. Since a, b = O(n) for each of the at most n − 1 leaf pairs, the total number is at most Σ_{l∈P−r} 6(a_l + b_l) = O(n²), and we are done.
An O(n 5 )-Time Algorithm for GMMN[Tree]
In this section, we present an O(n⁵)-time algorithm for GMMN[Tree], which is the main target of this paper and is stated as follows.
Problem (GMMN[Tree]).
Input: A set P ⊆ R 2 × R 2 of n pairs whose intersection graph IG[P ] is a tree.
For a GMMN[Tree] instance P , we choose an arbitrary pair r ∈ P as the root of the tree IG[P ]; in particular, when IG[P ] is a star, we regard the center as the root. The basic idea of our algorithm is dynamic programming on the tree IG[P ] from the leaves toward r. Each subproblem reduces to the longest path problem in DAGs like the star case, which is summarized as follows.
Fix a pair v = (s_v, t_v) ∈ P. If v ≠ r, then there exists a unique parent u = Par(v) in the tree IG[P] rooted at r, and there are O(n²) possible in-out pairs (p_v, q_v) of π_u ∈ Π_P(u) for v. We virtually define p_v = q_v = ε for the case when we do not care about the shared length in B(u), e.g., when v = r or π_u is disjoint from B(v). Let P_v denote the vertex set of the subtree of IG[P] rooted at v (including v itself). For every possible in-out pair (p_v, q_v), as a subproblem, we compute the maximum total length dp(v, p_v, q_v) = max Σ_{w∈P_v} ‖π_w ∩ π_{Par(w)}‖ of shared segments, subject to π_u[v] ∈ Π_P(p_v, q_v). By definition, the goal is to compute dp(r, ε, ε). If v is a leaf in IG[P], then P_v = {v}. In this case, dp(v, p_v, q_v) is the maximum length of segments shared by two M-paths π_v ∈ Π_P(v) and π_u ∈ Π_P(u) with π_u[v] ∈ Π_P(p_v, q_v), which is easily determined (cf. Lemma 3.1). Otherwise, using the computed values dp(w, p_w, q_w) for all children w of v and all possible in-out pairs (p_w, q_w), we reduce the task to the computation of a longest s_v-t_v path in an auxiliary DAG, as with finding an optimal M-path for the center pair in the star case.
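The overall recursion can be summarized by the following schematic sketch; `children`, `in_out_pairs`, and `solve_subgrid_dag` are placeholders for the structures defined in the text, and only the shape of the dynamic programming is shown.

```python
EPS = None  # the virtual in-out pair (eps, eps): ignore sharing with the parent

def dp(v, in_out, children, in_out_pairs, solve_subgrid_dag):
    """Maximum total shared length within B(P_v), given the parent's in-out pair."""
    if not children(v):
        # leaf: sharable length with the parent's M-path only (cf. Lemma 3.1)
        return solve_subgrid_dag(v, in_out, {})
    # recursively computed child tables, indexed by the child's in-out pairs
    child_tables = {
        w: {io: dp(w, io, children, in_out_pairs, solve_subgrid_dag)
            for io in in_out_pairs(w) + [EPS]}
        for w in children(v)
    }
    # reduce to a longest s_v-t_v path in the auxiliary DAG G[v, in_out]
    return solve_subgrid_dag(v, in_out, child_tables)

# the goal dp(r, eps, eps) is obtained by calling dp on the root r
```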
Constructing Auxiliary DAGs for Subproblems
Let v = (s_v, t_v) ∈ P, which is assumed to be regular without loss of generality. If v = r, then let p_v = q_v = ε; otherwise, let u = Par(v) be its parent, and fix a possible in-out pair (p_v, q_v) of π_u ∈ Π_P(u) for v. Regarding the children w ∈ C_v (together with the auxiliary pair u′ = (p_v, q_v), unless p_v = q_v = ε) as leaf pairs and v as the center, we construct the same auxiliary directed graph as in Section 3, denoted by G[v, p_v, q_v]. We then change the length of each interior arc (p_w, q_w) ∈ E_int(w) for each child w ∈ C_v from γ(w, p_w, q_w) to dp(w, p_w, q_w) − dp(w, ε, ε), so that it represents the difference of the total sharable length in B(P_w) = ⋃_{w′∈P_w} B(w′) between the cases when the M-path for v intersects B(w) (entering at p_w and leaving at q_w) and when the M-path for v is ignored. As with Lemma 3.2, the graph G[v, p_v, q_v] is acyclic. The following lemma completes the reduction of computing dp(v, p_v, q_v) to the longest path problem in DAGs.

Lemma 4.1. dp(v, p_v, q_v) equals the length of a longest directed s_v-t_v path in G[v, p_v, q_v] plus Σ_{w∈C_v} dp(w, ε, ε).

Proof. If v is a leaf in IG[P], then C_v = ∅, and hence the claim immediately follows from Lemma 3.3. Suppose that v is not a leaf in IG[P], and let π_G be a directed s_v-t_v path in G[v, p_v, q_v]; its length plus the baseline is Σ_{e∈E(π_G)} ℓ(e) + Σ_{w∈C_v} dp(w, ε, ε). (4.1) By definition, for each w ∈ C_v + u′, the path π_G uses at most one arc in E_int(w). For each w with E_int(w) ∩ E(π_G) = {e_w}, write e_w = (p_w, q_w), and then ℓ(e_w) = dp(w, p_w, q_w) − dp(w, ε, ε). Hence, by the definition of dp, for each such w ∈ C_v, there exists an M-path π̃_{v,w} ∈ Π_P(p_w, q_w), appearing as π_v[w] = π_v ∩ H(P, w) in some feasible solution N = (π_{w′})_{w′∈P} ∈ Feas(P), such that Σ_{w′∈P_w} ‖π_{w′} ∩ π_{Par(w′)}‖ = dp(w, p_w, q_w).
Conversely, we show that, for any feasible solution N = (π_w)_{w∈P} ∈ Feas(P) with π_u[v] ∈ Π_P(p_v, q_v), the total shared length Σ_{w∈P_v} ‖π_w ∩ π_{Par(w)}‖ is at most (4.1) for some directed s_v-t_v path π_G. The proof is done by induction from the leaves to the root in IG[P]. By taking π_G so that (p_w, q_w) ∈ E(π_G) for each w ∈ C_v + u′ unless p_w = q_w = ε, we obtain the desired relation from the induction hypothesis (when v is not a leaf) and the definitions of ℓ and dp.
Computational Time Analysis
This section completes the proof of Theorem 1.2. For a pair v ∈ P, suppose that H(P, v) is an a × b grid with a, b = O(n). Then, for each of the O(n²) possible in-out pairs (p_v, q_v), the auxiliary DAG G[v, p_v, q_v] has O(n²) vertices and arcs after the simplification of Section 3.3, so a longest s_v-t_v path is found in O(n²) time. Hence, filling up the table dp(v, ·, ·) takes O(n⁴) time for each v ∈ P, and the overall running time is O(n⁵).
An O(n 3 )-Time Algorithm for GMMN[Tree]
In this section, we improve the DP algorithm for GMMN[Tree] given in Section 4 so that it can be implemented in O(n 3 ) time.
Overview
Let P be a GMMN[Tree] instance with |P | ≥ 3, and we choose a root r ∈ P of the tree IG[P ] such that r is not a leaf (i.e., r has at least two neighbors). In Section 4, for each v ∈ P − r and each possible in-out pair (p v , q v ) of π u ∈ Π P (u) for v, we compute dp(v, p v , q v ) one-by-one by finding a longest s v -t v path in the auxiliary DAG G[v, p v , q v ]. In this section, using an extra DP, we improve this part so that we compute dp(v, p v , q v ) for many possible in-out pairs (p v , q v ) at once.
As with Section 4, we assume that v is regular, and let u = (s_u, t_u) be the parent of v. We also assume that neither u nor v is degenerate (otherwise, we can easily fill up the table dp(v, ·, ·) in O(n²) time by definition). Since u must have a neighbor other than v by the choice of the root r, we have B(u) ⊈ B(v). Hence, for any M-path π_u ∈ Π_P(u), its in-out pair (p_v, q_v) satisfies one of the following conditions: (a) p_v = s_u or q_v = t_u, i.e., an endpoint of u lies in B(v), and then that endpoint of the in-out pair is completely fixed; (b) p_v ≠ s_u, q_v ≠ t_u, and they are on two adjacent boundaries of B(v); (c) p_v ≠ s_u, q_v ≠ t_u, and they are on two opposite boundaries of B(v).
For each case among (a)-(c), we design an extra DP to compute dp(v, p_v, q_v) for all such in-out pairs (p_v, q_v) in O(n²) time. Then, no matter how B(u) intersects B(v), one can classify all the possible in-out pairs into a constant number of such cases, and fill up the table dp(v, ·, ·) in O(n²) time in total by applying the designed DPs separately. This implies that the overall computational time is bounded by O(n³).
No matter which of the three cases (a)-(c) we consider, we first compute the value dp(v, ε, ε) by computing a longest s_v-t_v path in the auxiliary DAG G[v, ε, ε]. In addition, by doing it in two ways, from s_v and from t_v, we obtain a longest s_v-z path and a longest z-t_v path for every (reachable) z ∈ V(G[v, ε, ε]) as byproducts. We denote the lengths of the s_v-z path and the z-t_v path by λ(s_v, z) and λ(z, t_v), respectively. Note that this computation for all v ∈ P requires O(n³) time in total (cf. Section 4.2). We also compute the value κ_v = Σ_{w∈C_v} dp(w, ε, ε), which is the baseline of the total sharable length in the subtree rooted at v (cf. Lemma 4.1), where recall that C_v denotes the set of all children of v.
We then show that computing the values dp(v, p v , q v ) for all possible in-out pairs (p v , q v ) in each case takes O(n 2 ) time in total. Suppose that H(P, v) ∩ H(P, u) is an a × b grid graph, where a and b are associated with the y-and x-coordinates, respectively. Depending on the cases (a)-(c) and whether the parent u is regular or flipped (hence, we consider six cases), we define auxiliary DP values (e.g., denoted by ω(v, i, j) for i ∈ [a] and j ∈ [b]), and demonstrate how to compute and use them.
When the Parent is Regular
In this section, we consider the case that the parent u is a regular pair.
Case (a): One Endpoint is Fixed in the Subgrid
Figure 5: The case (a) when the parent u is regular.

By symmetry, we consider the situation when p_v = s_u ∈ B(v), where we let p_{i,j} denote the (i, j) vertex on the a × b grid H(P, v) ∩ H(P, u) for i ∈ [a] and j ∈ [b] and define p_{1,1} = s_u (see Figure 5). In this case, we need to compute dp(v, p_{1,1}, p_{i,b}) for each i ∈ [a]. For each i ∈ [a] and j ∈ [b], we define ω(v, i, j) as the length of a longest s_v-p_{i,j} path in G[v, p_{1,1}, p_{i,j}], where we slightly extend the definition of the auxiliary DAG G[v, p_v, q_v] in Section 4.1
so that (p_v, q_v) is not necessarily an in-out pair of π_u ∈ Π_P(u) for v but that of its subpath. Then, by Lemma 4.1, dp(v, p_{1,1}, p_{i,b}) can be obtained from ω(v, i, b) together with λ(p_{i,b}, t_v) and the baseline κ_v. Thus, after filling up the table ω(v, ·, ·), we can compute the values dp(v, p_{1,1}, p_{i,b}) for all i ∈ [a] in O(n²) time in total. In what follows, we see how to compute ω(v, i, j).
For the base case when i = j = 1, from the definitions of G[v, ·, ·] and λ(s_v, ·), we see ω(v, 1, 1) = λ(s_v, p_{1,1}). Next, when i > 1 and j = 1, we can compute it by a recursive formula ω(v, i, 1) = max{ω(v, i − 1, 1) + ‖p_{i−1,1} p_{i,1}‖, λ(s_v, p_{i,1})}, which is confirmed as follows. Fix a longest s_v-p_{i,1} path in G[v, p_{1,1}, p_{i,1}] attaining ω(v, i, 1), and let π ∈ Π_P(s_v, p_{i,1}) be a corresponding M-path. If π intersects p_{i−1,1}, then the s_v-p_{i−1,1} prefix corresponds to a longest s_v-p_{i−1,1} path in G[v, p_{1,1}, p_{i−1,1}] of length ω(v, i − 1, 1), and the last segment p_{i−1,1} p_{i,1} contributes to the length in G[v, p_{1,1}, p_{i,1}] in addition. Otherwise, π is disjoint from p_{i−1,1}, and it then corresponds to a longest s_v-p_{i,1} path in G[v, ε, ε] of length λ(s_v, p_{i,1}). The case when i = 1 and j > 1 is similarly computed by ω(v, 1, j) = max{ω(v, 1, j − 1) + ‖p_{1,j−1} p_{1,j}‖, λ(s_v, p_{1,j})}. Finally, when i > 1 and j > 1, we can compute it by a recursive formula ω(v, i, j) = max{ω(v, i − 1, j) + ‖p_{i−1,j} p_{i,j}‖, ω(v, i, j − 1) + ‖p_{i,j−1} p_{i,j}‖, λ(s_v, p_{i,j})}.
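A sketch of filling the table ω(v, ·, ·), assuming the recurrences reconstructed above; `lam` stands for the precomputed λ(s_v, ·) values and `seg` for the lengths of the grid segments between consecutive p's (both are placeholders).

```python
def fill_omega(a, b, lam, seg):
    """Fill omega[i][j] (1-indexed) for case (a) with a regular parent."""
    omega = [[0.0] * (b + 1) for _ in range(a + 1)]
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            best = lam(i, j)  # path disjoint from both predecessor vertices
            if i > 1:
                best = max(best, omega[i - 1][j] + seg((i - 1, j), (i, j)))
            if j > 1:
                best = max(best, omega[i][j - 1] + seg((i, j - 1), (i, j)))
            omega[i][j] = best
    return omega  # each entry in O(1): O(ab) = O(n^2) total
```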
Case (b): In-Out Pairs Move on Adjacent Boundaries
By symmetry, we consider the situation when (p_v)_y = (s_v)_y and (q_v)_x = (t_v)_x for all possible in-out pairs (p_v, q_v) of π_u ∈ Π_P(u) for v, and let p_{i,j} be the (i, j) vertex on the a × b grid H(P, v) ∩ H(P, u) for each i ∈ [a] and j ∈ [b], where we define p_{1,1} as the lower-right corner (see Figure 6). In this case, we need to compute dp(v, p_{1,j}, p_{i,1}) for each pair of i ∈ [a] and j ∈ [b].
For each i ∈ [a] and j ∈ [b], we define ω(v, i, j) as the length of a longest s_v-t_v path in G[v, p_{1,j}, p_{i,1}]; then dp(v, p_{1,j}, p_{i,1}) = ω(v, i, j) + κ_v by Lemma 4.1. Thus, after filling up the table ω(v, ·, ·), we can compute the values dp(v, p_{1,j}, p_{i,1}) for all i ∈ [a] and j ∈ [b] in O(n²) time in total. In what follows, we see how to compute ω(v, i, j). We first observe that, for any s_v-t_v path in G[v, p_{1,j}, p_{i,1}] attaining ω(v, i, j), a corresponding M-path π_v ∈ Π_P(v) can be taken so that it intersects p_{i,j} by choosing an M-path π_u ∈ Π_P(u) appropriately (cf. Lemma 3.1).
For the base case when i = j = 1, from the definitions of G[v, ·, ·], λ(s_v, ·), and λ(·, t_v), we see ω(v, 1, 1) = λ(s_v, p_{1,1}) + λ(p_{1,1}, t_v). (5.5) Next, when i > 1 and j = 1, we can compute it by a recursive formula ω(v, i, 1) = max{ω(v, i − 1, 1) + ‖p_{i−1,1} p_{i,1}‖, λ(s_v, p_{i,1}) + λ(p_{i,1}, t_v)}, (5.6) which is confirmed as follows. Fix an s_v-t_v path in G[v, p_{1,1}, p_{i,1}] attaining ω(v, i, 1), and let π_v ∈ Π_P(v) be a corresponding M-path. If π_v intersects p_{i−1,1}, then it corresponds to an s_v-t_v path in G[v, p_{1,1}, p_{i−1,1}] attaining ω(v, i − 1, 1), and the segment p_{i−1,1} p_{i,1} contributes to the length in G[v, p_{1,1}, p_{i,1}] in addition. Otherwise, π_v is disjoint from p_{i−1,1}, and hence π_v intersects B(p_{1,1}, p_{i,1}) only at p_{i,1}. Then, the s_v-p_{i,1} prefix of π_v corresponds to a longest s_v-p_{i,1} path in G[v, ε, ε] of length λ(s_v, p_{i,1}), and the p_{i,1}-t_v suffix to a longest p_{i,1}-t_v path in G[v, ε, ε] of length λ(p_{i,1}, t_v). The case when i = 1 and j > 1 is similarly computed by ω(v, 1, j) = max{ω(v, 1, j − 1) + ‖p_{1,j} p_{1,j−1}‖, λ(s_v, p_{1,j}) + λ(p_{1,j}, t_v)}. (5.7) Finally, when i > 1 and j > 1, we can compute it by a recursive formula ω(v, i, j) = max{ω(v, i − 1, j) + ‖p_{i−1,j} p_{i,j}‖, ω(v, i, j − 1) + ‖p_{i,j−1} p_{i,j}‖, λ(s_v, p_{i,j}) + λ(p_{i,j}, t_v)}, (5.8) because, for any M-path π_v ∈ Π_P(v) intersecting p_{i,j}, it either intersects at least one of p_{i−1,j} and p_{i,j−1} or intersects B(p_{1,j}, p_{i,1}) only at p_{i,j}, and each case can be analyzed as with the previous paragraph.
Since we only look up a constant number of values in (5.5)-(5.8), each value ω(v, i, j) can be computed in constant time. As the table ω(v, ·, ·) is of size a × b = O(n 2 ), the total computational time is O(n 2 ). Thus we are done.
Case (c): In-Out Pairs Move on Opposite Boundaries
By symmetry, we consider the situation when (p v ) y = (s v ) y and (q v ) y = (t v ) y for all possible in-out pairs (p v , q v ) of π u ∈ Π P (u) for v. We then have (s v ) x ≤ (s u ) x < (t u ) x ≤ (t v ) x and (s u ) y < (s v ) y < (t v ) y < (t u ) y , and let p i,j be the (i, j) vertex on the a × b grid H(P, v) ∩ H(P, u) for each i ∈ [a] and j ∈ [b], where we define p 1,1 as the lower-right corner (see Figure 7). In this case, we need to compute dp(v, p 1,j , p a,k ) for each j, k ∈ [b] with j ≥ k, which we directly compute as follows.
First, when j = k = 1, we have dp(v, p_{1,1}, p_{a,1}) = max_{h,i∈[a], h≤i} {λ(s_v, p_{h,1}) + ‖p_{h,1} p_{i,1}‖ + λ(p_{i,1}, t_v)} + κ_v, (5.9) because any M-path π_v ∈ Π_P(v) intersects the segment p_{1,1} p_{a,1} at some point, and it is partitioned into three parts: the s_v-p_{h,1} prefix, the segment p_{h,1} p_{i,1}, and the p_{i,1}-t_v suffix for some h, i ∈ [a] with h ≤ i. The computation of dp(v, p_{1,1}, p_{a,1}) requires O(a²) = O(n²) time.
Next, for any 1 ≤ k ≤ j ≤ b, we have dp(v, p_{1,j}, p_{a,k}) = dp(v, p_{1,1}, p_{a,1}) + ‖p_{1,j} p_{1,k}‖, (5.10) which is confirmed as follows. Fix a network (π_w)_{w∈P} ∈ Feas(P) attaining dp(v, p_{1,j}, p_{a,k}). Then, without changing the total shared length, we can modify the M-paths π_v ∈ Π_P(v) and π_u ∈ Π_P(u) with π_u[v] ∈ Π_P(p_{1,j}, p_{a,k}) so that it also attains dp(v, p_{1,1}, p_{a,1}) and π_v shares all of its horizontal segments in B(p_{1,j}, p_{a,k}) with π_u in addition, whose total length is d_x(p_{1,j}, p_{1,k}) = ‖p_{1,j} p_{1,k}‖ (cf. Lemma 3.1 and Figure 2). We can compute dp(v, p_{1,j}, p_{a,k}) in constant time by (5.10) for each j, k ∈ [b] with j ≥ k. As the table dp(v, ·, ·) is of size O(b²) = O(n²), the total computational time is O(n²). Thus we are done.
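The two-stage computation of case (c) can be sketched as follows, assuming (5.9) and (5.10) as reconstructed above; `lam_s`, `lam_t`, `row_seg`, `col_dist`, and `kappa` are placeholder callables and values for λ(s_v, ·), λ(·, t_v), the segment lengths ‖p_{h,1} p_{i,1}‖, the horizontal runs ‖p_{1,j} p_{1,k}‖, and κ_v.

```python
def case_c_tables(a, b, lam_s, lam_t, row_seg, col_dist, kappa):
    # (5.9): any M-path crosses the segment p_{1,1} p_{a,1}; try all h <= i
    base = max(
        lam_s(h) + row_seg(h, i) + lam_t(i)  # prefix + shared segment + suffix
        for h in range(1, a + 1)
        for i in range(h, a + 1)
    ) + kappa
    # (5.10): dp(v, p_{1,j}, p_{a,k}) = base + ||p_{1,j} p_{1,k}||, in O(1) each
    return {(j, k): base + col_dist(j, k)
            for j in range(1, b + 1) for k in range(1, j + 1)}
```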
When the Parent is Flipped
In this section, we consider the case that the parent u is a flipped pair.
Case (a): One Endpoint is Fixed in the Subgrid
By symmetry, we consider the situation when p v = s u ∈ B(v) and (q v ) x = (t v ) x for all possible in-out pairs (p v , q v ) of π u ∈ Π P (u) for v. We then have (t v ) x < (t u ) x and (s v ) y ≤ (t u ) y , and let p i,j be the (i, j) vertex on the a × b grid H(P, v) ∩ H(P, u) for each i ∈ [a] and j ∈ [b], where we define p 1,1 as the upper-right corner so that p 1,b = s u (see Figure 8). In this case, we need to compute dp(v, p 1,b , p i,1 ) for each i ∈ [a].
For each i ∈ [a] and j ∈ [b], we define ω(v, i, j) as the length of a longest p_{i,j}-t_v path in G[v, p_{i,j}, p_{1,1}]. Then, as with the regular case, by Lemma 4.1, dp(v, p_{1,b}, p_{i,1}) can be expressed in terms of ω(v, ·, ·), λ, and κ_v, according to whether an optimal M-path for v intersects or is disjoint from B(p_{1,b}, p_{i,1}). Thus, after filling up the table ω(v, ·, ·), we can compute the values dp(v, p_{1,b}, p_{i,1}) for all i ∈ [a] in O(n²) time in total. In what follows, we see how to compute ω(v, i, j).
First, when j = 1, from the definitions of G[v, ·, ·] and λ(·, t_v), we see ω(v, i, 1) = λ(p_{i,1}, t_v). (5.11) Similarly, when i = 1 and j > 1, we have ω(v, 1, j) = max_{k∈[j]} (λ(p_{1,k}, t_v) + ‖p_{1,j} p_{1,k}‖), (5.12) because any M-path in Π_P(p_{1,j}, t_v) leaves B(p_{1,j}, p_{1,1}) at some point p_{1,k} (k ∈ [j]), and then it shares the first segment p_{1,j} p_{1,k} with π_u ∈ Π_P(u) (with π_u[v] ∈ Π_P(s_u, p_{1,1})). Computing ω(v, 1, j) requires O(j) time by (5.12), and hence it takes O(b²) = O(n²) time in total for all j ∈ [b]. Finally, when i > 1 and j > 1, we can compute it by a recursive formula (5.13), taking the maximum over the three cases below, which is confirmed as follows. Fix a longest p_{i,j}-t_v path in G[v, p_{i,j}, p_{1,1}] attaining ω(v, i, j), and let π ∈ Π_P(p_{i,j}, t_v) be a corresponding M-path. If π leaves B(p_{i,j}, p_{1,1}) at p_{1,j}, then the p_{1,j}-t_v suffix corresponds to a longest p_{1,j}-t_v path in G[v, ε, ε] of length λ(p_{1,j}, t_v), and the first segment p_{i,j} p_{1,j} contributes to the length in G[v, p_{i,j}, p_{1,1}] in addition. Otherwise, π leaves B(p_{i,j}, p_{1,1}) at some p_{1,k} (k ∈ [j − 1]). Recall that, since u is flipped, π can share either horizontal or vertical segments with π_u ∈ Π_P(u) (cf. Lemma 3.1). If π shares horizontal segments with π_u, then we can assume that the p_{i,j}-p_{1,k} prefix of π consists of two segments p_{i,j} p_{1,j} and p_{1,j} p_{1,k}, by modifying π_u (with π_u[v] ∈ Π_P(p_{1,b}, p_{i,1})) so that it traverses p_{1,j} p_{1,k}; this case is covered by the value ω(v, 1, j). Otherwise, π shares vertical segments with π_u, and we can assume that the p_{i,j}-p_{1,k} prefix of π consists of two segments p_{i,j} p_{i,k} and p_{i,k} p_{1,k}, by modifying π_u so that it traverses p_{i,k} p_{1,k}; this case is covered via ω(v, i, j − 1). Since we only look up a constant number of values in (5.13) as well as (5.11), each value ω(v, i, j) for i > 1 can be computed in constant time. As the table ω(v, ·, ·) is of size a × b = O(n²), the total computational time is bounded by O(n²). Thus we are done.
Case (b): In-Out Pairs Move on Adjacent Boundaries
By symmetry, we consider the situation when (p v ) y = (t v ) y and (q v ) x = (t v ) x for all possible in-out pairs (p v , q v ) of π u ∈ Π P (u) for v. We then have (s v ) x ≤ (s u ) x ≤ (t v ) x < (t u ) x and (s v ) y ≤ (t u ) y ≤ (t v ) y < (s u ) y , and let p i,j be the (i, j) vertex on the a × b grid H(P, v) ∩ H(P, u) for each i ∈ [a] and j ∈ [b], where we define p 1,1 = t v (see Figure 9). In this case, we need to compute dp(v, p 1,j , p i,1 ) for each pair of i ∈ [a] and j ∈ [b].
For each i ∈ [a] and j ∈ [b], we define ω(v, i, j) as the length of a longest s_v-t_v path in G[v, p_{1,j}, p_{i,1}]. Then, by Lemma 4.1, we have dp(v, p_{1,j}, p_{i,1}) = ω(v, i, j) + κ_v. Thus, after filling up the table ω(v, ·, ·), we can compute the values dp(v, p_{1,j}, p_{i,1}) for all i ∈ [a] and j ∈ [b] in O(n²) time in total. In what follows, we see how to compute ω(v, i, j).
For the base case when i = j = 1, from the definitions of G[v, ·, ·] and λ(s_v, ·), we see ω(v, 1, 1) = λ(s_v, p_{1,1}). (5.14) Next, when i > 1 and j = 1, we can compute it by a recursive formula ω(v, i, 1) = max{λ(s_v, p_{i,1}) + γ(v, p_{i,1}, p_{1,1}), ω(v, i − 1, 1)}, (5.15) and symmetrically, when i = 1 and j > 1, by ω(v, 1, j) = max{λ(s_v, p_{1,j}) + γ(v, p_{1,j}, p_{1,1}), ω(v, 1, j − 1)}. (5.16) Finally, when i > 1 and j > 1, we can compute it by a recursive formula ω(v, i, j) = max{λ(s_v, p_{i,j}) + γ(v, p_{i,j}, p_{1,1}), ω(v, i − 1, j), ω(v, i, j − 1)}, (5.17) where γ(v, p_{i,j}, p_{1,1}) = max{d_x(p_{i,j}, p_{1,1}), d_y(p_{i,j}, p_{1,1})} is similarly defined (cf. (3.1) and Lemma 3.1). This is because, for any M-path π_v ∈ Π_P(v), it intersects p_{i,j}, enters B(p_{i−1,j}, p_{1,1}) at some p_{h,j} (h ∈ [i − 1]), or enters B(p_{i,j−1}, p_{1,1}) at some p_{i,k} (k ∈ [j − 1]), and each case is analyzed as with the previous paragraph. Since we only look up a constant number of values in (5.14)-(5.17), each value ω(v, i, j) can be computed in constant time. As the table ω(v, ·, ·) is of size a × b = O(n²), the total computational time is O(n²). Thus we are done.
Case (c): In-Out Pairs Move on Opposite Boundaries
By symmetry, we consider the situation when (p_v)_y = (t_v)_y and (q_v)_y = (s_v)_y for all possible in-out pairs (p_v, q_v) of π_u ∈ Π_P(u) for v. We then have (s_v)_x ≤ (s_u)_x < (t_u)_x ≤ (t_v)_x and (t_u)_y < (s_v)_y < (t_v)_y < (s_u)_y, and let p_{i,j} be the (i, j) vertex on the a × b grid H(P, v) ∩ H(P, u) for each i ∈ [a] and j ∈ [b], where we define p_{1,1} as the upper-right corner (see Figure 10). In this case, we need to compute dp(v, p_{1,j}, p_{a,k}) for each j, k ∈ [b] with j ≥ k. Recall that, since u is flipped, any M-paths π_v ∈ Π_P(v) and π_u ∈ Π_P(u) can share either horizontal or vertical segments.
First, when j = k, we have dp(v, p_{1,j}, p_{a,j}) = dp(v, p_{1,1}, p_{a,1}) = max_{h,i∈[a], h≤i} {λ(s_v, p_{h,1}) + ‖p_{h,1} p_{i,1}‖ + λ(p_{i,1}, t_v)} + κ_v, (5.18) because in this case π_u[v] consists of a single vertical segment p_{1,j} p_{a,j}. When j > k, we consider the two cases of sharing horizontal and vertical segments separately, and then take the maximum. In the vertical sharing case, the desired value is exactly dp(v, p_{1,1}, p_{a,1}), because any horizontal segment in B(p_{1,j}, p_{a,k}) has no meaning. In the horizontal sharing case, the desired value is dp(v, ε, ε) + ‖p_{1,k} p_{1,j}‖, because for any longest s_v-t_v path in G[v, ε, ε], we can take a corresponding M-path π_v ∈ Π_P(v) so that it goes through B(p_{1,j}, p_{a,k}) horizontally, and then it can share the horizontal segment in addition with π_u ∈ Π_P(u) (with π_u[v] ∈ Π_P(p_{1,j}, p_{a,k})). Thus, we have dp(v, p_{1,j}, p_{a,k}) = max{dp(v, p_{1,1}, p_{a,1}), dp(v, ε, ε) + ‖p_{1,k} p_{1,j}‖}. (5.19)
The computation of dp(v, p 1,1 , p a,1 ) requires O(a 2 ) = O(n 2 ) time by (5.18). After computing it, by (5.18) and (5.19), we can compute dp(v, p 1,j , p a,k ) in constant time for each j, k ∈ [b] with j ≥ k. As the table dp(v, ·, ·) is of size O(b 2 ) = O(n 2 ), the total computational time is O(n 2 ). Thus we are done.
Reduction of GMMN[Cycle] to GMMN[Tree]
In this section, we show that GMMN[Cycle] can be reduced to O(n) GMMN[Tree] instances. More generally, we describe a reduction for triangle-free pseudotree instances. The target problem is formally stated as follows, where we emphasize again that the triangle-freeness is crucial in our approach (cf. Section 2.3).

Problem (GMMN[Pseudotree]).

Input: A set P ⊆ R² × R² of n pairs whose intersection graph IG[P] is a triangle-free pseudotree (a connected graph containing at most one cycle).
Let P be a GMMN[Pseudotree] instance. If IG[P] is a tree, we do nothing in the reduction. Suppose that IG[P] has a (unique) cycle of length at least four. Let C ⊆ P be the subset of pairs constituting the cycle. If C contains a degenerate pair, we can cut the cycle by appropriately splitting that degenerate pair into two degenerate pairs which are not adjacent in the intersection graph. Therefore, we can assume that no pair in C is degenerate.
We choose an arbitrary pair v = (s_v, t_v) ∈ C. Without loss of generality, we assume that v is regular and s_v is the lower-left corner of B(v). Suppose that H(P, v) is an a × b grid graph, where a and b are associated with the y- and x-coordinates, respectively. Note that a, b ≥ 2 since v is nondegenerate. Let p_{i,j} denote the (i, j) vertex for i ∈ [a] and j ∈ [b], where p_{1,1} = s_v. For each i ∈ [a] and j ∈ [b], we define E_hor(p_{i,j}) as the set of triples (q⁻, p_{i,j}, p_{i,j+1}) such that q⁻ is the vertex from which an M-path arrives at p_{i,j}, and E_vert(p_{i,j}) analogously with p_{i+1,j} in place of p_{i,j+1}. Namely, each element of E_hor(p_{i,j}) is a triple representing a way for an M-path π_v ∈ Π_P(v) to go through an edge {p_{i,j}, p_{i,j+1}} of H(P, v). Similarly, each element of E_vert(p_{i,j}) indicates a manner for π_v to go through {p_{i,j}, p_{i+1,j}}. Let u_1 and u_2 be the neighbors of v in C. Then B(u_1) and B(u_2) can be separated by an axis-aligned line, without their boundaries (recall that they can share corner vertices). By symmetry, we assume that the line is vertical and B(u_1) is on the left side. Take α ∈ [a] and β ∈ [b] such that (p_{α,β})_x is the x-coordinate of the right boundary of B(u_1) ∩ B(v) and (p_{α,β})_y is the minimum of the y-coordinates of the upper boundaries of B(u_1) ∩ B(v) and B(u_2) ∩ B(v). If α = a and β = b, i.e., p_{α,β} = t_v, the lower-right corner of B(u_1) and the upper-left corner of B(u_2) are t_v. In this case, we flip both the x- and y-axes so that p_{α,β} = s_v. Hence, we can assume that p_{α,β} ≠ t_v. Define X_hor = {p_{i,β} | i ∈ [α]} and X_vert = {p_{α,j} | j ∈ [β]}. Then any M-path π_v is consistent with exactly one way in E_hor(q) for some q ∈ X_hor or in E_vert(q) for some q ∈ X_vert. We try every possibility and then adopt an optimal one. Assume that π_v is consistent with (q⁻, q, q⁺) ∈ E_hor(q) for some q ∈ X_hor or with (q⁻, q, q⁺) ∈ E_vert(q) for some q ∈ X_vert, i.e., π_v goes through {q⁻, q} and {q, q⁺}. Then the minimum length of a network under this assumption is the same as ‖Ñ‖ for Ñ ∈ Opt(P̃), where P̃ = (P − v) ∪ {v_1, v_2, v_3, v_4} with v_1 = (s_v, q⁻), v_2 = (q⁻, q), v_3 = (q, q⁺), and v_4 = (q⁺, t_v) (see Figure 11). It is shown that IG[P̃] has no cycles as follows.
For k ∈ {2, 3}, the bounding box B(v_k) consists of a single edge in E(H(P, v)) or a single vertex in V(H(P, v)). Therefore, there exists at most one pair w ∈ Γ_v such that B(w) intersects B(v_k). This means that the degree of v_2 and v_3 in IG[P̃] is at most one, and hence they are not in any cycle in IG[P̃].
Since the pairs in Γ_v are not adjacent to each other in IG[P], and we have q⁻ ≤ q⁺ and q⁻ ≠ q⁺ by definition, at most one pair in Γ_v can be adjacent to both v_1 and v_4 in IG[P̃]. Hence, a cycle could survive in IG[P̃] only if one of the following holds: (C1) v_1 is adjacent to both u_1 and u_2; (C2) v_4 is adjacent to both u_1 and u_2; (C3) some w ∈ Γ_v \ {u_1, u_2} is adjacent to both v_1 and v_4. In what follows, we see that none of these is the case.
Finally, we consider the situation when q ∈ X_vert − s_v and (q⁻, q, q⁺) ∈ E_vert(q). We have i = α by the definition of X_vert. If j < β, then (C1) does not hold because B(v_1) does not intersect B(u_2). Suppose that j = β, which means that q = p_{α,β} ∈ X_hor ∩ X_vert. Since the present v_1 is the same as that in the case of (q⁻, q, p_{α,β+1}) ∈ E_hor(q) if β < b, we have already proved that (C1) does not hold in the above horizontal case (cf. Figure 12). It can be checked that the same proof is valid even if β = b. Thus (C1) does not hold in any case. In addition, since q_y ≤ q⁺_y and q_y ≠ q⁺_y, we also have B(v_4) ∩ B(u_k) = ∅ if α = h_k for k = 1, 2. Hence (C2) is not the case either.
Suppose that there exists w ∈ Γ_v \ {u_1, u_2} that satisfies (C3), i.e., B(w) intersects both B(v_1) and B(v_4). If j > δ, then B(w) intersects B(u_1); this contradicts the assumption that P is triangle-free. Consider the remaining situation when j = δ. If q⁻ = p_{α−1,j}, then B(w) intersects B(u_1), a contradiction again. If q⁻ = p_{α,j−1}, then B(v_1) does not intersect B(u_1). Therefore, (C3) does not hold in any case, and we are done.

Figure 13: An uneasy situation when B(u_1) and B(w) for some w ∈ Γ_v \ {u_1, u_2} share their corners at q. (a) π_v ∈ Π_P(v) goes through q. (b) π_v ∈ Π_P(v) turns at q.
A Faster Dynamic Programming on Tree Decompositions
In this section, we prove the following theorem (cf. Table 1).
Theorem A.1. There exists an O(f (tw, ∆) · n 2∆(tw+1)+1 )-time algorithm for the GMMN problem, where tw and ∆ denote the treewidth and the maximum degree of the intersection graph IG[P ] for the input P , respectively, and f is a computable function.
A.1 Treewidth and Nice Tree Decompositions
We first review the concepts of tree decompositions and treewidth of graphs, and then define "nice" tree decompositions, which are useful to design a DP algorithm (cf. [7,Section 7.3]).
Definition A.2. A tree decomposition of an undirected graph G is a pair T = (T, (X_t)_{t∈V(T)}) of a tree T and a tuple of subsets of V(G) indexed by V(T) such that the following three conditions hold: (T1) ⋃_{t∈V(T)} X_t = V(G); (T2) for every {u, v} ∈ E(G), there exists t ∈ V(T) such that X_t contains both u and v; (T3) for every v ∈ V(G), the node set {t ∈ V(T) | v ∈ X_t} induces a connected subtree of T.
We call each t ∈ V (T ) a node and each X t a bag.
The width of a tree decomposition is the maximum size of its bag minus one. The treewidth of a graph G, which is denoted by tw(G) (or simply by tw), is the minimum width of a tree decomposition of G.
We choose an arbitrary node of a tree decomposition as a root, and define a nice tree decomposition as follows.
Definition A.3. A rooted tree decomposition T = (T, (X t ) t∈V (T ) ) is said to be nice if the following conditions are satisfied: • X r = ∅ for the root node r, • X l = ∅ for every leaf node l ∈ V (T ), and • every non-leaf node of T is one of the following three types: -Introduce node: a node t having exactly one child t ′ such that X t = X t ′ ∪ {v} for some vertex v / ∈ X t ′ ; we say that v is introduced at t.
-Forget node: a node t having exactly one child t ′ such that X t = X t ′ \ {w} for some vertex w ∈ X t ′ ; we say that w is forgotten at t.
-Join node: a node t having two children t 1 and t 2 such that X t = X t 1 = X t 2 .
By the condition (T3), every vertex of V(G) is forgotten only once, but may be introduced several times. Given any tree decomposition, one can efficiently transform it into a nice one without increasing the width.
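The conditions (T1)-(T3) are easy to verify mechanically; the following small sketch (illustrative, not part of the original text) checks them for a given decomposition, using the fact that an induced subgraph of a tree is connected iff its edge count is its vertex count minus one.

```python
def is_tree_decomposition(graph_edges, vertices, tree_edges, bags):
    """bags: dict mapping tree node -> set of graph vertices (its bag)."""
    # (T1) every vertex of G is covered by some bag
    covered = set().union(*bags.values()) if bags else set()
    if set(vertices) - covered:
        return False
    # (T2) every edge of G is contained in some bag
    for u, v in graph_edges:
        if not any(u in X and v in X for X in bags.values()):
            return False
    # (T3) for every vertex, the nodes whose bags contain it induce a subtree
    for v in vertices:
        nodes = {t for t, X in bags.items() if v in X}
        induced = sum(1 for a, b in tree_edges if a in nodes and b in nodes)
        if nodes and induced != len(nodes) - 1:
            return False
    return True
```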
A.2 Algorithm Outline
We first sketch the idea of Schnizler's algorithm for GMMN [Tree], and then extend it to our DP algorithm on nice tree decompositions.
Let P be a GMMN[Tree] instance. Suppose that we fix an arbitrary M-path π*_v ∈ Π_P(v) for some v ∈ P and consider only feasible networks N = (π_w)_{w∈P} ∈ Feas(P) with π_v = π*_v. Then, the instance P is intuitively divided into two independent parts P_v − v and P \ P_v, where recall that P_v denotes the vertex set of the subtree of IG[P] rooted at v. In particular, if ‖N‖ is minimized (subject to π_v = π*_v), then the restriction N[P_v] = (π_w)_{w∈P_v} also attains the minimum length subject to π_v = π*_v (which is true for the other side (P \ P_v) ∪ {v}). In addition, once we fix the in-out pairs (s′_u, t′_u) of π_u ∈ Π_P(u) for all neighbors u ∈ Γ_v, we can restrict the candidates for such M-paths π*_v ∈ Π_P(v) to the corresponding coarse grid H(P′, v). The number of candidates for P′ is at most ((4n)²)^{δ_v} ≤ (16n)^{2∆}, and the number of candidates for π*_v ∈ Π_{P′}(v) for each possible P′ is at most the binomial coefficient (4δ_v + 4 choose 2δ_v + 2) ≤ 2^{4∆+4}, where recall that δ_v denotes the degree of v in IG[P] and ∆ is the maximum degree of IG[P]. Based on these observations, one can design a DP algorithm from the leaves to the root on IG[P] that computes minimum-length partial networks N[P_v] = (π_w)_{w∈P_v} subject to π_v = π*_v for O((cn)^{2∆}) possible M-paths π*_v for each v ∈ P, where c is some constant. Let us turn to our DP algorithm. Let P be a GMMN instance and T = (T, (X_t)_{t∈V(T)}) be a nice tree decomposition of the intersection graph IG[P] of width tw. As with the DP for GMMN[Tree] sketched above, we construct partial solutions from the leaves to the root of T. From Lemma A.4, we can assume that T has O(tw · n) nodes.
For t ∈ V(T), let P_t be the union of all the bags appearing in the subtree of T rooted at t, including X_t. Then, the following lemma analogously holds, which implies that among all the feasible solutions N = (π_w)_{w∈P} ∈ Feas(P) satisfying N[X_t] = (π*_w)_{w∈X_t} for some fixed (π*_w)_{w∈X_t}, all the minimum-length solutions have exactly the same length in P_t.

Lemma A.5. Let N_1, N_2 ∈ Feas(P) be feasible solutions with N_1[X_t] = N_2[X_t]. If ‖N_1[P_t]‖ < ‖N_2[P_t]‖, then the network obtained from N_2 by replacing N_2[P_t] with N_1[P_t] is a feasible solution for P that is shorter than N_2.
Proof. Let N′_2 be the network obtained from N_2 by replacing N_2[P_t] with N_1[P_t]. As T is a tree decomposition of the intersection graph IG[P], if we remove all the vertices in X_t from IG[P], then P_t \ X_t is disconnected from its complement in the remaining graph (cf. the condition (T3) in Definition A.2). Moreover, N_1 and N_2 have the same M-paths for X_t, and hence the network N′_2 is still a feasible solution for P. In addition, ‖N_1[P_t]‖ < ‖N_2[P_t]‖ implies that ‖N′_2‖ < ‖N_2‖, and we are done.
Based on this lemma, we define subproblems for possible solutions in X_t as follows: given a GMMN instance P and an M-path π*_v ∈ Π_P(v) for each v ∈ X_t, we are required to find a network N = (π_w)_{w∈P} ∈ Feas(P) such that N minimizes ‖N[P_t]‖ subject to π_v = π*_v for all v ∈ X_t. Formally, we define twdp(t, (π*_v)_{v∈X_t}) = min{‖N[P_t]‖ | N = (π_w)_{w∈P} ∈ Feas(P), π_v = π*_v for all v ∈ X_t}. If t is a leaf, i.e., when X_t = ∅, then we write (π*_v)_{v∈X_t} = ε. As with the tree case, it suffices to consider O((cn)^{2∆}) candidates for each π*_v ∈ Π_P(v), and hence there exist O((cn)^{2∆(tw+1)}) candidates for (π*_v)_{v∈X_t} as |X_t| ≤ tw + 1. We describe recursive formulae for filling up the DP table in the next section.
A.3 Recursive Formula
We separately discuss the four types of nodes in a nice tree decomposition (cf. Definition A.3).
Leaf node. If t is a leaf node, then twdp(t, ǫ) = 0 since X t = ∅ and P t = ∅.
Introduce node. If t is an introduce node with the child t′ such that X_t = X_{t′} ∪ {w} for some w ∉ X_{t′}, then twdp(t, (π*_v)_{v∈X_t}) = twdp(t′, (π*_v)_{v∈X_{t′}}) + ‖π*_w‖ − ‖π*_w ∩ N*‖, (A.1) where we define N* = ⋃_{v∈X_{t′}} π*_v, and the correctness of (A.1) is shown as follows. If w had a neighbor u in P_{t′} \ X_{t′}, then the edge {u, w} ∈ E(IG[P]) could not belong to any bag (because u has already been forgotten in the subtree rooted at t′), which contradicts that T is a tree decomposition of IG[P] (cf. the condition (T2) in Definition A.2). Hence, it suffices to account for the total length of segments shared by π*_w and N*, which leads to the formula (A.1).

Forget node. If t is a forget node with the child t′ such that X_t = X_{t′} \ {w} for some w ∈ X_{t′}, then we simply minimize over the forgotten M-path: twdp(t, (π*_v)_{v∈X_t}) = min_{π*_w} twdp(t′, (π*_v)_{v∈X_{t′}}), where π*_w ranges over the O((cn)^{2∆}) candidates for w.

Join node. If t is a join node with two children t_1 and t_2 such that X_t = X_{t_1} = X_{t_2}, then twdp(t, (π*_v)_{v∈X_t}) = twdp(t_1, (π*_v)_{v∈X_t}) + twdp(t_2, (π*_v)_{v∈X_t}) − ‖⋃_{v∈X_t} π*_v‖, since P_{t_1} ∩ P_{t_2} = X_t and the bag's own network ⋃_{v∈X_t} π*_v is counted in both terms.
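Putting the four node types together, the table updates can be summarized by the following schematic handlers; all names are placeholders, and the forget/join recursions follow the sketch above.

```python
def twdp_leaf():
    return 0.0  # X_t and P_t are empty

def twdp_introduce(child_value, len_w, shared_w_with_bag):
    # (A.1): adding w costs ||pi*_w|| minus what it shares with N*
    return child_value + len_w - shared_w_with_bag

def twdp_forget(child_table, key, candidates_w, extend_key):
    # minimize over all M-path candidates for the forgotten vertex w
    return min(child_table[extend_key(key, c)] for c in candidates_w)

def twdp_join(table1, table2, key, bag_union_len):
    # P_{t1} and P_{t2} overlap exactly in X_t; subtract the double count
    return table1[key] + table2[key] - bag_union_len(key)
```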
A.4 Computational Time Analysis
In this section, we show that the whole algorithm runs in O(f(tw, ∆) · n^{2∆(tw+1)+1}) time, which completes the proof of Theorem A.1. Recall that |V(T)| = O(tw · n) and the size of the DP table twdp(t, ·) is bounded by O((cn)^{2∆(tw+1)}) for each node t ∈ V(T) (cf. Section A.2). Since each node's table can be filled from its children's tables in O(f(tw, ∆) · n^{2∆(tw+1)}) time, and |V(T)| = O(tw · n), the total running time is O(f(tw, ∆) · n^{2∆(tw+1)+1}).

Table 2: Current best approximation ratios classified by the class of intersection graphs, whose treewidth and maximum degree are denoted by tw and ∆, respectively.
We remark that, in the running time of our algorithm, the exponent of n still contains both the treewidth tw and the maximum degree ∆ of the intersection graph. It remains open whether the GMMN problem is fixed parameter tractable (FPT) with respect to such parameters or not.
B Approximation Ratio Based on Chromatic Number
In this section, we give a simple observation based on graph coloring.

Proposition B.1. Let P be a GMMN instance and N* ∈ Opt(P) be an optimal solution for P. If the intersection graph IG[P] of P is k-colorable, then for every N ∈ Feas(P), the total length of N is at most k times the total length of N*.
Proof. Let P be a GMMN instance such that IG[P] is k-colorable, i.e., there exists a k-partition {P_1, P_2, . . . , P_k} of P such that every P_i is an independent set in IG[P]. Then, for each i ∈ [k], the total length of an optimal solution N*_i = (π_w)_{w∈P_i} ∈ Opt(P_i) for the GMMN subinstance P_i is equal to the sum of the Manhattan distances, i.e., ‖N*_i‖ = Σ_{(s,t)∈P_i} d(s, t). For every i ∈ [k], the optimal solution N* also contains an M-path for every pair in P_i since P_i ⊆ P, and hence ‖N*_i‖ ≤ ‖N*‖. Since any feasible solution N = (π_v)_{v∈P} ∈ Feas(P) is written as ⋃_{i∈[k]} N_i with N_i = (π_w)_{w∈P_i} ∈ Feas(P_i), we have ‖N‖ ≤ Σ_{i∈[k]} ‖N_i‖ ≤ Σ_{i∈[k]} ‖N*_i‖ ≤ k · ‖N*‖, and we are done.
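A small numeric illustration of Proposition B.1 (purely a sketch; the greedy coloring stands in for an actual k-coloring): the sum of Manhattan distances, together with the number of colors used, certifies the factor-k bound.

```python
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def greedy_coloring(adj):
    """Greedy coloring of the intersection graph; returns vertex -> color."""
    color = {}
    for v in sorted(adj):
        taken = {color[w] for w in adj[v] if w in color}
        color[v] = next(c for c in range(len(adj)) if c not in taken)
    return color

def coloring_bound(pairs, adj):
    color = greedy_coloring(adj)
    k = max(color.values()) + 1
    # ||N|| <= sum_v d(s_v, t_v) = sum_i ||N*_i|| <= k * ||N*||
    total = sum(manhattan(s, t) for (s, t) in pairs)
    return k, total  # any minimal feasible N is within factor k of optimum
```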
From Lemma B.1, we immediately obtain the following corollaries. For complete graphs and odd cycles, obviously, one needs ∆ + 1 colors, where ∆ is the maximum degree. However, all other connected graphs are ∆-colorable [4]. Since the GMMN problem whose intersection graph is a complete graph and a cycle admits an O(1)-approximation algorithm and a polynomial-time (exact) algorithm, respectively, we focus on approximation ratio for other cases.
Corollary B.2. Let P be a GMMN instance whose intersection graph has maximum degree at most ∆, and is neither a complete graph nor an odd cycle. Let N * ∈ Opt(P ) be an optimal solution for P . Then for any feasible solution N ∈ Feas(P ), we have N ≤ ∆ · N * .
It is easy to check that a graph of treewidth at most tw is (tw + 1)-colorable.
Corollary B.3. Let P be a GMMN instance whose intersection graph is of treewidth at most tw. Let N * ∈ Opt(P ) be an optimal solution for P . Then for any feasible solution N ∈ Feas(P ), we have N ≤ (tw + 1) · N * .
Corollary B.4. Let P be a GMMN instance whose intersection graph is planar. Let N * ∈ Opt(P ) be an optimal solution for P . Then for any feasible solution N ∈ Feas(P ), we have N ≤ 4 · N * .
Complementary strategy of New Physics searches in B-sector
We discuss a possible strategy for studies of a particular next-to-minimal flavor violation New Physics (NP) scenario at the LHC. Our analysis is based on a comparison of particular CKM matrix elements, which can be obtained from processes dominated by diagrams of different topology (tree, penguin and box). We argue that the standard formalism of the overall unitarity triangle fit is not suitable for searches for the chosen NP. We also stress the importance of lattice computations of some relevant hadronic inputs.
Introduction
The main interest of present day flavor physics is focused on searching for possible signals of New Physics (NP) -- the effects which are not taken into account by the Standard Model (SM). These still hypothetical effects can be roughly divided into two groups. The first, "quantitative" one consists of effects which are present in the SM but whose concrete SM prediction deviates from actual experimental results. A well known example is given by rare decays strongly suppressed in the SM but expected to be enhanced in some NP scenarios. Another, "qualitative" group is formed by the effects which are not present in the SM at all, like a possible observation of nonconservation of any charge (baryon, electric, etc.) strictly conserved in the SM. At the moment there are no clear indications of possible NP effects of either kind. The strong hope however is that the situation will change in the nearest future with the run of the LHC. Of prime importance in flavor physics is an analysis of the CKM mixing matrix. The commonly accepted parametrization-independent language used to discuss the rich physics encoded in the CKM matrix is the formalism of the unitarity triangle (UT). For an introduction into the subject and all details the reader is referred to the materials presented in [1,2,3,4] and to excellent recent reviews [5,6].
The issue of NP searches in the flavor physics context is certainly much broader than the mere check of CKM matrix unitarity, however precise it can be. Of course, any inconsistencies in the UT construction will undoubtedly indicate the presence of physics beyond the SM. The opposite is far from being true -- there are many reasonable NP scenarios which are well compatible with perfect unitarity of the CKM matrix.
There are different possible strategies to study the CKM matrix. The most popular one, adopted in particular by the UTfit and CKMfitter groups [1,2], is to use all available experimental data to overconstrain the triangle. Besides the general importance of this activity, the hope is that the procedure will exhibit some inconsistencies signaling NP effects. Up to now there is an overall agreement of all constraints (see recent talks [7,8]).
However, this approach also has some disadvantages. In our view, the most important one is the fact that the set of constraints in use is not fitted to this or that particular NP scenario. On the other hand, the relevance of this or that observable, from the point of view of its possible NP content, strongly depends on what kind of NP we discuss. Let us explain this point taking the ratio ∆M_s/∆M_d as a typical example. For all scenarios where NP couples identically to s and d quarks (U-spin symmetric NP) this ratio is not sensitive to NP contributions, since in this case the short-distance functions, even if modified with respect to the SM predictions for each of ∆M_d, ∆M_s, exactly cancel in the ratio. This pattern is typical for, e.g., constrained minimal flavor violation (CMFV) NP models (see the review of MFV models in [11]). As a result this quantity informs us about the ratio of couplings of the t-quark to d and s quarks and also about long-distance SU(3) breaking effects in QCD (see expression (10) below), but brings no information about the correctness of the short-distance SM calculation of ∆M_d or ∆M_s separately. And it is precisely the latter short-distance piece we are interested in most of all if we are looking for deviations from the SM at small distances. On the other hand, there are NP scenarios, such as the MSSM at large tan β (see [9] and references therein) and next-to-minimal flavor violation [10], where this is not the case and the ratio under study is sensitive to NP. Moreover, it is very natural to expect (and this is our general attitude in the present paper) that NP contributes differently to processes of different topology (i.e. tree and penguin, penguin and box, etc.). Obviously, this effect can be lost in a comparison of observables of the same topological type. In view of that, an alternative way has been proposed (see, e.g., [12,13] and also [1,2,9,14,15] and references therein). Generally speaking, it corresponds to the construction of a few a priori not coinciding unitarity triangles, each extracted from branching ratios and asymmetries for processes of some particular kind. In this case any mismatch between these UT's, e.g. the so called "reference UT" [13] and the "universal UT" (see recent discussion in [9]), would be a clear signal of NP, and, moreover, one could in principle identify the place (the EW penguin sector is among the most promising ones) where it has come from.
Adopting the basic idea of the latter strategy, we address the following problem. Let us assume that the still hypothetical NP, in the spirit of the next-to-minimal flavor violation scenario: a) is U-spin symmetric; b) does not contribute to tree processes; and c) does not spoil the unitarity of the CKM matrix. How can we see NP from global UT fits, and what observables are the most sensitive to NP effects in this particular case?
To answer this question, we analyze theoretical and experimental (having in mind mostly the LHCb experiment) perspectives for studies of some CKM matrix parameters which can be extracted from processes of different topology and can be sensitive to NP of the discussed type. It is worth noticing that the mismatch between the sin 2β values from the B → J/ψK_S and B → φK_S modes, widely discussed in the recent literature (see, e.g., [5,16] and references therein), represents exactly the kind of effects we are interested in. We will also stress the urgent need for new refined lattice data on hadronic input parameters in order to determine the product |V_ts V*_tb|. The paper is organized as follows. Section 2 is devoted to a brief overview of the existing strategies for CKM matrix analysis, while our procedure and results are presented in Section 3 and the conclusion in Section 4.
Overview of the standard strategy
In general one can choose different sets of independent parameters which enter the basic unitarity relation V_ud V*_ub + V_cd V*_cb + V_td V*_tb = 0. (1) (It is worth noticing that the term "independent" is usually used in the literature in a mere algebraic sense, i.e. one assumes no relations between CKM matrix elements other than those following from the unitarity constraints. This assumption may be wrong if some more fundamental underlying structure behind the CKM matrix does exist.) A common choice for one of the parameters is λ = |V_us|. This quantity can be determined with very good accuracy from the decay mode K → πlν, the latter being dominated by a tree level process. The main source of error here is the poor knowledge of the corresponding form factor f_+(0); namely, according to [17], δ|V_us|_{f_+(0)} = ±0.0018, δ|V_us|_{exp} = ±0.0005.
The interior angles of the triangle (1) are conventionally labeled as α = arg(−V_td V*_tb / V_ud V*_ub), β = arg(−V_cd V*_cb / V_td V*_tb), γ = arg(−V_ud V*_ub / V_cd V*_cb). (3) The Cabibbo-suppressed angle χ, defined analogously from the b-s unitarity triangle and important for B_s − B̄_s oscillations, is also of interest. Let us briefly remind the reader of the strategy for γ. The cleanest way to extract it is from the interference of the b → cūs and b → uc̄s transitions (the so called "triangle" approach). Practically, this corresponds to the study of the B⁻ → K⁻D⁰ and B⁻ → K⁻D̄⁰ modes with the subsequent analysis of the common final states of the D and D̄ meson decays. One considers CP-eigenstates as final states of the D, D̄ meson decays (GLW approach [18]) or combines observables from different modes (B → K*D, B → KD*, B → KD, B → K*D*) (ADS approach [19]) to overconstrain the system. Notice that the interfering diagrams are tree ones. The combined results for γ presented in [3], obtained by a Dalitz plot analysis [21], are given by (5), where the errors are statistical, systematic, and the error resulting from the choice of the D-decay model. For a discussion of the situation with the γ determination at LHCb the reader is referred to [22].
Using various methods, the overall uncertainty in γ at LHCb is expected to be as small as 5° in 2 fb⁻¹ of running and will eventually reach the level of 1° with increased statistics.
With the standard assignment for the elements of the CKM matrix (see, e.g., [6]), to define the apex of the unitarity triangle one needs to know at least two independent quantities out of the two sides and three angles α, β, γ, where the latter are defined by (3). In particular, the authors of [15] analyzed all ten possible strategies, distinguished by the mentioned choice of two independent parameters out of five, from the point of view of their efficiency in the determination of the UT. For example, our geometrical intuition tells us that it is easier to construct a general non-squashed triangle taking as inputs one of its angles and the adjacent side (because the variations in these parameters are approximately orthogonal) than taking the same angle and the opposite side (because the variations in these parameters are approximately parallel). Numerical simulation done in [15] fully supports this intuition, giving the highest priority to the strategies based on the combined use of either (γ, β) or (γ, R_b). This result is particularly encouraging because the quantities R_b and γ define the so called reference UT [12,13]. The latter is built from observables that are expected to be unaffected by NP, since their dominant contributions come from tree level processes. Then, assuming unitarity of the CKM matrix, one can compute from (1) reference values for the remaining UT parameters and compare them with the ones obtained by direct measurements in processes involving loop graphs. Any difference could be a hint of a NP signal (see recent quantitative discussion of this issue in [9]). The elements of the CKM matrix which enter the definition of R_b (up to terms O(λ⁴)) are known from semileptonic B-decays. The recent inclusive update is given by [1]: |V_cb| = (41.79 ± 0.63) · 10⁻³. Experimental determination of |V_ub|_incl suffers from uncertainties introduced by the specific cuts one has to apply in order to get rid of the b → c background.
As for |V_ub|_excl, the main source of error is the lattice uncertainty in calculations of the B → π, ρ form factors. Up to date results are given by [3] as (7). At the moment the perspectives to increase the accuracy of the experimental determination of R_b up to a few percent level are unclear. As can be seen from Fig. 1 and Fig. 2, the errors in γ and R_b play a very different role in fixing the angle β with some given precision, which is a simple consequence of the fact that the angle α is close to 90° and the triangle is almost rectangular. The present accuracy in β extracted from the "golden mode" B → J/ψK_S is better than ±2°; the current world average for sin 2β from tree level decays provided by [3] is given in (9) (see recent talk [23] and references therein). The corresponding penguin contribution to β is Cabibbo-suppressed (see, e.g., [16]). As shown in Fig. 1, an uncertainty window of ∼ 3° for β corresponds to an uncertainty window of ∼ (24 ± 5)° for γ, and therefore the precise data (9) do not constrain γ via (8) strongly enough to make the comparison discussed above meaningful. On the other hand, since both β from (9) and γ from (5) are determined from processes dominated by tree level decays, we do not expect to see violation of the second expression from (8) with these values of the angles. Anyway, the experimental uncertainty in γ and the hadronic uncertainties in R_b make (8) not valuable. Let us briefly discuss the side R_t. There are two ways of extracting R_t by means of relations not affected by NP contributions in some scenarios, notably CMFV. These are the computation of R_t from the first expression in (8) and the computation from the ratio ∆M_d/∆M_s where, again, the short distance contributions to the box diagrams are canceled. Concerning the former algorithm, because of the same geometrical reasons (angle α close to 90°), R_t is sensitive to the uncertainty in the angle γ only (see Fig. 3). Thus, precise knowledge of γ will constrain R_t effectively. In the latter approach one obtains the ratio ∆M_d/∆M_s = (m_{B_d}/m_{B_s}) ξ⁻² |V_td/V_ts|², (10) with the nonperturbative parameter ξ = f_{B_s}√B_{B_s} / (f_{B_d}√B_{B_d}). The typical error of current lattice simulations of ξ is estimated as 6% (see [24] and recent analysis in [25]). Since, up to O(λ⁴), R_t = (1/λ)|V_td/V_ts|, then, having at our disposal the recent CDF results [26] (see (22)), we can straightforwardly extract the mean value R_t = 0.92, with the uncertainty dominated by ξ.
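As a back-of-the-envelope illustration of the latter approach, the extraction of R_t from (10) can be coded in a few lines; the numerical inputs below are illustrative placeholders rather than the values used in the text.

```python
import math

def R_t(dM_d, dM_s, m_Bd, m_Bs, xi, lam=0.2257):
    """R_t ~ |V_td/(lambda V_ts)| via dM_d/dM_s = (m_Bd/m_Bs) xi^-2 |V_td/V_ts|^2."""
    Vtd_over_Vts = math.sqrt((dM_d / dM_s) * (m_Bs / m_Bd)) * xi
    return Vtd_over_Vts / lam

# illustrative inputs: mass differences in ps^-1, meson masses in GeV
print(R_t(dM_d=0.507, dM_s=17.77, m_Bd=5.2795, m_Bs=5.3668, xi=1.21))  # ~0.91
```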
Let us summarize this part. Suppose we were able to measure R_b and γ with some very high precision. This defines the position of the UT apex, which is universal as soon as NP does not contribute to the tree processes R_b and γ have been extracted from. Let us also assume that we get R_t from ∆M_d/∆M_s and β from B → J/ψK_S, and these observables perfectly agree with R_b and γ via (8), i.e. the UT apex defined from R_t and β coincides with the one found from R_b and γ (of course, any other pair can actually be used, see the discussion above). Does this fact mean a dramatic shrinking of the NP parameter space? Not at all: for NP scenarios with U-spin invariance and without sizeable NP mixing effects (NP contributions in the box diagrams could in principle affect both γ and β via D⁰ − D̄⁰ and B⁰ − B̄⁰ mixings), this coincidence is trivial and brings no information about the parameter space. One can say that the UT is simply too rough a tool to see NP of this kind. In other words, the precise knowledge of ξ is important in this case to calibrate the lattice, but not to find the NP.
Direct comparison of CKM matrix elements from different processes
In what follows we explore a complementary strategy, whose essence is the comparison of values of CKM matrix elements obtained from processes whose dominant contributions come from diagrams of essentially different topology. Again one can consider angles and sides in this respect. We are interested in observables corresponding to the following processes:
• radiative penguins in the decay modes B → K*γ, B_s → φγ and B → (ρ, ω)γ, B_s → K*γ, for s and d quarks, respectively;
• oscillations of neutral B⁰ and B_s mesons, with the dominant contribution given by the box diagram, resulting in the mass shifts ∆M_s, ∆M_d;
• tree and strong-penguin interference in B decays into two-body final states made of the light hadrons π, K, ρ, together with the mixing relevant for the determination of the angle α.
⁷ Of course, any other pair can actually be used; see the discussion above.
⁸ NP contributions in the box diagrams could in principle affect both γ and β via D⁰−D̄⁰ and B⁰−B̄⁰ mixing.
For the first and the second mode the object of our interest is the product |V*_tb V_tq|; for the third mode we confine our attention to the angles α, β and χ.
The reference values for these quantities are defined from tree-level processes, since we adopt the usual assumption that the latter are free from NP pollution. One can make use of the "tree-level definition" |V*_tb V_ts| ↔ |V*_tb V_ts|_tree, which holds up to terms O(λ⁴). We have already discussed the corresponding numerical values and their uncertainties; plugging them in, we obtain the reference value of the product. As for the angle α, its reference value is given by (14), where the extraction of β and γ from tree-level processes is as described above.
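For completeness, the Wolfenstein power counting behind this tree-level definition is simply

\[
|V_{tb}| = 1 + O(\lambda^4), \qquad |V_{ts}| = A\lambda^2 + O(\lambda^4)
\;\;\Rightarrow\;\; |V_{tb}^{*}V_{ts}|_{\rm tree} \simeq |V_{cb}| ,
\]

so the product is fixed by the semileptonic determination of |V_cb| quoted above.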
As for the angle χ, there are no experimental constraints on it at the moment. The SM prediction is |χ| ≈ 0.02-0.04.
Analysis of |V*_tb V_tq|
Let us start with the analysis of |V*_tb V_ts|. The values of these elements of the CKM matrix must exactly coincide in the SM, regardless of the way they are extracted. Conversely, the lack of such a coincidence would be a definite signal of NP contributing differently to these different types of processes. Qualitatively, one can consider ratios of the kind (16), where q = d, s and V stands for K*, φ, ρ, ω. We thus have three ways to extract the product |V*_tb V_tq| of CKM matrix elements: via expression (18) from the process dominated by the radiative penguin diagram, via expression (19) from the process dominated by the box diagram, and via (12) from the reference tree-level processes. It is obvious that by construction one has ζ^(1)_{q,V} ζ^(2)_{q,V} ζ^(3)_{q,V} = 1. In the SM, however, a much more restrictive condition has to be fulfilled: ζ^(i)_{q,V} = 1 for each i. It is convenient to present the set of three numbers {ζ^(1)_{q,V}, ζ^(2)_{q,V}, ζ^(3)_{q,V}} as a single point in a ternary coordinate system, with log ζ^(i)_{q,V} as the (algebraic) distance from the i-th axis. Then the SM case corresponds to a single point on this diagram, its origin, while any deviation from it is a hint of NP.
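As a concrete illustration of this plotting rule, here is a minimal Python sketch (our own illustration, not code from the paper) that maps a triple obeying the product constraint onto the plane. It uses three unit normals 120° apart; the identity that these normals sum to zero is what makes the constraint hold automatically.

    import numpy as np

    def ternary_point(zetas, tol=1e-6):
        # Signed distance of the returned point from the i-th axis equals
        # log(zeta_i); the three axis normals are unit vectors 120 degrees
        # apart, so sum_i log(zeta_i) = 0 must hold for a consistent point.
        d = np.log(np.asarray(zetas, dtype=float))
        if abs(d.sum()) > tol:
            raise ValueError("inputs must satisfy zeta1*zeta2*zeta3 = 1")
        angles = np.deg2rad([90.0, 210.0, 330.0])
        normals = np.column_stack([np.cos(angles), np.sin(angles)])
        # P = (2/3) * sum_i d_i * n_i reproduces n_j . P = d_j exactly,
        # since n_j . n_i = -1/2 for i != j.
        return (2.0 / 3.0) * d @ normals

    print(ternary_point([1.0, 1.0, 1.0]))  # SM point: [0. 0.]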
The analysis of ratios of the mass shifts ∆M_d/∆M_s and of the branchings Br(B → ργ)/Br(B → K*γ), widely discussed in the recent literature [25,27,28], deals in our language not directly with the quantities ζ^(i)_{q,V} but with ratios such as ζ^(i)_{d,ρ}/ζ^(i)_{s,K*}. The important advantage of these ratios is the improved accuracy of their theoretical determination, especially from the point of view of hadronic uncertainties. However, the price to pay is high: the short-distance factors, which could contain contributions of NP, cancel in these ratios. In logarithmic coordinates this corresponds to a parallel translation, which could miss a considerable piece of NP, as is clearly seen from the analysis of [25]. In short, equality of such ratios to unity follows from ζ^(i)_{d,ρ} = 1, but not vice versa. Generally speaking, it is meaningless to look for (short-distance) deviations from the SM predictions if one has no quantitative knowledge of what the latter actually are. Therefore, as long as we are discussing absolute values of mass shifts, widths, etc., these short-distance parameters should be determined and not simply canceled in ratios. The corresponding loss of accuracy in the hadronic contributions is perhaps inevitable. In any case, we stress that one has to deal with these "less accurate" low-energy hadronic inputs if one is to capture the short-distance effects of NP. For example, it is meaningless, in our view, to treat soft quantities as free parameters used to fit observed branching ratios. Any possible NP-induced difference between, e.g., the SM prediction for Br(B → V γ) and the actual experimental result would simply be hidden inside such an "extracted from experiment" |ξ_⊥^{(K*)}(0)|, which is clearly unacceptable. Simply speaking, to discuss deviations from the SM prediction we first have to know the latter.⁹ In principle, one can discuss five expressions of the kind (16), corresponding to the following choices of (q, V): (s, K*), (s, φ), (d, ω), (d, ρ), (d, K*). However, all these channels have a universal short-distance structure, while the long-distance contributions are related to each other by SU(3) flavor arguments. The optimal strategy therefore seems to be to choose just one particular case, which we take to be (s, K*) in the rest of the paper. The results for the other channels could provide important cross-checks (such as the |V_td|/|V_ts| ratio), but presumably no new information about the NP content of (16).
We use the standard SM expressions for the decay rate of B → K*γ and the mass difference ∆M_s. The former can be written as in [28,29,30]. In that expression r = m²_{K*}/m²_B, m_b stands for the pole mass of the b-quark, and a_7(µ) = C_7^(0) + A^(1)(µ) is the absolute value of the corresponding short-distance function, including the Wilson coefficient C_7^(0), hard-scattering contributions and annihilation corrections; the detailed computation of this function at next-to-leading order can be found in the cited papers. Notice that we omit terms of order m²_s/m²_b. The factor |ξ_⊥^{(K*)}(0)| differs from the corresponding form factor T_1^{B→K*}(0) by O(α_s) corrections; the numerical relation between the two is given in [31]. The expression for ∆M_s has the standard form, where η_B is a calculable short-distance QCD factor, while the m_t/M_W-dependent factor F_tt M²_W comes from the calculation of the box diagram ([32]; see also [33,34]).
According to our strategy we invert expressions (18) and (19) into the forms (20) and (21). The structure of these expressions is clear. The first factors on the r.h.s. are the short-distance SM contributions, which have to be calculated analytically. These are just numbers of order 1, and it is assumed that we have reliable theoretical control of this part; the typical accuracy of these factors is better than 5%. The second factors (the square roots) are composed of experimentally measurable quantities. The error in these factors is dominantly experimental and is currently at the 5% level for (20) and the 1-2% level for (21). The third factors (in square brackets) encode information about soft QCD contributions (and the related problem of the b-quark pole mass m_b), for which we have no systematic method of study; the main hope here rests on lattice simulations.¹⁰ The uncertainty of the currently available data can be conservatively estimated as 10-20%. The use of (20), (21) as probes for NP depends entirely on improvement in the determination of these hadronic factors.
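Schematically (a textbook-form sketch under standard conventions, not a reproduction of (21); normalizations of the bag parameter differ between the cited papers), the inversion of the ∆M_s formula exhibits exactly the advertised three-factor structure:

\[
|V_{tb}^{*}V_{ts}| =
\underbrace{\left[\frac{6\pi^2}{G_F^2 M_W^2\,\eta_B\,S_0(x_t)\,m_{B_s}}\right]^{1/2}}_{\text{short distance}}
\;\underbrace{\sqrt{\Delta M_s}}_{\text{experiment}}
\;\underbrace{\left[\frac{1}{f_{B_s}\sqrt{B_{B_s}}}\right]}_{\text{hadronic}} ,
\]

with the box function S_0(x_t) (F_tt in the notation above) calculable analytically, ∆M_s measured, and the hadronic factor left to the lattice.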
The quantities of interest are T_1^{B→K*}(0) and f_{B_s}√(B_{B_s}). The reader is referred to the papers [35]-[40] and to [41,42] for lattice and sum-rule determinations of T_1^{B→K*}(0), respectively; the corresponding values lie in the range 0.2-0.4. The relevant references for f_{B_s}√(B_{B_s}) are [43]-[46]. Looking at the data, one can see that there is no clear agreement between, e.g., lattice computations and light-cone sum-rule results. Moreover, the errors given by the authors of the cited lattice papers are mostly statistical; a procedure for the correct treatment of systematic errors in this case is not yet known, and, in fact, the same is true for the sum-rule calculations. In general, a precise determination of T_1^{B→K*}(0) on the lattice is very difficult, and the reliability of the calculations done so far is debatable (see [47] and the recent discussion in [48]). Nevertheless, the importance of this measurement, which hopefully will be carried out in the near future on new, improved lattices in the unquenched case, cannot be overestimated.
Thus, having no better strategy at the moment, we will be conservative in our error treatment and take as input f_{B_s}√(B_{B_s}) = (280 ± 40) MeV, while we also consider three sets of possible values for T_1^{B→K*}(0) (Set A being 0.25 ± 0.05; see the caption of Table 2), where the errors correspond to those reported in the cited papers. The ultimate goal should be to reach an accuracy of the lattice computations comparable to the accuracy of the r.h.s. of (25).
We now have all the input data needed to estimate the ratios ζ^(i)_{s,K*}. According to the three choices of numerical value for the form factor T_1^{B→K*}(0), we get three sets of ζ^(i)_{s,K*}. The results are presented in Table 2. For graphical presentation one can use planar ternary coordinates, in which the constraint Σ_i log ζ^(i)_{s,K*} = 0 is satisfied automatically. Each solution is represented by a single point on this plane, with the distance from the i-th axis to the point given by log ζ^(i)_{s,K*}; it is taken positive for the two axes forming the angle the point belongs to and negative for the remaining, distant axis. With this rule each point on the plane satisfies the constraint (16). A sample result, for the case [44]-B, is shown in Figure 4. Notice that the bars correspond to 1σ deviations in ζ^(i)_{s,K*}, not in log ζ^(i)_{s,K*}. The fact that they cross the corresponding axes means that the actual result deviates from the SM prediction by less than 1σ. The origin of this ternary coordinate system corresponds to log ζ^(i)_{s,K*} = 0 for all i, which is the SM solution. The main qualitative conclusion is perhaps not surprising: with a reasonable choice of parameters we observe no evidence for NP within the error bars.
There are two optimistic remarks, however. The first is that our errors are very conservative, and a significant reduction of at least some of them is foreseen in the near future. The second is that the errors in Table 2 are not independent. There are two sorts of correlations. The first is the uninteresting "kinematical" one, following from the constraint (16). The second corresponds to the error correlation in lattice simulations of T_1^{B→K*}(0) and f_{B_s}√(B_{B_s}). So far these two inputs have been measured independently, by different lattice groups and within different procedures; correspondingly, the errors shown in Table 2 are also treated as independent. On the other hand, it is reasonable to expect an error reduction from a simultaneous calculation of T_1^{B→K*}(0) and f_{B_s}√(B_{B_s}), and we call attention to the importance of such a simulation, using the same framework (lattice action, chiral extrapolation procedure, etc.) and a uniform error treatment. It is reasonable to expect that this would result in better accuracy, first of all for the combination entering (25), where the uncertainty in the numerical factor 778 is of order 5% and is mostly theoretical.¹¹ This SM prediction demonstrates the level of precision the lattice computations must reach in order to draw reliable conclusions about NP from lattice results. We consider the check of (25) on the lattice a task of primary importance.
Analysis of the angle α
The angle α can be extracted from the two-body decay modes of B into the light hadrons π, ρ and K (see the recent review [52]). From the theoretical point of view the best channel at present seems to be B → ρρ [53,54]. The most promising channel for α at LHCb, however, is B → ρπ → πππ [55,56]. The basic idea of the analysis [57] is to study the interference of the tree amplitude, proportional to the weak phase factor e^{iγ} from V*_ub V_ud, and the penguin amplitude, proportional to the factor e^{−iβ} from V*_tb V_td. Writing down also the amplitudes for the CP-conjugated modes and imposing isospin relations, one can fit four amplitudes, four strong phases and one weak phase from 11 observables (see details in [56]). The expected uncertainty in α at LHCb is about 10° in one year of running [22]. A recent result for α from the B → ρπ channel has been presented by the BaBar collaboration [58], while the data uncertainty for the B → ρρ mode is ±13° [49]. The above analysis assumes no electroweak penguin contributions; according to the estimates of [54], δα_EWP = −1.5°. The isospin-breaking effects, controlled by the parameter (m_d − m_u)/Λ_QCD, are expected to be of the same order of magnitude.
For α defined as the argument of the amplitude ratio one obtains (see details in, e.g., [6]) an effective angle α_eff that depends on θ_12, the B⁰−B̄⁰ mixing angle, on the strong penguin phase θ, on r, the absolute value of the penguin-to-tree ratio, and on δα, a possible weak NP penguin phase.¹² In the absence of penguins, i.e. if r = 0, and if θ_12 = 2β (as in the SM), one gets α_eff = α_tree, with α_tree defined by (14). It is worth stressing (see the early discussion of a related issue in [12]) that in the discussed scenario the NP phase shift δα must coincide, up to a sign, with the shifts of the angles β and χ, due to the assumed U-spin invariance.¹³ It should also be noticed that the box diagram corresponding to B⁰−B̄⁰ mixing contributes identically to the discussed decay modes, and its contribution to the phase (with a possible NP part) cancels in (28). Certainly, beyond the SM one could have θ_12 ≠ 2β, but this phase shift need have no direct relation to the discussed shift δα = −δβ resulting from the penguin process. Thus we are left with the only NP contribution, from the penguin-mediated decay (relative to the tree-level one). We see that the ability to extract δα from experiment (i.e., from α_eff) crucially depends on the value of r, since a given experimental uncertainty in α_eff corresponds to a larger uncertainty in δα the smaller the ratio r is. A combined fit of the data for B → ρπ and other modes (notably B⁰ → K*⁰ρ⁰), taking into account a nonzero penguin NP phase δα, is being performed and will be reported elsewhere. Here we note that the experimental accuracy on δβ is currently limited by the statistics of the B → φK_S decay; the recent update for sin 2β from penguin decay modes given by [23] is sin 2β|_peng = 0.58 +0.12/−0.09 ± 0.13, which corresponds to roughly a 15° uncertainty window in the angle β. It is worth mentioning that this penguin-dominated mode will not allow one to obtain sin 2β (and hence δβ) with competitive precision at LHCb, since the uncertainty there is expected to be about 0.2 in 2 fb⁻¹ of running [22]. Higher accuracy should be possible at Super-B factories.
Conclusion
The standard approach to studying the CKM matrix is to overconstrain the UT using all available experimental information. However, not all constraints on the (ρ, η) plane are sensitive to NP, at least if the latter is taken in the form of next-to-minimal flavor violation. Some (such as ∆M_d/∆M_s) do not distinguish the SM from many NP scenarios simply by construction, while others (such as relation (8)) are insensitive to NP because of the specific profile of the UT (α close to 90°). In this sense there are two possible points of view regarding the fact that up to now all constraints on the (ρ, η) plane agree with each other. The first is that there are no sizeable NP effects to be seen in flavor physics. The second is that the UT is simply not suitable for the purpose (since NP is not present in the angles determined from tree processes and could also cancel from the sides), and that the room for manifestations of NP in b-physics observables is in fact not so small (because the uncertainties are still rather large). Following the latter attitude, we have discussed in this paper a complementary analysis of the data on the CKM matrix elements. Its key feature is the use of ratios of CKM matrix elements which are sensitive to NP provided it contributes differently to processes of different topology. In this sense the quantities ζ^(i)_{s,K*} differ from ratios like ∆M_d/∆M_s, since the short-distance part is kept in the former. Moreover, since we have more than one choice of observables from which a given CKM matrix element is extracted, we have relations of the form (16), leaving more than one degree of freedom unconstrained. Thus the lattice simulations must match several hadronic inputs simultaneously, and not just one. This, we believe, will allow the corresponding errors to be reduced and consequently make the proposed probes more sensitive to NP.¹⁴ Put differently, one of our main messages to the lattice community is that the importance of further reducing uncertainties in the ratio ξ is limited compared with the calculation of the hadronic inputs entering the definition of the ζ's, since the latter are more sensitive to NP than the former.
Concerning the determination of UT angles that are free from lattice uncertainties, we advocate the importance of estimates of the angle shift δα corresponding to the penguin amplitude extracted from B → ρπ and other modes (and hence subject to possible NP shifts). The accuracy of such a comparison can be comparable to or better at LHCb than that for sin 2β extracted from the B → J/ψK_S and B → φK_S modes, while the physical meaning is the same; any discrepancy between these values would unambiguously indicate NP.
In principle, nothing prevents one from including the discussed quantities ζ^(i)_{q,V} and δ(α, β, χ) in the global fit of the CKM matrix. It is clear that one gets essentially no new information in this way, since we deal with the same experimental observables as the standard fitting procedure. We feel, however, that a careful analysis of the proposed observables provides an alternative and transparent way of looking at NP effects. This strategy can become useful in the near future, when LHC data will improve the accuracy of our knowledge of the CKM matrix elements dramatically.

Table 2: Numerical results for ζ^(i)_{s,K*} (central value/uncertainty). The abbreviation [43]-A corresponds to the branching ratio for B → K*γ from [43] and the Set A choice T_1^{B→K*}(0) = 0.25 ± 0.05, and analogously for the other columns.

                  [43]-A     [43]-B     [43]-C     [44]-A     [44]-B     [44]-C
ζ^(1)_{s,K*}    1.12/0.24  0.94/0.18  0.80/0.13  1.07/0.23  0.89/0.17  0.77/0.12
ζ^(3)_{s,K*}    1.10/0.17  1.10/0.17  1.10/0.17  1.10/0.17  1.10/0.17  1.10/0.17
Figure 1: Error propagation corresponding to the second expression from (8).
Figure 2: The same as Fig. 1 for different values of the angle α.
Figure 4: The results for log ζ^(i)_{s,K*}, the case [44]-B, plotted as a point in ternary coordinates. The SM solution is the point at the origin. The algebraic distance from the i-th axis is given by log ζ^(i)_{s,K*}, positive for the two axes forming the angle the point belongs to and negative for the remaining distant axis. With this rule each point on the plane satisfies the constraint (16). The bars correspond to 1σ deviations in ζ^(i)_{s,K*}, not in log ζ^(i)_{s,K*}. The marks on the axes set the scale and serve mainly for guiding the eye.
Table 1: Short-distance quantities from the definition of κ.
Table 2: Numerical results for ζ^(i)_{s,K*}.
Professional meanings as a resource for teacher development in the space of modern innovations
The article describes the content of professional meanings as a resource for teacher development in the space of modern innovations. It reveals the features of the projection of professional meanings in a teacher's activity and the influence of semantic formations on the professional development of the modern teacher. The paper describes the subjective determinants of the formation of meanings and values adequate to the teacher's mission in the context of the resources and risks of the modern innovation space. The author presents the teacher as a carrier and translator of professional and personal orientations and of semantic models of relationship with the world. The article describes the psychological functions of professional meanings in the activity and advancement of a teacher in the profession, and systematizes criteria for the development of a teacher's professional meanings as a resource for development in the space of modern innovations. The results of an empirical study of teachers' ideas about the meanings of professional activity and the projections of their semantic formations in real practice are presented.
Introduction
The modern educational space, with its high innovation index, produces special professional meanings that initiate and regulate the activity of the teacher. Components of the value-semantic sphere give a vector to the self-perception, self-esteem, and self-attitude of the individual as a subject of innovative activity and creativity [1,2]. The contents of the system of value orientations, meaning-aspirations, and the subjective image of achievements form a kind of semantic evaluation matrix of the teacher's own resources and risks. Productivity, innovative activity, and satisfaction with outcomes correlate with the degree of motivational "charge" in the decision to pursue new objectives. The resourcefulness of professional meanings is associated with a reflexive assessment by the personality of its behavior in past, current, or predicted situations, which allows the teacher to model a trajectory of self-change at various stages of professional life. One of the main functions of the semantic regulation of activity in psychology is the activation of subjective resources and personal potential and the optimization of the "I-profession" relationship within the general system of personal relations. Adequate professional meanings help to create favorable situations of life activity in the profession through self-change and transformation in the conditions of modern educational and social practice [3].
A formed semantic sphere is the basis of professional adaptability and success, setting an individual project of self-development, self-regulation, and self-control [4,5].

Professional meanings lead to the expansion of the thesaurus of purposeful pedagogical tools and of ways to translate constructive models of building relationships with the world.

Psychological research shows that the readiness of the subject's semantic sphere to accept and implement an innovative format of activity is a predictor of success [6,7].

Self-assessment of psychological resources is associated with value orientations and with the identification of clusters of qualities relevant to the professional meanings generated by the meaning-making activity of the teacher in the space of modern innovations.

Professional meanings expressed and realized by the individual set a positive emotional background at various stages of innovation development, reflect the subjective position of the teacher, and sustain the intensity of the desire for professional growth and success [8].

The success of pedagogical activity in the space of modern innovations is determined by the formation of subjectivity and by the integration of cognitive schemes and conative algorithms with value orientations, the nature of the individual semantic space, and the image of the world.
Professional meanings are a significant component of the motivation for continuous self-development and personal growth, participating in the self-initiation, modeling, and regulation of these processes. The structure of this type of semantic formation includes: the image of the pedagogical profession, personal meanings, individual values, the professional self-concept, the image of achievements, acceptance of the mission, and adequate semantic attitudes [9,10].

The subjective source of the personally meaningful professional activity of a teacher in the space of modern innovations is sociogenic needs: the need for personalization, self-realization and self-actualization, and self-affirmation as a professional [11].

Professional meanings as a resource for the continuous development and self-development of the teacher determine the direction of this process and its evaluation markers, and set the subjective bar of achievement. The dynamics of the professional and personal value orientations of the teacher reflect the subjective experience of pedagogical activity in the innovative educational space. The content and structure of the professional-personal orientation of the teacher integrate the individual-personal semantization of the modern professional space and the value-semantic grid of the professional environment with the targets nominated from outside, which becomes a resource for the preadaptation of the personality. The content of professional meanings, in essence, determines the activity of the teacher in relation to professional development, self-improvement, and self-creation. Successful professional self-development is associated with the teacher's awareness of target personal orientations; with an understanding of the individual format of the content and time prospects of life in the profession; and with reflection on the hierarchy of values and the personal meaning of pedagogical activity [12,13].
In the diversity and content of the system of semantic motivations, one can distinguish a constructive motivational orientation, based on self-projecting one's positive development as a professional, and a non-constructive one, focused on achieving a situational pragmatic result [14].

In assessing the resource value of professional meanings, parameters such as stability, awareness, and effectiveness are taken into account.

Today there is a humanistic, positive trend in the development of the semantic sphere of the teacher immersed in the context of the innovative educational space.
The semantic motivation of the non-situational activity and professional advancement of the teacher in the space of modern innovation reflects social and subjective standards, the viability of personal scenarios, a predictive model of the individual future, the adoption of self-development as a value, and positive attitudes toward the mobilization of internal resources [15].

An important resource is an adequate self-assessment of oneself as a carrier of relevant professional meanings, along with reflection on the success factors, difficulties, and psychological barriers in solving the problem of meaning [16]. The space of modern innovations produces the need to accept the idea of the expediency of continuous self-change and mobilization.

Modern science has shown that the level of development of the subject's semantic sphere correlates with the richness and diversity of the spheres of life in which he acts as an active figure. This contextual diversity of developmental conditions harmonizes and integrates the semantic sphere of the teacher's personality. In the process of professiogenesis, the conceptual model of semantic interaction with oneself, with others, with the subject of pedagogical activity, and with the world is stabilized.
The set of criteria for the development of the professional meanings of a teacher as a resource for development in the space of modern innovations can be presented as follows:
- effectiveness of value-semantic constructs in making productive and pedagogically appropriate decisions;
- completeness of the projections of value-semantic trends in the system of ways of implementing professional behavior;
- stability of professional meanings in an entropic environment;
- compliance of the style of functioning in a real situation, and of the methods of managing the emotional sphere, with value representations and semantic attitudes;
- adequacy of the measure and form of representation of personal meanings and values in interaction situations and professional success.
Methods
The respondents were 250 teachers of the Rostov region who were undergoing professional development at the Rostov Institute for Advanced Training and Retraining of Educational Workers. To collect empirical data, we used a questionnaire, a project mini-essay, and a survey.
Discussion
In the study, more than half (65.6%) of the teacher respondents say that they take time for self-analysis after each lesson, 21.3% face difficulties with this type of activity, and more than 13% resort to self-analysis only when the administration insists on it. Assessing the degree of authenticity and openness in the representation of feelings and the broadcasting of attitudes and evaluations in interactions with learners, colleagues, and parents, the teachers took the following positions: 46.3% believe that the inability to openly express their feelings creates professional difficulties and leads to burnout; 35.6% believe that this fact contributes to development; and 18.1% stress that it creates internal tension and lowers their professional health. Assessing their behavior during lessons, more than half of the teachers (55%) note that they can manage their behavior quite easily at the level of subjective control; 25.4% of respondents note difficulties in controlling their behavior in the lesson; and 20.6% analyze their actions and the effects that emerge. Regarding opportunities for professional and personal development, 45.8% of teachers note that they regularly engage in self-development and self-improvement, while for 50.6% plans to work on themselves are often not implemented due to lack of time and energy. When asked about their attitude to the variety of teaching methods, programs, and concepts, most teachers note a sense of confidence in their professional capabilities owing to the availability of choice. At the same time, a certain number of respondents indicate that the need to constantly choose and to bear responsibility for the consequences of choice causes irritation and internal tension, and believe that this distracts from work.

"It seems to me that a school class filled with students before a lesson can be compared to a mountain peak," say 57.9% of teachers; 38.8% compare the class to "the sea, which you want to dive into and enjoy its strength and beauty"; and 10.4% of respondents choose the metaphor "a dark dense forest, which you enter with fear, not knowing what awaits you". If the opinions of the teacher and team members differ on a particular pedagogical situation, the overwhelming majority of respondents (54.5%) prefer to keep their own opinion but act in accordance with the requirements of others. Those who show independence, whose opinions are independent of the team's and who will act as they see fit regardless of the opinions of others, make up 42.1%. A fairly small percentage of respondents are ready to change their opinion, submitting to the group: 12.5%.
Conclusions
Professional meanings are a resource for the development of a teacher in the space of modern innovations, owing to their functions: initiation and actualization, regulation and control, subjective assessment of what is happening, and reflection. The resource capabilities of meanings also consist in a subjective partiality toward the course and results of activity and in a focus on representing oneself as a carrier of meanings and values. In the space of modern innovations, the teacher is constantly faced with the need to solve "tasks for meaning" regarding progress in the profession; the generation of personal meaning and meaning-making makes it possible to move to a new level of professional activity. At the same time, as the results of the study have shown, a certain portion of teachers face risks in using the resource of professional meanings: difficulties in understanding the significance of professional and personal growth for themselves and in assessing the prospects of advancement in the profession at the level of personal meaning; unwillingness to take responsibility for the choice of professional tools; and insufficiently developed conceptual semantic subjective control. For a small portion of teachers, the perception of the class and the associations arising in this regard are dissonant with the meaning of pedagogical activity, which can generate an internal personal conflict.
A case of foreign body granuloma masquerading as a soft tissue tumour
Introduction
A foreign body granuloma is defined as a mass that forms at site of surgery due to biological tissue reaction to foreign material in the tissue [1]. It is a rare complication of inguinal hernioplasty and its incidence is still unknown due to lack of reports on such cases. The presentation may vary from simple superficial skin infection (SSI) to a fungating mass mimicking soft tissue malignancy. This report describes a man that presented with a fungating mass over the left inguinal region one year after inguinal hernioplasty.
Case presentation
A 42-year-old male underwent bilateral inguinal hernioplasty with mesh repair in June 2015. The surgery was uneventful, with no post-operative surgical site infection (SSI). He presented to our surgical clinic one year later with a fungating mass over the left inguinal region. Clinical examination revealed a fungating mass over the incision site of the previous hernioplasty scar (Figure 1A). The mass measured 3 × 3 cm, was hard in consistency, fixed to the underlying tissue, and had raw areas mixed with necrotic slough. He denied any infective symptoms of fever, skin redness, or pus discharge prior to this. The full blood count was within the normal range, without leucocytosis. These findings made us suspect a soft tissue tumour (e.g., liposarcoma or desmoid tumour) or a squamous cell carcinoma of the skin.

An urgent computed tomography (CT) scan of the abdomen showed a heterogeneous, fungating soft tissue swelling in the left inguinal region. The mass involved the subcutaneous layer, the external oblique aponeurosis, and the rectus abdominis muscle, and abutted the lateral side of the urinary bladder (Figure 1B). There were also streaky densities in the fat surrounding the mass, with increased neovascularization. The mass abutted the patent femoral vessels laterally. From the CT report, the possible differential diagnoses were soft tissue tumour or skin neoplasm.

Histology of the wedge biopsy revealed pseudoepitheliomatous hyperplasia and underlying dermal fibrosis with infiltration of foamy histiocytes, lymphocytes, and plasma cells. There were no granulomas or malignant cells. These features were consistent with chronic inflammation. Based on the suspicious findings on the CT scan, the patient underwent a wide local excision. The tumour was excised with clear margins.

At the base of the mass, a part of the mesh, which had shrunk, was identified and excised (Figure 2A, B, C). It was evident intra-operatively that chronic inflammatory reactions to the mesh had led to the formation of the foreign body granuloma (Figure 2D). The post-operative period was uneventful, and the patient was well at review 3 weeks after surgery.

The resected specimen measured 4.2 × 2.8 × 1.3 cm, with a raised polypoidal skin lesion measuring 3.5 × 2.5 × 0.5 cm. The cut section revealed a grey surface, solid in consistency. On microscopy, the sections exhibited focal granulation tissue formation with moderate numbers of foamy macrophages and neutrophils. These features were consistent with an infected suture granuloma, without any evidence of malignancy.
Discussion
The Lichtenstein inguinal hernia repair technique has been practiced for more than 50 years, and approximately one million meshes are used in inguinal hernia repair annually [2]. Nagar et al. reported an incidence of suture granuloma of 0.3% in a retrospective study of 2447 paediatric herniotomies [3]. Reports of paravesical granuloma after inguinal hernioplasty date back to 1959, by Brand et al.; subsequently, 3 more similar cases were reported by Kise et al. in 1999 [1,4].

Foreign body granuloma may occur 0.5-11 years after inguinal hernioplasty. Surgical site infection was seen in the majority of reported cases of foreign body granuloma; this prolonged infection with pus discharge forms a chronic wound. Chronic inflammation due to mesh placement also predisposes to squamous cell carcinoma, as reported by Birolini et al. Diagnosis of a malignancy secondary to chronic inflammatory changes was straightforward in both reported cases, which had histories of chronic inflammatory wounds after inguinal hernioplasty. In our case, the patient did not present with an infection, and the wedge biopsy revealed chronic inflammation [5]. Foreign body granuloma manifests on CT as a heterogeneous mass that mimics a soft tissue tumour. The CT of our patient showed a fungating heterogeneous mass invading the skin, subcutaneous tissue, and rectus abdominis muscle. In the case series reported by Hideaki et al., the CT scans did not reveal any mesh or suture that might have caused the foreign body granuloma; similarly, the CT scan of our patient did not reveal any mesh or suture.

In keeping with the suspicious findings on the CT scan, a wide local excision of the mass with clear margins was performed. Intra-operatively, the mass was dissected meticulously, leaving a rim of healthy surrounding tissue. On reaching the base of the mass, we identified a piece of mesh firmly attached to the underlying tissues; the mesh was dissected away from healthy tissue (Figure 2A, B, C, D). The full histopathology report of the specimen confirmed the findings of a foreign body granuloma. Foreign body granuloma may occur in the absence of infection, and its presentation may mimic a soft tissue tumour. A tissue biopsy may guide management and prevent a radical excision that could lead to patient morbidity.
Conclusion
Foreign body granuloma may mimic a soft tissue tumour without the presence of post-operative infection.
Learning Points:
- Occurrence of foreign body granuloma is rare, with an incidence of 0.3% and 24 reported cases over the past 60 years.
- Awareness is needed of foreign body granuloma as a possible complication of inguinal hernia repair with mesh.
- Foreign body granuloma may present as a suspicious soft tissue tumour without any prior history of surgical site infection after inguinal hernia repair.
Posttraumatic Growth and Related Factors of Child Protective Service Workers
Objectives The aim of the study is to measure the level of vicarious trauma, posttraumatic growth (PTG), and other factors affecting PTG among child protective service workers. Methods We included posttraumatic stress, social support, stress coping, and demographic data as independent variables. Data were collected from 255 full-time social workers from 43 child protective agencies as a complete enumeration, and 204 were included in the final analysis. Results The major findings of the study were as follows: the mean score of PTG was 44.09 (SD = 21.73). Hierarchical multiple regression was adopted, and "pursuing social support as a way of coping with stress" was the strongest predictive factor of PTG (β = 0.319, p < 0.001). Conclusion We suggest that child protective workers are vulnerable to posttraumatic stress and that mental health services are indicated. We also recommend various types of training in stress coping, especially strengthening the social support system of child protective service workers in South Korea.
Introduction
Child abuse is a traumatic experience that has long-term and serious effects on abused children [1]. Incidents like child abuse have repercussions for the whole of society, not just the abused persons. People who have experienced these kinds of incidents indirectly may develop in positive or negative ways: they may feel the same rage against the perpetrator as the abused persons, their philosophy about child rearing or their view of life may change, or their relationships with others may become closer or more distant. With regard to the stress that direct victims of child abuse suffer, measures have been put in place by agencies for protecting children. However, the effects of child abuse on persons who experience it indirectly have attracted little attention. Social workers in child protective agencies, in particular, receive reports on such cases, investigate the related sites, provide services for parents and victims, and work to prevent child abuse. In the process of providing direct and close support to abused children and their parents, including counseling [2], they inevitably come to have detailed knowledge of these incidents, even without experiencing the traumatic events directly. Through such empathic relations, they come to understand the experience of clients and can vicariously experience the trauma. Counselors who experience trauma vicariously can show the symptoms of those who have experienced direct trauma, such as intrusive thoughts, avoidant reactions, sleeplessness, and emotional distress [2-6]. In addition, according to some studies, counselors may experience changes in their desires, trust in themselves and others, interpersonal relations, perceptions and memories, identity, or view of the world [7]. According to studies on the vicariously experienced trauma of firefighters and police officers, posttraumatic stress is frequently reported in relation to cases where children are the victims [8]. Counselors talking with abused children thus undergo significant stress. The psychological and social health of social workers in child protective agencies is directly related to the quality of child protective services, and social workers often mediate changes in abused children, their parents, and aggressors, so protecting them psychologically should be a priority. It is therefore necessary to understand the trauma experienced vicariously by social workers and to strengthen measures that protect them and enable them to carry on with their jobs.

A traumatic experience is an opportunity not only to experience negative emotions but also to bring about positive changes in the personal lives of those who have suffered [9,10]. Growth after trauma can be manifested as increased confidence resulting from coping with seriously stressful incidents, increases in social resources, increases in personal resources such as management techniques, changes in spiritual beliefs, adjustment of one's life priorities, gratitude for life, and changes in one's philosophy of life [9]. According to previous reports, growth after trauma can take place when people experience trauma vicariously as well as directly [11]. Counselors dealing with sexual violence [12], paramedics [13,14], nurses [14,15], and social workers [16] are indirectly exposed to traumatic incidents and experience growth after such trauma through the vicarious experience. Although these professionals suffer from job-related stress, growth after trauma can be especially important in preventing exhaustion and turnover [13,17].

Efforts should be made to develop interventions to facilitate the growth of social workers at child protective agencies, who inevitably experience ongoing vicarious trauma. In the Republic of Korea, several studies have been conducted on trauma experienced vicariously because of one's job [2,4-6,12], but such efforts are not enough. Regarding growth after trauma, Gwon and Kim (2006) discussed the growth psychotherapists experience as a result of vicarious trauma. Studies conducted outside Korea have indicated that social support, religion, gender, age, coping strategies, and posttraumatic stress are factors related to growth after vicarious trauma [13,18,19], but the factors related to growth after trauma for social workers at child protective agencies in Korea are expected to differ in kind and degree because of the unique aspects of Korean culture.

This study examines factors related to growth after trauma by considering its general features, the effects of vicarious trauma reported in existing studies, social support, variables regarding stress coping, and the workplace conditions of social workers at child protective agencies. To this end, a survey of social workers at child protective agencies in Korea was conducted to understand their posttraumatic growth.
Study subjects and study design
For the study, a survey was conducted with 255 child protective service workers as a complete enumeration.

They were from the nation's 43 child protective agencies performing case management, and a cross-sectional study design was used. Data were collected by mail over two weeks, from April 21 to May 5, 2008. For the final analysis, 204 questionnaires were used, with 11 excluded for incomplete responses.
Study tools

Growth after trauma
The Posttraumatic Growth Inventory (PTGI) was used to measure growth after trauma. The tool was developed by Tedeschi and Calhoun (1996) [11] and validated in Korean by Lee (2009) [20]. This 21-item instrument is divided into five subscales: seven items on relating to others (changes in the meaning of interpersonal relationships), five on new possibilities (discovering new possibilities), two on spiritual change (changes in spiritual interest), five on personal strength (identifying individual strengths), and three on appreciation of life. Respondents replied on a six-point Likert scale, from "no change" (0 points) to "remarkable change" (5 points); higher scores indicate greater growth. In an earlier study [20], the Cronbach's α coefficient was 0.95; in this study it was 0.952.
Factors related to vicarious trauma
For the variables related to vicarious trauma, the most shocking child abuse incidents, the level of psychological pain at the time of the incidents, and scores for the stress felt after vicarious trauma were measured. Respondents were asked which child abuse incident was most shocking and rated the resulting psychological pain [9,11] from 1 point (not painful at all) to 10 points (the most serious imaginable pain). The psychological pain variable was included in the study design on the assumption that growth after trauma appears as a result of the cognitive processing that takes place to handle the psychological pain after vicarious trauma. For posttraumatic stress, the revised Korean version of the Impact of Event Scale (IES-R-K) was used; it was originally developed by Horowitz (1979) [21], revised by Weiss et al. (1997) [22], and adapted for Korea by Eun et al. (2005) [23], who also confirmed its reliability and validity. The IES-R-K consists of 22 items on 3 subscales: hyperarousal (6 items), avoidance (8 items), and intrusion (8 items). Responses are collected on a 5-point Likert scale, from "not at all" (0 points) to "very frequently" (4 points). In addition, respondents with probable posttraumatic stress disorder can be identified using the 24/25-point threshold, the cut-off value for PTSD screening. In Eun's study [23], the Cronbach's α coefficient was 0.87; in this study it was 0.964.
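As a small illustration of the scoring just described, the sketch below totals a 22-item, 0-4 response vector and applies the 24/25 screening threshold. The item-to-subscale assignment here is a hypothetical placeholder, not the published IES-R-K key.

    def score_ies_r(responses, cutoff=25):
        # responses: 22 integers, each 0 ("not at all") .. 4 ("very frequently")
        assert len(responses) == 22 and all(0 <= r <= 4 for r in responses)
        subscales = {                      # hypothetical 0-based item indices
            "hyperarousal": range(0, 6),   # 6 items
            "avoidance": range(6, 14),     # 8 items
            "intrusion": range(14, 22),    # 8 items
        }
        scores = {name: sum(responses[i] for i in items)
                  for name, items in subscales.items()}
        scores["total"] = sum(responses)
        scores["screens_positive"] = scores["total"] >= cutoff  # 24/25 rule
        return scores

    print(score_ies_r([2] * 22))  # total 44: above the screening cut-off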
Factors related to social support
Regarding social support-related factors, respondents were asked the following questions: How much support did your agency provide in crises? Did you seek support? Did you obtain counseling? Additionally, a question asking whether they were satisfied with their current relations with colleagues and supervisors was included in the questionnaire. The level of support in crises was measured using a Korean version of the Crisis Support Scale (CSS), originally developed by Elklit et al. (2001) [24]. A bilingual Korean living abroad with a good command of English and Korean and a cultural anthropologist participated in the translation, and the Korean version of the scale was developed through meetings of the researchers. The CSS consists of 7 questions asking 1) if there were persons who would listen in crises, 2) if respondents could share their stories with persons with similar experiences, 3) if respondents could speak about their thoughts and feelings, 4) if respondents were understood by other persons, 5) if respondents received substantial help from other persons, 6) if respondents were disappointed with persons whom they believed would be supportive, and 7) if respondents were generally satisfied with the support they received. A 7-point Likert scale was used, from "not at all" (1 point) to "always so" (7 points); higher scores mean higher social support. In this study, the Cronbach's α coefficient was 0.812.
Methods for coping with stress
Also used was the Coping Strategy Indicator (CSI), developed by Amirkhan (1990) [25]; its adequacy for Korean use was verified by Shin and Kim (2002) [26]. The scale evaluates how actively respondents cope with stress and which ways of coping they use most frequently. The 33-item scale comprises three subscales: "seeking social support" (11 items), on coping with stress by seeking advice or information; "problem-solving-centered coping" (11 items), on directly confronting and solving problems rather than avoiding them; and "avoidance-centered coping" (11 items), on avoiding problems instead of solving them directly. A 3-point Likert scale was used, from "not used at all" (1 point) to "frequently used" (3 points); higher scores indicate more active coping with stress. In Shin and Kim's study [26], the Cronbach's α coefficient was 0.84; in this study it was 0.912.
Sociodemographic characteristics
Data on the sociodemographic variables of gender, age, educational background, religion, and marital status were collected.
Data analysis methods
General characteristics and basic statistics were obtained through descriptive analysis, and Cronbach's α was computed for each scale in a reliability analysis. The t-test, one-way analysis of variance (ANOVA), and regression analysis were used for univariate analysis to identify factors affecting posttraumatic growth (PTG). A model was formed with the variables found significant in univariate analysis, and the independent variables were classified into four groups: sociodemographic factors, vicarious trauma-related factors, social support factors, and stress coping factors. Hierarchical linear modeling was then conducted, with multicollinearity taken into consideration when forming the model.
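To make the blockwise procedure concrete, here is a minimal Python sketch (the column names are hypothetical placeholders, not the study's variable names), assuming a pandas DataFrame df with the PTGI total as the outcome:

    import pandas as pd
    import statsmodels.api as sm

    BLOCKS = [
        ["gender", "age", "religion"],                   # 1: sociodemographic
        ["psych_pain", "ies_intrusion"],                 # 2: vicarious trauma
        ["crisis_support", "peer_satisfaction"],         # 3: social support
        ["cope_support", "cope_problem", "cope_avoid"],  # 4: stress coping
    ]

    def hierarchical_ols(df: pd.DataFrame, outcome: str = "ptgi_total"):
        used, prev_r2 = [], 0.0
        for step, block in enumerate(BLOCKS, start=1):
            used = used + block
            X = sm.add_constant(df[used])
            fit = sm.OLS(df[outcome], X, missing="drop").fit()
            print(f"Model {step}: R2={fit.rsquared:.3f} "
                  f"(dR2={fit.rsquared - prev_r2:.3f}), "
                  f"F={fit.fvalue:.2f}, p={fit.f_pvalue:.4f}")
            prev_r2 = fit.rsquared
        return fit  # final (Model 4) fit; fit.params holds the betas

Each step reports the overall model F-test and the increment in R², which is exactly how the results below are summarized.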
General properties of respondents
About 59.3% (121) of the respondents were women. About 58.8% (120) were in their 20s and 35.8% (73) in their 30s. A majority of the social workers, 179 (88.2%), were graduates of colleges or universities providing the national licensure for the job, and 24 (11.7%) had completed graduate school. About 82.3% (167) had religious beliefs and 71.6% (146) were single (Table 1).
Properties of major variables
Scores for PTG were 44.09 (SD = 21.73) on average, and the most shocking incidents of child abuse reported by the respondents were physical abuse, 37.5% (75); followed by neglect, 30.5% (61); sexual abuse, 26% (52); and emotional abuse, 6% (12). The mean level of psychological pain caused by those incidents was 6.65 (SD = 1.86), and the mean posttraumatic stress score was 28.73 (SD = 20.05), higher than the PTSD screening cut-off of 24.

Scores for support in crises were 29.70 (SD = 7.72) on average, and the majority, 87.0% (174), could not get support from their agencies. Psychotherapy and counseling accounted for the highest proportion of agency support, at 61.5% (16). Up to 177 (86.8%) had experienced vicarious trauma and sought help for it in some form, but just 13 (6.5%) had received counseling. As for the reasons for not getting counseling, 36.9% (65) thought that they could solve the problems by themselves and 23.2% (41) replied that they had no time. Regarding relations with colleagues and supervisors, just 99 (48.5%) were satisfied, showing that more than half of the respondents were not satisfied with these relationships.
Factors affecting posttraumatic growth

Univariate analysis
According to the univariate analyses of factors affecting each area of PTG, religion affected spiritual change, and satisfaction with relationships with colleagues and supervisors affected total scores, relating to others, and new possibilities. Help-seeking affected total scores and all subscales except spiritual change. Experience of counseling affected total scores. Total posttraumatic stress scores and all of its subscales affected total PTG scores and all PTG subscales except spiritual change. Support in crises affected total PTG scores and relating to others (Tables 2-3).
Correlation among independent variables
The correlations among variables were analyzed to check for multicollinearity among the independent variables before conducting the regression analysis. If a correlation coefficient is 0.80 or higher, multicollinearity between the variables can be suspected. In this study, the correlations among the subscales of intrusion, hyperarousal, and avoidance were 0.80 or higher, indicating multicollinearity; in the hierarchical multiple regression model, hyperarousal and avoidance were therefore excluded to preserve the explanatory power of the model. All correlation coefficients between the other variables were below 0.80, tolerance values were less than 1, and the VIF (variance inflation factor) was 2.5 or lower, showing no multicollinearity problem among the independent variables. The Durbin-Watson value was 1.677, close to 2 and far from 0 or 4, indicating no autocorrelation among the residuals; hierarchical linear modeling was therefore conducted (Table 4).
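The checks described above can be reproduced with a short sketch like the following (assuming a pandas DataFrame X of the retained predictors and a fitted OLS result fit; all names are placeholders):

    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    from statsmodels.stats.stattools import durbin_watson

    def collinearity_checks(X):
        corr = X.corr().abs()
        # flag predictor pairs at or above the 0.80 rule of thumb used above
        flagged = [(a, b, round(corr.loc[a, b], 2))
                   for i, a in enumerate(X.columns)
                   for b in X.columns[i + 1:]
                   if corr.loc[a, b] >= 0.80]
        Xc = sm.add_constant(X)
        vif = {col: variance_inflation_factor(Xc.values, j)
               for j, col in enumerate(Xc.columns) if col != "const"}
        return flagged, vif

    # durbin_watson(fit.resid) near 2 indicates uncorrelated residuals;
    # values near 0 or 4 would signal positive or negative autocorrelation.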
Factors affecting Posttraumatic Growth: Hierarchical multiple regression
According to the analysis of the sociodemographic variables included in the first stage, the model was not statistically significant (F = 1.640, p > 0.05) and the explanatory power was as low as 1.7%, showing no independent influence of sociodemographic characteristics. The explanatory power of the second stage, including the trauma-related variables, was 13.1%, which was 11.4% higher than with the sociodemographic variables alone, and the model was statistically significant (F = 7.040, p < 0.001). In the second stage, religion and intrusion were the variables significantly affecting PTG; higher intrusion scores (β = 0.256, p < 0.01) were associated with higher PTG. In the third stage, the explanatory power of the model with the addition of the social support variables increased by 8.1% to 21.2% (F = 6.150, p < 0.001). The intrusion factor (β = 0.257, p < 0.01) and satisfaction with relationships with colleagues and supervisors (β = 0.146, p < 0.05) had significant effects on PTG. In the final, fourth-stage model, with the stress coping factors added, the explanatory power for the PTG of social workers at child protective agencies was 31.9%, an increase of 10.7% over Model 3. The intrusion and colleague/supervisor relationship factors, which were significant in the previous models, were no longer statistically significant in Model 4. Among the stress coping factors, PTG was higher when the social support-seeking method was used (β = 0.319, p < 0.001) (Table 5).
Discussion
This study investigated the PTG experienced by social workers at child protective agencies and analyzed sociodemographic characteristics, vicarious trauma-related factors, social support-related factors, and stress coping methods in order to identify factors relevant to PTG.
It should first be noted that the total score for PTG was 44.09 (SD=21.73), lower than the score of 53.7 (SD=11.8) reported by Gibbons et al. (2011) [16] in a study of social workers handling trauma that used almost the same tools, and lower than the 64.42 (SD=20.08) reported by another study [18]. However, it is difficult to generalize that Korean social workers at child protective agencies have lower PTG than non-Koreans with similar jobs, and follow-up studies are needed for verification. If PTG is indeed low despite the similarity in jobs, it will be necessary to clarify factors related to working conditions.
Second, according to the analysis of vicarious trauma-related factors, the mean score for posttraumatic stress was as high as 28.73 (SD=20.05), above 24 points, the cut-off score for PTSD screening. This result agrees with Korean and international studies indicating that the risk of posttraumatic stress disorder is remarkably high among persons providing child protective services [6,7,27,28]. Posttraumatic stress from work not only causes depression, uneasiness, impulsive acts, drug abuse, and somatization, but can also have harmful effects on organizations and lower the quality and efficiency of work [29,30]. Therefore, the vicarious trauma experienced by social workers should be addressed and protective measures should be taken. The degree of these social workers' vicarious trauma should be evaluated regularly to determine whether they belong to a risk group, and intensive mental health services should be provided if necessary. Psychological anguish was included at the study design stage on the assumption that PTG appears as a result of the cognitive processing that takes place to deal with psychological anguish after vicarious trauma [9,11]. Because a single-item tool was used, it was hard to compare the results of this study with those of other existing studies; follow-up studies should use more sophisticated tools to measure psychological anguish in persons experiencing vicarious trauma.

Third, in terms of social support, social workers providing child protective services did try to seek help from people around them regarding their vicarious trauma, but the rate of accessing professional support, such as psychotherapy, was low. Considering that about 86.8% of respondents had sought help, they did not seem to be familiar with undergoing counseling, even when they felt the need for it. In fact, there have been reports that persons influenced by Asian cultural attitudes and norms are not familiar or comfortable with counseling because of various aspects of Asian culture, such as attitudes toward authority or the emphasis on hierarchical relationships [31]. It is necessary for organizations to actively provide counseling channels and educational opportunities that workers can easily access, instead of leaving individuals to look for channels to obtain help on their own.
Fourth, regarding stress coping scores, the preferred coping method of social workers at child protective agencies was problem solving. The least frequently used method was avoidance.
Fifth, in the univariate analysis of factors related to PTG, PTG scores were higher when respondents had religious beliefs, when they maintained friendly relationships with colleagues and supervisors, when they sought help, when they received counseling, when they had serious psychological anguish, when they experienced serious posttraumatic stress (including hyper-arousal, invasion, and avoidance), and when they actively attempted to cope with stress. The relationship between PTG and religion supports the results of existing studies [18,32], and the finding that social support affects PTG accords with the studies of Prati and Pietrantoni (2009) [19] and Schaefer and Moos (1998) [33]. Tedeschi and Calhoun (2004) [9], who proposed the concept of PTG, also stated that social support was one of the most important predictors of positive changes after trauma. The results concerning trauma-related factors agree with the many studies that have established a positive relationship between trauma stress and PTG [25,32]. Various studies have shown that serious posttraumatic stress, however painful, can promote posttraumatic growth through reflection on the trauma experience; through such reflection, new meanings are created and the collapsed world is rebuilt [13,34].
Hierarchical regression was conducted to analyze the final model and identify the factors affecting PTG. In the final model, the effects of other factors were offset when the fourth-stage stress coping factors were included, and the pursuit of social support emerged as the most powerful factor. Schaefer and Moos (1998) [33] and Shakespeare-Finch et al. (2005) [35] have noted that active coping in the face of life crises results in adaptation and personal growth. In a meta-analysis of factors affecting PTG, Prati and Pietrantoni (2009) [19] stated that, among the different coping strategies, seeking social support was one of the most important factors affecting PTG. Religion, social support, and factors related to the degree of trauma, which were meaningful in the univariate analysis, were offset in the final model once coping strategies were taken into consideration. This result shows that personal factors, the degree of shock experienced from incidents, stress, and social support are certainly important for social workers, but that their coping strategies in crises, and in particular active strategies of seeking out social support systems to solve problems, were the key factors in achieving growth despite trauma or stress.
This study has limitations. First, it is difficult to prove causal relationships between each independent variable and the dependent variable because of the properties of cross-sectional studies. Second, for variables such as psychological anguish and support during crises, this study asked respondents to recollect the experiences related to their most shocking incident, but the accuracy of recollection can vary because respondents experienced these incidents at different times. Third, regarding posttraumatic stress, this study was designed to measure present stress, but the level of stress could have been affected by when respondents experienced those incidents. Fourth, regarding psychological anguish, this study used a single-item tool asking respondents to report the level of psychological anguish immediately after the most shocking incident, on the assumption, based on previous studies, that posttraumatic growth appears in the process of handling anguish. More sophisticated tools should be included in follow-up studies. In spite of these limitations, this study examined, for the first time in Korea, the factors affecting posttraumatic growth among social workers at child protective agencies that care for child victims of violence.
Social workers at child protective agencies perform very important duties but their mental health is vulnerable due to vicarious trauma and their turnover rate is very high. As a result, problems arise related to the connectivity and continuity of child protective services. Attention should be paid to mental health issues related to vicarious trauma, in order to protect these social workers and to help them carry on with their jobs, on which the quality of child protective services depends. In addition, efforts should be made to promote post-traumatic growth, which can work as a protective factor. In this study it was found that using coping methods that involved pursuing social support had significant effects on PTG. Therefore, it is suggested that a study be developed that focuses on how to activate stress coping behavior by methods that involve pursuing social support.
Are Parents’ Ratings and Satisfaction With Preschools Related to Program Features?
This study examines whether parents’ overall satisfaction with their child’s early childhood education (ECE) program is correlated with a broad set of program characteristics, including (a) observational assessments of teacher-child interactions; (b) structural features of the program, such as teacher education and class size; (c) practical and convenience factors (e.g., hours, cost); and (d) a measure of average classroom learning gains. It then describes associations between parents’ evaluation of specific program characteristics and externally collected measures of those features. Leveraging rich data from a sample of low-income parents whose 4-year-olds attend publicly funded ECE programs, we find little correspondence between parents’ evaluations of program characteristics and any external measures of those same characteristics. We discuss policy implications, especially in light of recent federal and state informational initiatives, which aim to help families make informed ECE choices.
However, parents' high levels of self-reported satisfaction do not necessarily imply that parents are inaccurate assessors of program quality, nor does it guarantee that informational interventions will lead to improvements. If parents are choosing lower-quality programs because high-quality programs are nonexistent, oversubscribed, or too costly, then information alone will likely prove ineffective. It could be that parents, particularly low-income parents who have less flexibility in choosing care, have chosen the best care that was available to them and rate their programs in comparison to other local options.
Parents' satisfaction may also be driven by parents' accurate evaluation of factors, such as location, hours, cost, or other program features, that they find important but are not typically explicitly included as quality measures. To date, studies of parents' evaluations of child care programs have compared parent and researcher ratings, using tools specifically designed by researchers to capture the ECE learning environment (e.g., Cryer & Burchinal, 1997;Helburn & Howes, 1996;Mocan, 2007). It may be that the aspects of program quality that these tools capture are not the ones parents consider most central when selecting programs for their children or when evaluating their child's care setting.
Existing research has not examined how a broader set of program characteristics, including factors such as cost, location, and hours, relates to parents' satisfaction with their child's ECE program, nor have existing studies examined whether parents are able to evaluate ECE programs on key dimensions that may drive decision making, particularly among relatively constrained low-income parents. This is an important gap in the literature. To design effective information systems, policymakers need a clear understanding of the program features that drive parents' satisfaction with their care setting, and whether parents are already able to accurately assess those features. Given the targeting of many of these interventions toward low-income families (e.g., the Child Care and Development Block Grant serves families that receive child care subsidies; in many states, participation in quality rating and improvement systems [QRIS] is required only for programs receiving public dollars), it is particularly important to answer these questions among low-income families, who face different choices and constraints than their higher-income counterparts.
This study aims to fill these gaps using data from a sample of low-income families whose children attend publicly funded ECE programs in Louisiana (Head Start, prekindergarten, and subsidized child care). We test whether parents' satisfaction with their child's program is predicted by a broad set of program features, including (a) observational measures of process quality (e.g., measures of teacher-child interactions), (b) structural quality measures typically included in QRIS systems (e.g., teacher education and experience), (c) measures of program convenience (e.g., hours of operation), and (d) average classroom learning gains. We also explore to what extent parents' evaluations of specific program features (e.g., warmth, convenience, etc.) are related to these measures. This study will inform both policymakers seeking to design effective informational interventions and researchers looking to understand how parents evaluate the quality of their child's ECE program.
The Promise of Information Interventions in ECE
Providing consumers with accessible information about the quality of service providers can lead to changes in their behavior and to quality improvements in the rated organizations. These types of informational interventions have been effective across a variety of settings, ranging from hospitals (Dafny & Dranove, 2008; Jin & Sorenson, 2006; Pope, 2009) to restaurants (Jin & Leslie, 2003; Wong et al., 2015). In the K-12 sector, experimental evidence indicates that parents shift their school choices in response to easy-to-understand school quality information (Hastings & Weinstein, 2008) and that, in turn, their children's outcomes improve. Quasi-experimental research also suggests that publication of school report cards leads students to leave poorly evaluated schools (Friesen, Javdani, Smith, & Woodcock, 2012; Hanushek, Kain, Rivkin, & Branch, 2007; Koning & Van der Wiel, 2013).
This literature suggests a potential role for information interventions in ECE, and indeed some initial experimental work shows promise (Dechausay & Anzelone, 2016). However, the ECE market differs from the K-12 setting in ways that may influence the potential impact of information interventions. In particular, relative to K-12, the ECE market is characterized by far greater variability with respect to hours of operation, cost (e.g., between free Head Start and prekindergarten programs and private child care centers that generally require a fee, even when subsidized), available transportation, and other logistical factors. These differences may be particularly relevant for low-income families, who likely face fewer choices than their higher-income counterparts and have more binding constraints due to work schedules, transportation issues, and other logistical concerns, a gap that is likely greater for ECE than for K-12 decision making. Indeed, although parents report that they value warm, safe, and engaging ECE programs (Barbarin et al., 2006; Bassok, Magouirk, Markowitz, & Player, 2017; Chaudry et al., 2011; Cryer & Burchinal, 1997; Meyers & Jordan, 2006; Rose & Elicker, 2008; Shlay, 2010), research indicates that their actual choices, especially among low-income families, are oftentimes driven by practical concerns around affordability, location, and convenience (Chaudry et al., 2011). Parents constrained by practical factors may focus primarily on those factors and be less responsive to the quality measures typically included in informational interventions, such as QRIS.
Parents' Satisfaction With and Evaluation of Their Child's ECE Program
Parents tend to report being highly satisfied with their child's ECE program (Helburn & Bergmann, 2002;Meyers & Jordan, 2006). For example, a recent nationally representative poll of families with children under 5 reported that 88% of parents rate their child's program as "very good" or "excellent" (National Public Radio, 2016). Similarly, data from a representative survey of families using child care in Minnesota revealed that 86% of parents would "always" choose the same program again (Chase & Valorose, 2010). This pattern persists even among low-income samples, for whom program quality tends to be lower. For example, Raikes, Torquati, Wang, and Shjegstad (2012) reported that 74% of their sample of subsidy-receiving mothers rated the overall quality of their child's program as "perfect" or "excellent." Similarly, Van Horn, Ramey, Mulvihill, and Newell (2001) found that nearly all mothers in their sample of subsidy recipients reported being highly satisfied with their current ECE program.
These high levels of satisfaction are consistent with the K-12 literature, which also finds that parents rate their children's schools highly (Education Next, 2016). However, parents' high ratings of ECE programs are incongruent with the low levels of quality in many ECE programs, particularly those serving low-income students, as measured using researcher-developed observational tools focused on the classroom environment. For example, Burchinal and colleagues (2010) reported that 87% of publicly funded preschool classrooms have levels of instructional support that are too low to promote learning.
There are a number of plausible explanations for this misalignment. The first is that parents' satisfaction with their program may be driven by features that are not typically included in researchers' definitions of quality. A parent may, for example, rate a program highly because it is close to their workplace, offers long hours, and provides two meals daily. This type of "functional quality" is distinct from "quality" as typically measured by researchers or included in QRIS systems. In this scenario, parents' high levels of satisfaction may reflect accurate evaluations of the aspects of ECE that are most salient to them. Existing research has not explored this possibility.
A second possible explanation for the high levels of parents' satisfaction is that parents are poor assessors of quality. Theoretical work suggests that ECE markets are characterized by imperfect information (Blau, 2001;Mocan, 2007;Morris, 1999). Most parents do not spend enough time in an ECE program to accurately evaluate program quality, and young children are unreliable reporters of program quality. Instead, parents rely primarily on recommendations from family and friends, and on program features that may be easy to discern but are weak indicators of quality (Forry, Isner, Daneri, & Tout, 2014;Layzer, Goodson, & Brown-Lyons, 2007;Meyers & Jordan, 2006;Mocan, 2007). This asymmetry of information may result in adverse selection in the ECE market and the provision of lower-quality ECE than is optimal (Mocan, 2007;Morris, 1999). If this is the case, informational interventions may prove particularly promising.
An existing body of empirical research has directly explored parents' ability to evaluate ECE programs by first asking parents to rate their child's program using an observational measure that aims to capture aspects of the classroom environment and then comparing these parent ratings with those completed by trained observers on identical scales. These studies consistently indicate that parents rate the quality of their child's ECE program more highly than do trained observers (Barros & Leal, 2015;Grammatikopoulos et al., 2014;Helburn & Howes, 1996). For example, Cryer and Burchinal (1997) demonstrated that when parents and trained observers both use the Early Childhood Environment Rating Scale (ECERS; Harms, Clifford, & Cryer, 1998), parents rated program quality 6.07 (out of 7), whereas the trained observers rated the same programs as 3.47.
Parents' inflated ratings of the items included in these observational scales, in conjunction with parents' high levels of satisfaction, are often seen as evidence of their inability to discern between low- and high-quality programs, an assumption that has, in part, driven the proliferation of informational interventions in ECE markets. However, the fact that parents rate program features more highly than trained observers does not necessarily indicate a problem accurately assessing quality. One possibility is that parents are accurate evaluators of their child's ECE setting, but their sense of guilt or anxiety around leaving a young child in anything but a high-quality program may keep them from characterizing their child's program as low quality when responding to surveys (Lamb & Ahnert, 2006). Another possibility is that parents' inflated ratings of ECE programs may still be correlated with observer ratings. The evidence on this is mixed. Several studies find only modest correlations between parent evaluations and trained raters (Barros & Leal, 2015; Cryer, Tietze, & Wessels, 2002; Torquati, Raikes, Huddleston-Casas, Bovaird, & Harris, 2011). However, Mocan (2007) used the same data as some of these earlier studies and demonstrated that parents' ratings do parallel those of trained observers after scaling for overestimation.
Other recent studies also provide evidence that parents' ratings can align with externally collected measures. For instance, Araujo, Carneiro, Cruz-Aguayo, and Schady (2016) report that Ecuadorian parents' ratings of kindergarten teachers are correlated with both the average value-added score of the teacher's classroom and the teacher's score on the Classroom Assessment Scoring System (CLASS; Pianta, La Paro, & Hamre, 2008), a widely used tool for measuring the quality of teacher-child interactions. Similarly, in the U.S. K-12 sector, Chingos, Henderson, and West (2012) find that parents' assessments of school quality are strongly related to average test scores.
Limitations of the Existing Literature
The existing research on both parents' satisfaction and their ability to accurately evaluate ECE quality is limited in a number of important ways. First, as noted above, research shows that low-income families are constrained by cost and convenience considerations when selecting ECE for their children (Forry et al., 2014;Forry, Simkin, Wheeler, & Bock, 2013;Grogan, 2012;Kim & Fram, 2009;Rose & Elicker, 2008). For example, families who work full-time or who have multiple employment settings may find half-day ECE programs or programs that require substantial additional commute time frustrating or ultimately untenable. However, no studies we are aware of have examined whether parents' satisfaction with ECE is related to program characteristics such as their location, cost, or other practical features. If parents' program satisfaction is tightly linked to objective measures of cost and convenience, and parents are able to accurately evaluate these program features, it may be that parents are already identifying and using the features of quality that are most relevant for their choices. If this is the case, informational interventions may not change parents' ECE decisions.
Second, and relatedly, nearly every study that compares parent evaluations with trained observers uses the Environmental Rating Scales (ERS; Harms et al., 1998). These scales represent one widely used measure of ECE quality. However, a number of recent studies have raised questions about the ERS (Gordon, Fujimoto, Kaestner, Korenman, & Abner, 2013;Hofer, 2010;Layzer & Goodson, 2006;Perlman, Zellman, & Le, 2004). Even assuming that the ERS scales accurately capture quality, they may not measure all the aspects of quality that are most salient in the decision making of families, particularly, low-income families.
Third, most existing studies that directly compare parents' evaluations of quality with observed quality were conducted prior to the rise of publicly funded prekindergarten and underrepresent low-income families (Cryer et al., 2002;Cryer & Burchinal, 1997) or were conducted overseas (Araujo et al., 2016;Barros & Leal, 2015;Grammatikopoulos et al., 2014). There are no studies that reflect the current early childhood landscape and, particularly, the diverse set of preschool options available to 4-year-olds (e.g., Head Start, state prekindergarten, subsidized child care).
Present Study
In this study we address two research questions. First, to what extent is overall parental satisfaction with a child's ECE program related to a wide range of specific program features? Second, to what extent are parents' evaluations of specific program features aligned with external evaluations of those features?
This study is the first we are aware of to examine the correlates of parental satisfaction with ECE. It improves on the existing literature in several ways. The primary one is that we consider a far more comprehensive set of program characteristics than earlier studies exploring either parental satisfaction or parental evaluation of program features. Our study includes program features commonly used in QRIS and other informational interventions (e.g., measures of teacher-child interactions, teacher education, and opportunities for parental involvement) as well as measures not typically included in QRIS, such as aspects of convenience (e.g., hours, sick care) and measures of children's learning gains on direct assessments, with particular attention to features that may be salient for low-income families.
A second contribution of the current work is that we leverage a much more recent sample of providers serving primarily low-income families and that our sample includes the full range of available publicly funded preschool programs, including Head Start, state-funded prekindergarten, and subsidized child care settings. By providing a broader, more current exploration of parental satisfaction with and evaluation of ECE programs in a sample that is often the target of informational interventions, the study aims to inform the design of policies intended to help families make informed ECE decisions.
Data and Sample
Data were collected during the 2014-2015 school year as part of a larger study examining efforts to improve quality in Louisiana's ECE system. Five Louisiana parishes were included in the study and were selected from 13 parishes that were part of a "pilot year" for a new QRIS in Louisiana. The five parishes were chosen to maximize regional diversity and include both urban and rural communities. Within parishes, all ECE programs were eligible if they (a) were participating in the state pilot (which included all Head Start and prekindergarten programs and a portion of child care programs that accepted subsidies) and (b) included classrooms that primarily served typically developing 4-year-old children. We selected 90 programs across the five parishes, with probability of selection in each parish proportional to the total number of programs in that parish relative to the total number of programs across all five parishes. Within parishes, we randomly selected a stratified sample of Head Start programs, prekindergarten programs, child care centers, and Nonpublic Schools Early Childhood Development programs (NSECD), which are nonpublic ECE settings that accept state funding for low-income children. Within each program, all teachers of classrooms serving primarily typically developing 4-year-olds were randomly ordered, and the first teacher from each program was contacted. Once a teacher agreed to participate, all parents and children from that classroom were recruited to respond to surveys and for direct child assessments. Response rates were moderate to high. The director survey response rate was 94%, the teacher survey response rate was 98.8%, and parent survey response rates were 78% in the fall and 54% in the spring.
The sample for this study was drawn from the 906 parents who responded to the spring survey, which measured parents' assessments of the quality of their program. In order to explore patterns within a fixed sample of parents, we restricted our analysis to parents whose children were in classrooms with valid information on all quality measures. From these parents, two samples were constructed. The first was an "overall satisfaction sample" (n = 636) that included parents who responded to both items assessing their overall satisfaction with their child's program (see below) as well as all child- and family-level covariates. The second sample included all parents who evaluated all individual program features and also indicated which two program features they liked most (n = 566). Families in the study were predominantly low income: 57% reported annual income less than $25,000, and most parents (85%) did not have a bachelor's degree (see Table 1). About two thirds of the children in the sample were Black.
Measures
Parents' Satisfaction. Information on parents' satisfaction with their ECE program was drawn from the spring parent survey. Parents responded to two items about their overall satisfaction with the program: "Overall, how satisfied are you with the child care/preschool program you selected for your child?" and "How likely would you be to choose this child care/preschool program if you had to do it again?" These items were scored on a 4-point Likert scale ranging from not satisfied/not likely to very satisfied/very likely. Consistent with previous research exploring parents' satisfaction with ECE settings, we found high levels of satisfaction (see online Appendix A, Table A1). These two items were dichotomized, such that a 1 indicates that parents were "very satisfied" and "very likely" to choose the program again; 0, otherwise.
Parents' Evaluations of Program Features.
In the spring, parents were asked to evaluate seven specific features of the care setting: opportunities to learn academic skills, opportunities to learn social skills, warm/affectionate caregivers, a clean and safe environment, convenient hours, convenient location, and affordability. Parents were asked how much they agree that their current program provides each feature of care, scored on a 4-point Likert scale ranging from strongly disagree to strongly agree. Sample items included "My child's main caregiver/teacher is warm and affectionate" and "This child care/preschool program is affordable for my family" (see online Appendix A for a full list of items).
Like the satisfaction items, there was limited variability in parents' responses to the evaluation items. Just 3% to 5% of parents chose either of the two bottom categories (see online Table A1), consistent with previous literature documenting that parents tend to evaluate their care settings highly. Because over 90% of parents selected either of the top two responses (agree or strongly agree), the variation was primarily between parents who indicated they "agree" and those who "strongly agree." As such, these items were coded dichotomously such that a 1 indicates strong agreement, and 0 indicates all other responses.

Finally, parents were asked to consider six program characteristics similar to the ones discussed above (e.g., learning, teacher-child interactions, convenient location and hours) and identify the two they "liked most" about their child's program. These questions were recoded into a series of six nonmutually exclusive dummy variables in which 1 indicates the feature was one of the two that parents liked the most and 0 otherwise (see online Appendix A). Responses to these items capture a combination of parental satisfaction and evaluation because identifying the "best" features requires some evaluative process. Based on earlier studies, we anticipated there might be little variation in parents' reports of both overall satisfaction and their evaluations of specific program characteristics. As discussed above, parents may feel some internal or external pressure to indicate their young child is in a "good" ECE program. We included these "favorite" items on the survey to give parents an opportunity to endorse certain program characteristics without criticizing others.
Observed Program Characteristics. We considered a broad set of program characteristics, including measures of (a) observational assessments of process quality, (b) structural features of the program, (c) practical and convenience factors, and (d) a measure of average classroom learning gains (see Table 1).
Observational assessments of process quality. Process quality was assessed using CLASS (Pianta et al., 2008), a well-validated, widely used classroom observation tool that measures the quality of teacher-child interactions. For example, 18 states use CLASS as part of their QRIS, and Head Start uses CLASS as part of its professional development and quality monitoring.
On average, classrooms were observed four times for 40 min per visit over the course of the school year by trained CLASS observers, in accordance with best practice. Previous research demonstrates that teacher-child interactions can be organized into three broad domains: instructional support, emotional support, and classroom organization (Hamre et al., 2013). Instructional support includes concept development, quality of feedback, and language modeling; emotional support includes positive climate, negative climate, teacher sensitivity, and regard for student perspectives; and classroom organization includes behavior management, productivity, and instructional learning formats. These dimensions were each scored on a 7-point scale and averaged to create domain scores. This study considered both the overall CLASS score and the three domains. CLASS codes demonstrated a high level of reliability. Fifteen percent of observations were double-coded by two data collectors, and intraclass correlations (ICC) indicated high levels of agreement between coders (emotional support, ICC = .812; classroom organization, ICC = .878; instructional support, ICC = .883; total score, ICC = .902). Moreover, internal consistency was strong, with Cronbach's alphas ranging from .77 to .96.
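Inter-rater agreement of this kind is conventionally quantified with an intraclass correlation computed on the double-coded observations. As one illustration (not the authors' code), the pingouin package computes the standard ICC variants from long-format data; the DataFrame and column names below are hypothetical placeholders.

```python
import pingouin as pg

# double_coded is assumed to hold one row per (classroom, coder) pair, with
# columns identifying the observed classroom, the coder, and the domain score.
icc = pg.intraclass_corr(
    data=double_coded,
    targets="classroom_id",      # the rated units
    raters="coder_id",           # the two data collectors
    ratings="emotional_support", # one CLASS domain at a time
)
print(icc[["Type", "ICC", "CI95%"]])
```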
Structural features.
Measures of structural quality typically used in QRIS included teacher reports of their years of experience teaching children younger than kindergarten age, teacher-reported highest level of education (less than a BA, more than a BA, with BA omitted), and class size (teacher report of classroom enrollment on the first day of school). We also include an indicator for whether the program has regular opportunities for parental involvement (defined as more than four opportunities) because 90% of QRIS rating systems include measures of family involvement.
Practical and convenience features. Convenience features were drawn from the director survey and included a continuous measure of the average length of the school day across all weekdays; indicators for whether the program provides summer care, transportation, or sick care; a continuous measure of the number of services that the program provides for children (i.e., health screenings, developmental assessments, therapeutic services, counseling services, and social services); and an indicator of whether some families need to pay to attend the program.
Average classroom learning gains. Parents may be more satisfied with an ECE setting when they observe their children making noticeable developmental gains. Although we do not have assessment data for all children in our sample, 12 children from each study classroom were selected at random for direct assessments on a series of widely used measures of math, literacy, and executive function by a trained researcher. Assessments occurred in the fall and spring of the preschool year. As a proxy for children's learning in the ECE setting, we generated average classroom gains by averaging the child-level residuals from individual regressions of each of the six spring assessments (described below) on each corresponding fall assessment.
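Concretely, the procedure amounts to regressing each spring score on its fall counterpart, standardizing the child-level residuals, and averaging them within classrooms. The following sketch shows one way to implement it; the DataFrame assess and the assessment column names are hypothetical placeholders, and only three of the six measures are shown.

```python
import statsmodels.formula.api as smf

gain_cols = []
for skill in ["applied_problems", "ppvt", "tople"]:  # placeholder measure names
    fit = smf.ols(f"{skill}_spring ~ {skill}_fall", data=assess).fit()
    # Standardized residual: the spring score net of what the fall score predicts.
    assess[f"{skill}_gain"] = fit.resid / fit.resid.std()
    gain_cols.append(f"{skill}_gain")

# Average the child-level residuals within classrooms, then across measures,
# to form the classroom-level learning-gain proxy (mean zero by construction).
classroom_gains = assess.groupby("classroom_id")[gain_cols].mean().mean(axis=1)
```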
Children's math skills were assessed using the Applied Problems subscale of the Woodcock-Johnson (Woodcock, McGrew, Mather, & Schrank, 2001); literacy was assessed using the Peabody Picture Vocabulary Test (Dunn & Dunn, 1997) and the Test of Preschool Early Literacy (Lonigan, Wagner, Torgesen, & Rashotte, 2007); and executive function was assessed using the Head-Toes-Knees-Shoulders (HTKS) task and a pencil tap task (Blair, 2002; Diamond & Taylor, 1996). The HTKS asked children to inhibit a dominant response (touching their head or toes when asked by an adult) in favor of a nondominant response (touching the opposite of what had been previously instructed) and is thus linked to the inhibitory control, working memory, and cognitive-flexibility executive-function domains (McClelland et al., 2014). The pencil tap task asked children to respond to various pencil tap sequences, varying the sequence to require children to use both working memory and cognitive-flexibility skills.
Covariates. We estimated models both with and without demographic covariates. Covariates were included to account for child and family characteristics that may be correlated with program characteristics, parent satisfaction, and parent evaluations of specific program features. These included child age, gender, and race as well as parent education (coded as a four-level categorical variable: less than a high school education, high school education, some college, with bachelor's degree or more as the omitted category), and a seven-category measure of family income.
Analytic Strategy and Hypotheses
We ran linear probability models in which we regressed the two measures of overall satisfaction, the seven specific parental evaluation items, and the six "most liked" items on each observable program characteristic individually. These models allowed us to explore which program characteristics are most highly associated with parental satisfaction and also whether parents' evaluations of specific program characteristics correlated with external measures of those same features. We hypothesized that parents' satisfaction with their program would be particularly correlated with practical program characteristics, such as location, hours, and cost. We also hypothesized that parent evaluations of a specific aspect of program quality would be more highly correlated with closely corresponding measures of quality. For example, we expected parental evaluations of convenience to be associated with hours of operation, provision of sick care and summer care, and transportation. We expected evaluations of affordability to be linked to whether some families have to pay for the program as well as the number of enrolled children in the classroom, as that is a likely driver of program price. We hypothesized that parents' evaluations of warmth would be correlated with CLASS and teacher education and experience. Finally, we posited that parental evaluations of learning or academic skill provision at the program would be correlated most strongly with CLASS instructional support, teacher education and experience, and average student learning gains.
We also ran models in which we regressed each evaluation measure on the full set of program features to explore how much variation is explained by this extensive set of quality measures. If a significant proportion of the variance in any of our satisfaction or evaluation measures were predicted by the observed program features, this would provide evidence that parents use program features to evaluate their child's ECE program. All models were run with and without controls for child and family characteristics, and all standard errors were clustered by program. Continuous independent variables (CLASS domains, teacher experience, number of children in the classroom, hours of operation, number of services provided, and learning gains) were standardized; dichotomous variables were entered in 0/1 form. Results were not sensitive to the use of linear probability models as compared to logit models.
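Each cell of these analyses therefore reduces to a bivariate linear probability model with program-clustered standard errors. The sketch below shows one such regression in statsmodels; the DataFrame df and variable names are hypothetical placeholders rather than the study's actual data.

```python
import statsmodels.formula.api as smf

# Standardize one continuous program feature (here, the overall CLASS score).
df["class_total_z"] = (
    df["class_total"] - df["class_total"].mean()
) / df["class_total"].std()

# Linear probability model: the dichotomized satisfaction indicator regressed
# on the standardized feature, with standard errors clustered by program.
lpm = smf.ols("very_satisfied ~ class_total_z", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["program_id"]}
)
print(lpm.params["class_total_z"], lpm.bse["class_total_z"])
```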
Descriptive Statistics
Table 1 provides descriptive information for the programs in the sample. Programs had, on average, moderate CLASS scores, with low levels of instructional support, consistent with national CLASS data (Burchinal et al., 2010). The modal teacher had a bachelor's degree, and average teaching experience was 11.27 years (though 45% of teachers had 5 or fewer years of experience). On average, programs operated for 8 hr a day, and all children were in programs that offered services for at least 7 hr a day. Most of the sample attended a program that was free for all attendees (83%).
Parental Satisfaction
Table 2 provides descriptive information on parental satisfaction across three sets of measures. Overall levels of parent satisfaction were high, consistent with previous literature. Nearly 70% of parents were "very satisfied" with their program and reported being "very likely" to choose their program again. There was more variability in parents' evaluations of specific program features, although ratings remained high. For example, roughly 75% of parents "strongly agreed" with individual statements that their program supports academic (79%) and social (75%) development; has a warm and affectionate caregiver (75%); offers a clean, safe environment (73%); and is affordable (74%). A smaller percentage strongly agreed that their program has a convenient location (69%) or that it offers convenient hours (63%).
There was more substantial variation in parental satisfaction when measured with the "most liked" items. For example, although 79% of the sample reported that "helping the child learn" was one of their two favorite program features, just 44% reported that teacher-child interaction was among their favorite features, 12% selected program environment, 23% selected convenience, and just 6% selected affordability.

Associations Between Program Features and Parent Evaluations

Table 3 presents both unadjusted (Model 1) and covariate-adjusted (Model 2) relationships between parents' overall satisfaction with their child's program and individual measures of program quality. These models showed no consistent relationship between any program feature and parents' overall satisfaction with their program, with no substantive differences across models. Just two coefficients (of 64) were statistically significant at conventional levels, a finding that did not exceed what would be expected by chance. The adjusted R² values from the saturated models, in which we regressed overall satisfaction on all quality measures simultaneously to assess the proportion of the total variance in satisfaction explained by our full set of quality measures, were quite low across all four models. Three percent of the variation in parental satisfaction was explained by our program features, and this rose to just 5% with the addition of demographic covariates (see bottom panel, Table 3).

Table 4 presents the unadjusted relationships between individual measures of program quality and parent ratings of specific program features, as well as adjusted R² values from models that include all program features simultaneously. Covariate-adjusted models are presented in online Appendix B, Table B1 and are not substantively different from the unadjusted models. Table 4 reveals limited associations between observed program features and parental ratings of specific program features. Only six of 112 (~5%) associations between program characteristics and parent ratings of specific aspects of quality were statistically significant, and in most cases, patterns did not align with hypothesized relationships. None of the program characteristics were associated with parents' satisfaction with their child's learning. Only an indicator for whether the program provided summer care was significantly (although negatively) related to parents' assessments of teacher warmth. Programs that provided transportation were rated as somewhat less clean and safe and less convenient. Parental ratings of affordability were positively associated with the number of children in the classroom and negatively associated with attending a program that was not free. In models that simultaneously accounted for all quality measures, program features explained very little of the variation, just 2% to 6%, in any of the evaluation measures (see bottom row of Table 4 for R²).
Although we explored relationships between all observed program features and all parental ratings, we did not expect associations among all variables. Instead, we expected that parent evaluations of specific aspects of ECE quality would be more tightly linked with measures of quality that were more closely aligned. Figure 1 presents a visual representation of the relationships we hypothesized would be stronger based on previous research and theory overlaid with the findings from the present study. As indicated by the highlighted cells, just two of the hypothesized relationships were observed in the present data, both of which linked classroom features to affordability. Specifically, number of children in the classroom was positively correlated with affordability, and the dichotomous indicator that some families at the program had to pay for care was negatively correlated with affordability.
Finally, Table 5 shows results from unadjusted regression models in which we predict whether a parent indicated a particular program feature was one of the two features they liked most. Online Appendix Table B2 presents these models with the addition of covariates; findings do not substantively differ. There were 11 statistically significant coefficients across 96 estimates, somewhat more than expected by chance. However, as above, these observed measures of quality predicted very little of the variation, just 3% to 5%, in any "most liked" item. Figure 2 summarizes the expected and observed associations between the quality measures and the "most liked" items. As indicated by the highlighted cells, three of the hypothesized relationships were observed in the data. Parents were more likely to list learning as one of their two favorite features in classrooms where teachers held more formal education. They were also more likely to list convenience as one of their two favorite features in programs that provided longer hours. Finally, they were less likely to choose the learning environment as a favorite in classrooms with greater numbers of children.
Many hypothesized relationships were not supported by the data, and some relationships that were not hypothesized emerged. Some of the nonhypothesized associations seem plausible, specifically those related to affordability. For example, although not hypothesized, the provision of summer care and length of day might be positively associated with affordability because these features prevent parents from having to purchase additional care. Similarly, CLASS scores and teacher education might be negatively associated with affordability because these are features of higher-quality, and therefore potentially more expensive, programs. Thus, although limited, there were modest correlations between program features and parents' most-liked program features. Notably, however, these correlations clustered around easy-to-observe features (e.g., affordability, convenience) rather than more-difficult-to-observe features (e.g., quality of learning or interactions).
Discussion
This study uses unique data, including multiple parental assessments of their child's ECE program and extensive information about program features, to provide new insights about parental satisfaction in ECE as well as parents' ability to evaluate specific features of their child's ECE program in a low-income sample. We find that none of our extensive set of program characteristics was related to parents' overall satisfaction with their child's ECE setting. This pattern is somewhat surprising and counter to our hypotheses that factors like cost and convenience would relate to broad satisfaction measures. Further, in models that included all 15 of our observed program characteristics, we predicted less than 5% of the variance in overall parental satisfaction. It is not clear whether the low explanatory power of our models is due to our omitting key factors that are essential to parents or whether we identified the relevant factors but did not measure them with enough precision, a point we return to below. We also find little evidence that parents' evaluations of specific program characteristics correlated with external measures related to those same constructs. Given that the parental evaluation items were drawn from a survey administered in the spring of the child's preschool year, and that they therefore reflect parents' summative assessment after observing their child's experience in the classroom for a full school year, we expected stronger alignment. That said, these results echo earlier research (Cryer et al., 2002; Mocan, 2007) that demonstrated weak correspondence between observed quality and parental assessments but did not include the program features that are commonly hypothesized to be most salient for low-income parents (e.g., cost and convenience).
Notably, a slightly stronger pattern of significance did emerge in models predicting parents' selection of a program feature as "most liked." These items are unique to this study and were designed to provide parents with the opportunity to express a preference for one feature over another without having to denigrate their child's program. In models that predicted these outcomes, there is some limited indication that parents are able to accurately evaluate relatively easy-to-measure program features, such as hours of operation, number of children, and teacher education ( Figure 2). Still, taken together, the current findings suggest little correspondence between low-income parents' evaluations of program characteristics and external measures of those same characteristics.
The study makes a number of contributions. First, it is the only analysis we are aware of that examines the correlates of parental satisfaction with their child's ECE programs. Second, we consider a much broader range of program characteristics than have other studies of parents' ability to evaluate ECE. We include items commonly included in QRIS (e.g., teacher education, number of children in the classroom, opportunities for parent involvement), observations of teacher-child interactions (CLASS), and measures of student learning gains based on direct assessments. Importantly, we also include measures of cost and convenience, which are particularly salient for low-income families making ECE choices but have been absent from previous research on their satisfaction or evaluation (Forry et al., 2013, 2014). Finally, the study relies on data from a sample of low-income families in a diverse set of publicly funded ECE programs, an important contribution given that earlier studies rely on data that are decades old or underrepresent low-income families.

The results suggest a potentially promising role for informational interventions, which have proven effective in some K-12 settings (Friesen et al., 2012; Hanushek et al., 2007; Hastings & Weinstein, 2008). In fact, the complexity of the ECE choice, along with parents' difficulty evaluating ECE quality, suggests that there may be a particularly large role for informational interventions. Indeed, previous research suggests parents would be willing to use QRIS information specifically in their care choices (Elicker, Langill, Ruprecht, Lewsader, & Anderson, 2011; Starr et al., 2012; Tout, Isner, & Zaslow, 2011). Chase and Valorose (2010) report that 88% of their sample of Minnesota parents would find a QRIS "very helpful" (53%) or "somewhat helpful" (35%), a proportion that was higher among low-income parents (61% say "very helpful" as compared with 45%).
Informational interventions may be helpful to parents because they make comparison shopping easier, especially for low-income parents, who may have little time to research or visit ECE alternatives. Indeed, previous research suggests that parents engage in little to no searching for ECE. Anderson, Ramsburg, and Scott (2005) report that 75% of their sample of subsidy-receiving parents considered just one program. Layzer and colleagues (2007) report that 41% of parents make their ECE decision in 1 day. Thus, providing parents with easy-to-understand information about local ECE options may give parents the ability to more easily compare different programs, including those they may not have heard of from their friends or family, and make different ECE decisions.
At the same time, informational interventions will only be useful to the extent that there are programs of varying quality that are accessible to parents. That is, informational interventions will help parents make better ECE choices to the extent that parents have a choice to make-a condition that may not always be the case for low-income families. If parents' choices are driven primarily by the limited supply in their community, then providing information is not likely to shape parents' decision making.
Limitations
Our study offers evidence that parents' satisfaction is uncorrelated with a large set of program characteristics, and it also shows that parents struggle to evaluate ECE program characteristics. However, several data limitations are notable. First, our data stem from a broader study focused on the ECE experiences of 4-year-old children in Louisiana, most of whom are enrolled in publicly funded Head Start or public prekindergarten settings, and our sample includes relatively few children in subsidized child care centers. Although using a recent, low-income sample of parents of 4-year-olds is a strength, the small number of parents paying for ECE (just 17% of parents in the sample attended a program where some parents pay) limits the generalizability of the study. Head Start and public prekindergarten are not only free but also more highly regulated, which may have limited the variation in program features in the present sample. A sample that includes more subsidy recipients may provide more variability and thus yield stronger associations between program features and parent evaluations. This sample limitation should be addressed in future research.
Second, our analysis relies on parental self-report. There are various reasons why parents' true assessments of program quality may differ from what they choose to report in a survey. For instance, it may be that parents are aware that their child's ECE program is not ideal but nonetheless rate the program highly to relieve their own anxiety or to give what they perceive as the socially desirable response (Lamb & Ahnert, 2006). In ongoing work, we are examining whether parents' actions (e.g., rankings of programs during a program enrollment period) are related to program characteristics in ways that differ from those seen when using parental reports of satisfaction.
Third, it may be that there are characteristics of parents that are correlated with both the characteristics of the ECE program and parents' assessments. Online Appendix B, Tables B1 and B2, show that findings were not sensitive to accounting for family characteristics. However, these analyses do not address the possibility of unmeasured confounds.
Fourth, it is possible that the null results in this article are influenced by measurement error. Although our study includes a diverse set of program features, our measures may not sufficiently measure the underlying program characteristics of interest (e.g., hours of operation or program provision of transportation may not fully capture convenience; our measures of teacher-child interactions may not fully capture the quality of these interactions). Future research should explore additional measures that may more directly relate to parents' experiences in programs, for example, match between family work schedules and hours of operation or distance between home and program. A related measurement concern is that our measures of program characteristics are not directly aligned with our parental evaluation measures. Unlike previous work using the ECERS, which compared parent ECERS ratings with those of experts, we ask parents to evaluate features that are related to, but not directly aligned with, our observed program features.
Finally, as described in the Measures section, there is not a high level of variability in parent responses to the satisfaction and evaluation items (see online Table A1). Because of this, all variables were coded as "very satisfied"/"strongly agree" versus all other responses. The limited variability in the dependent variable substantially reduces our ability to detect associations between program features and parent ratings and is a limitation of the present study. That is, a measure that captured greater nuance in parents' satisfaction may have elicited associations between program features and satisfaction. On the other hand, it is also possible that our measure accurately captures a genuinely low level of variability, that is, that parents do not evaluate their programs differently despite different program characteristics. Future research should continue to devise new measures that could adjudicate between these possibilities.
Conclusions and Policy Implications
Nearly every state has now turned to informational campaigns as a strategy to increase the quality of ECE programs. Between 2004 and 2014, the number of QRIS in the United States quadrupled, the Race to the Top-Early Learning Challenge grants required that states prioritize providing parents with up-to-date, easy-to-understand quality rating information, and the 2014 reauthorization of the Child Care and Development Block Grant required states to improve access to information about child care quality for parents.
The present study provides support for the hypothesis that low-income parents struggle to accurately evaluate ECE programs, suggesting that informational interventions may be an effective way to shape parental decision making and improve overall ECE quality. However, our key finding, that low-income families struggle to evaluate characteristics of their child's ECE program setting even after their child has been in that setting for months, does not necessarily imply that informational interventions, which aim to help parents with this process, will be effective.
First, even if informational interventions address parents' difficulty in assessing key features of ECE programs by providing easy-to-access and accurate information, they will lead families to make different and better choices only if such choices are available. If families' decisions are constrained by limited options that meet their needs, policies that address these supply issues would be more promising than policies around information. Second, there are many unanswered questions about exactly which type of information informational interventions, such as QRIS, should provide to parents. For instance, although a goal of these initiatives is to nudge parents into selecting "higher-quality" ECE options, measuring quality at scale is challenging. Existing research shows that many of the quality measures currently included in QRIS are poor predictors of children's learning, and a growing body of QRIS validation studies has generally found no or inconsistent associations between QRIS ratings and children's outcomes (Cannon, Zellman, Karoly, & Schwartz, 2017). In addition to refining the quality measures included in informational interventions, it may be important to create systems that also provide easy-to-access information about the program features that may constrain parents' choices. For instance, Louisiana, where the current study was conducted, is currently rolling out an information portal for parents that highlights both practical program characteristics (e.g., location, eligibility, cost) and measures of classroom quality.
Third, participation in QRIS and other informational campaigns remains voluntary for ECE programs, and many QRIS have low rates of participation. Thus, even if parents have a set of local program options, quality ratings may not be available for local programs and thus may not influence parent decision making. Finally, to date, many QRIS and other informational interventions have focused more on measuring and improving program quality than on outreach to families. For instance, data from Indiana and Kentucky suggest that parents are unaware of existing QRIS (Elicker et al., 2011; Starr et al., 2012) and use QRIS at low levels, suggesting that effective informational interventions must also focus on parent outreach and provide specific, easily understandable, and relevant information.
Informational interventions have been central components in several prominent federal ECE improvement efforts. As policymakers pursue QRIS and other informational policies as a strategy for improving quality in ECE, it is important that more research address these issues and that informational systems be iteratively refined to reflect new knowledge. Future research should continue to probe the relationship between parents' preferences, choices, and evaluations using diverse measures of quality and methods of identifying parent evaluations. In particular, experimental research exploring the impact of providing parents with different types of information is a crucial direction for future research, as is continued exploration of the role of ECE supply in shaping parents' decisions.
We thank the Louisiana Department of Education for their willingness to share data for this project, and the children, teachers, and families who generously agreed to participate in this study.
Notes
1. Ten randomly selected programs declined to participate and were replaced with the next randomly selected program within parish and type. Six teachers declined to participate or were later found to be ineligible; in these cases, the next teacher on the list was contacted.
2. Although we cannot assess how the sample respondents compared with the full sample of classroom parents (we have no family data about nonrespondents), t tests comparing parents who responded in the fall and spring suggested few differences. There were no differences in parent education or child gender; however, the spring sample had a larger proportion of parents with incomes under $15,000 and a smaller proportion of missing income information than the fall.
3. We conducted specification checks to assess whether results changed (a) if we used listwise deletion and allowed the sample size to vary depending on quality measure and (b) if we restricted our sample even further to those parents who had answered both the overall satisfaction items and the individual evaluations of program features. Those results, available upon request, suggest little sensitivity across sample restrictions.
4. Results were not sensitive to the use of ordered logistic regression models that estimated whether quality measures predict belonging in each of the four potential parent response categories.
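The robustness check described in note 4 can be illustrated with a minimal sketch. The code below is not the authors' analysis; it simply shows, using simulated data and hypothetical variable names, how an ordered logistic regression over the four parent response categories could be estimated with statsmodels' OrderedModel instead of the dichotomized outcome used in the main models.

```python
# Minimal sketch of an ordered-logit specification check (note 4).
# All data and variable names here are simulated/hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # Hypothetical observed program quality measure (standardized)
    "quality": rng.normal(0, 1, n),
    # Four ordered response categories, 0 = least to 3 = most satisfied
    "satisfaction": rng.integers(0, 4, n),
})
# An ordered Categorical is the preferred endog type for OrderedModel
df["satisfaction"] = pd.Categorical(df["satisfaction"], ordered=True)

# Ordered logistic regression: does the quality measure predict
# membership in each of the four parent response categories?
model = OrderedModel(df["satisfaction"], df[["quality"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```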
Hobsbawm in Trinidad: understanding contemporary modalities of urban violence
ABSTRACT Eric Hobsbawm's milestone work Bandits is attentive to the rural poor and situates social banditry within the world of peasant resistance, but his concepts are surprisingly adaptable to contemporary urban settings. Drawing on Hobsbawm's conceptualisation of social banditry and avengers, this article examines the perspective of gangs who perceive themselves as victims of inequality, poverty and capitalism; who serve as social actors and security providers for their communities; and who at the same time engage in cruelty and high levels of violence and terror. This qualitative study is based on fieldwork undertaken in Trinidad and Tobago. Findings show that Hobsbawm's figure of the avenger contributes to a better understanding of the contemporary modalities of urban violence and helps to unpack and characterise the ambiguity of the relationship between gangs and local communities.
Introduction
Eric Hobsbawm's 1969 milestone work Bandits is attentive to the rural poor and situates social banditry within the world of peasant resistance, but his concepts are surprisingly adaptable to urban settings. Urban violence represents one of the most significant security challenges for citizens and governments. Security challenges in cities are complex, ranging from organised crime to political and economic unrest. Latin America and the Caribbean have the highest rates of violence worldwide. 1 Many growing cities in Latin America and the Caribbean are witnessing a sharp escalation of various forms of urban violence, and some cities have lost control of parts of their territory to non-state violent actors such as street gangs, drug lords or armed groups. By wielding extra-legal power, these groups create a parallel structure of social norms and law and order. Low-income neighbourhoods, often referred to as favelas, barrios, slums, no-go zones or hot spots, are often the arena of urban violence related to criminal gangs. Gangs commonly occupy these 'uncontrolled spaces of a "world of slums"' and have become 'permanent fixtures in many ghettos, barrios, and favelas across the globe'. 2 The relationship between gangs and their local communities is an ambiguous one: local residents face a high level of everyday violence, fear and insecurity, yet a number of scholars have found an intimate relationship based on the provision of protection, social services, jobs and financial support. This relationship has drawn the attention of scholars, resulting in numerous studies highlighting its fundamental ambiguity. 3 It is understood that gangs are frequently intimately related to politics and take over social roles in their communities. The social roles that gangs take over do not necessarily emerge from a void or lack of state presence, but rather in 'state complicity' 4 or in 'co-existence' 5 with state authority, 'generating localized systems of order'. 6 Thus gangs are not 'unchangeably violent or terminally hostile' 7 but can constitute 'guarantors of local security', 8 'orders of violence', 9 and 'recognizable social institutions that obeyed and imposed codified rules', 10 employing a system referred to as 'jungle justice'. 11 Brotherton and Barrios find that 'the possibility of gangs emerging with their own alternative political, economic, or even cultural agenda is never given serious consideration'. 12 Taking the case of urban violence in Trinidad and Tobago, this article sheds light on the interactive sphere between gangs 13 and the communities over which they reign. In Trinidad and Tobago, a state with 1.3 million inhabitants, the homicide rate increased significantly from 9.5 per 100,000 in 2000 to 41.6 per 100,000 in 2008. 14 The national homicide rate has decreased since then, but remains the world's sixth-highest (30.9 per 100,000 in 2015). 15 While not all murders are gang-related, gangs play a major role in pushing up levels of homicide and violence. A gang war between two major groups, namely 'Rasta City' and 'the Muslims', has pushed up homicide rates and spread fear. Violence and gang activity are a nationwide phenomenon, yet are mainly found on the island of Trinidad, concentrated in the north-western part of the island where the capital city Port of Spain is located.
More specifically, violence and gang activity are concentrated in the area of Laventille and its adjacent neighbourhoods of Morvant, Beetham and Sealots along the east-west corridor, which connects the capital city with the rest of the island. The Besson Street police station, which is in charge of East Port of Spain and Laventille, accounted for 23.8 per cent of all murders in Trinidad and Tobago in 2005, pushing Laventille's homicide rate to 249 per 100,000 persons. 16 The gangs have created unofficial borders for the geographical zones they control and have restricted freedom of movement for both regular citizens and gang members. The gangs secure their borders through snipers armed with high-powered assault rifles, positioned at designated observation points. Invisible to outsiders, gang territory begins close to the capital city's major shopping street and main bus station. The gangs are also involved in drug trafficking, burglaries, robberies, prostitution, fraud and extortion, and have created their own legal companies and non-governmental organisations, receiving government contracts and dispensing jobs related to social welfare programmes.
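To make the per-100,000 figures cited above concrete, the short sketch below shows the standard conversion between homicide counts and rates. The national counts are back-calculated from the cited rates and the stated population of roughly 1.3 million, so they are estimates for illustration, not figures taken from the source data.

```python
def rate_per_100k(homicides: float, population: float) -> float:
    """Homicide rate per 100,000 residents, the convention used in this article."""
    return homicides / population * 100_000

def implied_count(rate: float, population: float) -> float:
    """Back-calculate the homicide count implied by a per-100,000 rate."""
    return rate * population / 100_000

population = 1_300_000  # approximate population of Trinidad and Tobago

# Implied national homicide counts from the rates cited above (estimates):
print(round(implied_count(9.5, population)))   # 2000: roughly 124 murders
print(round(implied_count(41.6, population)))  # 2008: roughly 541 murders
```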
Urban violence has attracted scholarly attention and preoccupies policy-makers, planners and practitioners around the world. 17 Across the interdisciplinary scholarly literature, as well as among policy-makers and non-state initiatives, how to limit, contain or prevent gang violence is a recurring challenge. Most commonly, public authorities, including police, national security agencies and special forces, as well as mainstream sociological and criminological studies, focus on youths' deviant behaviour, with a tendency to brand gangs as a national security threat, a 'new urban insurgency' 18 or a 'crime and security problem'. 19 In this scenario, states commonly address the issue of gangs by using increasingly punitive measures, based on the criminalisation of youth and the poor, and a militarisation of public space. 20 As part of this reactionary discourse, repressive policy responses have proven ineffective or even counterproductive in many cases, as they push youth towards more organised forms of criminality. 21 Thus, scholars criticise that gangs are a commonly misunderstood feature of urban life. 22 Positive activities and social roles of gangs remain an 'unappreciated' aspect in the scholarly literature and, despite numerous case studies, have not 'found a home in the dominant strands of criminological thinking about gangs'. 23 There is a need for a multifaceted perspective on gangs, as their sole criminalisation is 'intellectually dishonest and sociologically baseless'. 24 This article takes Eric Hobsbawm's notion of social banditry as a referential starting point to discuss contemporary modalities of gangs, banditry and violence in Trinidad and Tobago. The image of the social bandit, most commonly personified as Robin Hood, has proven remarkably persistent. The longevity and popularity of the Robin Hood concept is evidenced by several recent studies that use it to explain contemporary violence and urban gangs. 25 Framing the 'anti-hero' as a Robin Hood explains gangs' attachment to society and sheds light on the mechanism that enables audiences to take the gangs' side and become complicit with them. 26 In a similar manner, Gutiérrez Rivera argues that the community status of gangs has important implications: the closer the link between community residents and gangs, the higher the chances that residents tolerate gang violence. 27 While acknowledging the usefulness of the Robin Hood concept, I find that the figure of Hobsbawm's avenger has even greater potential to explain the multifaceted phenomenon of gang violence, as it combines two contradictory perspectives on gangs: as caring social actors and, at the same time, as violent actors who spread fear and terror. The conceptual combination of these contradictory perspectives is a novelty.
Drawing on Hobsbawm's conceptualisation of avengers, this article examines the perspective of gangs who perceive themselves as both products and victims of capitalism, especially of inequality and poverty; who act as social actors and providers for their communities; and who engage in cruelty and high levels of violence, through which they create fear. The conceptualisation of Hobsbawm's avenger is surprisingly adaptable to contemporary urban settings, as it goes beyond the image of the Robin Hood to explain and characterise the ambiguity of the relationship between gangs and local communities. The concept of the avenger thus facilitates a holistic understanding of gangs and presents an alternative viewpoint on how to understand and deal with contemporary urban violence.
The article proceeds in three parts. In the first, I outline the role of gangs in local communities, drawing on scholarly findings from across the world. These findings, from political science, sociology and ethnographic studies, show the ambiguity of the relationship between gangs and their local communities. I subsequently theorise this ambiguity by presenting Hobsbawm's figure of the avenger, and present my methodological approach. The second part presents the narratives of gang leaders and members and of local residents from gang-controlled areas concerning the role of gangs as social actors, security providers and defenders of the community against injustice and social neglect. The third part outlines the character of the brutality, cruelty and high levels of violence through which gangs create fear.
Urban violence
Latin America and the Caribbean have the highest homicide rate in the world. 28 Urban violence is becoming more widespread in Latin American and Caribbean cities, 29 which face a 'chronic public security crisis'. 30 Across scholars' findings from around the world, gangs pose a threat of insecurity and violence, yet their relationships with local communities are ambiguous, intimate, symbiotic and based on reciprocal trust, coercion and compliance. According to Rodgers and Baird, the relationship between gangs and communities is 'often very strong and highly organized'. 31 Gangs constitute a real yet misunderstood feature of urban violence, as they are closely linked to increasing levels of inequality and exclusion in Central America. 32 Inequality and exclusion play a major role in nurturing the opinions that generate legitimacy and support. Griffin and Persad observe that 'un-civil society groups' have emerged throughout the Caribbean due to 'a failure of the state to deliver on its core functions: providing a consistent set of public goods, including security, education, care, and basic infrastructure needs'. 33 Gangs take the opportunity to provide these services and in turn become legitimated 'community leaders' 34 or even 'functional equivalents of states'. 35 This is in line with what Harriot observes in Jamaica: the relationship with the community gives the gangs political leverage and immunity against law enforcement. 36 Gang leaders in Jamaica, referred to as dons, rely on a significant level of support from their community members, which is based on the dons' provision of social security, physical protection and employment and an 'alternative form of dispensing justice'. 37 Dons provide food, school supplies and gifts and, unsurprisingly, as Jaffe explains, present themselves as 'benevolent providers and protectors'. 38 Dons perform 'social and economic welfare roles', which in turn grant them authority among community residents. 39 Criminal gangs in Jamaica used profits made through the narcotics trade to provide socio-economic services to community members. 40 The roles that dons take over do not necessarily emerge from a void or lack of state presence, but rather in 'state complicity'. 41 Similarly, Stephenson analysed the local social order of gangs in Russia and observed that in the 1990s, gangs turned into 'agents of patrimonial power, acting as a structure of quasi-familial welfare and violent regulation' in their areas. 42 Local poor residents received financial support, hungry people were handed free potatoes, and play areas for children were set up. 43 The Russian gang leader, as Stephenson describes, invested in halting street crime and developed good relations with the police. Stephenson noticed a 'gang's penetration into the community' and a self-perception of gangs themselves as 'bastions of order and morality'. 44 Community members accepted the gangs as violent social regulators due to a lack of trust in effective state protection and distrust in institutions of law and order. 45
On Bali, South-East Asia's well-known tourist island, militia groups contest security and enjoy legitimacy at the local level in reference to the 'community in need of protection and its core values'. 46 McDonald and Wilson argue that the power of Balinese militia rule and their security provision does not indicate a limitation of state authority, owing to their strong embeddedness in local Balinese culture and their linkages to the political sphere. 47 The protective role that violent groups play contributes to an ambiguity in people's attitudes towards them, as McIlwaine and Moser observed throughout Latin America. 48 Gutiérrez Rivera found that in the 1990s, local communities in Honduras controlled by the gangs MS and M-18 did not perceive them as threats, but as 'forms of protection, from burglars, delinquents, and other threats'. 49 The 1990s Nicaraguan youth gangs, known as pandillas, were 'recognizable social institutions that obeyed and imposed codified rules', as Rodgers explains, and refrained from harming local community residents. 50 Gangs have also taken over roles as justice providers by enacting rules, prosecuting crimes and sentencing according to their own view of life. Blake finds that gangs in Jamaica became judge, juror and executor of justice at the same time, employing what he calls 'jungle justice'. 51 Jungle justice prohibits, for instance, robberies in their own communities, disrespecting the elderly and the sexual abuse of women living in the community. Hensell found something similar in the case of street gangs in Albania's urban centres. Gangs were in search of public recognition and appreciation, and felt responsible for maintaining law and order against the backdrop of a corrupt and incapable government. They imposed their own rules, such as urging gang members to refrain from sexual assaults on women and theft in their communities. 52 Yet many cases around the world indicate gangs' use of violence as a strategic means. Siegelberg and Hensell frame social orders which are based on the use or threat of physical violence exerted by violent groups as 'orders of violence'. 53 Blake points out that Jamaican dons gained significant power based on the command of armed gangs willing to use their guns, which is a 'means of creating fear and acquiring respect inside garrisons'. 54 In this way gang leaders build a relationship with the residents of their garrisons based on reciprocal trust: dons provide welfare benefits to community members, who in return afford them legitimacy and authority. 55 In the case of Honduras, gangs increased their use of violence to maintain their 'legitimate community status', as Gutiérrez Rivera observed, by threatening, mugging and extorting residents, non-residents and taxi and bus drivers. 56 Scholarly literature has so far failed to theorise the ambiguity of the two faces of gangs: as caring social actors and as instigators of cruelty and actors of excessive violence and terror. These examples from all over the world call for a thorough examination of this relationship, but scholarship has not yet produced a conceptualisation thereof. Eric Hobsbawm offers a means of addressing this contradiction by introducing the character of the avenger. Although his work on social banditry has been criticised as romanticising crime and as 'methodologically unsound, theoretically flawed, empirically limited', 57 it stimulates thinking of gangs as exercising urban social banditry.
The term 'banditry' denotes challenging the economic, social and political order by challenging those in power. 58 Banditry is thus a phenomenon linked to socio-economic and political orders, as banditry is enacted by 'outlaws' who act outside of public law, not as law enforcers. In general, social bandits are 'peasant outlaws whom the lord and state regard as criminals, but who remain within the peasant society, and are considered by their people as heroes, as champions, avengers, fighters for justice'. 59 Hobsbawm defines bandits as persons who 'resist obedience, are outside the range of power, are potential exercisers of power themselves, and therefore potential rebels'. 60 Bandits thus voice public discontent in the form of peasant protest and rebellion.
Hobsbawm outlines several types of social bandits, among which are the noble robber and the avenger. I will briefly introduce the type of the noble robber before turning to the avenger. Noble robbers, framed as Robin Hoods, start as 'victims of what they and their neighbours feel to be injustice'. 61 A noble robber acts as an agent 'of justice, indeed a restorer of morality, and often considers himself as such'. 62 The role of a noble robber is that of 'a champion, the righter of wrongs, the bringer of justice and social equity'. 63 The beginnings of a noble robber are linked to the perception of injustice, which is an important aspect of gangs as well. Hobsbawm outlines nine characteristics of noble robbers: noble robbers are (1) engaged in 'outlawry' as victims of injustice, but considered criminals by authorities. A noble robber (2) 'rights wrongs' and (3) takes from the rich to provide for the poor. A noble robber (4) only engages in killings in self-defence and revenge, (5) is attached to his community, and (6) is 'admired, helped and supported' by the community. Noble robbers (7) only die if betrayed, because the community members would not help the authorities against them, (8) are theoretically 'invisible and invulnerable' and (9) do not oppose the highest political leaders but those who directly oppress them. 64 Clearly, noble robbers are portrayed as good, morally just and heroic in nature. In reality, this one-sided portrayal is difficult to substantiate, as gang members use violence strategically to pursue certain goals. The aspects of revenge and retaliation, terror, cruelty and indiscriminate brutality are absent from Hobsbawm's notion of a noble robber. But he introduces another type of criminal: the avenger. The avenger shares many characteristics of the noble robber, but in addition avengers are not solely genuine righters of wrongs in the battle against injustice in the name of the oppressed; they also build their power by creating fear and horror. Hobsbawm cites a poem to shed light on the image of an avenger:

He killed for play,
Out of pure perversity,
And gave food to the hungry,
With love and charity. 65

The character of the avenger is highly interesting. The avenger combines the positive aspects of noble robbers with brutality, fear and power: 'terror and cruelty' are 'part of their public image'. 66 Hobsbawm explains that avengers are 'public monsters' who internalise the 'values of the "noble robber"' and are 'heroes not in spite of the fear and horror their actions inspire, but in some ways because of them'. 67 Excessive violence and cruelty are part of the character of an avenger, who lives by love and fear: if his role were based only on love, it would be a weakness, and if it were based only on fear, he would have no supporters. 'Even the best of bandits must demonstrate that he can be "terrible"', and despite, or rather because of, their 'monstrosities' they 'are and remain the heroes of the local population'. 68 Hobsbawm thus highlights the mechanism behind the use of violence to spread fear. Upholding sexual morality and punishing rapists can be part of being an avenger, though imparting terror is the dominant attribute. In contrast to the beloved Robin Hoods, '(t)o be terrifying and pitiless is a more important attribute of this bandit than to be the friend of the poor'. 69
Hobsbawm's avenger offers an understanding of contemporary urban violence on two dimensions. On a macro level, rather than seeing (urban) violence as an indicator of a state's inability to address issues of poverty, racism and exclusion, and linking the benevolent attitude of gangs to a lack of state presence or inadequate state capacity, the concept of the avenger allows us to analyse the role that violence plays in contemporary democracies. As Arias has pointed out, gangs can generate popular support and become 'resistant to state policy interventions'. 70 Understanding gangs as avengers can thus offer valuable insights into the workings of politics, gangs and local communities and explain why the phenomenon of gangs persists over time. On a micro level, viewing the ambiguity of gangs as part of the processes of socialisation of young men into crime is also central to understanding why gangs persist. Instead of viewing gang members as pathological criminals, Hobsbawm's avenger helps us to understand crime as a form of social protest and underclass revolt, while at the same time refraining from romanticising crime and violence. Acknowledging gang members as victims of social exclusion, poverty and discrimination can serve as a self-fulfilling prophecy for young men, providing them with a legitimisation for opting for 'a life of crime'. But in reality, this is only one perspective on gangs, as the scholar Hagedorn reminds us: gang members are 'real people' who react to 'conditions of poverty, racism and oppression', and at the same time their violent and destructive potential must not be romanticised or underestimated. 71 Allowing Hobsbawm's concepts to travel to Trinidad calls for attention to two aspects that do not fit: the urban setting and gangs as social bandits. In Hobsbawm's understanding, social banditry was a rural phenomenon limited to peasant outlaws; he excludes urban terrorism, gangs or robbers whose motivations were only economic. 72 He argues that rural and urban spheres are too different to be discussed in the same terms and are even antagonistic, as 'peasant bandits, like most peasants, distrust and hate townsmen'. 73 In Hobsbawm's definition, gangs are not social bandits. But Hobsbawm also argues that 'in a peasant society few can be free', as peasants are 'victims of authority and coercion' who are oppressed by 'lordship and labour'. 74 In contemporary terminology, this can be translated into social exclusion, elitism and corruption, and capitalism. The empirical findings of this study underline that Hobsbawm's concepts are adaptable to today's urban settings and very useful for understanding socially and economically excluded urban males resorting to violence and crime.
The following section applies Hobsbawm's concepts to Trinidad and Tobago as an analytical framework for understanding the relationship between gangs and their communities. I furthermore critically examine the data, based on interviews with active gang members, as gangs create narratives about themselves, a practice that gang researcher Brotherton refers to as 'myth-building'. 75 It is important to understand that the shared narratives are based on gang members' self-perceptions or community members' perceptions of gangs.
Method
This research project is an interdisciplinary, explorative case study. The goal of a case study is to understand complex social phenomena and 'retain the holistic and meaningful characteristics of real-life events'. 76 In this case study I used qualitative methods and a grounded theory approach.
The empirical data was gathered during fieldwork undertaken in Trinidad and Tobago from March to June 2015. In this study, interviews were conducted with a variety of persons, including active gang leaders and members. These interview partners are considered hard-to-reach. Researching hard-to-reach populations raises the question of which sampling technique to apply. Interview partners were selected with the use of a refined sampling technique, the successive approach. 77 It is influenced by the logic of snowball sampling and purposive sampling, by methodological insights provided by researchers who have studied hard-to-reach populations (e.g. accessibility difficulties, indefinite population), and by the particularities of conflict environments (e.g. an atmosphere of distrust and suspicion). The first step of the successive approach consists of identifying, mapping and contacting individuals who work or live with the target population of interest. This entails purposive sampling, as researchers use their judgment to identify the individuals they think will provide the best insights. These individuals are similar to experts with context knowledge and are referred to as 'periphery persons'. When sampling periphery persons, it is preferable to generate a sample with a diversity of roles, as perceptions of the research phenomenon might differ depending on the person's role (e.g. whether they are a teacher, religious representative or social worker). Thus I mapped out people who were working with or on gangs in Trinidad and contacted them. The second step of the successive approach involves contacting members of the hard-to-reach population, in this case active gang members.
Fieldwork and interviews
Carrying out fieldwork in dangerous settings has various implications, ranging from sampling and accessing the population of interest to shaping the research agenda and formulating strategies for managing potential risks. Methodological issues that affect sampling and interview techniques include, for instance, a lack of accessibility, a lack of openness due to mistrust, and security concerns for the researcher. When interviewing active criminals, in contrast to former combatants or ex-members, establishing rapport and trust with them is paramount; otherwise, the researcher's well-being is in jeopardy. Anthropologists, ethnographers, sociologists and political scientists have all experienced the challenges of conducting qualitative research and the related hazards in dangerous settings. The resulting publications have contributed to a growing body of literature, which covers 'dangerous fieldwork', 78 'danger in the field' 79 and 'physical dangers to fieldworkers'. 80 Further valuable work based on qualitative gang research and research in highly violent settings has been published by, among others, Enrique Desmond Arias, Adam Baird, Luis Barrios, Philippe Bourgeois, David Brotherton, Vanda Felbab-Brown, Jennifer M. Hazen, Dennis Rodgers, Martin Sánchez-Jankowski, Gaelle Rivard Piché and Sonja Wolf. 81 With the use of the successive approach, qualitative, semi-structured interviews and background talks were conducted with 39 persons. Interview partners included active leaders or members of gangs, former gang members and released prisoners, as well as teachers, social workers, youth workers, a pastor, a priest and an imam, youth church groups, musicians, regular community residents from gang-controlled areas, steelband players, former prison officers, police officers and special police unit officers. In addition, I had off-the-record conversations with 18 persons (14 men; four women), including two former gang members. Additionally, I took part in several events in gang-controlled areas, such as community meetings, walkabouts and bible study meetings, and visited a maximum-security prison. The interviews were recorded, transcribed with the computer programme F5, and openly coded and analysed with the help of the computer programme Atlas.ti.
Gangs in Trinidad and Tobago: caring defenders of the community
The first violent groups emerged in the area of Laventille, an area with a long and complex history of violence. Laventille is the area of origin of steel pan music. The steelband era was marked by outbursts of violence and street fights, underlining the argument that it 'always had gang violence in Trinidad'. 82 Fights between steelbands flourished in the 1950s, taking place in streets, backyards and alleyways at any point in time. 83 In the 1980s the youth of Laventille were easy prey for the mobilisation efforts of an Islamic organisation named Jamaat al Muslimeen. In the late 1980s an increasing number of youth joined under the wing of the Jamaat al Muslimeen, which went from an Islamic community organisation to 'social bandits' to providers of guns and protectors of the drug trade as 'Allah's outlaws'. 84 After the attempted coup d'état in July 1990, the Jamaat al Muslimeen withdrew officially from their activities in Laventille and gradually lost control to individual kingpins. The period that followed, into the mid-2000s, was marked by fierce competition between persons who sought to become the next powerful gang leader. The competition eventually led to the consolidation, around 2008-2012, of two factions, 'the Muslims' and their counterforce 'Rasta City', who split up vast gang territory among themselves. 'Rasta City' and 'the Muslims' have hence become umbrella groups with a hierarchical leadership and tremendous influence.
In Trinidad and Tobago, gangs' power is bolstered by their social role as defenders of their communities and voicers of public discontent. According to my interviewees, gang members perceive themselves as victims of a political system ruled by greedy, power-hungry and corrupt politicians, as well as victims of global mechanisms of inequality through which capitalist countries and 'black-minded' world leaders enrich themselves at the expense of the rest. This self-perception as deprived victims has become a convenient life motto which they live up to. According to this mindset, the gang members' actions are a response to the status quo; they are the freedom fighters of the oppressed and the Robin Hoods of the poor, under the flag of anti-capitalism and equality.
We are defending ourselves. They provoke everyday, we can't take that. 85 Through the lens of Hobsbawm, this mindset is part of the noble robber who 'never kills but in self-defence or just revenge'. 86 The gang members and leaders interviewed emphasised their frustration and disappointment to justify their actions. They based their criticism on global issues such as world domination, capitalism and imperialism, as well as on local issues such as corrupt and unaccountable politicians. The perception of injustice can be traced back to the country's resource wealth, which is not shared equally among the country's citizens but remains in the hands of a few. A leader of Rasta City explained that Trinidad and Tobago is a rich country, but that the wealth is not distributed equally. Therefore, not everybody profits from the country's oil. While certain segments of society live in affluence, others are deprived of basic needs such as sanitary infrastructure. The gang leader recounted how he grew up in Laventille's widespread poverty and accused rich Trinidadians of stealing from society to 'get themselves rich', stating that 'the rich people need to share some of the money'. 87 We live here in Laventille and we still have people living without toilets, [they only have] shit holes, latrines [. . .] plenty latrines around here. And now the government come try to fix it, but they could have done it long time ago, but they hold on to the money. You know they thief [steal]! Everybody thieving this country! I really don't know what they are doing with the money, I can't say. All I know they [their] house big, and the family have everything. 88 Another gang leader from Beetham Gardens stated that theoretically there is enough work for every citizen in Trinidad and Tobago. He argued that certain parts of society are purposefully kept in dependency in order to control them, stating that the economic deprivation is a purposefully orchestrated move to 'keep the people poor' in order to facilitate the social mobility of politicians, who live at the expense of the population. 89 The gang leader's statement is a clear example of how well Hobsbawm's peasant society resembles the contemporary urban poor: 'in a peasant society few can be free' as peasants are 'victims of authority and coercion' who are oppressed by 'lordship and labour'. 90 Voting, the gang leader continued, the means to enforce political change, is 'all lies'. 91 Another gang member felt that the government of Trinidad and Tobago must profit in some way from the ongoing violence, since it is not doing what is necessary to stop it. In his view, the violence between the two rival gangs, Rasta City and the Muslims, could easily be solved, but 'the government like [sic] the crime'. 92 According to a gang leader's perception, crime could be easily contained because it is a product of poverty. Less poverty would mean less crime, so the gang member called for proper education, enough food and decent work for the people in order to enable them to leave the life of crime. 93 According to regular community members living in Laventille as well as gang members, election promises are not being delivered. This perception of political neglect is framed as the reason for the problems in the communities.
Places like Laventille 'have lots of love', but the violence and killings take place because of political neglect, as a gang leader from Beetham Gardens explained. Statements made by gang leaders show traits of anti-imperialism and anti-capitalism, as evident in one interviewee's comments. He criticised the lack of legitimacy of 'world leaders', asking 'who give them the power to dominate the world?' He also stated that some local politicians don't have Trinidad and Tobago but rather the 'dollar sign' in their hearts. In interviewed gang members' own perception, their communities are the victims of world dominance, and the world is ruled by 'evil men' who purposefully keep the people blinded, trembling and fighting. He argued that the voices of the people are only heard when they become violent. 95 He accused the 'world leaders' of 'lock[ing] the world resources' in order to be able to control the people. In this vein, he touched on the issue of a revolution, as 'people are fed up' and ready to claim their share. Another leader from the hills of Laventille put the existence of gangs into perspective. He argued that there are poor people in every society of the world, the 'have and the have-nots', and that the poorest people of society take up their arms in a struggle for survival. 96 He made implicit references to the evident inequality in his country: Poor men have no plane, poor men have no villa, but they have guns and struggle for survival. 97 An elderly resident from Beetham supports this perspective: The wealth trickle down [. . .] although we are a oil-producing country, gas-producing country, the oil and gas money, it will filter down to us. The Trinidadian government cream off a large amount and then they trickle down some [money] to the middle class and then the crumbs to us. Everybody keep grabbing. 98 The perception of injustice incorporates the argument that violence is a means of self-defence. Therefore, gangs offer the service of protection, often framed as self-defence against enemy gangs. Reportedly, the community has to stand up against the destruction of property and robberies of jewellery. Violent actions thus become a reaction to provocations, crime and violence, a perspective which frames those carrying out this violence as victims rather than perpetrators. A gang member described this as follows: And the people from up there [neighbourhood called Block8] coming down here [St. Paul Street], they break the place, they take people['s] gold, that start a lot of war, nah! So now people in the community now stand up against that. These people have no respect for nobody. They come here, they break my mother's place, they break your mother's place. They take my chain [necklace], they take your chain, and go back over. They come and do that. They come and take whatever, they don't care. We are defending ourselves. 99 The gang members legitimise violence through vengeance, and Hobsbawm explains that for avengers 'cruelty is inseparable from vengeance, and vengeance is an entirely legitimate activity for the noblest of bandits'. 100 The position of a gang leader is akin to that of a community leader and involves responsibilities, including the provision of protection for the community. 101 Therefore, only persons who appear eligible may hold this respected position: Nobody perfect, everybody has good and bad. But here you have to defend yourself, you understand? And as a leader, you have to defend the community. But other people don't understand that.
They just want to see it their way, just say 'ah he killing people'. 102 There are accounts of gangs as law enforcers and crime prosecutors who 'don't tolerate' crimes in their communities, which has led to communities with few incidents of 'snatching' (the stealing of valuables). 103 When gangs engage in 'law enforcement' they refer to their own rules, regulations and punishments. In Hobsbawm's vocabulary, they 'right wrongs'. 104 Protecting their community means ensuring that community members are not robbed of their valuables ('snatching'), physically hurt or sexually assaulted. According to gang members' own accounts, their chances of finding the thieves or perpetrators of crime in their respective communities are high. 105 A steelband manager who grew up in one of the most violent parts of town reported that perpetrators of crime could be killed by gang leaders for their offences: A young fella in the community who snatching chains [stealing necklaces] and doing that kind of things, he could get killed by a gang leader. The gang leader will get one of his boys to take him out [kill him]. 106 Besides prosecuting the crime, the leader also acts as the judge for punishments, which apparently range from no repercussions, to beatings, to death for rapists. The punishment depends on the severity of the crime, or on the gang leader's mood, as one leader explained: Sometimes I beat them up, but not this time. But rapists get killed. 107 Hobsbawm's avengers also perceive themselves as 'upholders of sexual morality'; bandits were forbidden to rape, and seducers faced castration. 108 That the gangs have grown into social actors is apparent in the services they provide. In providing these services, the gangs have seized an important role within their communities, as a resident from gang-controlled Duncan Street in East Port of Spain claimed. He thinks that the groups have 'filled the void' left by the leaders of society.
So the void and the gap that has been left, somebody will fill it! And right now the person who is filling the void that the leaders of society and politicians have left in these areas, are the gangsters! The head of the gang is filling it by giving people money for foods, clothes [. . .] at the end of the day when people leave the church they go home hungry without money and the gangster might give them food and money! What do you want people to do? This is the reality! 109 The gangs provide what would be considered social welfare to the poor and financially deprived. To gather deeper insights into the roles of gangs, I asked a person from the Citizen Security Programme, who has also been working as a social worker with underprivileged youth in Trinidad and Tobago for years, whether the gangs in Trinidad were indeed informal support systems. He answered: Definitely. They take over financial roles. They assist single mothers with groceries and assist the kids going to schools, buying uniforms, buying books and so on. 110 In this way, Trinidad's gangs have managed to take hold of an important niche between the community and the state as providers for the community. A leader of Rasta City explained that helping poor people and providing food for hungry people is his joy. He also explained that he is working on getting a teacher with lots of patience for the slow learners, to give them a chance to succeed academically. Besides organising classes for the slow learners, he wants cooking classes in the community centre. He also wants the police to assist him in getting the youth out of the 'life of crime'. 111 Benevolently, he claimed that in times when he has no money himself, he sells his jewellery in order to be able to send people to the doctor. 112 Yet an elderly resident from the Beetham stated that the gangs 'cream off' the lucrative profits made, leaving little to trickle down to the poor people of the area. 113 While some reports of this positive role are self-ascribed attributes from gang members themselves, several occasions exemplified it to me as an observer. For instance, the proclaimed role of gang leaders as protectors and crime prosecutors became apparent when a young woman interrupted an interview I was conducting with a number of gang members. Tearing up, she told the gang leader that her necklace had just been snatched (stolen), an incident which had left visible, bleeding scratches on her neckline. The gang leader sent out his soldiers to ask around and track down the thieves, reassuring me that the chances of finding them were high, whereas 'the police has no chance'. 114 They found the thieves in less than one hour.
The question of gangs as providers of security remains an ambiguous one. When I interviewed a mother of two who lives in the gang-controlled area called Beetham Gardens, she took a critical stand against the gang culture in her community in general, but argued that it has become safer within her own community, Beetham. She compared the community of Beetham to a 'social club' of which she is a member and whose members are provided with protection: Indirectly, yeah. They tell you 'we have to protect all you' but that's about it! 116
Gangs as actors of violence, terror and fear
Eric Hobsbawm explains that avengers are not solely genuine righters of wrongs in the battle against injustice in the name of the oppressed, but also build their power by creating fear and horror. The following section highlights how excessive violence and cruelty are part of the character of the avenger, who lives by love and fear and whose dominant attribute is imparting terror. Hobsbawm argues that bandits, in this case gangsters, must demonstrate that they can be 'terrible' to remain the heroes of the local population. 117 In the case of Trinidad and Tobago, gangs have succeeded in establishing themselves as indispensable actors in society through the effective threat of violence and the proliferation of fear and terror. A community activist pointed out that the groups in Trinidad can be 'vicious', 'morbid' and 'monsters'. 118 Reportedly, gangs began to use violence strategically to maintain power and respect when they 'got smart' and understood the effectiveness of threats. 119 The level of violence in connection with gangs in Trinidad and Tobago is indeed very high. Shootings and drive-by shootings are the most common forms of violence carried out by gangs in Trinidad and Tobago. 120 Gangs in Trinidad and Tobago have access to weapons ranging from pistols, rifles, rudimentary arms (crude guns made by artisans) and shotguns to fully automatic shotguns, semi-automatic rifles, sub-machine guns and assault rifles. 121 The availability of arms, facilitated by the Jamaat al Muslimeen in the 1980s, established a mechanism which tremendously influenced gangs' increase in power: the youth realised 'the power of the gun', which brought them respect and fear. 122 And let people realize the power of the gun. Let the youth control guns and realize what guns can do. How people respect them with guns. 123 The groups institutionalised the strategic use of threats of violence for their own benefit, as they realised that 'everybody is afraid of death'. 124 One reason why gangs opt for brutality is to cement their position and 'prove their mark' in the community, as a mother from Beetham Gardens observed. 125 Interview partners reported that the security situation has deteriorated to such an extent that political leaders are afraid to go into areas such as Laventille. 126 The symbiotic give and take of political patronage has thus shifted to a gridlock in which politicians have no choice but to deal with the gangs. To this end, gangs have made themselves 'an indispensable component of campaigning' through threats and the use of violence. 127 Besides drug trafficking, the gangs in Trinidad depend on government contracts and social work programmes (the Community-Based Environmental Protection and Enhancement Programme (CEPEP) and the Unemployment Relief Programme (URP)) as resources. This source of income is fiercely defended. Police officers stated that the elimination of these social programmes would mean 'revolts and violence'. 128 This is in line with what a resident from Beetham Gardens observed: [The gangs] control the 'ten days', CEPEP and URP. They control that. Within any government. Because all of them fear violence. So we are in a dread situation. 129 Officers from the Besson Street police pointed out the paradox of handing out contracts and social work programmes to gang leaders, which enables them to finance the purchase of arms: The government hands them [gangs] a million-dollar project and they use the money to buy expensive guns that they use against us! 130
The gangs use violence strategically to maintain power and respect, a typical characteristic of Hobsbawm's avengers, who are 'exerters of power'. 131 Gangs have become powerful and indispensable actors in their communities, and politicians, as well as non-state agencies, have 'virtually sold out to the gangs because of fear'. 132 Reportedly, fear and terror are what they use to bargain with the government. When they want to inflict pain or send a message, they 'drop some bodies'. 133 According to interview partners, gangs use the threat of violence to push politicians to pay attention to them, and at times 'have them see a skeleton' to remind them of their power. 134 They thus ensure they are not overlooked, and simultaneously underscore their 'importance' in their respective communities. As a Beetham resident explained: The only way how poor people get anything is by rebellion. By the time they [the politicians] see the rebellion, they have full respect for that. 135 The violent potential of the groups in Trinidad is evidenced by a number of circumstances. One is the geographic location of their turf areas. The areas of Laventille, Beetham and Sealots that are controlled by gangs happen to be strategically located on the capital's main artery: the main highway and bus priority route that leads in and out of Port of Spain. This east-west corridor connects the capital city with the international airport and the rest of the island to the east. It is of high importance for getting in and out of the capital city, and a roadblock could cut off the city from the rest of the country. A community police officer of the Hearts and Minds Unit pointed out: What they [gangs] doing is not the ultimate for them, they just doing enough to show the government: 'I could give you more trouble than you could handle'. They could start shooting people in the traffic. 136 The Hearts and Minds police officers took me to an area called Beverly Hills in Laventille and pointed out the countless bullet holes in a building. The officers explained that one way gangs show their power is to terrorise rival communities through random shootings. The intention is to let rival groups know what kind of weapons they have. Pistols, revolvers and shotguns are usually single-shot weapons and sound very different from semi-automatic rifles and sub-machine guns. Once a group gets hold of a new kind of weapon, they shoot it into the air or randomly into the rival community to show off their new assets. One officer claimed that it has become 'like a game for them' to show off their weapons by shooting into the air. 137 What happens is that they will go to one area and just shoot down into the rival area hoping that it would hit someone. [. . .] It's like a game for them. So when they do that, the other side will do the same thing. They are so far away, they can't see each other! Just firing into the community. 138 These shootings affect regular residents in the communities who are not involved in the ongoing gang war. The fear of becoming a victim of random shootings impacts the social life of Trinidadian citizens: Gangsterism has now become a culture of the people! [. . .] now people can't go out so much again because at anytime people can start shooting without any notice. They start to shoot and who they shooting? At anybody they see! 139 For the public, many killings seem random, something which is terrifying for the local population.
In Hobsbawm's vocabulary, gangs as avengers practice terror to successfully exert power. In May 2015 a man was shot from long range by a sniper for 'target practice', as the police assumed. 140 The indiscriminate violence has a distinct purpose for gangs fighting a war with rival gangs. Gangs in Trinidad either plan to kill a person of interest, or they plan to kill an innocent person. This means the killings of innocent persons are not as random as they seem. A community activist who has been working with youth in Laventille for decades and joined the Hearts and Minds Unit, a special community police unit formed to improve the relationship between the police and the communities, explained:

They [gangs] don't do random. If they are coming to shoot you, they shooting you. If they are shooting innocent people, they are shooting innocent people. Whoever pick up, pick up. 141

Interviewees described a perverted logic behind the killings of innocent people. In these cases, the fatal violence against community members, such as mothers and children, is designed to hurt the whole community. The killing of a criminal or gang member would not cause much pain in a community, as everybody expected this to happen sooner or later; the killing of an innocent person, by contrast, is a purposeful strategy to cause pain to a whole community. 142

And they purposefully kill innocent people because it creates terror in the community. Because of that they are not safe, not safe at all. 143

For instance, on New Year's Day (1 January) 2016, a 6-year-old boy and a 69-year-old woman were shot dead on the Beetham. 144 It was the start of a new wave of violence between the rival groups involving several murders, including the execution of two school boys (17 and 15 years old) who were dragged out of a taxi on their way home from school and shot on the spot in front of their friends and siblings. 145

Another example of gangs' terror is provided by the killing of Tecia Henry in 2009. The 10-year-old girl from John John in Laventille became a victim of the ongoing war between rival gangs when she went out to a shop but never returned. Later her body was found half naked, and the cause of death was determined to be strangulation. This was a turning point for the community: after this incident the residents of John John stood up against the gang violence and proclaimed the 'Tecia Henry Order', which included a ceasefire. The Tecia Henry Order is prominently displayed on a billboard in John John and reads:

Peace among all John John and neighbouring gangs. [. . .] Gang leaders must discipline their own members for the order to live. No child is to be used to do anything illegal or wrong. No child is to be kidnapped, hurt or murdered because of gang warfare. There must be absolutely no disrespecting of the elderly. Innocent residents must never become targets of gang rivalry. No violence against service providers, goods trucks and taxi drivers. There must be no housebreaking whatsoever. Raping is a violation of the Order, and must be dealt with seriously. Absolutely no violence during community activities. No brandishing of guns and injecting of fear into residents or visitors and if a misunderstanding occurs seek third-party help before war. We Live The Order. 146

This pamphlet does not criticise gang rule in general, but appeals to the gang leaders to discipline their members.
It spells out the terror that community members face, including abuse, the kidnapping and murder of children, rape and indiscriminate violence against innocent residents. This indicates that gangs resemble Hobsbawm's avenger: they are 'public monsters' that exert power and remain heroes of the population despite their monstrosities, or, as Hobsbawm argues, because of them.
Conclusion
Drawing on Hobsbawm's conceptualisation of avengers, this analysis has demonstrated that on the one hand, gangs in Trinidad and Tobago have taken over a prominent role as legitimate community actors. From the perspective of gang leaders and members, the gangs and their communities are the victims of inequality, elite corruption and capitalism. Unaccountable politicians, corrupt law enforcers and a weak justice system contribute to this perception. The rebellion formulated by the gang leaders interviewed here is reminiscent of Hobsbawm's peasant wars, yet is found in the midst of an urban setting. Gangs have grown into important service providers, a theoretical phenomenon explained by Hobsbawm's 'social bandits'. The interviews with gang leaders demonstrate their claim to be 'helping poor people and providing food for hungry people', through which they present themselves as Robin Hoods, or noble robbers, to use Hobsbawm's terms. They act as the community protectors who defend the communities from outside threats; prosecute crimes; and provide food, money and clothes. The prosecution of crimes, including upholding 'sexual morality' by punishing rapists, can be linked to Hobsbawm's avengers.
On the other hand, gangs engage in high levels of violence, shootings and strategic killings to terrorise communities, ordinary citizens and politicians. They have access to high-powered assault rifles and kill strategically or randomly, their victims including children and ordinary citizens. Community members face abuse, the kidnapping and murder of children, rape and indiscriminate violence against innocent residents. By spreading violence and fear, the gangs have established a system of impunity, spurring the reproduction cycle of gangs. Most murders are not prosecuted due to a lack of evidence and a lack of witnesses. By acting as 'avengers', the gangs have managed to install a system of local support, impunity and power.
My findings support previous scholarly findings that gangs provide social welfare, protection and justice (e.g. Blake, Harriot and Jaffe on gangs in Jamaica, 147 Stephenson in Russia, 148 McDonalds and Wilson in Indonesia, 149 or Gutiérrez Rivera in Honduras 150 ). Scholars argue that gangs' social behaviour grants them authority, respect and community support. Yet my findings show that it is not the positive aspect of gangs alone, but its combination with the spread of terror and fear, that puts community members into a gridlock of compliance. The characteristics of Hobsbawm's 'avengers' provide a basis for thinking about the relationship of bandits to the community they reign in. The concept contributes to a better understanding of the contemporary modalities of urban violence, and Hobsbawm's figure of the avenger is very helpful for unpacking and characterising the ambiguity of the relationship between gangs and local communities. Rather than seeing (urban) violence as an indicator of a state's inability to address issues of poverty, racism and exclusion, and linking the benevolent attitude of gangs to a lack of state presence or inadequate state capacity, the concept of the avenger allows us to explain why the phenomenon of gangs is persistent over time. Viewing the ambiguity of gangs as part of the processes of socialisation of young men into crime is central to understanding why gangs persist. These findings support the assumption that repressive responses to gang violence are likely to be ineffective or counterproductive, as they neglect the fact that gangs are more than a 'crime problem': they are social actors providing for the community, who are helped, supported and admired by many. This is a mechanism through which gangs manage to pull community members onto the side of the 'outlaws' and make them complicit. Instead of viewing gang members as pathological criminals, Hobsbawm's avenger helps us to understand crime as a form of social protest and underclass revolt, and thus offers valuable insights into the workings of politics, gangs and local communities. At the same time, the concept of the avenger is less romanticising and idolising than Hobsbawm's Robin Hood, as it points to the brutality, terror and fear gangs build their power on. This understanding supports the view that the power of gangs should not be underestimated.
Moreover, empirical evidence from Trinidad and Tobago shows that Hobsbawm's concepts are more contemporary than the critical literature suggests. Yet to make Hobsbawm's 1969 work applicable in the early twenty-first century, one has to indulge in conceptual cherry-picking. The empirical findings presented here match what Hobsbawm framed as social bandits and avengers; however, in Hobsbawm's understanding, social banditry was a rural phenomenon limited to peasant outlaws. It excluded urban terrorism and gangs. If we put Hobsbawm's lords, peasants, and countless examples of nineteenth-century bandits aside, we see that social bandits are indeed applicable to contemporary findings. If there has been a rural-urban shift of violence 151 and a transformation of gangs from economically driven criminal collectives to social actors and legitimised community leaders, 152 then the low-income urban working class is the new peasantry, and the gangs their peasant outlaws.
How multimorbidity and socio-economic factors affect Long Covid: Evidence from European Countries
Introduction:
An increasing number of individuals continue reporting symptoms following the acute stage of Covid-19 infection. Few studies have investigated the factors related to Long Covid. Our aim was to assess how multimorbidity, socioeconomic factors (immigration, education, employment, and income), and country of residence affect the presence and number of persistent symptoms attributable to Covid-19 illness in Europe.
Methods:
We used data from the SHARE Corona surveys collected in 2020 and 2021. The sample included 4,004 respondents aged 50 years and older who were affected by the Corona virus. The outcome was the number of persistent symptoms attributable to Covid-19 illness, including: fatigue; cough, congestion, shortness of breath; loss of taste or smell; headache; body aches, joint pain; chest or abdominal pain; diarrhoea, nausea; and confusion. We conducted a multilevel analysis for a hurdle model with negative binomial distribution.
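As a rough illustration of the two-part logic of a hurdle model (a minimal sketch, not the authors' code: the variable names and synthetic data are invented for illustration, an untruncated negative binomial stands in for the zero-truncated count part, and the multilevel country effects are omitted):

```python
# Minimal two-part "hurdle" sketch: a logit for any-symptom vs. none,
# then a count model fitted only on respondents with >= 1 symptom.
# A faithful hurdle model would use a zero-truncated negative binomial
# and country-level random effects (multilevel); both are omitted here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "multimorbidity": rng.integers(0, 2, n),     # 2+ chronic conditions
    "employed": rng.integers(0, 2, n),
    "higher_education": rng.integers(0, 2, n),
})
# Synthetic outcome: number of persistent symptoms (0-8)
df["n_symptoms"] = rng.poisson(1 + df["multimorbidity"]).clip(0, 8)

X = sm.add_constant(df[["multimorbidity", "employed", "higher_education"]])

# Part 1: probability of reporting at least one symptom (odds ratios)
logit = sm.Logit((df["n_symptoms"] > 0).astype(int), X).fit(disp=0)
print(np.exp(logit.params))   # ORs for crossing the "hurdle"

# Part 2: symptom count among those with at least one symptom (rate
# ratios; ignore the dispersion parameter alpha in the printed output)
pos = df["n_symptoms"] > 0
nb = sm.NegativeBinomial(df.loc[pos, "n_symptoms"], X[pos]).fit(disp=0)
print(np.exp(nb.params))      # RRs for an additional symptom
```

The hurdle structure separates the two quantities reported in the Results: odds ratios for reporting any symptom at all, and rate ratios for the number of symptoms among those reporting at least one.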
Results:
Overall, 73% of respondents were estimated to have at least one persistent symptom associated with Covid-19 illness and, on average, they had 2.73 symptoms. However, there were statistically significant across-country differences in the presence and number of symptoms. Respondents who were employed were more likely to report at least one symptom (OR = 1.40) and those with higher levels of education were less likely to report any symptoms (OR = 0.67). Respondents with multimorbidity had an increased risk of experiencing an additional symptom (RR = 1.12) while respondents who were employed had a decreased risk of experiencing an additional symptom (RR = 0.85).
Discussion and conclusions:
Persistent symptoms associated with Covid-19 illness were highly prevalent, and their presence and number varied significantly across European countries. Evidence from the present work underscores the need to target high-risk groups and those with multimorbidity to reduce the long-term health consequences of Covid-19.
Introduction:
After COVID-19, many people continue to experience various symptoms for several weeks, even after a mild acute phase, and encounter difficulties when confronted with the healthcare system. Patient associations asked the Belgian Health Care Knowledge Centre to investigate the needs of these patients in order to improve their management. Purpose of research: An online quantitative survey was conducted in 2021 among Belgian patients with a history of COVID-19 who have or had persisting symptoms for at least 4 weeks. Alongside questions on symptoms, treatment and impact on employment, Health-Related Quality of Life (HRQoL) before and after COVID-19 was measured through the EQ-5D-5L. A regression analysis identified the factors associated with the impact of long COVID on HRQoL. The qualitative approach consisted of 33 interviews and forum discussions among 101 patients. Results: 1,320 patients completed the online survey; most were symptomatic for more than 3 months. The average EQ-5D-5L index score was 0.85 (95%CI: 0.83-0.86) before and 0.65 (95%CI: 0.63-0.66) after infection. Duration, number and type of symptoms of long COVID significantly impacted HRQoL. More than half of the patients were unable to work. The qualitative part identified a lack of empathy from health professionals, the absence of a systematic diagnostic approach, and a lack of interdisciplinary coordination. Patients felt misunderstood and developed their own diagnostic or treatment strategies. They questioned the value of medicine and resorted to non-reimbursed alternative therapies.
Conclusions:
Long COVID has a significant impact on HRQoL and employment. Because of long COVID, patients were confronted, sometimes for the first time, with the imperfections of the health system. Better informing health professionals about long COVID patterns and management options, including reimbursement possibilities, together with a comprehensive interdisciplinary assessment, would give them the tools to respond to the needs of these patients.
Non-Perturbative Contributions in the Plane-Wave/BMN Limit
This talk surveys recent work on the contribution of instantons to the anomalous dimensions of BMN operators in $\mathcal{N}=4$ supersymmetric Yang--Mills theory and the corresponding non-perturbative contributions to the mass-matrix of excited string states in maximally supersymmetric plane-wave string theory. The dependences on the coupling constants and the impurity mode numbers in the gauge theory and string theory are in striking agreement. [Presented by MBG at the Einstein Symposium, Bibliotheca Alexandrina, June 4--6 2005.]
Introduction
The conjectured correspondence between the BMN sector of $\mathcal{N}=4$, $d=4$ supersymmetric Yang-Mills and type IIB string theory in the maximally supersymmetric plane-wave background has been examined in some detail at the perturbative level. However, the understanding of non-perturbative aspects of the correspondence has been very limited. Such non-perturbative effects are well-studied in the context of the AdS/CFT correspondence, where Yang-Mills instanton effects in $\mathcal{N}=4$ supersymmetric Yang-Mills correspond closely to D-instanton effects in type IIB superstring theory in $AdS_5\times S^5$. A natural question to ask is whether there is a similar relationship between non-perturbative effects in plane-wave string theory and the BMN limit of the gauge theory.
The correspondence relates the plane-wave string mass spectrum to the spectrum of scaling dimensions of gauge theory operators in the so-called BMN sector of $\mathcal{N}=4$ SYM. This consists of gauge invariant operators of large conformal dimension, $\Delta$, and large charge, $J$, with respect to a $U(1)$ subgroup of the $SU(4)$ R-symmetry group. The duality involves the double limit $\Delta\to\infty$, $J\to\infty$, while $\Delta - J$ is kept finite and related to the string theory hamiltonian by

$$\frac{1}{\mu}\,H = \Delta - J\,. \qquad (1.1)$$

The background value of the Ramond-Ramond (R-R) five-form flux, $\mu$, is related to the mass parameter, $m$, which appears in the light-cone string action by $m = \mu p^-\alpha'$, where $p^-$ is a component of light-cone momentum. The two-particle hamiltonian is the sum of two pieces,

$$H^{(2)} = H^{(2)}_{\rm pert} + H^{(2)}_{\rm nonpert}\,. \qquad (1.2)$$

The perturbative part, $H^{(2)}_{\rm pert}$, is a power series in the string coupling, $g_s$, while $H^{(2)}_{\rm nonpert}$ is the non-perturbative part, which is suppressed by powers of $e^{-1/g_s}$.
The correspondence between the spectra of the two theories is the statement that the eigenvalues of the operators on the two sides of the equality (1.1) coincide. A quantitative comparison is possible if one considers the large-$N$ limit in the gauge theory, focusing on operators in the BMN sector. As a result of combining the large-$N$ limit with the limit of large $\Delta$ and $J$, new effective parameters arise, which are related to the ordinary 't Hooft parameters, $\lambda$ and $1/N$, by a rescaling,

$$\lambda' = \frac{\lambda}{J^2} = \frac{g_{YM}^2 N}{J^2}\,, \qquad g_2 = \frac{J^2}{N}\,. \qquad (1.3)$$

The correspondence relates these effective gauge theory couplings to string theory parameters in the plane-wave background,

$$\lambda' = \frac{1}{m^2}\,, \qquad g_2 = 4\pi g_s\, m^2\,. \qquad (1.4)$$

The double scaling limit, $N\to\infty$, $J\to\infty$, with $J^2/N$ fixed, connects the weak coupling regime of the gauge theory to string theory at small $g_s$ and large $m$. Perturbative contributions to the mass spectrum have been analysed in some detail on the string side and compared with corresponding contributions to the anomalous dimensions of BMN operators in the gauge theory. However, there have been no calculations of non-perturbative corrections due to D-instanton effects, which contribute to $H^{(2)}_{\rm nonpert}$, or of the corresponding Yang-Mills instanton contributions in the BMN limit. Indeed, it is not at all obvious at first sight that Yang-Mills instantons survive the BMN limit, but the correspondence with string theory D-instantons implies that they must. This talk, which is necessarily brief, reviews the contents of [1,2], which study the BMN/plane-wave correspondence at the non-perturbative level; details can be found in these papers 1 . The next section summarizes the results of [1] on plane-wave string theory, while the gauge theory results of [2] are summarized in section 3. The agreement between the dependence of the instanton contributions on the two sides of the correspondence is impressive. Further details concerning states with fermionic impurities are in a forthcoming publication [3].
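As a consistency check on these definitions (this rearrangement is ours, using only the standard relations quoted above, and is not an equation of the original talk), note how the instanton action combines into the effective couplings:

$$g_2\,\lambda' = \frac{J^2}{N}\cdot\frac{g_{YM}^2 N}{J^2} = g_{YM}^2 \qquad\Longrightarrow\qquad e^{-8\pi^2/g_{YM}^2} = e^{-8\pi^2/g_2\lambda'}\,,$$

which is the combination that will appear in all the instanton-induced anomalous dimensions quoted below.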
Mass-matrix elements in plane-wave string theory
In the maximally supersymmetric plane-wave background the five-form R-R potential has a non-zero value that sets the scale for the masses of the supergravity fields and reduces the isometry of the background to $SO(4)\times SO(4)$. Light-cone gauge string theory in this background is a free world-sheet theory with eight massive world-sheet bosons, $X^I$, and eight massive world-sheet fermions, $\theta^A$, and may be described by a world-sheet action in which $\theta^1$ and $\theta^2$ are $SO(8)$ Grassmann spinors and $\Pi = \gamma^1\gamma^2\gamma^3\gamma^4$ (where the $\gamma^I$ are $SO(8)$ gamma matrices). In the quantum theory the zero modes in the eight transverse directions, $X^I$, define harmonic oscillators with strength proportional to $m$.
The classical supergravity states are obtained by applying the zero-mode bosonic and fermionic creation operators to the ground state (the BMN vacuum). Excited string states are constructed as usual by applying higher mode creation operators to the zero-mode states. A state constructed by applying $p$ excited bosonic or fermionic creation operators is said to have $p$ 'impurities', a terminology that makes contact with the corresponding operators in the gauge theory. Each oscillator can be in any excited state subject to the usual 'level-matching' restriction, which means that there are, in general, $p-1$ independent mode numbers that enter into the definition of the $p$-impurity state. Some effort has been expended in constructing a three-string vertex, from which a certain amount of perturbative information concerning string two-point functions, or, equivalently, the mass matrix elements, beyond free string theory can be extracted. We are concerned with non-perturbative contributions to the hamiltonian due to D-instantons. The single D-instanton sector has a measure that is proportional to $e^{2\pi i\tau}$, where $\tau = \tau_1 + i\tau_2 \equiv C^{(0)} + ie^{-\phi}$ ($C^{(0)}$ is the R-R pseudoscalar, $\phi$ is the dilaton and $g_s = e^\phi$). Although this is exponentially small, it is the leading contribution with the phase factor $e^{2\pi i C^{(0)}}$. It is therefore of interest to understand how the mass matrix is modified by these contributions. In the following we will outline the calculation of such D-instanton contributions to mass matrix elements, or two-point functions, to leading order in the string coupling.
A D-instanton with position $x_0$ is described, to lowest order in the string coupling, by world-sheet disks with Dirichlet boundaries fixed at $x_0$. The light-cone boundary state description of the D-instanton in plane-wave string theory generalizes that of the Minkowski space theory. The D-instanton boundary state couples to single closed-string states and preserves eight kinematical and eight dynamical supersymmetries; it is given (at a specific value of $x^+_0$) by an exponential of bilinears in $\alpha$, $\tilde\alpha$, $S$ and $\tilde S$, the left- and right-moving non-zero modes of the bosonic and fermionic coordinates, $X^I$, $\theta^1$ and $\theta^2$, acting on $|x_0\rangle_0$, the ground state of all the oscillators of non-zero mode number. The coordinate $x_0^I$ is the eigenvalue of the position operator constructed from the zero-mode oscillators, $a^\dagger_I$ and $a_I$. The quantity $M_k$ appearing in the fermionic bilinears is a matrix in spinor space and a function of $m$ that reduces to the unit matrix in the flat-space limit, and $\omega_k = \sqrt{m^2+k^2}$.
The leading contribution to the two-point function of string states in a D-instanton background comes from a disconnected world-sheet that is the product of two disks, with one closed-string state attached to each, and with Dirichlet boundary conditions. The two-boundary state is simply the product of two single-boundary states acting in distinct Fock spaces, $\|B, x_0\rangle\rangle_2 = |B, x_0\rangle \otimes |B, x_0\rangle$. A dressed boundary state, including the fermionic supermoduli, can be defined, (2.3). The factor of $g_s^{7/2}$ in the D-instanton measure can be extracted from previous work on D-instanton contributions in $AdS_5\times S^5$ (and we are not keeping overall multiplicative constants). The on-shell two-point function between string states $|\chi_1\rangle$ and $\langle\chi_2|$ is given by the integrated matrix element (2.4). Integration over the light-cone moduli, $x^\pm_0$, ensures the conservation of $p^\pm$ in any process, while integration over the other supermoduli generates correlations between the two disks.
For a state with occupation numbers $n_r$ (where $r$ labels the oscillator levels) the light-cone energy is given by the nonlinear formula $p^+ = \sum_r n_r\,\omega_r/2p^-$. It follows that conservation of $p^+$ implies that the number of impurities is preserved by this process, so that $|\chi_1\rangle$ and $\langle\chi_2|$ have the same number of impurities. Generally, conservation of $p^+$ imposes the even stronger condition that the non-zero mode numbers of oscillators in the incoming state coincide with those of the outgoing state. The nonlinear energy relation is seen on the gauge side after summing perturbative planar contributions to all orders in $\lambda'$. However, to leading order in the $1/m^2\sim\lambda'$ expansion $\omega_{n_r} = m$ and conservation of $p^+$ imposes no relation between the mode numbers of the incoming and outgoing states. Therefore, since we are interested in comparing with perturbative gauge theory, we need not impose the equality of incoming and outgoing oscillator mode numbers in the following.
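To spell out the leading-order statement (a standard large-$m$ expansion, written out here for convenience rather than taken from the original text):

$$\omega_n = \sqrt{m^2 + n^2} = m\left(1 + \frac{n^2}{2m^2} + \cdots\right) \longrightarrow m \qquad (m\to\infty)\,,$$

so that at leading order in $1/m^2\sim\lambda'$ the light-cone energy of a $p$-impurity state reduces to $p\,m/2p^-$, which constrains only the number of impurities and not their individual mode numbers.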
Certain other general features of D-instanton dominated matrix elements follow from general properties of the boundary state (2.3). For example, the boundary state couples to arbitrary numbers of pairs of modes, where each pair consists of one left-moving mode with a mode-number n and a right-moving mode with the same mode number. This means that it only has non-zero coupling to states that are level-matched in this pairwise fashion -a feature that must therefore also be true on the gauge theory side although it will prove much harder to see this from a conventional Yang-Mills instanton calculation.
Examples of D-instanton contributions to matrix elements between states with various numbers of bosonic and fermionic impurities were considered in [1]. Those results that are particularly relevant for comparison with the gauge theory results of [2] are the following.
Two bosonic impurities
A level-matched state with two bosonic impurities is associated with a single mode number. The two-state bra vector in (2.4) is built from wave functions $t_{IJ}$ that are tensors of $SO(4)\times SO(4)$ (with indices taking the values $I, J, K, L = 1,\ldots,8$). The ground state $|0\rangle_h$ denotes the BMN ground state, which is the state of lowest $p^+$. The leading semi-classical one-D-instanton contribution to the two-string mass-matrix element is independent of the mode number in the large-$m$ limit (where $m = \mu p^-\alpha'$, and ignoring a constant overall factor); expressed in terms of the gauge theory parameters, for future reference, it is of order $(\lambda')^2$. Here we have also specialized to vector indices $i, j, p, q$ lying in one of the $SO(4)$ factors of the $SO(4)\times SO(4)$ isometry group, since this is the case that is easiest to calculate in the gauge theory.
Although the exact string theory expression includes all non-leading terms, it is only the large-$m$ limit that can be compared with the gauge theory calculations. Note, in particular, that this leading contribution is of order $(\lambda')^2$ and is suppressed relative to potential $O(\lambda'^0)$ effects. This fact makes the Yang-Mills instanton contribution to the two-impurity case more difficult to evaluate in precise detail than cases with higher numbers of impurities.
Four bosonic impurities
With four bosonic impurities there are three independent non-zero mode numbers for each external state after taking level matching into account. However, as we remarked above, the only non-zero matrix elements are those in which each $\alpha_n$ mode is accompanied by an $\tilde\alpha_n$ with the same mode number, $n$. In this case the bra state in (2.4) is given by (2.7), built from tensor wave functions $t_{p_1p_2p_3p_4}$; the in and out states have again been restricted to have indices in a single $SO(4)$ factor of the isometry group, simply because that is the easiest case to consider in the dual gauge theory. In this case the mass-matrix is given, at leading order in powers of $1/m$, by (2.8), and the result is zeroth order in $\lambda'$ perturbation theory. The expression (2.8) implies that to leading order in $m$ only scalar states have an induced D-instanton coupling. The rest of the possible bosonic four-impurity states have couplings that are suppressed by powers of $m$ compared to this leading result. Further details of these four-impurity matrix elements are given in [1].
Anomalous dimensions of BMN states in $\mathcal{N}=4$ Yang-Mills theory
We will now discuss semi-classical instanton contributions to the anomalous dimensions of BMN operators in $\mathcal{N}=4$ $SU(N)$ Yang-Mills theory, which are extracted from two-point correlation functions. Conformal invariance determines the form of two-point functions of primary operators, $O$ and $\bar O$, to be

$$\langle O(x)\,\bar O(y)\rangle = \frac{c}{(x-y)^{2\Delta}}\,, \qquad (3.1)$$

where $\Delta$ is the scaling dimension. In general, in the quantum theory $\Delta$ acquires an anomalous term, $\Delta(g_{YM}) = \Delta_0 + \gamma(g_{YM})$. At weak coupling the anomalous dimension $\gamma(g_{YM})$ is small and substituting in (3.1) gives

$$\langle O(x)\,\bar O(y)\rangle \simeq \frac{c}{(x-y)^{2\Delta_0}}\Big(1 - \gamma\,\ln\big[(x-y)^2\Lambda^2\big]\Big)\,, \qquad (3.2)$$

where $\Lambda$ is an arbitrary renormalisation scale. As a function of the coupling constant the anomalous dimension admits an expansion consisting of a perturbative series plus non-perturbative corrections, and the generic two-point function at weak coupling takes the corresponding form (3.3). Therefore perturbative and instanton contributions to the anomalous dimension are extracted from the coefficients of the logarithmically divergent terms in a two-point function.
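The step from (3.1) to (3.2) is the familiar expansion of the power law for a small anomalous dimension; for completeness (this intermediate step is ours, with $\Lambda$ inserted to render the logarithm dimensionless at the cost of a redefinition of $c$):

$$(x-y)^{-2(\Delta_0+\gamma)} = (x-y)^{-2\Delta_0}\, e^{-\gamma\,\ln[(x-y)^2\Lambda^2]} \approx (x-y)^{-2\Delta_0}\Big(1 - \gamma\,\ln\big[(x-y)^2\Lambda^2\big] + O(\gamma^2)\Big)\,.$$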
The general structure of these anomalous dimensions is an expansion, (3.4), consisting of a perturbative series plus non-perturbative instanton corrections weighted by powers of $e^{2\pi i\tau}$, where

$$\tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g_{YM}^2}\,. \qquad (3.5)$$

The BMN operators in question contain a large number of $Z$ fields; the conjugate operator $\bar O$ has a large number of $\bar Z$'s instead of $Z$'s. For the two-point function (3.3) to be non-vanishing the operators must have equal and opposite values of $J$.
Instanton contributions to two-point correlation functions
In semi-classical approximation, correlation functions of composite operators are computed by replacing each field by the solution of its corresponding field equation in the presence of an instanton, expressed in terms of the fermionic and bosonic moduli. These moduli encode the broken superconformal symmetries together with the broken (super)symmetries associated with the orientation of a $SU(2)$ instanton within $SU(N)$. The symmetries are restored by integration over the supermoduli. For large $N$, the integration is carried out by a saddle point procedure. In the case of a two-point function of a generic local operator, $O(x)$, and its conjugate, the supermoduli integral takes the form

$$\langle O(x_1)\,\bar O(x_2)\rangle = \int d\mu_{\rm inst}(m_b, m_f)\; e^{-S_{\rm inst}}\; \hat O(x_1; m_b, m_f)\, \hat{\bar O}(x_2; m_b, m_f)\,, \qquad (3.6)$$

where we have denoted the bosonic and fermionic collective coordinates by $m_b$ and $m_f$ respectively. In (3.6) $d\mu_{\rm inst}(m_b, m_f)$ is the integration measure on the instanton moduli space, $S_{\rm inst}$ is the classical action evaluated on the instanton solution, and $\hat O$ and $\hat{\bar O}$ denote the classical expressions for the operators $O$ and $\bar O$ computed in the instanton background.

A one-instanton configuration in $SU(N)$ Yang-Mills theory is characterised by $4N$ bosonic moduli that can be identified with the size, $\rho$, and position, $x_0$, of the instanton as well as its global gauge orientation. The latter can be described by three angles identifying the iso-orientation of a $SU(2)$ instanton and $4N$ additional constrained variables, $w_{u\dot\alpha}$ and $\bar w^{\dot\alpha u}$ (where $u = 1,\ldots,N$ is a colour index), in the coset $SU(N)/(SU(N-2)\times U(1))$ describing the embedding of the $SU(2)$ configuration into $SU(N)$. In the one-instanton sector in the $\mathcal{N}=4$ theory there are additionally $8N$ fermionic collective coordinates corresponding to zero modes of the Dirac operator in the background of an instanton. They comprise the 16 moduli associated with Poincaré and special supersymmetries broken by the instanton, denoted respectively by $\eta^A_\alpha$ and $\bar\xi^{\dot\alpha A}$ (where $A$ is an index in the fundamental of the $SU(4)$ R-symmetry group), and $8N$ additional parameters, $\nu^A_u$ and $\bar\nu^{Au}$, which can be considered as the fermionic superpartners of the gauge orientation parameters.

The sixteen superconformal moduli are exact, i.e. they enter the expectation values (3.6) only through the classical profiles of the operators. The other fermion modes, $\nu^A_u$ and $\bar\nu^{Au}$, appear explicitly in the integration measure via the classical action, $S_{\rm inst}$, and are therefore 'non-exact' moduli. This distinction plays a crucial rôle in the calculation of correlation functions. The $\nu^A_u$ and $\bar\nu^{Au}$ modes satisfy the fermionic ADHM constraints, which effectively reduce their number to $8(N-2)$. The manner in which these moduli enter into the expressions for the fields is determined by the solution of the field equations for $\mathcal{N}=4$ SYM theory in an instanton background. The solution for each field in the Yang-Mills supermultiplet can be written as a sum of terms containing different numbers of fermionic zero modes. For the purpose of this talk let us note that a scalar field has the form of a sum $\Phi^{AB} = \sum_{n\ge 0}\Phi^{(4n+2)AB}$, where the notation $\Phi^{(4n+2)AB}$ denotes a term in the solution for the field $\Phi$ containing a product of $4n+2$ fermion zero modes. The minimum number of fermionic moduli in a scalar field is therefore two, while the next term contains a product of six fermionic moduli, and so on. It is understood that the number of superconformal modes in each field cannot exceed 16 and the remaining modes are of $\nu^A_u$ and $\bar\nu^{Au}$ type.
Furthermore, terms with higher numbers of moduli are suppressed by powers of the coupling, so the leading contribution to the two-point function is that with the minimal number of moduli in each scalar field.
In order to evaluate the two-point function (3.6) the expressions for the fields in terms of moduli must be substituted into each composite operator and the resulting traces must then be evaluated. The actual integration over the large number of supermoduli is reasonably straightforward, but there are complicated combinatorics involved in distributing the moduli among the fields in the two operators, which we now outline (they are discussed in detail in [2]).
The $J+k$ scalar fields in the operator $O$ defined in (3.6) each contain at least two fermionic moduli, which may be chosen from the superconformal moduli, $\eta$ and $\bar\xi$, or from the non-exact moduli, $\nu$ and $\bar\nu$. The sixteen fermionic superconformal moduli naturally arise in the combination

$$\zeta^A_\alpha(x) = \eta^A_\alpha + x^\mu(\sigma_\mu)_{\alpha\dot\beta}\,\bar\xi^{\dot\beta A}\,,$$

where the $\zeta^A_\alpha(x)$ are eight position-dependent Grassmann variables. This means that there has to be a factor of $\prod_{A=1}^4(\zeta^A(x_1))^2$ in each operator in the two-point correlation function. In other words, each of the two operators in the correlation function has to contain eight of the superconformal moduli. Taking their $SU(4)$ quantum numbers into account, only four of these can be soaked up by the $Z$ fields and the rest have to be contained in the impurity fields, $\varphi^{AB}$. Once the sixteen superconformal moduli are distributed among some of the scalar fields, the non-exact moduli are soaked up by the remaining (large number) of fields, which are mostly $Z$'s.
The bosonic integrations over the position and size of the instanton are left as a last step. These integrals are logarithmically divergent, the coefficient of the logarithm corresponding to the contribution to the matrix of anomalous dimensions.
In [2] we considered the two-impurity and four-impurity cases in detail. The results were as follows.
Two bosonic impurities
For the two-impurity case there is a technical problem in carrying out a complete analysis. The point is that in order to soak up all sixteen of the fermionic supermoduli, at least one of the scalars in each operator has to soak up six fermionic moduli, rather than the minimum number of two. This means that the contribution is of higher order in $\lambda'$ than a leading contribution would be, which is in line with the two-impurity result in plane-wave string theory described earlier. It is technically very complicated to derive the precise form of this six-fermion contribution, but this is needed to determine the $J$-dependence of the two-point function. Nevertheless, if we assume BMN scaling, the analysis can be carried through sufficiently to argue that the result is in agreement with the string calculation. This follows since the dependence on $g_{YM}$ and $N$ can be determined without knowledge of the details of the six-fermion term, and this uniquely fixes the power of $J$ needed for BMN scaling. This requirement, in turn, constrains the way in which the fermion zero modes can appear in the profile of the operator. Specifically, the two-point function can obey BMN scaling only if the distribution of the zero modes is such that the final result is independent of the single mode number entering the definition of the two-impurity operators.
Since in this case the analysis is incomplete we will only state the final result here, and give a more detailed description of our method in the four-impurity case. It is simplest to choose the two states to be in the representation 9 of $SO(4)_R$, since this sector contains only one operator, which cannot mix with any other. The result for the two-point function of this operator, assuming BMN scaling, is proportional to a logarithmically divergent integral, $I$, over the bosonic moduli, which can be regulated by dimensional regularisation of the $x_0$ integral. The coefficient of this divergence gives the instanton-induced anomalous dimension of $O$, equation (3.11). This is in agreement with the non-perturbative correction to the mass of the dual string state computed in [1]. In particular, the anomalous dimension (3.11) is independent of the parameter $n$ corresponding to the mode number of the plane-wave string state. Apart from the exponential factor characteristic of instanton effects, (3.11) contains an additional factor of $(\lambda')^2$. This is due to the inclusion of six-fermion scalars, which give rise to additional $(\bar\nu\nu)$ bilinears, each of which brings one more power of $g_{YM}$. As will be seen in the next subsection, in the case of four-impurity $SO(4)_R$ singlets it is sufficient to consider the solution for all the scalars that is bilinear in the fermions, and as a consequence we shall find a leading contribution of order $(g_2)^{7/2}\, e^{-8\pi^2/g_2\lambda'}$.
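Schematically, collecting the factors quoted above (a compact restatement of ours, which assumes that the same $g_2^{7/2}$ measure factor as in the four-impurity case applies), the two-impurity instanton-induced anomalous dimension behaves as

$$\gamma^{\rm inst}_{2\text{-}\rm imp} \sim (g_2)^{7/2}\,(\lambda')^2\, e^{-8\pi^2/g_2\lambda'}\,,$$

to be contrasted with the four-impurity singlet contribution below, which carries no additional powers of $\lambda'$.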
Four bosonic impurities
The calculation of two-point functions of four-impurity operators is more involved than the corresponding calculation in the two-impurity case from the point of view of the combinatorial analysis. However, at the four-impurity level, in the case of $SO(4)_R\times SO(4)_C$ singlets, the calculation of the leading instanton contributions requires only the inclusion of the quadratic fermionic terms in the classical profiles of the scalar fields, which are known explicitly. Therefore, in this case the semi-classical contributions to the two-point functions can be analyzed more completely. The fact that non-zero correlation functions are obtained using the minimal number of fermion modes for each field also implies that in this case a contribution to the matrix of anomalous dimensions arises at leading order in $\lambda'$. The case in which the external state is an $SO(4)\times SO(4)$ singlet with four scalar impurities is the simplest to analyze and also corresponds to the states we discussed in the context of the plane-wave string theory. More explicitly, the operators to be considered are of the form dual to the scalar plane-wave string state $\varepsilon_{ijkl}\,\alpha^i_{-n_1}\alpha^j_{-n_2}\alpha^k_{-n_3}\alpha^l_{-(n_1+n_2-n_3)}|0\rangle_h$. The conjugate operator involves $\bar Z$'s instead of $Z$'s.
As before, in considering the distribution of the fermionic moduli among the $J+4$ fields within a trace, half of the superconformal modes (i.e., eight) must be soaked up by each of the two operators in the two-point correlation function. Furthermore, at least four of these have to be soaked up by the impurity scalar fields, since the quantum numbers of the $Z$'s are such that they can soak up at most four of the superconformal modes. The number of possible ways of distributing each kind of fermionic modulus among the $J+4$ scalar fields is very large and we will not describe the combinatorics here. After summing this very large number of terms, the resulting expression for the correlator is (omitting overall coefficients) proportional to

$$\int d^5\Omega\; (\Omega_{14})^J\,(\Omega_{23})^J\; K(n_1,n_2,n_3;J)\, K(m_1,m_2,m_3;J)\,, \qquad (3.13)$$

where the $\Omega_{AB}$ are angular variables on the five-sphere that emerge from the integration over the $\nu$ and $\bar\nu$ moduli. The $J$ and $N$ dependence in the prefactor is obtained by combining the normalisation of the operators, the contribution of the measure on the instanton moduli space and the factors of $g_{YM}\sqrt{N}$ associated with the $\nu$ and $\bar\nu$ variables. The expression (3.13) contains integrations over the bosonic moduli, $x_0$ and $\rho$, the sixteen superconformal fermion modes and the five-sphere coordinates $\Omega_{AB}$.
The dependence on the integers $n_i$, $m_i$, $i = 1, 2, 3$, dual to the mode numbers of the corresponding string states, is contained in the functions $K(n_1,n_2,n_3;J)$ and $K(m_1,m_2,m_3;J)$. These are given by the sum of 35 terms, which are sums over integers $q$, $r$, $s$ of phases $\exp\{2\pi i[(n_1+n_2+n_3)q + (n_2+n_3)r + n_3 s]/J\}$ multiplying the multiplicity factors associated with the different distributions of $Z$'s in each case. There are very many contributions to each of these 35 terms, and the sums over this very large number of phase factors lead to some very impressive cancellations of what would otherwise be large and unlikely-looking expressions.
The final result is obtained after performing the bosonic integrals. At each step various powers of $g_{YM}$, $N$ and $J$ enter, and it appears rather miraculous that in the end they all combine into a function that depends only on $g_2$ and $\lambda'$, in accord with BMN scaling; the bookkeeping of where these different powers of the couplings come from is indicated in (3.14). The final result for the two-point function turns out to vanish unless the mode numbers of the operators are equal in pairs, just as in the string theory D-instanton calculation. The result, (3.15), is logarithmically divergent; the scale $\Lambda$ appears as a consequence of the $1/\epsilon$ divergence in the $\rho$, $x_0$ integration. The physical information contained in the two-point function is in the contribution to the matrix of anomalous dimensions, which is read from the coefficient in (3.15) and does not depend on $\Lambda$. Unlike the two-point functions of two-impurity operators, (3.15) is independent of $\lambda'$, apart from the dependence in the exponential instanton weight. The mode-number dependence in (3.15) is extremely simple, given the very large number of terms that had to be summed. The calculation presented here is not sufficient to determine the actual instanton-induced anomalous dimension of the operator $O_1$. This requires the diagonalisation of the matrix of anomalous dimensions, of which we have not computed all the entries. Other entries are determined by the corresponding two-point functions, whose calculation follows the same steps described here and results in expressions similar to (3.15). From this we can conclude that the leading instanton contribution to the anomalous dimensions of singlet operators behaves as $(g_2)^{7/2}\,e^{-8\pi^2/g_2\lambda'}$, which is in agreement with the string result (2.8). It is worth stressing that the condition of pairwise equality of mode numbers appears in a highly non-obvious manner in the gauge theory calculation, while it followed rather trivially from the form of the boundary state in the plane-wave string theory. The Yang-Mills instanton contributions to other (non-singlet) four-impurity operators are suppressed by powers of $\lambda'$, as in the two-impurity case. This is also in qualitative agreement with the string side of the correspondence. However, in order to evaluate the semi-classical profiles of the BMN operators we would again have to use the contribution to some of the scalar fields that contains a product of six fermionic moduli, which presents the same technical obstacle as in the two-impurity case.
Other issues
The basic message is that we find striking agreement between instanton effects in the gauge theory and those calculated in the plane-wave string theory. We focused on operators with two and four scalar impurities since these are the easiest to calculate on the gauge side. The four-impurity case, although more involved, is fully under control, whereas the two-impurity case presents subtleties due to the fact that the leading semi-classical approximation vanishes and the first non-zero contribution arises at higher order. Clearly it would be interesting, but very challenging, to generalize the present work from the one-instanton sector to multi-instanton sectors.
The structure of the string theory side of the calculation was much simpler than the gauge side. In fact, many properties of the Yang-Mills side would be very difficult to calculate without the insights provided by the string calculation. For example, one generic feature of the string calculation is that only states with an even number of non-zero mode insertions receive D-instanton corrections. Zero-mode oscillators can appear in odd numbers with the condition that they be contracted into a $SO(4)\times SO(4)$ scalar between the incoming and outgoing states. Another peculiarity observed in the string theory calculation is that the D-instanton contribution to the masses of certain states with a large number of fermionic non-zero mode excitations involves large powers of the mass parameter $m$. These mass-matrix elements are ones that receive no perturbative contributions. When expressed in terms of gauge theory parameters this behaviour corresponds to large inverse powers of $\lambda'$. This is not pathological in the $\lambda'\to 0$ limit, because the inverse powers of $\lambda'$ are accompanied by the instanton factor, $\exp(-8\pi^2/g_2\lambda')$. From the point of view of the gauge theory this result is intriguing, not only because of the unusual coupling constant dependence that the anomalous dimensions of the dual operators display, but also because there are no other known examples of operators in $\mathcal{N}=4$ SYM whose anomalous dimension receives instanton but not perturbative corrections. This particular class of BMN operators will be discussed in [3].
Finally, we should note that the issue of non-perturbative corrections to anomalous dimensions is very far removed from the interesting issues surrounding the integrability of string theory in $AdS_5\times S^5$. Integrability is expected to be a property of tree-level string theory and the corresponding planar approximation to $\mathcal{N}=4$ Yang-Mills, which can be successfully modelled by local spin chains. In contrast, an instanton affects all the fields in the BMN operator equally, and is therefore highly non-local along the chain. However, instantons are crucial in describing the $SL(2,\mathbb{Z})$ S-duality transformations of the theory and, in particular, for understanding how $SL(2,\mathbb{Z})$ acts on the anomalous dimensions. In general $SL(2,\mathbb{Z})$ transformations relate operators of small and large dimension, just as in string theory they relate fundamental strings to D-strings, which have large masses of order $1/g_s$ in the limit of weak string coupling, $g_s\ll 1$. It would be interesting to understand how S-duality is realised in type IIB string theory in the plane-wave background. A corresponding symmetry should exist in the BMN sector of $\mathcal{N}=4$ SYM, and the instanton effects which we have described should be relevant to its implementation.
Risk attitude and risk strategy management in red chili business in Langkat Regency North Sumatera Province
Red chili is one of the most promising commodities for cultivation. However, red chili farmers do not always make a profit; at times they experience substantial losses. This is related to the risks and uncertainty of farming that farmers face. Therefore, it is necessary to investigate the relationship between risk attitudes and risk management strategies in red chili farming in Langkat Regency, North Sumatra Province. The data used are primary data sourced from questionnaires and interviews with 112 red chili farmers in Langkat Regency. The analytical method used is the Pearson correlation, applied to examine the relationship between risk attitudes and risk strategies. The results showed that the correlation between the risk attitude and risk management strategy variables was strong, significant and inverse: risk attitude had a significant negative relationship with the intention to implement risk reduction strategies in red chili farming.
Introduction
Doing business in agriculture is generally known to have high potential, but also enormous risks. Agricultural risk arises from various factors, ranging from climate variability and change, natural disasters, uncertainty in productivity and prices, weak infrastructure, weak markets and a lack of financial services, to the limited risk control tools that still barely touch the world of agriculture. The steps taken by farmers are strongly influenced by attitudes and relationships in the local community where they live. For a farmer, it is the community that is the main source of his welfare [1].
In running their farming business, red chili farmers face complex problems, both internal and external. Internal problems are those that can be controlled by farmers, such as limited land tenure, low technology mastery and low capital. External problems are beyond the farmer's control and include climate change, plant pest (OPT) attacks, and fluctuating selling prices. Red chili is one of the most promising commodities for cultivation. However, red chili farmers do not always make a profit [2].
The individual's choice to act in the face of risk depends on the individual's assessment [3]. Another key factor is believed to be risk attitude [4], sometimes referred to as the farmer's orientation towards taking risk. Risk attitude can vary from very risk averse to very risk seeking. Different people have different attitudes to risk, which cause them to deal with it differently [5][6][7]; this literature also shows that the risk behaviour of farmers may affect their decisions on input usage.
Smallholder risk management strategies deal with uncertainty and risk in different ways. Common strategies are avoiding heavy credit dependence or maintaining stability during times of financial hardship, generating other (off-farm) income, using external instruments such as crop insurance, diversifying production or sources of income, and saving on personal expenses [8]. The author recognizes that farmers can and do implement risk management strategies in an effort to reduce the risk of their farming.
Determination of the research area
The research area was determined purposively, considering that Langkat Regency is one of the red chili producing centres in North Sumatra Province; however, out of the five red chili producing districts, Langkat Regency was among the lowest in production, even though its land area and productivity could be higher than those of the other districts.
Data collection method
The data collection method was carried out by collecting primary data through interviews with farmers using a structured questionnaire guide. The data collected in relation to this paper include: farmer household characteristics, control of land and other assets, cropping patterns, farm input and output structures, and household income structure. The aspect related to farmer attitudes distinguishes farmers who like risk from those who do not. Regarding the farmers' choice of strategy to reduce risk, there are several options that farmers usually pursue to reduce the risk of farming red chilies.
Data analysis method
The analysis model used is the Pearson correlation, with the following equation:

$$r_{xy} = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - (\sum x_i)^2}\,\sqrt{n\sum y_i^2 - (\sum y_i)^2}} \qquad (1)$$

where:
r_xy : Pearson correlation coefficient
n : number of samples/observations
x : independent variable/first variable
y : dependent variable/second variable
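As an illustration of this computation (a minimal sketch: the score vectors below are synthetic stand-ins, not the survey data, and scipy's pearsonr is used in place of the SPSS procedure reported later), the correlation and its two-tailed significance can be obtained as follows:

```python
# Pearson correlation with a two-tailed significance test (illustrative).
# The score vectors below are synthetic stand-ins, not the survey data.
import numpy as np
from scipy.stats import pearsonr

risk_attitude = np.array([22, 18, 25, 14, 20, 16, 23, 12, 19, 21])
strategy      = np.array([10, 15,  8, 19, 12, 17,  9, 21, 13, 11])

r, p = pearsonr(risk_attitude, strategy)
print(f"r = {r:.3f}, two-tailed p = {p:.4f}")

# Reading the output mirrors the paper's interpretation of SPSS results:
# |r| close to 1  -> strong association
# r < 0           -> inverse (negative) direction
# p < 0.01        -> significant at the 0.01 level (flagged ** by SPSS)
```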
Characteristics of respondents
The research respondents were 112 farmers cultivating red chili, spread across four districts in Langkat Regency, namely Stabat, Secanggang, Kuala, and Sei Bingai. The results of the data processing of farmer characteristics, in terms of the age, level of formal education and farming experience of each respondent farmer, can be seen in table 1.
The age of the farmers who were respondents ranged from 38 to 61 years, with the largest group, 50.89%, between 40 and 49 years, because this is the productive age for farmers conducting red chili farming. For those under 40 years old, the percentage of respondent farmers is 17.86%; this is because people under 40 still do a lot of work in other business sectors besides being farmers. The most common education level among respondents is secondary school graduate (SMA), with a percentage of 56.25%, while 36.61% of farmers have formal education levels below secondary school. The education level of farmers influences the classification of farmers into those who are risk averse and those who like risk.
In terms of length of farming, the respondent farmers average between 10 and 20 years of experience. Most of the farmers, 40.18%, have cultivated red chilies for between 11 and 20 years; 33.4% have farming experience under 11 years; and 26.79% of respondent farmers have experience in red chili farming of over 20 years. Judging from the results of the data processing for farming experience, farmers in Langkat Regency already have sufficient experience in farming. It can therefore be said that farmers already have sufficient experience and knowledge about agricultural risks. The risk management strategies elicited from these farmers show that they tend not to apply any particular strategy, because red chili farmers in Langkat Regency have already anticipated the impact of the risks preventively, before starting red chili farming.
Attitudes toward risk management strategy
Based on the results of the Pearson correlation analysis performed using SPSS, the following results were obtained: the Pearson correlation coefficient is -0.630**. This means that the magnitude of the correlation between the risk attitude and management strategy variables is -0.630, which is strong because its absolute value is close to 1. The two-star sign (**) means that the correlation is significant at the 0.01 significance level and is tested in two directions (2-tailed).
Based on the existing criteria, the relationship between the two variables is significant because the significance value of 0.000 is below 0.01. (If there is no two-star sign, the significance level is by default 0.05.) Because the test is two-tailed, the relationship between the two variables can run in either direction: positive (unidirectional) or negative (inverse).
The direction of the correlation between two variables is seen from the sign of the correlation coefficient. Because the correlation coefficient value is negative, namely -0.630, the correlation between the two variables is inverse. A correlation is called negative if two (or more) correlated variables move in opposite, contradicting directions: an increase in variable X, for example, will be followed by a decrease in variable Y. In this research, this means that if the risk attitude score is high, the strategy score will be lower, and vice versa. The correlation between the risk attitude and management strategy variables is thus strong, significant and inverse: risk attitude has a significant negative relationship with the intention to implement a risk reduction strategy in red chili farming.
Relationship of risk attitudes and farm risk management strategies
In this study, the variable risk attitude was measured by various statements about farm risk-taking behaviour. Farmers, as respondents, rated their agreement with these statements on a five-point scale from 1 (strongly disagree) to 5 (strongly agree).
Risk attitude negatively influences strategy; that is, farmers who agree with statements that they are less willing to take risks (risk averse) show a greater tendency to earn off-farm income, to deal with risks through special measures, and to maintain financial stability. Thus, we can distinguish two primary risk management approaches among farmers. Risk-averse farmers tend to handle risk passively, by maintaining stability, securing off-farm income, or working harder and holding back personal expenses in times of trouble. Meanwhile, farmers who are more willing to accept risk (risk-seeking farmers) take a proactive approach to risk and use external risk management, namely diversifying production and sources of income, and optimizing their agriculture.
The results of this study are in line with research conducted by [9], showing that most small farmers in developing countries are reluctant to take risks. The behaviour of farmers in facing agricultural risks shows that most farmers are risk averse, while 23.75% are risk neutral and the fewest (7.5%) are risk seeking; the farmers' way of avoiding risk is to reduce the risk of pest attack. [10] stated that farmers' behaviour towards farming risks is still risk averse. That a large share of the sampled farmers is risk averse is understandable, because the life of rural farmers is so close to the margin of subsistence and subject to erratic weather. In addition, farmers share a distinctive character: they try to avoid failure rather than pursue bigger profits by taking risks.
The success of agricultural management is closely related to the willingness of farmers to take risks, which is reflected in their willingness to use more inputs than other farmers in general. As technology advances today and in the future, farmers are expected to become risk takers instead of risk avoiders, because they must have a more optimistic and brighter future. A distinction can be made between farmers who are willing to take risks and farmers who are not [11]. The contrast between risk-averse and risk-seeking farmers therefore becomes clear: the more risk averse farmers are, the less they tend to adopt ex-ante risk management strategies, relying instead on unplanned, ex-post curative actions. On the other hand, the more farmers seek risk (risk seeking), the more likely it is that they will apply management strategies before taking their agricultural actions (ex-ante).
One explanation for this finding is that farmers who are more willing to take risks have a greater need to protect themselves against those risks and are thus more likely to adopt certain risk management strategies. This is especially true for external risk management strategies, which enable farmers to take on more risk because they are assured of a minimum price or income. In terms of agricultural optimization, farmers can be very risk averse, unwilling to take on the financial risks associated with modernization and/or scaling up, even if such strategies could reduce operational risks and increase returns. Given the complexity and interdependence of different risks, it is often the case that managing one risk creates another. Finally, it may be that farmers balance risks: farmers who are more willing to take certain specific risks simultaneously manage other, collateral risks to balance the total risk.
Conclusions
Risk attitude has a significant negative relationship with the risk management strategy of farming: farmers who reject risk (risk averse) do not apply strategies in the implementation of red chili farming, while farmers who like risk (risk seeking) do apply risk management strategies in their red chili farming. The government should be able to understand the needs of farmers and the risks they face in cultivating red chilies. Agricultural risks arise from various factors: climate variability and change, the occurrence of natural disasters, uncertainty in productivity and prices, weak infrastructure, weak markets and a lack of financial services, including limited risk control tools that only partly reach the agricultural sector. Human resources in agricultural management should therefore be strengthened, and market price standards regulated, so that red chili farmers are not disadvantaged.
State‐and‐transition simulation models: a framework for forecasting landscape change
A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state‐and‐transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST‐Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete‐time‐inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time‐since‐transition as state variables, to specify one‐step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land‐use/land‐cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State‐and‐transition simulation models can be applied to a wide range of landscapes, including questions of both land‐use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections. When combined with the ST‐Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of landscape dynamics.
Introduction
The world is composed of landscapes, natural and human-influenced, that are heterogeneous in space and time. Simulation models can provide valuable insights into the dynamics of these landscapes, including improving our understanding of how these landscapes change and, in turn, providing forecasts of their future state (Baker 1989; Sklar & Costanza 1991; Veldkamp & Lambin 2001).
Since the early 1970s, a wide range of spatially explicit simulation models have been developed to understand and forecast landscape dynamics. First, there are landscape vegetation models, developed principally by ecologists, which focus on predicting landscape-scale changes in vegetation in response to ecological drivers such as climate, biophysical conditions and disturbances (Baker 1989; Keane et al. 2004). While many of these landscape vegetation models have been developed for specific questions or regions (Keane et al. 2004), a few have been generalized sufficiently to become modelling platforms, including SELES (Fall & Fall 2001), TELSA (Kurz et al. 2000) and, for forested systems, LANDIS (Wang et al. 2014). Secondly, there are land-use/land-cover (LULC) change models, developed principally by geographers, where the focus is to represent the effects of human-driven processes on LULC change (Agarwal et al. 2002; Verburg et al. 2006; Brown et al. 2013). Examples of more general LULC change modelling platforms include CLUE-S/Dyna-CLUE (Verburg et al. 2002; Verburg & Overmars 2009), SLEUTH (Chaudhuri & Clarke 2013), DINAMICA EGO (Soares-Filho, Cerqueira & Pennachin 2002) and CA_MARKOV (Pontius & Malanson 2005).
Existing models of landscape dynamics share common features, many of which we believe can be captured in a generalized landscape modelling framework. Such a framework would reduce the duplication of efforts across modellers and foster innovation, communication and collaboration across the landscape modelling community. The approach we present here has emerged from our involvement in the development of landscape vegetation models for a range of ecological systems, including forests, rangelands, wetlands and land-use change (Wilson et al. 2014). Our method, which we refer to as a state-and-transition simulation model (STSM), can be characterized as follows: (i) space is represented as a set of discrete spatial units; (ii) time is represented in discrete steps; (iii) the change over time in the discrete state of each spatial unit is represented as a stochastic process; and (iv) time-inhomogeneous rates of change between states are expressed as probabilities.
Why the new term 'state-and-transition simulation model'? While STSMs share some common features with Markov chains (when the Markov chains are referenced spatially in the manner proposed by Baker 1989), the differences between Markov chains and our method are significant enough to warrant a different term. As will be described below, the principles behind an STSM were developed specifically to overcome some of the limitations of Markov chains for modelling landscape change. Secondly, while STSMs also share features with cellular automata models (Balzter, Braun & Kohler 1998; White & Engelen 2000), there are also significant differences between STSMs and cellular automata in their representation of spatial interactions. Furthermore, terms such as 'state-and-transition model' and 'transition model' have been widely used in the literature without a clear, formal definition as to their meaning, including references to conceptual diagrams (Stringham, Krueger & Shaver 2003), Markov chain models (Acevedo, Urban & Ablan 1995) and other forms of stochastic processes (Keane et al. 2004). Thus, the term 'STSM', which was first used by Czembor & Vesk (2009), serves to distinguish our specific approach from other more ambiguous terms.
The objective of this paper is to present the details of our STSM framework. We begin with a description of the STSM method, followed by a brief overview of the software available to develop STSMs. A case study example is then presented demonstrating some of the key elements of STSMs. We conclude with a brief discussion of how STSMs relate to other modelling approaches, and opportunities for further enhancements to the STSM framework.
STSM approach
Like most approaches to spatially explicit landscape modelling, the first step in developing an STSM is to divide the landscape spatially into a set C of n simulation cells; these cells can be any shape and size, although they are most commonly represented as a regular raster grid. An STSM represents the change in state of each simulation cell over time as a discrete-time stochastic process {X_t : t ≥ 0}, where the state space is a set S of r discrete state types (X_t ∈ S) and t represents discrete timesteps. As in a Markov chain, probabilities are defined for one-step transitions between states for each cell. In a Markov chain, these one-step transition probabilities are specified by defining the probability, P_ij, that the state of the stochastic process for each cell, X_t, will move from state type i to state type j, where i, j ∈ S. These probabilities are then represented for each cell as an r by r transition matrix P = (P_ij).
In many landscape models, however, it is important to distinguish the different types of transitions between states, something that is not possible with Markov chains. For example, in a forested system, there can be multiple processes responsible for transitions between states, such as succession, wildfire and timber harvest, which are all combined into a single transition probability between any two states in a Markov chain representation. With an STSM, however, a set U of m discrete transition types is also defined for all cells, with a separate transition matrix P defined for each possible transition type k ∈ U. The non-zero entries across all P are referred to as transition pathways. So, in our forested system example, separate probabilities can be represented for each process: a separate transition matrix for succession, wildfire and timber harvest.
A single Monte Carlo realization of an STSM begins by setting initial values for the state type of each cell (i.e. assigning values to X_0) and then using the transition probabilities, P_ij, to simulate the state of each cell, X_t, for every successive timestep. To accommodate the requirement for multiple transition types in an STSM, within each timestep the transition matrices, P, are applied sequentially for each transition type k ∈ U in order to update the state of each cell between timesteps. As more than one type of transition can occur within a timestep, the order in which the transition matrices are applied within each timestep can also be specified in an STSM, as this order will influence the results of a simulation.
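As a minimal illustration of the update loop just described (a sketch of the general method, not the ST-Sim implementation; the states, rates and ordering below are invented for the example), a non-spatial realization can be simulated as follows:

import numpy as np

# Minimal non-spatial sketch of one STSM Monte Carlo realization; states and
# probabilities are illustrative, not taken from any published model.
rng = np.random.default_rng(42)
states = ["Grassland", "Shrubland", "Forest"]      # the state space S
n_cells, n_steps = 1000, 50

# One matrix per transition type k (here: succession, wildfire). Entry [i, j]
# is the one-step probability of moving from state i to state j; rows need not
# sum to 1, the remainder being the probability that this type does not occur.
succession = np.array([[0.00, 0.05, 0.00],
                       [0.00, 0.00, 0.02],
                       [0.00, 0.00, 0.00]])
wildfire   = np.array([[0.00, 0.00, 0.00],
                       [0.01, 0.00, 0.00],
                       [0.02, 0.00, 0.00]])
transition_types = [succession, wildfire]          # applied sequentially per timestep

x = rng.integers(0, len(states), size=n_cells)     # X_0: initial state of each cell
for t in range(n_steps):
    for P in transition_types:                     # order matters within a timestep
        probs = P[x]                               # per-cell destination probabilities
        stay = 1.0 - probs.sum(axis=1, keepdims=True)
        full = np.hstack([probs, stay])            # last column = no transition of this type
        draws = np.array([rng.choice(len(states) + 1, p=row) for row in full])
        x = np.where(draws < len(states), draws, x)

print({s: int((x == i).sum()) for i, s in enumerate(states)})  # final cells per state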
A second difference between STSMs and Markov chains is that with STSMs 'counters' can be defined as additional state variables for each cell. Each counter is a positive integer random variable, measured in units of timesteps, that is initialized for every cell in a simulation and then incremented by one for every timestep. The motivation for including counters as additional state variables in STSMs is twofold: first, to report these as model outputs and secondly, to allow transition probabilities to be defined as a function of the value of these counters.
Age and time-since-transition
Counters are most commonly used to track the age and time-since-transition (TST) for each cell, where TST refers to the number of timesteps that have elapsed since one or more transition types last occurred. To capture this in a Markov chain, the state space for X_t could be expanded to include all possible combinations of state type, age and TST. Formulating a stochastic process this way, however, results in an unmanageably large state space for even the simplest models. STSMs instead track each counter (in addition to the state type X_t) as a separate discrete-time stochastic process (Fig. 1).
In order to use counters to track age, each cell is assigned an initial age (A_0) at the start of the simulation, and this age is then updated every timestep using the following rules: (i) if a transition occurs for the cell, then a corresponding probability distribution is used to determine the fate of the cell's age; these probability distributions for the change in age can vary as a function of the state of the cell (i.e. its state type, age and TST) and the type of transition; (ii) if no transition occurs for the cell, then the age of the cell is incremented by 1. The TST is tracked in a similar way and repeated for each transition type.
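The counter update can be stated compactly. The sketch below assumes the simplest fate distribution, in which a transition resets the counter to 0; the general method allows other distributions:

import numpy as np

def update_counters(age, tst, transitioned):
    """One-timestep counter update. age, tst: integer arrays (in timesteps);
    transitioned: boolean array marking cells where the tracked transition
    type occurred this timestep. Assumes reset-to-zero as the fate of the
    counter on transition; other fate distributions are possible."""
    age = np.where(transitioned, 0, age + 1)   # reset on transition, else increment
    tst = np.where(transitioned, 0, tst + 1)   # time-since-transition, same rule
    return age, tst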
Transition targets
Another limitation of Markov chains is that transitions between states must always be characterized in terms of a probability of occurrence. However, there are some transitions (often those that are management-oriented) that are more appropriately expressed as a target for the area to be transitioned over time, rather than as a probability. With STSMs, transitions can be characterized using either probabilities or target areas, where targets are dynamically converted into transition probabilities during the simulation; this can be calculated if one knows the full state of the landscape at the time of the transition. STSMs also allow transition targets to be specified for other derived variables (e.g. the volume of timber harvest) so long as these variables can be expressed as a function of the STSM state variables (i.e. state type, age and TST) and transition types.
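One simple way to perform this dynamic conversion, consistent with the description above though not necessarily the exact ST-Sim rule, is to divide the remaining target by the area currently eligible to transition:

def target_to_probability(target_area, eligible_cells, cell_area=1.0):
    """Convert an annual transition target (in area units) into a per-cell
    probability, assuming a simple proportional rule; the precise ST-Sim
    algorithm may differ."""
    eligible_area = eligible_cells * cell_area
    if eligible_area <= 0:
        return 0.0          # target cannot be met (cf. the agricultural shortfall below)
    return min(1.0, target_area / eligible_area)

print(target_to_probability(25.0, 4000))   # 0.00625 per eligible 1-km2 cell per year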
Spatial and temporal heterogeneity
Because the state variables in STSMs are random variables, there is some inherent variability in when and where transitions occur in any one realization of a model. However, there are often situations in which additional temporal variability in transition probabilities is required in order to adequately capture the dynamics of some processes. For example, the annual transition probabilities for wildfire in the model of Fig. 1 might be better represented as a random variable, rather than a single value, reflecting the pattern of interannual variability in the expected amount of wildfire on the landscape due to changes in climatic conditions. In addition to temporal variability, the transition probabilities in STSMs can also vary spatially: it is possible to vary the transition probabilities for every cell and timestep in the landscape. Continuing the example of Fig. 1, one might expect wildfires to occur in patches (i.e. with a defined spatial autocorrelation). To accommodate this requirement, the transition probabilities in an STSM can also be expressed as random variables for each cell and timestep; as a result, it is ultimately possible to generate any spatial and temporal pattern of transitions on the landscape. Importantly, the transition probabilities for any one cell can be a function of the past and current state of the entire landscape, as represented by the values of the state variables associated with all of the landscape's cells (Fig. 2).
To simplify the representation of spatial and temporal variation in transition probabilities, a two-step process is often used with STSMs. First, a base probability for each transition type is defined as a random variable. Transition multipliers are then specified in order to scale the base probabilities over space and time. These multipliers are defined as a stochastic process for each transition type and cell, the distribution of which can also be space and time inhomogeneous. During a simulation, the realized values for the transition probabilities are the product of the realized base probabilities and transition multipliers, with concurrent probabilities renormalized such that the sum of all probabilities does not exceed 1. For example, to reproduce historical wildfire patterns in an STSM, the base wildfire probability is often estimated as a function of the long-term mean fire cycle, while the relative frequency distribution of area burned each year can be used to estimate the multipliers.
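The two-step construction can be sketched directly. Here the renormalization rule (scaling only when concurrent probabilities exceed 1) is one plausible reading of the description above, and the rates are invented:

import numpy as np

def realized_probabilities(base, multipliers):
    """base: (n_types,) realized base probabilities; multipliers:
    (n_types, n_cells) realized transition multipliers. Returns per-cell
    probabilities, renormalized so their sum never exceeds 1."""
    p = base[:, None] * multipliers
    total = p.sum(axis=0)                      # concurrent probability per cell
    scale = np.where(total > 1.0, 1.0 / total, 1.0)
    return p * scale

base = np.array([0.01, 0.002])                 # e.g. wildfire, harvest (illustrative)
mult = np.array([[0.0, 1.0, 4.0],              # spatially varying wildfire multiplier
                 [1.0, 1.0, 1.0]])             # uniform harvest multiplier
print(realized_probabilities(base, mult))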
Software
A key component of any modelling framework is its supporting software. While the concepts behind STSMs are simple enough that they could be coded from scratch in a spreadsheet, a robust software environment leads to more efficient model development, particularly for larger, more complicated models. To this end, a software product called ST-Sim, first released in 2013, has been created to support the development of STSMs (ApexRMS 2016). While ST-Sim is based on concepts originally developed in the TELSA software platform (Kurz et al. 2000), there are some important differences between TELSA and ST-Sim. First, ST-Sim is consistent with the general STSM framework outlined in this paper, while TELSA was developed as a specific model to support forest management in British Columbia, Canada. Secondly, TELSA uses a polygon-based representation of space, while ST-Sim is raster-based. This shift to a raster-based approach, along with other data management and multiprocessing extensions, has allowed ST-Sim to handle simulations across larger (i.e. >10^6 cells) landscapes (e.g. Costanza et al. 2015b). Finally, ST-Sim users can integrate external models (e.g. developed in R or Python) to dynamically generate transition probabilities within each timestep of a simulation, a key element of the framework. Details on how to download the ST-Sim software are provided in Appendix S1 (Supporting information).
Case study example
To illustrate the STSM method, we present a simple model of the dynamics of LULC for the state of Hawai'i (USA). The purpose of this modelling effort is to explore the interactions between possible future changes in LULC, combined with projected shifts in plant communities due to climate change, on the future spatial and temporal pattern of LULC across the state of Hawai'i. It is important to note that the description of the model and presentation of results here is purposefully brief, considering only a single 'business-as-usual' future scenario, in order to remain focussed upon the relationship between various features of the model and the STSM method presented above, rather than the broader context for the model's development and the interpretation of results. Additional details regarding the model parameterization, including ST-Sim software files, can be found in Appendices S1 and S2.
State variables and scales
The spatial extent for this model is the terrestrial portion of the state of Hawai'i, covering 16,416 km². The landscape was divided spatially into simulation cells, each of which is 1 × 1 km in size. Simulations were run for 50 years, with an annual timestep, using initial conditions corresponding to the year 2011; all simulations were repeated for 100 Monte Carlo realizations.
As in all STSMs, each cell in the simulation is characterized according to a suite of state variables (all of which are random variables) that are calculated for each simulation timestep. The first state variable is the state type of each cell: a total of 21 possible state types were defined for this model, consisting of all unique combinations of seven LULC classes (Grassland, Shrubland, Forest, Plantation, Agriculture, Developed and Barren), crossed with three possible moisture zones (Dry, Mesic and Wet). The second state variable is the age of each cell. As with all STSMs, a set of all the possible transition pathways between state types is defined (Fig. 3). The processes represented by these transitions include expansion and contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland, harvest of tree plantations and shifts in moisture zones due to climate change. The order in which the one-step transition probabilities for each of the transition types are applied is randomized for every timestep and Monte Carlo realization. Transitions due to agricultural expansion, agricultural contraction and urbanization are modelled using STSM transition targets. In order to represent the historical temporal variability in land-use change, the simulated annual transition areas are sampled for each year and Monte Carlo realization from a uniform distribution fitted to the corresponding historical land change data (Table 1). STSM transition multipliers are used to further characterize the spatial pattern of these transitions; based upon existing zoning maps (State of Hawaii 2015), static transition multipliers were generated to (i) restrict agricultural expansion to areas zoned for agriculture; (ii) prevent agricultural contraction from occurring in important agricultural areas; and (iii) prevent urbanization in areas zoned for conservation, and double the relative transition probability of urbanization in areas zoned as either urban or rural. In order to 'spread' transitions over time, transition multipliers are also generated (using an external model), for each cell, timestep and realization, such that (i) for agricultural expansion and urbanization, the relative transition probability increases linearly (from 0 to 1) as a function of the proportion of adjacent cells that are agriculture or developed, respectively; (ii) for agricultural contraction, the relative transition probability increases linearly (from 0 to 1) as a function of the proportion of adjacent cells that are forest/shrubland/grassland, and the relative probability of a cell transitioning to forest vs. shrubland vs. grassland increases linearly (from 0 to 1) as a function of the number of adjacent cells in each of these classes.
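The adjacency-driven multipliers described above can be computed with a simple neighbourhood count. The sketch below assumes a multiplier equal to the proportion of the eight adjacent cells already in the attracting class, which is one way to realize the linear 0-to-1 relationship:

import numpy as np
from scipy.ndimage import convolve

def adjacency_multiplier(in_class):
    """in_class: 2-D boolean grid (e.g. cells currently Developed). Returns a
    multiplier rising linearly from 0 to 1 with the proportion of the eight
    neighbours in the class, as assumed for urbanization above."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(in_class.astype(float), kernel, mode="constant", cval=0.0)
    return neighbours / 8.0

grid = np.zeros((5, 5), dtype=bool)
grid[2, 2] = True                              # a single developed cell
print(adjacency_multiplier(grid))              # 0.125 in the ring around it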
A separate wildfire submodel is integrated into the STSM to determine which cells incur wildfire transitions for each simulated year and realization (details in Appendix S2). This submodel aims to reproduce the spatial and temporal pattern of historical wildfires. The results of this submodel are then used to dynamically assign transition probabilities of 1 for those cells that transition each year and 0 for all other cells. The wildfire submodel generates two sets of these transition probabilities for each timestep and realization, one for each of two classes (high and low) of fire severity.
Shrub encroachment into grassland from neighbouring shrubland is hypothesized to occur in the absence of regular fires; however, there is ecological uncertainty regarding if or how long it might take for this shrub encroachment to occur. To capture this behaviour in the model, shrub encroachment is represented as follows: (i) transitions are restricted to cells in states associated with the Grassland LULC class where the time-since-fire is at least 10 years and the moisture zone is Dry or Mesic; (ii) at least one of the cell's eight neighbours must be in a state associated with the Shrubland LULC class; and (iii) the annual transition probability is sampled from a uniform distribution for each realization, where the bounds of this distribution were set to 0.006 and 0.0327, corresponding to a cumulative transition probability of 0.95 from 10 to 500 and 50 years without fire, respectively. Tree plantations in Hawai'i are generally harvested starting at the age of 5 (Whitesell et al. 1992), although the stand age at harvest is quite variable, due principally to economic uncertainties (J. Jacobi, personal communication). To capture this dynamic, plantation harvest is modelled as follows: (i) transitions are restricted to cells in states associated with the Plantation LULC class where age is at least 5 years; (ii) harvest transitions reset the age to 0; and (iii) the annual transition probability is sampled from a uniform distribution for each year and realization, where the lower and upper bounds of this distribution were set to 0.031 and 0.259, corresponding to a cumulative transition probability of 0.95 from age 5 to 100 and 15, respectively.
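The uniform bounds quoted for plantation harvest follow from solving 1 - (1 - p)^n = 0.95 for the annual probability p, where n is the number of eligible years; a quick check:

def annual_p(cumulative=0.95, years=10):
    """Annual probability giving the target cumulative probability after
    `years` years of eligibility: p = 1 - (1 - cumulative)**(1/years)."""
    return 1.0 - (1.0 - cumulative) ** (1.0 / years)

# Plantation harvest is eligible from age 5; 0.95 cumulative by age 100 or 15:
print(round(annual_p(years=100 - 5), 3))   # 0.031 (lower bound)
print(round(annual_p(years=15 - 5), 3))    # 0.259 (upper bound)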
Finally, output from an existing analysis of the effect of climate change on shifts in moisture zones (Fortini, Jacobi & Price in press) was integrated into the STSM. The 100-year projections for the area that will transition between moisture zones were converted to annual transition targets for each of the four moisture zone transition types (Table 1). While no temporal variability was modelled for these transitions, transition multipliers were used to restrict the location of these transitions to cells within the zones predicted by Fortini, Jacobi & Price (in press) for each transition type.
Initialization
The initial state type of each cell (i.e. in year 2011) was estimated by combining existing 30-m resolution maps of the LULC class (U.S. Geological Survey 2011) and moisture zone (Fortini, Jacobi & Price in press) for the state of Hawai'i, which were then aggregated using a majority algorithm to a 1-km resolution (Fig. 4). The same initial state was used for all realizations of the model.
The initial age for cells associated with the Plantation LULC class is modelled as a random variable with a uniform distribution between 0 and 100 (i.e. the maximum harvest age). The initial time-since-fire for cells in states associated with the Grassland LULC class is also modelled using a uniform distribution with a lower bound of 0. The upper bound of this distribution was set to the fire cycle (Van Wagner 1978); fire cycles were estimated from historical fire data (Eidenshink et al. 2007) as 69 and 502 years for the Dry and Mesic moisture zones, respectively. The initial age and TST of each cell are resampled for each realization of the model.

Results

Model outputs can be summarized as distributions over Monte Carlo realizations (Buckland 1984), or as maps averaged over timesteps and realizations.
Our case study provides a sample of the type of output that can be generated with STSMs; note, however, that because we present only a single future scenario, the results shown here are a scenario projection for LULC in Hawai'i, and not a prediction of the likely future state. In our case study scenario, we see that levels of agricultural expansion/contraction and urbanization are projected to match the historical levels from Table 1 at least for the first 30 years (Fig. 5); this is to be expected, as we set the levels using transition targets. Agricultural contraction decreases beyond this point, however, due to an eventual shortfall in agricultural land. In contrast, the area projected to transition due to wildfire, plantation harvest and shrub encroachment emerges as a function of the dynamic state of the landscape, as these transitions were parameterized using probabilities. Variability is projected to be greatest for wildfire transitions, as our wildfire submodel was configured to reproduce the historical annual pattern of wildfire variability. The variability for other transitions, however, is likely underestimated given the limited historical data used to characterize their future variability. From Fig. 6, we see the projected spatial pattern for transitions: urbanization, for example, spreads out from existing developed areas, while wildfire occurs with greater probability in the dry grassland communities. Figures 7-9 summarize the corresponding state of the landscape that emerges as a result of these projected transitions. For our case study scenario, we see a projected loss of grassland and agricultural areas, and a corresponding increase in shrubland and developed areas, due to the combined effects of all transitions represented in the model (Figs 7 and 8). Uncertainty is greatest for the projections of future grassland and shrubland area, due to the high uncertainty associated with the future rate of shrub encroachment. Finally, we see a shift from Mesic to both Dry and Wet moisture zones due to the projected effects of climate change (Fig. 9).
Discussion
A key feature of STSMs is their combination of simplicity and generality: while STSMs are rooted in the intuitive principles of Markov chains, they have important adaptations that make them applicable to a wide range of landscape management questions. A second key feature is their explicit representation of uncertainty, an important consideration for most modelling applications. The Hawai'i case study presented here demonstrates these key features. The projections for states and transitions are expressed as distributions, rather than simply mean values, thus incorporating, through Monte Carlo simulations, the combined uncertainties of multiple model inputs. While several of the transition rates projected by the case study model purposefully match historical distributions, the added value of reproducing these rates in the STSM is that one is then able to reflect the combined consequence of multiple model inputs, including their uncertainties, on the future projections for model outputs. Note that, for this simple example, we did not attempt to account for all model input uncertainties during simulations, and as such, our results should be considered as only a single sensitivity analysis of the modelled system. The moisture zone analysis, in particular, has large uncertainties that were not modelled, highlighting the challenges of developing STSMs using the results of analyses that themselves do not incorporate uncertainty.
STSMs differ from other spatial modelling methods in a number of ways. In contrast to cellular automata (Balzter, Braun & Kohler 1998; White & Engelen 2000) and spatially interacting Markov chains (Baker 1989; Monticino, Cogdill & Acevedo 2002), STSMs track multiple state variables for each simulation cell. STSMs also differ from Markov chains in that they allow for multiple transition pathways between pairs of states and for transitions to be specified as target areas. Other methods differ from STSMs in that they track continuous rather than discrete state variables. One such example is coupled map lattices (Kaneko 1992), the continuous state variable equivalent of cellular automata (Fonstad 2006). Another example is the LANDIS model, which for most applications tracks either biomass (Scheller & Lucash 2014) or the number of trees (Wang et al. 2014) by species and age class as its state variables. The other major difference between STSMs and LANDIS is that LANDIS has been designed for use specifically with forested systems, relying on tree species life-history traits to drive its dynamics, whereas STSMs are not specific to any particular vegetation community. LANDIS has thus been used principally for applications where details regarding individual tree species by age cohort may be important (e.g. management of uneven-aged forests), and for which there is sufficient life-history data to parameterize the model. STSMs, on the other hand, typically track only the forest community (i.e. species assemblages) and age as state variables, similar to the approach used in most timber supply models. There are also similarities between STSMs and LULC change models. Like STSMs, LULC change models use a discrete representation of space, time and state. In general, LULC change models divide the simulation process into two steps every timestep (Mas et al. 2014; National Research Council 2014). First, they calculate the total amount of change between states over one or more regions; for example, several models do this by applying a Markov chain to historical data in order to generate a matrix of transition probabilities between states (Soares-Filho, Cerqueira & Pennachin 2002; Pontius & Malanson 2005). Next, the models allocate this total change spatially across cells based on each cell's suitability for change; the suitability of each cell is often calculated as a relative probability, which can be influenced by both external factors and neighbourhood interactions. STSMs are well suited to capture these same dynamics. The amount of change between states can be represented through the specification of either target areas (i.e. top-down demand) or probabilities (i.e. bottom-up conversion) for transitions. As shown in the case study example, the distribution of these transitions can be made to follow any spatial or temporal pattern using the STSM transition multiplier feature. These multipliers can be calculated as a function of both external drivers and the state of neighbouring cells. A major difference between STSMs and most LULC change models, however, is the generality of the STSM method; STSMs focus on those elements of landscape dynamics that are common to most applications: simulating changes in the state of spatially referenced discrete random variables over time as a function of probabilistic transitions.
For example, the STSM method does not include specific routines for estimating transition probabilities over time, nor does it include routines for statistically fitting spatial suitability relationships or representing cell interactions. Rather, these tasks are accomplished through the development of external models, which are then used to generate STSM transition probabilities, either dynamically or a priori. There are several benefits of this generality: (i) a single, intuitive framework can be used for a wide range of applications, including modelling both vegetation dynamics and land-use change; (ii) STSMs are inherently stochastic, providing a framework for capturing uncertainty throughout the modelling process; and (iii) STSMs provide a common framework within which alternative approaches/models can be compared.
State-and-transition simulation models currently have some limitations. The first is that STSMs are only able to track discrete state variables. While there are many systems for which this limitation is a reasonable and often useful approximation, there are other systems and questions for which continuous state variables may also be required; extending the STSM framework to include continuous state variables is an area we are actively pursuing. A second limitation with STSMs is the absence of any capability to integrate agent/individual-based models (Grimm & Railsback 2005; Matthews et al. 2007); these models have become an increasingly important approach for representing certain drivers of landscape dynamics (DeAngelis & Mooij 2005; National Research Council 2014). We believe that future efforts should explore possible ways to integrate these two approaches. Finally, while STSMs provide the opportunity to characterize model uncertainty, through the expression of states and transitions as random variables, characterizing this uncertainty (in particular the covariance between transition probabilities) remains an important challenge.
A Conversational Logic: wa and ga
This paper (a) presents a fragment of a logic of conversation with some philosophical basis, (b) attempts to model and explain differences and properties of wa and ga, notably the so-called Unagi-Bun, the comparative (contrastive) readings of wa and the uniqueness (complete-list) readings of ga, and (c) brings inner peace to those who wisely do not read it.
1 Basic examples
The fundamental intuition is that wa serves to emphasise the predicate whereas ga serves to emphasise the subject.

1. Ga has a sense of uniqueness (complete-listing) that wa does not. Compare Sachi wa nihon ye kaerimashita (Sachi wa to Japan returned) with Sachi ga nihon ye kaerimashita, which in some cases (more cases than with `wa') suggests that Sachi is unique in her returning, or at least that Sachi makes up a complete list of (relevant) people who have returned. This may be seen more easily if the predicate is one which demands uniqueness.
Gabbay san wa ichiban yatsu hito desu (Mr Gabbay wa the most awful person is) is acceptable, but Gabbay san ga ichiban yatsu hito desu is preferred (also: Yoshiko ga mottomo uruwashii onna, etc.; or: Bakka na seiyojin ga nihon go wo narattara tohomonai koto ni narimasu). (Note: most data was collected or tested personally from numerous native speakers unlucky enough to be passing the author at the time. Only individual preferences were noted; hence, in this paper, a sentence may be said to be 'better' than another with little further qualification. Further qualification was not given by subjects of the survey, mainly because they were not confident to do so. Translations are for the most part literal but neither stylised nor complete, for that would be to beg the question.) Furthermore, not only is dare ga sonna uso wo iimashita ka? (Who ga such a lie told?) a much better formed question than the `wa' form, but to either question the response Jitsuo ga iimashita (Jitsuo ga told [it]) is better formed than Jitsuo wa iimashita.

2. Unagi-Bun: to say Watashi wa piza (I wa pizza) can be acceptable, maybe meaning 'I want a pizza', whereas the `ga' form Watashi ga piza is apparently less useable. Use of `ga', if not meaning that one is identical with a pizza, seems acceptable more in cases where a question 'Dare ga piza?' has been asked.
3. Wa has a sense of making a comparison that it would appear ga lacks. With Kore wa shiroi keredomo are wa kuroi (This wa white but that wa black), or with conjunction, Kore wa shirokute are wa kuroi (This wa white[ing] that wa black; the gerund is used where English uses 'and', as in English too, e.g. 'Peter running Jane walked'), a comparison is welcomed. With the use of ga, however, the first was said to be very strange, and Kore ga shirokute are ga kuroi was said to sound like what a teacher might say to a student of Japanese who had failed to understand the difference between `shiroi' and `kuroi'. The `ga' form of Motosan wa shinda (Mr Moto wa died) seems slightly preferred, but the `wa' form is likely to be expected to continue: keredomo Osakisan wa [mada] ikite iru (... but Osakisan wa [still] alive). It is assumed here that this type of comparison is brought on by a mechanism based on the meaning of wa rather than one based on the meaning of the predicate. Here is a different class of comparison, assumed here to be brought on by the meaning of the predicate: Yuji wa tatte iru no da, suwatte iru no ja nai (Yuji is standing but not sitting); also Yuji wa hashitte iru keredomo asette (wa) imasen (Yuji running is but hurrying is not). The thought being that it is wa that seems best for such sentences.
4. Kingyo ga iru (Goldfish ga is, i.e. there is a goldfish) seems a little preferred to its `wa' form. The `wa' form was said to be strange, but acceptable if continued by something like keredomo inu wa nai (... but dog wa not). Compare with Watashi ni wa ie ga aru (me to/at/in wa house ga is), which was said to be a more useable way of claiming to have a house than Watashi ni wa ie wa aru. To start with, `watashi ni ga' was said to be impossible.
5. Further, it seems that ga is for the most part not used with a negative: ano hito wa nihonjin ja nai (that person wa Japanese is not) is held in much preference to its `ga' form.
2 Proposed solution

2.1 Philosophical basis

The most inaccessible term within the following philosophical basis is the term 'thought'. The logical counterpart of the term 'thought' in this paper is the term 'topic' or 'world'. Use of 'world' here is distinct from its use in theories of necessity. At any point in a conversation a speaker has a variety of thoughts available to him. Sentences are intended to convey thoughts and to manipulate the thoughts of interlocutors. It is left open in this paper how the intentions of a speaker relate to the meanings of his words. However, sometimes the intent of the speaker is strongly linked with the meaning.
We take a use of wa as the speaker intending or meaning to emphasise the predicate. We take a use of ga as the speaker intending to emphasise the subject. In order to model this we read a sentence like A wa B as something like: (1) Any thought involving A must be a thought involving B.
We read a sentence like A ga B as: (2) Any thought involving B must be a thought involving A.
Conditions on what thoughts the interlocutor may have will vary the effect of such an instruction. Our logic shall model this by means of a modal logic where the possible worlds (or topics) are meant to represent thoughts relevant to the context.
2.2 Basic semantics
Let ℒ be a first-order language containing no n-ary function symbols (n ≥ 1), an existence predicate, and the symbols □ and ◊. We define a context to be the triplet C = (W, L, V) where:

Definition 2.1
1. W is a set of worlds.
2. L is a mapping from W into the powerset of ℒ. Let l_w denote the subset of ℒ that L assigns to w.
3. V is a set of functions that assigns to each world a valuation. Let f_w be the valuation assigned to w; then for every n-ary predicate symbol P of l_w, f_w(P) is a set of n-tuples of constants in ℒ.
Intuitively a context is a set of topics (thoughts) relevant to the conversation. Each topic contains relevant constants (people) and predicates. W is the set of topics, L says who and what predicate is relevant to each topic, and V says what is the case in each topic.
Definition 2.2 An assignment a, on a context C, is a mapping from the variables and constants of ℒ into the set of constants of ℒ; a says to what each variable and constant refers. For the most part it is as if we use, say, Yuuki-san's name to represent Yuuki-san; in this case the name `Yuuki' is assigned to itself.
We define when a formula of ℒ is satisfied by an assignment a at a world w:

1. P(x1 ... xn) is satisfied by a at w when all predicate and constant symbols of P(x1 ... xn) are in l_w, and (a(x1), ..., a(xn)) is an n-tuple f_w assigns to P.
2. ¬P(x1 ... xn) is satisfied by a at w when all predicate and constant symbols are in l_w, and (a(x1), ..., a(xn)) is not an n-tuple f_w assigns to P.
3. α ∨ β is satisfied by a at w when all predicate and constant symbols of α and β are in l_w, and either α is satisfied at w or β is satisfied at w.
4. Similar definitions may easily be derived for the other truth-functional connectives.
5. □α is satisfied by a at w when all predicate and constant symbols of α are in l_w, and α is satisfied at every world in W.
6. ∃xα is satisfied by a at w when all predicate and constant symbols of α are in l_w and α is satisfied by some a' at w such that a' is an x-alternate of a.
7. If a and b are constants then a = b is satisfied by an assignment a at w when a(a) and a(b) are in l_w, and a(a) = a(b).
We shall use a notion of truth that is different from the 'satisfied at every world' notion. Our notion of truth is defined as follows:

1. α is true in (W, L, V) when α is satisfied at all w ∈ W that contain all the constant and predicate symbols appearing in α, and there is such a w ∈ W.
2. α is false in (W, L, V) when α is not satisfied at any w ∈ W that contains all the constant and predicate symbols appearing in α, and there is such a w ∈ W.

A sentence is thus true if it is satisfied in every topic (world) where it is relevant (where all its constants and predicate symbols are part of that world).
2.3 Simple subject-predicate sentences using wa and ga
First we introduce new operators into our language.
Definition 2.5
• If φ is a formula containing x as a free variable then □_φ α is satisfied by a at w when w satisfies ∃xφ and α is satisfied at every world in W that satisfies ∃xφ.
• If a is a constant then □_a α is satisfied by a at w when w satisfies ∃x(x = a) and α is satisfied at every world in W that satisfies ∃x(x = a).

For example, if a is a name then □_a α is satisfied at w when α is satisfied in every topic at which a is relevant.
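To make the machinery concrete, here is a toy encoding (our own illustration with hypothetical names; the paper defines no implementation) of a context (W, L, V) with relevance sets, the constant-subscripted box of Definition 2.5, and the notion of truth above:

class Context:
    def __init__(self, worlds):
        # worlds: dict w -> (l_w: set of relevant symbols, f_w: set of true atoms)
        self.worlds = worlds

    def sat_atom(self, w, pred, args):
        rel, facts = self.worlds[w]
        # Satisfaction clause 1: all symbols must be in l_w and the tuple in f_w(P).
        return pred in rel and set(args) <= rel and (pred, args) in facts

    def sat_box_const(self, w, a, alpha):
        # Definition 2.5, constant case: a must be relevant at w, and alpha must
        # hold at every world at which a is relevant.
        if a not in self.worlds[w][0]:
            return False
        return all(alpha(self, v) for v, (rel, _) in self.worlds.items() if a in rel)

    def true(self, alpha, symbols):
        # Truth: satisfied at every world whose l_w contains all the symbols,
        # provided at least one such world exists.
        ws = [w for w, (rel, _) in self.worlds.items() if symbols <= rel]
        return bool(ws) and all(alpha(self, w) for w in ws)

# 'watashi wa piza' in a restaurant context; the existential witness is the
# order o1, hardcoded here for simplicity.
ctx = Context({
    "order": ({"watashi", "piza", "o1"}, {("piza", ("o1",))}),
    "other": ({"inu"}, set()),
})
wa_piza = lambda c, w: c.sat_box_const(w, "watashi",
                                       lambda c2, v: c2.sat_atom(v, "piza", ("o1",)))
print(ctx.true(wa_piza, {"watashi", "piza"}))  # True: o1 is a pizza in every watashi-topic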
Definition 2.6
• 'A wa F' will translate as ∃u □_α φ, where α is the appropriate formalisation of the clause A and φ is an appropriate formalisation of the clause F (where the free variable u is used, where necessary, as a marker for the subject).
• 'A ga F' will translate as ∃u □_φ (u = a), where φ is an appropriate formalisation of the clause F and a is the appropriate formalisation of the clause A.
We now try to formalise some aspects of the context. The intuition is that, in conversation, some thoughts (topics) should not be broken down any further; these characterise the context. For example, an 'ordering food' context is characterised by the waiter being unable to break down any further a thought involving a customer and his order, nor should he think of the customer ordering something different (once the order has been placed).

Definition 2.7 Let T be a set of formulas characterising the context, and let w be a world. We call w singular, and the singular topic of T, when:
(a) if a predicate symbol P is in l_w, and Pa1 ... an (or ¬Pa1 ... an) is satisfied in w, then Pa1 ... an (or ¬Pa1 ... an) is satisfied in every topic in which a1 ... an are relevant;
(b) every constant of l_w satisfies a member of T.

A special case is if T is simply {x = a}; in this case we speak of the singular concept of a, and call the appropriate world the singular topic of a. The philosophical intuition behind these conditions is that a 'singular' idea is so fundamental (or basic) to the conversation that the information within it is unassailable.
Note that if w is singular then, since the symbols of a formula α need not all occur in l_w, it does not follow that if α is (or is not) satisfied in the singular topic w, then α is (or is not) satisfied in every topic in which a is relevant.
Sentence                     Informal formalisation
watashi wa piza              ∃u □_watashi [piza(u)]
watashi ga piza              ∃u □_piza [u = watashi]
watashi ni wa ie ga aru      ∃u □_watashi [□_aru(x) [ie(u)]]

In the table we used `aru(x)' and not `∃y(y = x)', for we are treating existence as a predicate (unlike relevance, which provides a better interpretation of the existential quantifier here). The entire formalised form of the clause `ie ga aru' occurs as a subscript.
Our theory provides a suggestion as to why we must say `ni wa' rather than `ni ga'. We can say that `ni' here forces `watashi' not to be singular; instead we must consider `watashi' plus some extra constants, and then `ni' forces further predication to be on only these extra constants (e.g. that one of them is a house). Thus we can read `watashi ni wa ie ga aru' as: in every watashi-topic there exists a house, where the `ni' forces the house and the `watashi' to be distinct. Now, since we are dealing with whether `watashi' is singular or not, we must use `wa'.
2.4 Semantic results
We will now look at the predictions this theory makes about the truth conditions of some sentences.
• If ∃u □_watashi [piza(u)] is satisfied at w then there is a u which satisfies, in every world where `watashi' is relevant, `piza(u)'.
• If we assume that `watashi' represents a singular concept then we obtain the result that `watashi' is a pizza. However suppose `watashi' is not singular, say if we are in a restaurant and the context demands that every person be considered with respect to what he ordered. The claim is that the notion of singular topics formalises the effect context can have on such sentences.
• It follows that there is an object u in w which satisfies `piza(u)' in every world where `watashi' is relevant. In particular it satisfies it at w.
• Note that since `watashi' is not singular it need not be `watashi' that satisfies `piza(u)' in every world where `watashi' is relevant. In the case of `watashi' not being singular, `watashi wa piza' means something like 'I have something to do with a pizza'. We can say (but we need not) that the idiom demands that {watashi, b} be singular, where b is something distinct from (and maybe even owned by) `watashi'.
• `watashi ga piza' is formalised as ∃u □_piza [u = watashi].
• If ∃u □_piza [u = watashi] is satisfied at w then there is a u which satisfies, in every world where ∃x piza(x) is satisfied, `u = watashi'.
• Assume that `piza' is singular. So, by the definition of singular topic, `watashi' satisfies `piza(x)'. Also, it follows that `watashi' satisfies `piza(x)' in every world where `watashi' is relevant (from Def 2.7).
• Thus we obtain the peculiar meaning that I am identical with a pizza. If `piza' is not singular then the result is much the same as for `watashi wa piza'.
• Where neither `watashi' nor `piza' is singular there is little difference, given by this semantics, between wa and ga. We would characterise the context of ordering in a restaurant by taking {piza(x), person(x)} as singular, and maybe also {x = watashi, x = food} (here, 'food' is a constant, but we can do it differently), depending on what the context is precisely.
• We analyse the contextual effect of a question like 'what do you want?' as forcing the person (the 'you') to be no longer singular (as he cannot now be considered aside from his order). This means that `watashi wa piza' is a better answer, for `piza' may be singular, thus making `watashi ga piza' peculiar. Alternatively, when such a question is not directed at any particular person, we analyse its contextual effect as forcing `piza' to be no longer singular (rather than each person). Here too `watashi wa piza' is a better answer, for `watashi' may be singular, thus making `watashi ga piza' peculiar. The uniqueness (or complete-list) sense of `ga' may be seen from the following example.
To guarantee that `kore wa shiroi' (this is white) predicates `shiroi(x)' of `kore', `kore' must be singular. But then if someone says `are ga shiroi' (that is white), from our analysis of `ga' this forces `are' to be in all `shiroi(x)' topics, notably in the singular topic of `kore' (thus forcing the identity `kore = are'). Thus 'x ga A' will contradict any 'y wa A' where y is singular and y ≠ x is already established. So if what is in question is not what has been done by members of a group as a whole, but what has been done separately by each member, then the analysis of this is to make each member represent a singular concept (topic). But in this case `Sachi ga nihon ni kaerimashita' has a uniqueness (complete-list) sense, which is captured as above by the theory. `Sachi wa nihon ni kaerimashita' does not have this reading so easily, and neither does our theory so easily provide it.
• `watashi ni wa ie ga aru' is formalised as ∃u □_watashi [□_aru [ie(u)]]. Suppose it is true; then it is satisfied at a world w where all the appropriate predicate and constant symbols are relevant.
• Then there is a u which satisfies, in every world where `watashi' is relevant, □_aru [ie(u)]. A world w' satisfies ∃u □_aru [ie(u)] when there is a u such that ie(u) holds in every world where ∃x aru(x) is satisfied; or, when something is a house in every world where something satisfies aru(x). Further, ∃x aru(x) will be satisfied at w' if ∃u □_aru [ie(u)] is satisfied (from Def 2.5).
• The verb 'to be' is one of the most basic verbs there is and is taken here to be singular in most conversation (unless, perhaps, the conversation is about fictional and non-fictional houses, in which case `aru' should not be singular). We can consider objects existing apart from other predicates they might have. So ∃u □_watashi [□_aru [ie(u)]] is satisfied at w when, in every world where `watashi' is relevant, there is a u which is a house in every world that satisfies ∃x aru(x). In particular there will be a house in every world where `watashi' is relevant. For the formula ∃u □_aru [ie(u)] is satisfied at every `watashi'-relevant world, but for that formula to be satisfied ∃x aru(x) must be satisfied (from Def 2.5) and thus ∃x ie(x) must also be satisfied. In other words, there must be a house at every world in which `watashi' is relevant.
• Note however that in this case, if `watashi' is singular then `watashi ni wa ie ga aru' implies that `watashi' is a house. But this is a bonus: the fact that we must say `watashi ni wa ...' rather than `watashi wa ...' suggests that the `ni' is demanding that `watashi' is not singular and that we must consider `watashi' in relation to something, say, owned by `watashi'. This provides an explanation of why `watashi wa ie ga aru' is unacceptable.
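For reference, the three formalisations used above may be restated schematically. This is a reconstruction assuming the topic-indexed operator of Defs 2.5-2.7, where □_T φ reads 'φ holds at every world where topic T is relevant':

```latex
% Schematic restatement (reconstruction, not the original typesetting):
% \Box_T \varphi abbreviates ``\varphi holds at every world where topic T
% is relevant'' (Defs 2.5--2.7).
\begin{align*}
  \text{watashi wa piza}         &\;\leadsto\; \exists u\, \Box_{\text{watashi}}\, [\,\text{piza}(u)\,] \\
  \text{watashi ga piza}         &\;\leadsto\; \exists u\, \Box_{\text{piza}}\, [\, u = \text{watashi}\,] \\
  \text{watashi ni wa ie ga aru} &\;\leadsto\; \exists u\, \Box_{\text{watashi}}\, \bigl[\, \Box_{\text{aru}}\, [\,\text{ie}(u)\,] \,\bigr]
\end{align*}
```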
3 Comparison and negation

1. Two forms of comparison have been noted with wa. The form which appears to be brought on by the meaning of the predicate:

Yujisan wa tatte iru keredomo suwatte nai

and another which appears to be brought on by the meaning of wa:

Kore wa shiroi keredomo are wa kuroi

Cases of the first type have been accounted for above: two predicates which are incompatible or unexpected to hold of the same object may not be allowed into the singular topic of a particular individual. Thus `suwaru(x)' and `tatsu(x)' may not both be allowed into Yujisan's singular topic. The point being that Yujisan's singular topic is only important if we use wa; this explains why such a comparative sentence uses wa.
As to a case of the second type, it seems that this is brought on by something in the meaning of wa that requires a comparative reading. Notice that `kore wa shiroi' formalises as ∃u □_kore [shiroi(u)], which means that there must be something that is white in every topic that is 'this' (kore). We have shown above what conditions there must be to force the predication of `shiroi(x)' on `kore'. However there is nothing to stop the predication of `shiroi(x)' on anything else that appears in a kore-topic. The logic deliberately leaves it open that any other object in a topic in which `kore' is relevant gets predicated as being white. Now, therein lies the comparison. The sentence ∃u □_kore [shiroi(u)] leaves it ambiguous as to what exactly satisfies shiroi(u). So, in order to remove the ambiguity, we must add another clause that rules out the valuation of shiroi(u) for any other object that is relevant in a kore-topic. Thus we feel the need for a `keredomo are wa shiroi ja nai' or `keredomo are wa kuroi' (if `are' is relevant).
With ga, as seen above, a phenomenon similar to the comparison of the first type gives us the uniqueness that is implied by ga. Further, notice that a formalisation of `maiku ga riko' is ∃u □_riko [u = maiku], but there can be only one `maiku', so there is no ambiguity in its satisfaction. Thus, in general, we do not so easily find a comparative reading of sentences that use ga.
2. With the exception of sentences implying non-existence, like `kami ga nai', it seems that, in general, sentences using ga do not take the negative. A sentence like

Sono Eikokujin wa wakaranakatta (That Englishman didn't understand)

is better than

Sono Eikokujin ga wakaranakatta.
The first means simply that the Englishman does not understand, but the second seems usable only if a number of people are known to understand except one and we wish to know who (it seems that unless at least one person other than the Englishman does understand, the ga-form is inappropriate). Even in that scenario the sentence is strange. To see why this is, consider the formalisation of the two. The first formalises to ∃u □_Eikokujin [¬wakatta(u)], and this then operates as per normal.
However the negation of the second sentence would be ∃u □_wakatta [¬(u = Eikokujin)]. It is the subject that ga emphasises, and so it is the subject that is negated. The logic can be said to demand this if we stipulate that this particular type of negation negates within the scope of the modal operator (not external to the modal operator). For ∃u □_wakatta [¬(u = Eikokujin)] to hold there must be an object that is distinct from the Englishman in every topic where someone has understood. This is a strange and specialised meaning that can hardly ever be intended, which is why `ga' is less often used with the negative.
Note that negation can be treated in two ways under this system. Some negations may be external to the operator, so we may have ¬∃u □_iru [u = Eikokujin], which states simply that the Englishman does not exist. Maybe we can be no more than descriptive here and stipulate that `iru' takes the external form with ga, whereas a verb like `wakaru' takes the internal form.
Thanks
Thanks to those whose names were and were not remembered (correctly or incorrectly):
Climate and Weather Impact Timing of Emergence of Bats
Interest in forecasting impacts of climate change has heightened attention in recent decades to how animals respond to variation in climate and weather patterns. One difficulty in determining animal response to climate variation is the lack of long-term datasets that record animal behaviors over decadal scales. We used radar observations from the national NEXRAD network of Doppler weather radars to measure how group behavior in a colonially-roosting bat species responded to annual variation in climate and daily variation in weather over the past 11 years. Brazilian free-tailed bats (Tadarida brasiliensis) form dense aggregations in cave roosts in Texas. These bats emerge from caves daily to forage at high altitudes, which makes them detectable with Doppler weather radars. Timing of emergence in bats is often viewed as an adaptive trade-off between emerging early, risking predation or increased competition, and emerging late, which restricts foraging opportunities. We used timing of emergence from five maternity colonies of Brazilian free-tailed bats in south-central Texas during the peak lactation period (15 June–15 July) to determine whether emergence behavior was associated with summer drought conditions and daily temperatures. Bats emerged significantly earlier during years with extreme drought conditions than during moist years. Bats emerged later on days with high surface temperatures in both dry and moist years, but there was no relationship between surface temperatures and timing of emergence in summers with normal moisture levels. We conclude that emergence behavior is a flexible animal response to climate and weather conditions and may be a useful indicator for monitoring animal response to long-term shifts in climate.
Introduction
Changes in climate can affect animal and plant populations in numerous ways [1]. Much recent attention has focused on how increased warming correlates to changes in phenology [2,3] and its potential for de-coupling resource-consumer interactions [4,5]. Seasonal changes in climate at local and regional scales can also have profound influences on demographic dynamics of populations for species with narrow thermodynamic tolerances or those existing at range edges [6,7]. Both climate and weather likely have direct and indirect effects on animal populations [8] and understanding how animals respond to shifts in climatic conditions is important for determining long-term impacts of global climate change on ecosystems.
One limitation to understanding how climate affects animal behavior is the lack of long-term datasets that adequately measure behavioral response at the time scales necessary to detect responses to shifts in climate. Use of remote sensing data to measure changes in primary productivity (e.g. the Normalized Difference Vegetation Index) provides a means to assess changes in vegetation communities [9]. These data sources typically lack information on vertebrate or other consumer response, limiting the ability to retrospectively analyze responses of animals to variation in climate.
The data archive maintained by the National Climatic Data Center (NCDC) of national networked weather radars (collectively known as NEXRAD) contains signals of animals aloft in the aerosphere going back to the early 1990s [10]. The NEXRAD radar archive is arguably one of the largest treasure troves of biological information that is relatively untapped by ecologists [11]. For populations that have large aggregations associated with known point localities on the ground (e.g. cave-roosting bats and colonially-roosting birds), the radar archive contains information on changes in daily behavioral patterns, such as when animals take flight, that can readily be used to assess changes in phenology [12] or response to daily or seasonal climate conditions.
Timing of emergence to forage by bats is an adaptive behavior that has important fitness consequences in terms of trade-offs between increased risk of predation or competition with diurnal aerial insectivores and forfeiting foraging opportunities during peak prey availability [13,14,15]. Bats that leave a roost early face greater risk of predation, but increase foraging time during crepuscular periods, when aerial insect availability may be high [14,16]. Several studies have demonstrated that foraging habits and reproductive condition of bats influences onset of emergence in ways that support this hypothesis [13,14,17,18]. In particular, lactating females are the most energetically stressed and therefore should emerge earlier if energetic demands outweigh costs of increased risk of predation. This pattern has been demonstrated in several species [14,17]. The hypothesis that increased physiological stress results in earlier emergence times leads to predictions about how climate variation may influence emergence behavior of bats. Specifically, if climate or weather conditions cause physiological stress then bats may emerge earlier during periods associated with environmental stress, such as drought.
Here, we test whether emergence behavior of Brazilian free-tailed bats (Tadarida brasiliensis) during the maternity season is associated with variation in summer drought conditions over the past 11 years. Drought causes physiological stress for many bat species, particularly in summer months when bats are reproductively active [6,19]. Drought is associated with lower prey availability [20] and water balance stress in bats [19]. We predicted that Brazilian free-tailed bats would emerge to forage earlier during droughts if physiological stress from extrinsic climatic conditions has a strong influence on emergence behavior. Timing of emergence may also vary with daily weather conditions, such as surface temperature. Nocturnal moth activity is generally positively correlated with temperature, such that hotter nights should correspond with higher prey availability [21]. We predicted that bats would emerge later on days with higher surface temperatures because foraging success should be higher with increased temperature if prey are more plentiful when it is warm and bats can emerge later in the evening and still meet energetic needs. The relationship between daily temperature and onset of emergence may depend on summer climatic condition, such that in drought conditions the influence of daily temperature may be different than in normal or unusually moist years. By analyzing variation in emergence behavior at a seasonal and daily scale, we aim to determine the flexibility of response to variation in weather and climate that leads to insights about how long-term climate shifts could impact animal populations.
Methods
To compare bat emergence behavior with daily and seasonal meteorological conditions, it was first necessary to establish a record of the time of emergence for a selection of bat colonies. Brazilian free-tailed bats disperse nightly in dense columns from cave and bridge roosts and forage at high altitudes (300-2500 m AGL) over large spatial extents that are regularly detected by the NEXRAD network of weather surveillance radars [22,23]. Although the NEXRAD network is designed to detect precipitation and weather events, these weather radars have the capacity to monitor and survey aerial animals, including birds, bats, and arthropods [10,11]. A long-running archive of NEXRAD data is available at NCDC (www.ncdc.noaa.gov), including all three conventional radar products: radar reflectivity factor (Z), radial velocity (v_r), and spectrum width (s_w). The measure of backscattered intensity, radar reflectivity factor (Z), can be directly related to the number of aerial organisms occupying the aerosphere [24], and therefore is the appropriate measure for identifying colony emergence.
We chose five maternity colonies of Brazilian free-tailed bats in south-central Texas, which are regularly detected by radar (Fig. 1). Because the altitude of the radar sampling volume increases with range from the radar, maternity colony sites were restricted to be within 110 km of a NEXRAD station to ensure adequate height coverage of emergences [25,26]. Bridge-dwelling colonies were not included to ensure consistency among samples and eliminate any influences introduced by anthropogenic roost structures [23].
NCDC stores NEXRAD radar products from individual radars in polar coordinates. To provide the best spatial coverage of the selected caves, we chose four of the surrounding NEXRAD installations (KSJT, KGRK, KEWX, and KDFX) for our analysis (Fig. 1). Using a radar-merging algorithm, we meshed radar reflectivity factor data from the four radars onto a common Cartesian grid [27,28,29,30]. From this three-dimensional grid of radar reflectivity factor values (Z), we projected the maximum value in height to the surface, a method known as radar compositing [31]. The result is a two-dimensional map of maximum reflectivity values in the vertical column, known as composite reflectivity (CREF). The spatial resolution of our final CREF values was 500 meters by 500 meters, and the temporal update time was five minutes. CREF data at coarser resolution (1 km × 1 km grid cells) covering the continental USA since 2008 at five-minute temporal resolution are available through the SOAR (Surveillance Of the Aerosphere using weather Radar, http://soar.ou.edu) web portal. Data generated for this analysis were processed by special request by the National Severe Storms Laboratory that hosts SOAR. We chose to focus on data from the period of June 15 through July 15, corresponding to the peak lactation period for Brazilian free-tailed bats [17], from 2001 to 2011 to acquire a sufficiently long time series for our purposes. For meteorological applications, values of radar reflectivity factor are typically reported in logarithmic units, dBZ. To relate the reflectivity factor to bioscatter in the aerosphere, we converted to linear units of Z [24].
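To make the compositing step concrete, the following minimal sketch converts dBZ to linear Z and projects the column maximum to the surface. It assumes a merged three-dimensional reflectivity grid is already in hand; variable and function names are illustrative, not taken from the SOAR pipeline:

```python
import numpy as np

def composite_reflectivity(dbz_volume):
    """Project a 3-D reflectivity volume to a 2-D composite (CREF).

    dbz_volume: merged radar reflectivity factor in logarithmic units (dBZ),
    shape (n_heights, n_y, n_x) on a common Cartesian grid.
    Returns the maximum linear-unit Z in each vertical column.
    """
    z_linear = 10.0 ** (dbz_volume / 10.0)  # dBZ -> linear Z
    return z_linear.max(axis=0)             # column maximum ("compositing")
```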
To determine emergence time for each colony on each day, we defined a 40 by 40 pixel (20 km by 20 km) spatial domain centered on each of the five cave locations (Fig. 2a). A broad spatial domain surrounding each cave was required because variability in flight direction during emergence sometimes results in bats literally flying "under the radar", causing horizontal displacement of where bats rise to detectable altitudes. The domain size was chosen, after visual inspection of radar imagery, to be large enough to allow for spatial variability in the location where emergence was detected, while remaining small enough to avoid contamination from other nearby bat colonies. Each of the five cave domains consists of 1600 pixels (40 × 40). Our analyses are based on linear values of radar reflectivity factor for each pixel. At each time step, we summed the 1600 Z values to obtain a single measure of the total biological density in the aerosphere over each of the five caves. By repeating this process at each five-minute time step, we obtained a time series of the index of airborne biological density over each cave (Fig. 2b). We define emergence time at each cave as the maximum increase in the index of total airborne biological density (dZ/dt) over the cave domain in the ten hours surrounding sunset. Biologically this should correspond to the time of peak emergence, when the greatest exodus occurs. If a cave produced multiple emergences, then we defined emergence time as the maximum increase of the first emergence. We visually inspected radar images to ensure these maxima were indeed associated with emergences of bats as opposed to weather, clutter, or other signals. Nights in which emergences were obscured by weather were excluded from the analysis. We converted time of emergence to offsets in minutes from local sunset to normalize times across caves and dates.
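The emergence-time definition above amounts to locating the largest five-minute increase of the summed domain reflectivity near sunset. A minimal sketch, with hypothetical variable names and without the multiple-emergence and visual-screening steps described in the text:

```python
import numpy as np

def emergence_offset(cref_series, times, sunset, half_window_h=5.0):
    """Emergence time = largest step increase in total domain Z near sunset.

    cref_series: (n_times, 40, 40) linear-Z composite over the cave domain
    times: np.datetime64 array aligned with cref_series (5-min cadence)
    sunset: np.datetime64 of local sunset
    Returns the offset of peak emergence from sunset, in minutes.
    """
    total_z = cref_series.reshape(len(times), -1).sum(axis=1)  # bioscatter index
    dz = np.diff(total_z)                                      # dZ/dt per 5-min step
    hours = (times[1:] - sunset) / np.timedelta64(1, "h")
    idx = np.flatnonzero(np.abs(hours) <= half_window_h)       # 10 h window
    peak = idx[np.argmax(dz[idx])]
    return float((times[1:][peak] - sunset) / np.timedelta64(1, "m"))
```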
We observed emergence of Brazilian free-tailed bats from Frio Cave on 10 nights from 22 June-1 July in 2011 to confirm radar observations. Visual estimates of timing of emergence were similar to those derived from radar. Radar reflectivity factor values can be assumed to derive primarily from Brazilian free-tailed bats given that other bat species that may use these caves occur in much lower densities, fly at lower altitudes, and do not emerge in dense columns.
Seasonal Climate and Emergence Behavior
To test our hypothesis that timing of emergence depends on summer climatic conditions, we averaged daily emergence time offsets across the 30-day study period for each site and averaged across sites to get a regional average of emergence time in each study year (2001-2011) (Table 1). We represented summer climatic conditions using the Palmer Drought Severity Index (PDSI) to measure combined effects of precipitation and temperature. PDSI is a measure of long-term drought, and weekly reports are weighted by conditions in preceding weeks [32]. PDSI values range from −4.5 (extreme drought) to +4.5 (extreme moist). We averaged weekly indices of PDSI reported in climate divisions (Texas divisions 6, 7, 9) for sites during the 30-day study period from 2001-2011 (available online from NOAA's drought monitoring program, http://www.cpc.ncep.noaa.gov/products/monitoring_and_data/drought.shtml) (Table 2). We averaged across divisions for sites bordering multiple divisions. We used least-squares linear regression with the regional average of timing of emergence offset as the response variable and regional average of PDSI as the explanatory variable to determine the relationship between summer climatic condition and emergence behavior of bats. No evidence of temporal autocorrelation among years was evident based on visual inspection of residuals using the autocorrelation function (acf) in Program R [33].

Daily Temperature and Emergence Behavior

We compared five a priori linear regression models using generalized least squares to determine how daily weather conditions influenced timing of emergence given yearly drought conditions. For this analysis, we calculated daily averages of emergence time offsets by averaging values for each of the five maternity colonies on each of the 30 days in 2001-2011 for the response variable. The five a priori models included a null model (emergence time constant), a main effects model with daily surface temperature as a predictor, a main effects model with a categorical variable of years classified as dry (PDSI score < −1), normal (PDSI score = −1 to 1), or wet (PDSI score > 1), and parallel and varying slopes models with daily temperature as a continuous predictor and type of year (dry, normal, wet) as categorical predictor (Table 3). Visual inspection of residuals using the auto-correlation function (acf) in Program R [33] suggested significant temporal autocorrelation in residuals of models with standard correlation structure. Following suggestions by Zuur et al. [34], we used the varying slopes model and fit five models with increasing complexity of correlation structure, including no auto-correlation structure, compound symmetry auto-correlation structure, auto-regression of order 1 structure, and two forms of moving average autocorrelation structure. Results from AIC model comparisons demonstrated strong support for a much better fit with the model of auto-regression of order 1 structure (99% AIC weights). Therefore, we present results comparing our five a priori biological models described above using an auto-regressive model of order 1 as the alternative correlation structure to account for temporal autocorrelation [34]. We tested for effects of moonlight by comparing emergence timing on nights with full and new moon and found no evidence to support a lunar effect (t = 0.06, df = 19.13, p = 0.95).
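The five candidate models with an AR(1) residual structure can be approximated in Python with statsmodels' GLSAR, an iterated feasible-GLS analogue of the generalized least squares fit used here (the authors worked in Program R). The data frame below is synthetic and stands in for the real daily series; column names are hypothetical:

```python
import numpy as np
import pandas as pd
from statsmodels.regression.linear_model import GLSAR

rng = np.random.default_rng(0)
df = pd.DataFrame({                       # placeholder for the real daily data
    "offset": rng.normal(0.0, 20.0, 90),  # mean emergence offset (min)
    "temp": rng.normal(33.0, 3.0, 90),    # daily surface temperature (deg C)
    "year_type": np.repeat(["dry", "normal", "wet"], 30),  # from PDSI score
})

candidates = {
    "null":            "offset ~ 1",
    "temperature":     "offset ~ temp",
    "year_type":       "offset ~ year_type",
    "parallel_slopes": "offset ~ temp + year_type",
    "varying_slopes":  "offset ~ temp * year_type",
}
# rho=1: AR(1) error structure, re-estimated by iterative feasible GLS
fits = {name: GLSAR.from_formula(f, data=df, rho=1).iterative_fit(maxiter=10)
        for name, f in candidates.items()}
best = min(fits, key=lambda name: fits[name].aic)  # AIC model comparison
print(best, round(fits[best].aic, 1))
```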
Results

Seasonal Climate and Emergence Behavior
Brazilian free-tailed bats emerged to forage significantly earlier in the evening during drought events than in years with normal to unusually moist conditions (p < 0.01) (Fig. 3). The estimated slope coefficient equaled 16.38 minutes (95% CL: 8.0, 24.7), indicating that bats emerged roughly 16 minutes later in the evening with each unit increase (i.e. increasing moistness) in PDSI. During extreme drought events (PDSI = −4.5) bats emerged as early as 88 minutes before sunset (95% CL: −129, −47), whereas in unusually wet years (PDSI = 2.65), bats emerged as late as 30 minutes after sunset (95% CL: −11, 71).
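As a quick arithmetic check (not part of the original analysis), the reported slope and the two endpoint predictions are mutually consistent under a straight-line fit:

```python
slope = 16.38                       # min later per unit increase in PDSI (reported)
intercept = -88.0 - slope * (-4.5)  # anchor the line at PDSI = -4.5 -> -88 min
print(round(intercept, 1))          # ~ -14.3 min predicted at PDSI = 0
print(round(intercept + slope * 2.65, 1))  # ~ 29.1 min at PDSI = 2.65 (~ +30 reported)
```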
Daily Temperature and Emergence Behavior
The varying slopes model after accounting for significant temporal auto-correlation in model residuals was the best fit according to AIC (AIC weight = 0.98) and indicates that timing of emergence was significantly different in dry, normal, and wet summers and that the relationship between daily timing of emergence and temperature depends on summer climate type (Fig. 4; Table 3). The estimate of correlation of residuals separated by one day was 0.86 (95% CL: 0.79, 0.90). The relationship between onset of emergence and daily surface temperature was steepest during dry years, when bats emerged 9 minutes (95% CL: 5, 13) later for every 1 °C increase in daily surface temperature (Fig. 4). The relationship between onset of emergence and daily surface temperature was similar in wet years, when bats emerged 7 minutes (95% CL: 3, 10) later for every 1 °C increase in daily surface temperature (Fig. 4). There was no significant relationship between onset of emergence and daily temperature during years of normal summer climate conditions (slope coefficient = 3; 95% CL: −1, 7; p = 0.17) (Fig. 4).
Discussion
Our results demonstrate a strong association between climatic conditions and emergence behavior in Brazilian free-tailed bats. Bats emerged earlier in years that were characterized by severe drought conditions and later in years with moist conditions (Fig. 3). This pattern matches our predictions and supports the hypothesis that timing of emergence in bats is an adaptive tradeoff between meeting foraging needs and decreasing risks of predation and competition [13]. Drought conditions are associated with lower insect availability [20] and have been linked to lower reproductive success [6] and lower annual survival [37] in some bat species. Our results suggest that bat colonies respond to variation in extrinsic conditions that affect physiological stress by emerging to forage earlier, sometimes well before sunset.
Daily weather also influenced timing of emergence, such that bats emerged later on hotter days in both dry and moist years (Fig. 4). Foraging success may be highest on hot days because of the underlying relationship between nocturnal insect activity and temperature [21]. This relationship was consistent in both drought and moist years, suggesting that bats responded similarly in both types of climatic extremes. Surprisingly, there was no relationship between daily temperature and onset of emergence in years with normal moisture levels (Fig. 4). In general, there was much more variance in timing of emergence during normal years. The results of our analysis of daily patterns of emergence correspond well with the results of our analysis of summer climatic conditions and emphasize the importance of the role of both longer-term seasonal climate and short-term daily weather on animal behavior. Phenotypic plasticity in response to climate can be an adaptive response that mediates impacts of changing climate on wild populations [38]. Emerging earlier is likely a signal of stressful conditions for bats, which has the potential to reduce individual fitness. Alternatively, early emergence could be a compensatory behavior such that bats respond to poor conditions by increasing foraging times without suffering loss of fitness. Understanding how variation in time of emergence relates to individual survival and reproduction or population declines is necessary to predict how climatic conditions will influence bat populations over the long term. Reproductive success and survival have been shown to vary with climatic conditions in other bat species [6,37], but how plasticity in emergence behavior affects fitness and ultimately population growth is unknown.
One way to determine whether emergence behavior in response to climate conditions results in changes in population growth would be to estimate population sizes of bat colonies over the same time frame in order to test whether years following severe drought were associated with significant population declines (i.e. N_t+1 < N_t). We are currently working on estimating aerial densities of bats directly from radar products [24]. The strength of the radar signal is related to the density of animals in the radar sampling volume and can be used to estimate animal densities, given certain assumptions [24]. Using radar reflectivity to estimate population sizes at these colonies will allow us to test how phenotypic response to climate influences population dynamics and will provide a useful means for long-term monitoring of bat population trends.
Past studies have investigated the functional significance of timing of emergence by assessing adaptive trade-offs, comparing foraging habits, and determining differences in age and reproductive conditions [13,14]. Because we estimated timing of emergence from radar signals, our measure of onset of emergence is not directly comparable to other reported measures, such as visual assessment of first appearance or median emergence time [13,39]. If anything, our measures of emergence timing may be biased late because there will always be a time lag between when bats leave the cave and when they are flying high enough to be detected by radar. Our radar-derived measure of maximum dZ/dt would be most similar to median emergence time, which is recognized as a better metric for measuring emergence behavior, than time of first appearance [39].
Brazilian free-tailed bats in our study emerged substantially earlier than reported emergence times of other bats. In a review comparing emergence times of bats, Jones and Rydell [13] provide timing of first appearance and median emergence for 66 species of bat from 11 families. In only four species did time of emergence occur before local sunset, and the earliest reported emergence was only 16 minutes before sunset [13]. Emergence was earliest in species like Brazilian free-tailed bats that have high flight speeds and depend on aerial insects [13]. Our results show that in moist years Brazilian free-tailed bats emerged 30 minutes after sunset and in dry years bats emerged as early as 1.5 hours before sunset (Fig. 3).
Our results were similar to emergence times reported for Brazilian free-tailed bats from Frio Cave in 1996 and 1997 [18] and from Davis, Frio and Ney caves in 2007 [17], supporting the efficacy of radar-based methods to measure emergence behavior in this species. Reichard et al. [17] reported emergence times for captured individuals in different reproductive classes during early summer. Median emergence time for lactating bats, which were the majority of captured bats (65%) in that season, was 47 minutes after sunset [17]. The average of median emergence times for Davis, Frio and Ney caves in our dataset in 2007, which likely roughly corresponds to the 'early summer' period used in the Reichard et al. [17] study, was quite similar at 41 minutes (means shown in Table 1). It may seem surprising that our data, which require that the emerging column of bats has gained sufficient altitude to be detected, report an earlier median time than data from individual bats captured at the cave entrance. The Reichard et al. [17] study had a low sample size of nights, as emergence was measured only twice monthly because of logistical challenges of being physically present and concerns about disturbing bats while capturing at entrances [17]. In contrast, we were able to measure emergence times for most days of the 30-day period of interest without any disturbance (Table 1). Our analysis of daily variation in emergence behavior shows that there is considerable daily variation in timing of emergence (Fig. 4), which could explain reported differences between the two studies due to sample sizes. Our results confirm suggestions by both Reichard et al. [17] and Lee and McCracken [18] that timing of emergence in Brazilian free-tailed bats is influenced by environmental cues, such as climate and weather conditions. Our study is the first to use a sufficiently long yearly time series to assess how annual variation in climate conditions influences emergence behavior in bats. Annual variation in emergence times demonstrates that plasticity in emergence behavior of bats is a response to environmental cues by which bats can alter foraging strategies to meet energy needs. Our data suggest that bats respond to both daily and seasonal conditions and that drought conditions are associated with the riskier behavior of emerging earlier. Emergence timing may be a useful long-term indicator of response to climate change by bats, particularly in arid environments. Future studies should aim to link the fitness consequences of emergence behavior responses to climate and weather patterns.

Table 3. Model selection results from 5 a priori models of how timing of emergence by Brazilian free-tailed bats (Tadarida brasiliensis) responds to variation in daily surface temperature in south-central Texas in different summer climate conditions.
We used remote-sensing technology and freely available climatic indices to associate animal behavior with annual variation in climate and daily weather conditions. Numerous studies have investigated timing of emergence in bats, as it is an easily measured behavioral signal [13]. However, without a remote-sensing capability to measure timing of emergence, assessing how daily weather or seasonal climatic conditions influence group behavior had not previously been attempted. By using the archived NEXRAD radar network to measure emergence timing, we were able to monitor animal behavior at a temporal and spatial scale concordant with determining how animal aggregations respond to annual and daily variation in weather conditions. In our analysis, we used 11 years of data because the entire 20-year NEXRAD archive has not yet been processed in a user-friendly format for biological research. Access to the entire NEXRAD archive in a mosaicked and composite format would facilitate future ecological research and support monitoring of animal response to weather and climate.
Risk factors and prevalence of enteroparasitic diseases in Shellfish Pickers from a lake area in the Northeast of Brazil
Introduction: Intestinal parasitoses are a public health problem worldwide. There are several risk factors and a high association with some specific labor activities. Objective: The present study assessed the risk factors and prevalence of enteroparasitic diseases in shellfish pickers from one district of Maceió, Alagoas state, Brazil. Methods: Cross-sectional study of 41 female shellfish pickers including parasitological tests of fecal samples and a questionnaire with objective and subjective questions. Sand samples from their working environment were also analyzed. Results: At least one species of parasite was found in 19.51% of the fecal samples. Pathogenic species of Giardia lamblia, Trichuris trichiura, Schistosoma mansoni, Ascaris lumbricoides, Enterobius vermicularis and the Ancylostomatidae family, and the non-pathogenic species Entamoeba coli were found. Polyparasitism was diagnosed in 37.5% of the positive samples. A total of 57.14% of sand samples contained hookworm larvae. Regarding risk factors, low educational level was statistically associated with the presence of parasites (p<0.05). Conclusion: Greater investment in basic education is needed to increase knowledge about preventive measures against parasitic diseases, together with the promotion of food-handling courses, in order to change existing inadequate habits in the community. Basic sanitation is also essential in preventing environmental contamination.
INTRODUCTION
Intestinal parasitoses are a public health problem worldwide and present high morbidity indices in developing countries, where population growth is not accompanied by better living conditions 1-4. These parasitoses can affect nutritional balance, interfering with nutrient absorption, inducing intestinal bleeding, reducing food ingestion and, in cases of parasite overpopulation, may lead to death 5,6. The warm environments of tropical countries, associated with malnutrition, lack of health care, poor sanitary conditions, and inadequate personal hygiene, housing and peridomestic environments, are associated with higher exposure of the population to infection 7,8. Brazil has 12% of the world's fresh water, including 8.2 billion cubic meters of water distributed among rivers, lakes, dams and reservoirs, in addition to environmental and climatic conditions favorable to making it one of the leading fishing producers in the world 9.
Shellfish picking is a manual fishing technique in Brazil, performed mainly by women called "marisqueiras", who harvest shrimp, "sururu", oyster, soft crab, and crab 10. Shellfish pickers work both for subsistence and commercial purposes, being responsible for their own equipment and all the production stages 11, from preparing the materials for shellfish harvesting to selling the final product. These stages are conducted at home and in peridomestic and extradomestic environments 12.
Sand collection and analysis
Seven sand samples were collected from two areas of shellfish harvesting and manipulation. The samples were collected 5 cm deep from the land surface and 1 m distant from each other.
The sand samples were placed in flasks with lids, without preservative, and evaluated on the same day. The methods adopted were those of Hoffman, Pons & Janer 15 and Baermann-Moraes 17. Two slides from each sample/technique were evaluated under bright-field microscopy (100× and 400× magnification).
Statistics
Risk factors were identified via interviews consisting of multiple-choice questions, considering the economic situation, education, eating and hygiene habits and basic sanitation. The chi-squared test (χ2) was used to determine the relation between the variables (risk factors) and the prevalence of parasitic infections in the study population. p<0.05 was considered significant.
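For illustration, the association test reduces to a chi-squared test on a contingency table of risk-factor category against parasitological result. The counts below are hypothetical, not the study data:

```python
from scipy.stats import chi2_contingency

# rows: education level (low / higher); columns: stool test (positive / negative)
table = [[6, 10],
         [2, 23]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p < 0.05 -> association
```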
All the shellfish pickers received their laboratory results. Positive cases were counseled on parasite prevention and referred to the appropriate health units for treatment.
DISCUSSION
Manual fishing, especially in urban areas, commonly suffers from environmental problems originating from irregular urban growth 18. In recent years, environmental problems, such as contaminated water, air, soil, and domestic and work environments, have had a significant impact on human health 19. Moreover, considering the three cities shored by the Mundaú Lake, the percentage of houses with basic sanitation is low. The biggest city shored by Mundaú Lake has only 19% of houses with basic sanitation and, consequently, waste can be disposed of in the Mundaú Lake and urban rivers 14. The present study stresses this problem, as 7.4% of the shellfish pickers dispose of waste directly in the Mundaú Lake and the remaining population interviewed has septic tanks, which means that there is no basic sanitation in the community. The E. vermicularis eggs were found without the parasite-specific Graham test, which might indicate that the real prevalence is higher than our present results show. Furthermore, it is notable that an adult was positive for this specific parasite, considering that children are more frequently affected. Previous parasitic studies with adult food handlers and fishermen also reported cases of E. vermicularis 26,27. Adequate hygienic sanitary habits are more important than basic sanitation to prevent the aforementioned parasite's transmission, considering that it is commonly diagnosed in people in close proximity and transmitted by direct contact or through contaminated objects.
Three of the four positive sand areas were adjacent and located in a region where shellfish are handled and sold, contributing to food contamination and affecting more individuals. The larvae found in the sand were from the Ancylostomatidae family, and eggs of this parasite were found in the parasitological examination of feces, which strengthens the hypothesis that the shellfish pickers may have become infected in their work environment 28.
The extreme precariousness of artisanal fishing increases the likelihood of fishers suffering accidents and contracting diseases due to the significant physical effort required, climatic variations and contact with pathological agents in an environment without enough basic sanitation 29-31. Thus, age is a restraining factor for shellfish pickers, reducing their working period, which could explain the negative results in all "marisqueiras" aged above 60 years.
Low education level, including illiteracy, was statistically significant as a risk factor among the shellfish pickers. Higher prevalence of intestinal parasites is found in lower socioeconomic classes with less education 32. Education level is an important factor in understanding diseases, their forms of transmission and prevention 33,34. Other studies also found a relationship between low education levels and transmission of schistosomiasis 35,36 and other enteroparasites 37.
Despite the importance of artisanal fishing communities in Brazilian fish production, these people are generally among the poorest groups in the population, and this may be due to their dependence on exploiting a limited natural resource and the inherent unpredictability of fishing 18,38. In our study, the monthly family income of 33 shellfish pickers (80.48%) was below one minimum wage. Similar percentages were found in a parasitological study in the suburbs of Manaus, where 90% of the people earned less than one minimum wage 32. Such a low income precludes investing in personal protective equipment (PPE), which is important to minimize risks and infection during work-related activities 39.
Considering that low education was the risk factor related to parasitic infection, greater government investment in adult education is necessary in order to improve the population's knowledge about preventive measures for parasitic diseases.
Some of the diagnosed parasitic infections could be controlled if the shellfish pickers had access to courses focused on personal hygiene, food handling and environmental contamination, with the main goal of changing established habits and further protecting the health of the shellfish pickers and their customers. Furthermore, actions related to better basic sanitation and adequate waste disposal are fundamental.
ACKNOWLEDGMENTS
We offer our deepest thanks to Vieira de Lima Fishing Colony that provided support for the development of this study.
Antiviral efficacy of nanomaterial-treated textiles in real-life like exposure conditions
Due to the growing interest towards reducing the number of potentially infectious agents on critical high-touch surfaces, the popularity of antimicrobially and antivirally active surfaces, including textiles, has increased. The goal of this study was to create antiviral textiles by spray-depositing three different nanomaterials, two types of CeO2 nanoparticles and quaternary ammonium surfactant CTAB loaded SiO2 nanocontainers, onto the surface of a knitted polyester textile and assess their antiviral activity against two coronaviruses, porcine transmissible gastroenteritis virus (TGEV) and severe acute respiratory syndrome virus (SARS CoV-2). Antiviral testing was carried out in small droplets in semi-dry conditions and in the presence of organic soiling, to mimic aerosol deposition of viruses onto the textiles. In such conditions, SARS CoV-2 stayed infectious for at least 24 h and TGEV infected cells even after 72 h of semi-dry deposition, suggesting that textiles exhibiting sufficient antiviral activity before or at 24 h can be considered promising. The antiviral efficacy of nanomaterial-deposited textiles was compared with the activity of the same nanomaterials in colloidal form and with positive control textiles loaded with copper nitrate and CTAB. Our results indicated that after deposition onto the textile, CeO2 nanoparticles lost most of their antiviral activity, but the antiviral efficacy of CTAB-loaded SiO2 nanocontainers was retained after deposition. The copper nitrate deposited textile that was used as a positive control showed relatively high antiviral activity, as expected. However, as copper was effectively washed away from the textile within 1 h, the use of copper for creating antiviral textiles would be impractical. In summary, our results indicated that the antiviral activity of textiles cannot be predicted from the antiviral efficacy of the deposited compounds in colloidal form, and attention should be paid to the prolonged efficacy of antivirally coated textiles.
Introduction
With the current outbreaks of infectious diseases there has been a significant increase in awareness of the importance of good hygiene practices. Due to the importance of surface transmission in facilitating bacteria- or virus-related infectious diseases [1-3], there is a justified interest towards eliminating those microbes from high-touch surfaces, especially those present in common areas. In addition to disinfection, a feasible solution to decrease the presence of microbes on surfaces would be the use of antibacterial or antiviral coatings or finishes [4,5], either on solid surfaces or on textiles. Indeed, inadequately laundered or disinfected textiles have often been related to healthcare-associated infections [6]. Although to date no direct evidence exists that microbe-contaminated textiles have been the cause of large-scale infection outbreaks in hospital settings or in public spaces, adequate control measures should be in place to decrease the presence of potential microbial or viral pathogens on textiles, particularly those used for sensitive population groups [6,7].
Interestingly, despite a significant increase in publications on antiviral compounds during recent years, reports on antiviral surfaces are still relatively scarce. A total of 67 articles were retrieved with the keywords "antivir* coat*" and only 49 with the keywords "antivir* surface*" in the Clarivate WoS database by January 2023. Compared with antiviral surfaces, even less has been published on antiviral textiles. By January 2023, only 10 papers were present in the WoS database for "antivir* textile*" and 8 articles for "antivir* fabric*".
Various compounds and techniques have been used to equip textiles with antiviral properties. Copper and its compounds are probably the most frequently used textile finishes due to their high antiviral efficacy [8]. An antiviral effect was observed for cotton fabric onto which copper nanoparticles were deposited by magnetron sputtering [9], for cellulose and polyester textiles with surface-deposited gallium-copper particles [10], and for a CuI-based, Cu+-ion-releasing and reactive oxygen species (ROS) producing thin film created on a textile surface [11]. In addition to copper, other compounds have also been used to create antiviral textiles. ZnO films have been used to coat nanofibrous electrospun silk-polyethylene oxide material for antiviral purposes [12] and ZnO nanoparticles coupled with (3-glycidoxypropyl)trimethoxysilane have been used to create antiviral cotton fabric [13]. Selenium nanoparticles as part of an acrylate-based printing paste have been used to print antiviral polyester fabrics [14] and a nano-graphene oxide coating was used in two studies that aimed to prepare antiviral PET textile and linen [15,16]. Also organic antivirals, such as sodium pentaborate, triclosan and glucopon, as well as a liquid soap formulation, have been used to achieve antiviral covering [17,18].
One of the central issues in antiviral textile research is proof of efficacy, which in the case of commercialized products is strictly regulated. According to both European and US regulations, antiviral surfaces and textiles are required to exhibit at least a 3 log decrease in infectious activity within 1-2 h [19-21]. However, based on the current literature, a 3 log decrease in viral infectivity has been observed only in the case of one of the studied textiles, which was coated with sodium pentaborate pentahydrate and triclosan [17]. Interestingly, studies with copper-containing textiles have demonstrated only a 1-2 log decrease in viral infectivity. CuI nanoparticle treated textiles decreased the infectivity of SARS-CoV-2 only up to 2.5 logs during 24 h incubation [11], Cu-impregnated cotton decreased the infectious titer of influenza A virus by ≥ 2 log during 30 min, and another type of Cu-coated textile exhibited only a maximum 1-2 log decrease in infectivity of vaccinia virus, herpes simplex virus type 1 and influenza A virus H1N1 during 2 h [9]. Most other nanomaterial-based textiles, involving either ZnO, selenium or graphene oxide surface coatings, have shown antiviral activity between 1 and 2 log decrease of infectious units during 1-2 h [12-14,16]. In terms of the duration of exposure, the shortest contact time during which a significant (1.5 log) decrease of viral infectivity was observed was 1 min in the case of liquid soap treated face masks [18].
It is worth noting that most of the antiviral assays with textiles have been performed in conditions where the viral suspension stock fully wets the textile. Mostly the ISO 18184 protocol [22], which foresees wetting of a 1 g piece of textile with 0.2 mL of viral suspension, has been followed [9,10,13]. However, as argued in several opinion articles and critical reviews, testing of antimicrobial efficacy should be carried out in application-relevant settings [23,24], and exposure of virus particles onto a textile in a layer of liquid can rarely be evidenced in real life. Most viruses are usually deposited onto surfaces, including textiles, via touch transfer or inside small respiratory droplets [25], which suggests that dry or semi-dry testing would provide more valuable information in terms of application relevance. In some specific cases, where textiles are designed for filtering purposes, filtering of viral stock through the textile (e.g., in Ref. [16]) can be considered a good measure for antiviral activity in application-relevant scenarios. Apart from moisture during exposure, the exposure medium may also play a critical role in antiviral activity. Yet in most of the studies on antiviral textiles, the exposure medium has been relatively poorly documented. Only some studies define the test medium used, which has been either FBS [15], cell culture medium [16] or bacterial growth medium in the case of bacteriophages [18]. Indeed, the presence of organic soiling in antimicrobial testing has been considered a factor significantly affecting antimicrobial activity results [26]. Therefore, careful consideration of exposure media in antiviral efficacy assays is of utmost importance.
While the testing conditions can be followed from standard procedures, none of the standards limits the maximal amount of active agent on the surface of textiles. However, the use of textiles in practical applications may set limitations on the quantity of active agents, because textile properties, such as elasticity, durability, density, and in many cases even tactile sensation, should not be affected by the treatment. As this aspect is not always considered, it is possible that the practical applicability of some previously suggested antiviral compounds is debatable due to their too high loading on textiles.
In this study, we aimed to develop a nanomaterial-based antiviral treatment for textiles and to test such textiles in application-relevant conditions against two coronaviruses, porcine transmissible gastroenteritis virus (TGEV) and severe acute respiratory syndrome virus (SARS CoV-2). CeO2 NPs and mesoporous SiO2 nanocontainers loaded with the quaternary ammonium surfactant hexadecyltrimethylammonium bromide (CTAB) were selected as potentially antiviral nanomaterials due to their antiviral properties determined in earlier studies [27,28]. Nanomaterials were sprayed onto polyester textile along with copper nitrate and CTAB, which were used as a positive antivirally active control and as a control for CTAB-loaded SiO2 nanocontainers, respectively. Antiviral efficacy of the textiles was tested in semi-dry conditions in small droplets and in the presence of organic soiling. In order to assess the potential loss of antiviral activity of nanomaterials and compounds after loading onto the textile surface, the efficacy of the textiles was compared with the efficacy of the same nanomaterials and compounds in their colloidal form. To assess the real-life usability of the textiles, their stability was estimated by analyzing leaching of the antiviral components.
Synthesis of nanoparticles
NPs of CeO2 were synthesized as described in our previous article [27]. By using two different synthetic techniques, NPs with positive and negative surface charge were obtained, henceforth referred to as CeO2(+) and CeO2(−).
CeO2(+) NPs were obtained by hydrolyzing diammonium cerium (IV) nitrate at high temperature in the presence of HMTA. For that, 0.189 g of (NH4)2Ce(NO3)6 and 0.053 g of HMTA were dissolved in 50 mL of water and loaded into an autoclave vessel (100 mL). The vessel was sealed and heated to 180 °C for 30 min in a microwave-hydrothermal device (Berghof Speedwave 4, 2.45 GHz, 1000 W). After thermal treatment, the vessel was cooled down to room temperature in a water bath. The product was centrifuged for 10 min, and the sediment was washed with water and redispersed by ultrasonication. These steps were repeated at least 3 times and the final product was redispersed in 5 mL of water by ultrasonication until an opalescent pale-yellow colloid was obtained.
For the synthesis of CeO2(−) NPs, cerium (III) nitrate was hydrolyzed at room temperature in the presence of ammonia with simultaneous oxidation by oxygen from the air. For that, 0.045 g of citric acid was dissolved in 25 mL of a 0.05 M aqueous solution of cerium (III) nitrate prepared in advance. The solution was rapidly added into 100 mL of a 3 M solution of ammonia and left to stir vigorously for 2 h, during which the color of the solution changed from colorless to yellow-orange. After that the colloid solution was centrifuged, washed and redispersed by ultrasonication. These steps were repeated at least 3 times and the final product was redispersed in 15 mL of water by ultrasonication until a transparent dark yellow colloid was obtained.
Mesoporous SiO2 nanocontainers loaded with dissolved CTAB, further referred to as CTAB@SiO2, were synthesized using a modified Stöber technique described in Ref. [29]. For that, 5.2 mL of TEOS, 0.028 g of CTAB, 150 μL of ammonia and 1 mL of water were mixed in 50 mL of ethanol and left at room temperature for 24 h under vigorous stirring. Afterwards, the thick white precipitate was separated by centrifugation, washed several times with water and ethanol and dried at 40 °C.
Characterization of nanomaterials
The mean size of CeO2 particles and CTAB@SiO2 nanocontainers was estimated from TEM images (JEOL JEM-2200FS with normal field emission gun, 200 kV acceleration voltage), for which the nanomaterials were dispersed in ethanol using ultrasonication and drop-cast onto a Lacey Carbon Copper TEM grid. Hydrodynamic mean size and ζ-potential were measured in 10 mg/L water colloids using dynamic light scattering (DLS; Malvern Zetasizer instrument). The concentration of component elements in NPs and nanocontainers was measured using elemental analysis by ICP-MS (Agilent 7700 ICP-MS) in the case of Ce, Cu and Br, and by ICP-OES (Agilent 5100 ICP-OES) in the case of Si. The Br concentration was used to calculate the concentration of CTAB. For ICP measurements, true and colloidal solutions were mixed with diluted HNO3 and directly loaded into the device.
Deposition of nanomaterials onto textile
The textile used was a knitted fabric for mattress covering, "Sareaux-C 24" (100% polyester, specific weight 240 g/m2), provided by TAD Logistics (Estonia). Antiviral compounds (at the concentrations indicated in Table 1) were deposited onto the textile by spraying with an air spray nozzle. 15 mL of concentrated solution of the compound or suspension of nanomaterials was sprayed onto 0.0225 m2 (15 × 15 cm) of a vertically placed piece of textile. After spraying, the textile pieces were dried at room temperature and the textiles were designated as CeO2(+) textile, CeO2(−) textile, CTAB@SiO2 textile, CTAB textile and Cu textile (Table 1). The dried textile was cut into 4 cm2 pieces (2 × 2 cm squares, mean weight of a piece 0.045 ± 0.008 g), which were further used for antiviral testing, as well as to determine the quantity of deposited material.
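The spray parameters fix the areal loading. A sketch of the back-calculation per 2 × 2 cm test piece follows; the concentration value is a placeholder, since the actual concentrations are those listed in Table 1:

```python
conc_mg_per_ml = 10.0               # hypothetical spray concentration (Table 1 applies)
sprayed_mg = conc_mg_per_ml * 15.0  # 15 mL sprayed in total
per_m2 = sprayed_mg / 0.0225        # over 0.0225 m^2 (15 x 15 cm) of textile
per_piece = per_m2 * 0.0004         # one 4 cm^2 (2 x 2 cm) piece
print(f"{per_m2:.0f} mg/m^2 -> {per_piece:.2f} mg per test piece")
```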
Characterization of antivirally treated textiles
The treated textiles were analyzed for their elemental composition, release of antiviral loading and physical appearance. The concentration of nanomaterials or compounds on textile samples and in their wash-offs (samples collected from antiviral assays; see below) was measured using ICP-MS (Agilent 7700 ICP-MS) in the case of Ce, Cu and Br, and ICP-OES (Agilent 5100 ICP-OES) for Si. The Br concentration was used to calculate the concentration of CTAB, as it was safe to assume the absence of any other sources of bromine in the samples. To extract the compounds and nanomaterials from textile samples, 0.1 g of textile was placed in a 3:1 mixture of HNO3 and HCl ("reversed aqua regia") and treated using a Berghof Speedwave Xpert device until full decomposition. In the case of wash-offs, the concentration of released nanoparticles and compounds was calculated back to the single 2 × 2 cm textile piece.
Imaging of textile surfaces was performed with SEM (Thermo Fisher Scientific Helios 5 UX device) after fixing the 2 × 2 cm pieces on a sample holder using conductive carbon tape.
Maintenance and preparation of viruses for antiviral testing
Antiviral assessment was carried out with two coronaviruses: transmissible gastroenteritis virus (TGEV, obtained from Prof. L. Enjuanes at the Department of Molecular and Cell Biology, National Center of Biotechnology, Madrid, Spain) and severe acute respiratory syndrome virus (SARS-CoV-2, a recombinant virus based on the Wuhan-Hu1, MT926410 sequence [30], with the S-protein containing amino acid mutations corresponding to the Delta strain).
SARS-CoV-2 was propagated in Vero E6 cells (virus growth medium (VGM): DMEM supplemented with 0.2% BSA, 100 U/mL penicillin, and 100 μg/mL streptomycin) for 4 days at 37 °C, 5% CO2. Cell supernatant was collected, clarified by centrifugation at 3000×g for 10 min at +4 °C, aliquoted, and stored at −80 °C. The virus titer was determined using an immuno-plaque assay as follows: confluent Vero E6 cells on 96-well plates were infected using 25 μL virus dilutions at 37 °C, 5% CO2, in a humidified atmosphere for 1 h. Then, ~150 μL of 1% CMC in VGM was added to the plates and incubated further for 48 h at 37 °C, 5% CO2. The CMC layer was then removed by pipetting, and the cells were fixed using ice-cold 80% acetone/PBS for 1 h at −20 °C. Acetone was removed and the plates were dried for at least 3 h. The plates were blocked with 50 μL per well of Pierce™ Clear Milk blocking buffer (Thermo Scientific) for 30 min at 37 °C and then stained with rabbit anti-SARS-CoV-2-nucleocapsid monoclonal antibody (82C3, ref. R1-179-100, Icosagen, Estonia), followed by staining with a secondary anti-rabbit IRDye800-conjugated (LI-COR) antibody. The plates were washed with PBS/0.05% Tween 20, dried, and scanned using a LI-COR Odyssey (LI-COR, USA) device for the 800 nm signal; plaques (minimum three plaques per well) were counted from the scanned images. At least three independent tests were performed to obtain the resulting titer of 5.73 × 10⁵ PFU (plaque forming units) per milliliter.

TGEV was propagated in ST cells (growth medium as above) by incubating the virus-infected cells for 4 days at 37 °C, 5% CO2. Cell supernatant was collected, clarified by centrifugation at 3000×g for 10 min at +4 °C, filtered through a 0.2 μm filter, aliquoted, and stored at −80 °C. The virus titer was determined using a plaque assay as follows: 100% confluent ST cells on 12-well plates were infected with 150 μL of virus stock dilutions for 1 h at 37 °C and 5% CO2 in a humidified atmosphere with gentle rocking every 15 min. The medium was then removed, and 1.5 mL of 1% CMC in VGM was added. Cells were grown for 96 h at 37 °C, 5% CO2 in a humidified atmosphere. Then, the CMC/VGM was removed, the plates were stained using crystal violet stain, and the plaques were counted visually.
Only wells with at least 3 plaques were considered. At least three independent tests were performed to obtain the resulting titer of 6.33 × 10⁷ PFU/mL.
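Both titers follow the standard plaque-count relation, titer = plaques / (inoculum volume × dilution factor); a minimal sketch with hypothetical counts:

```python
# Virus titer (PFU/mL) from plaque counts (hypothetical example values).
def titer_pfu_per_ml(plaques: int, inoculum_ml: float, dilution: float) -> float:
    """dilution is the dilution factor of the tested sample, e.g. 1e-4."""
    return plaques / (inoculum_ml * dilution)

# e.g. 15 plaques from 25 uL (0.025 mL) of a 1e-4 dilution:
print(f"{titer_pfu_per_ml(15, 0.025, 1e-4):.2e} PFU/mL")  # 6.00e+06 PFU/mL
```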
Antiviral activity assessment
Both textiles and colloidal or true solutions of the active agents were assessed for their antiviral activity. Antiviral assessment of textiles was carried out essentially following ISO standard 18184, using textile pieces of 2 × 2 cm (0.045 ± 0.008 g) that were sterilized by autoclaving. Virus stocks in VGM were mixed 10:1 with 10x soil load (1x soil load: 1 g/L BSA, 1 g/L yeast extract, 0.08 g/L porcine gastric mucin in PBS) [21]. A piece of textile was placed into a 50 mL screw cap tube, ten 2 μL drops of viral stock mixed with soil load were applied to the textile surface, and the tube was closed. At the specified timepoints, viruses were washed off from the textiles using 10 mL of SCDLP (30 g/L TSB, 1 g/L lecithin, 7 g/L Tween 80) by vortexing 5 times for 5 s. For the 0 h timepoint, the virus was washed off from the textile immediately. These wash-offs were used, either directly or after dilution in virus growth medium, to infect the cells for the immunoplaque assay or plaque assay as described above. Before counting the plaques, the plates were checked for any cytotoxicity or other interfering effects by visually inspecting the staining of the cell layer. Wells with visual signs of cytotoxicity were not counted. PFU per textile surface was calculated.
Wash-offs from textiles in SCDLP after 10 min UV treatment were also used for elemental analysis by ICP-MS (see section 3.2, Table 3).
In the case of the "no textile" experiment, a similar experimental set-up was used but no textile was added. Ten 2 μL drops of viral stock mixed with soil load were placed onto the surface of a 50 mL screw cap tube, the tube was closed, and the viruses were washed off using 10 mL of SCDLP at the specified timepoints. The resulting virus dilutions were then used to infect cells for the immunoplaque or plaque assay.
For antiviral analysis of solutions of nanomaterials or compounds, solutions of different concentrations in water were mixed with an equal amount of virus stock, followed by 1 h incubation at room temperature. The resulting mixtures were further diluted with VGM and used to infect the cells for the immunoplaque assay or plaque assay as described above. PFU per mL of compound or nanomaterial was calculated.
Each textile, chemical, or nanomaterial concentration was tested in at least three replicates, and at least three independent experiments were performed. A minimum of three plaques could be reliably counted per well, and this served as the limit of detection (LOD). Results are presented as log PFU/mL or as log PFU per piece of textile.
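Converting a wash-off plaque count to log PFU per textile piece, with sub-LOD counts truncated to the three-plaque limit, can be sketched as follows (the example numbers are hypothetical):

```python
import math

def log_pfu_per_piece(plaques: int, inoculum_ml: float, dilution: float,
                      washoff_ml: float = 10.0) -> float:
    """log10 PFU recovered from one textile piece; counts below 3 plaques
    are truncated to the limit of detection (LOD)."""
    counted = max(plaques, 3)  # 3 plaques/well = reliable counting limit
    pfu_per_ml = counted / (inoculum_ml * dilution)
    return math.log10(pfu_per_ml * washoff_ml)

# Hypothetical: 12 plaques from 25 uL of an undiluted wash-off
print(f"{log_pfu_per_piece(12, 0.025, 1.0):.2f} log PFU/piece")  # 3.68
```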
Statistical analysis
Statistically significant differences in the antiviral experiments were confirmed by one-way ANOVA for repeated measurements and a post-hoc Tukey's range test with compact letter display (CLD) output. P-values of less than 0.05 were considered statistically significant.
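A minimal sketch of this analysis in Python (scipy and statsmodels), using synthetic log-PFU values; the compact letter display grouping is read off the pairwise Tukey output:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic log PFU values for three textile treatments (three replicates each).
data = {"untreated": [5.7, 5.8, 5.6],
        "CTAB@SiO2": [2.4, 2.7, 2.5],
        "Cu":        [1.1, 0.9, 1.2]}

f_stat, p_val = f_oneway(*data.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_val:.2g}")

values = np.concatenate(list(data.values()))
groups = np.repeat(list(data.keys()), 3)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```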
Characterization of nanomaterials
Detailed characterization of the ultrasmall, 3.2–3.3 nm diameter CeO2 NPs was performed in our previous paper [27], and the most important parameters are summarized in Table 2. Fig. 1A and B show the morphology of the synthesized CeO2 NPs. The synthesized SiO2 nanocontainers are characterized by a mean size of ~60 nm, spherical shape, and a large number of distinctly visible mesopores with a size of around 2–3 nm (Fig. 1C). In CTAB@SiO2, these mesopores are filled with the reaction medium of the SiO2 particle synthesis, which was a water–alcohol solution of CTAB. It is also expected that, as a surfactant, CTAB molecules are efficiently adsorbed onto the outer surface of the SiO2 particles, as well as on the inner surface of the mesopores. In an aqueous environment, CTAB@SiO2 is expected to slowly release CTAB due to the concentration gradient.
Characterization of textiles
The general microstructure of the polyester textile subjected to deposition of nanomaterials is presented in Fig. 2. The textile is knitted of threads ~400–500 μm in diameter, which in turn consist of fibers ~10 μm in diameter. The fibers are densely packed and stranded into threads, with large free spaces between the threads.
The two principally different types of nanomaterials loaded onto the polyester textile by spray coating were CeO2 NPs, which according to our previous studies demonstrated an antiviral effect in suspension [27], and the quaternary ammonium surfactant CTAB loaded into mesoporous SiO2 (CTAB@SiO2), which was considered antiviral based on previous studies [31] and because of the wide-scale use of CTAB-like compounds in antimicrobial applications. To create a comparison sample for the CTAB@SiO2 textile, a CTAB solution alone, at approximately the same concentration as present in CTAB@SiO2, was spray-coated onto the textile.
In order to prepare a positive control sample with presumably high antiviral activity, the textile was spray-coated with Cu(NO3)2 (Table 1). The active component in the resulting Cu textile was expected to be copper ions, which are known for their high antiviral activity [8].
The microstructure of untreated and sprayed textile samples is shown in Fig. 3A–H, and the quantities of active agents loaded onto the textile are given in Table 3. Interestingly, the loading efficacy of the textile with CeO2 NPs and CTAB@SiO2 was very different: CeO2 could be incorporated into the textile with about 100-fold higher efficacy than CTAB@SiO2. This could be due to the 20-fold size difference between the CeO2 and SiO2 particles (diameters 3 and 60 nm, respectively) and the less efficient attachment of larger particles onto the textile threads. As the textile is composed of polyester lacking any functional groups capable of covalent bonding under the conditions used during spraying, we assume that the particles attach only through adhesion to the textile surface, i.e., van der Waals forces and weak hydrogen bonding. Therefore, the surface/volume ratio of the particles, as well as the activity of their surface, plays a crucial role in their attachment to a thread, and smaller particles with a more active surface can be expected to demonstrate better attachment. Also, the SEM images show that the smaller CeO2 particles form a continuous layer on the fiber surface (Fig. 3C–F), while the larger SiO2 nanocontainers tend to form "islands", probably due to stronger aggregation in the initial colloid (Fig. 3G, H).

Table 2. Main characteristics of synthesized nanoparticles. The mean size of the particles was calculated based on HR-TEM data, and the hydrodynamic mean particle size was determined using DLS analysis.
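The surface-to-volume argument above can be made quantitative: for a sphere, S/V = 6/d, so the 3 nm CeO2 particles expose about 20 times more surface per unit volume than the 60 nm SiO2 nanocontainers; a one-line check:

```python
# Specific surface of a sphere scales as S/V = 6/d (d = diameter).
sv_ratio = (6.0 / 3.0) / (6.0 / 60.0)  # 3 nm CeO2 vs 60 nm SiO2
print(sv_ratio)  # -> 20.0, matching the 20-fold size difference
```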
In order to study the stability of the textiles, we measured the release of active components from the treated textiles under the simulated conditions of an antiviral assay. The release profiles of potentially antiviral compounds from the textile surfaces were notably different and depended on the chemical nature of the deposited material (Table 3). The highest release was observed for the Cu textile, in which case 23% of the initial Cu(NO3)2 loading was released after 1 h, and 50% of the loading was leached out after 24 h. On the other hand, the release of CeO2 NPs was negligible (0.16%) and, according to existing data on the low solubility of CeO2 [32], was probably due only to detachment of NPs from the textile surface. Interestingly, the release of CTAB from the CTAB textile surface was relatively low, reaching only 3.5% of the initial loading. It was also surprising that CTAB release from the CTAB@SiO2 textile was comparable with that of the CTAB textile (Table 3). Theoretically, encapsulation of CTAB in SiO2 mesopores could ensure a slow but continuous release of CTAB. Therefore, we suggest that the first wash-off of CTAB analyzed in our release assay was most likely determined by the rate at which SiO2 surface-attached CTAB was released, and the CTAB "hidden" in the mesopores did not release with this first burst. In the case of the CTAB@SiO2 textile, the extremely low content of SiO2 in the wash-offs (0.04% of the initial load on textile) is also noticeable, which proves that the SiO2 nanocontainers were relatively tightly bound to the textile surface. Based on these findings, we conclude that textiles loaded with soluble copper salts such as Cu(NO3)2 are unstable and unsuited to wet treatments, moist environments, or longer-lasting applications. At the same time, nanomaterial treatments on textiles are more stable and less prone to release the active components. However, the nanomaterial loadings used in this work are probably excessive for real-life applications of textile materials; e.g., in the case of the CeO2 treatment, the color of the textile visibly changed to yellowish. On the other hand, the physical properties of the CTAB@SiO2 textile did not visibly change compared with the untreated textile. Interestingly though, the CTAB textile, on which CTAB was added at approximately the same concentration as in CTAB@SiO2, turned "soapy" to the touch.
Considering that the treatment of textiles should minimally change their visual appearance or tactile sensation, the CTAB@SiO2 surface treatment could be proposed as a promising candidate for future developments, provided its antiviral activity is sufficient.
Antiviral efficacy of textiles
For the antiviral efficacy assessment, two coronaviruses were selected: porcine transmissible gastroenteritis virus (TGEV), which can be considered a model coronavirus transmitted through the fecal–oral route, and severe acute respiratory syndrome virus SARS-CoV-2, which can be considered a model virus transmitted through the respiratory route. In order to place the antiviral efficacy data of the textiles into context, all nanomaterials and compounds loaded onto the textiles were also assessed for their antiviral activity in solution form (Fig. 4). Overall, the sensitivity profiles of TGEV and SARS-CoV-2 towards the tested compounds at 1 h exposure were relatively similar. Among the tested compounds, CTAB was effective at the lowest concentrations but, when loaded into the mesopores of SiO2 in CTAB@SiO2, lost significantly in antiviral efficacy. Interestingly, CeO2 NPs and Cu(NO3)2 showed antiviral efficacy at relatively similar concentrations, between 0.1 and 1 mM (Fig. 4A and B).
Although the sensitivity profiles of both viruses were generally similar, some small differences were observed. Compared with SARS-CoV-2, TGEV was inhibited by lower concentrations of CTAB (a one log decrease in PFU was obtained at about 0.002 mM in the case of TGEV and 0.01 mM in the case of SARS-CoV-2). Similarly, CTAB@SiO2 did not affect SARS-CoV-2 but, at the highest CTAB concentration, decreased the PFU of TGEV by about 1 log (Fig. 4A). The higher sensitivity of TGEV towards CTAB was somewhat surprising but may be explained by a potentially higher concentration of cholesterol in the lipid membrane of this virus [33] compared with SARS-CoV-2, whose membrane contains more phospholipids with little representation of cholesterol or sphingolipids [34]. As cholesterol provides a high negative charge to lipid membranes [35], enabling them to attract more of the cationic CTAB [36], such a difference between the surfaces of these viruses may cause the differences in their sensitivity profiles towards cationic compounds. On the other hand, CTAB at higher concentrations was still effective against SARS-CoV-2 (Fig. 4B), as also demonstrated in a recent study by Guerrero-Bermea et al. [37].
Before testing the antiviral efficacy of the treated textiles, survival experiments of TGEV and SARS-CoV-2 on untreated (control) textile were carried out in order to determine a meaningful duration for the antiviral test and to put the subsequent antiviral efficacy data into context. Previous reports have shown that the residence time of viruses on textiles can be relatively long on cotton and silk, with survival times varying between 1 day and 7 months [38]. Temperature has also been shown to affect virus survival on surfaces, in that at lower temperatures viruses seem to survive more efficiently [38]. On our textile, the selected coronaviruses survived well during 1 h of semi-dry exposure at room temperature. Also, after 24 h of exposure, the number of TGEV PFU was not significantly different from the initial viral loading (Fig. 5A). Apparently, SARS-CoV-2 was more prone to drying, and the PFU of this virus decreased by about 1.5 log after 24 h of semi-dry exposure. This decrease was, however, similar to the decrease of PFU in semi-dry conditions without textile (Fig. 5B) and thus was caused by simple drying and not by any specific interactions between the virus and the textile surface. After 48 h of semi-dry exposure, however, most of the SARS-CoV-2 infectious particles on the textile surface were inactivated, whereas in the no-textile experiment, 4 logs of PFU remained (Fig. 5B). After 72 h, all infectious SARS-CoV-2 particles were lost both on textile and without textile. In the case of TGEV, 48 h of semi-dry exposure decreased the number of infectious particles by about 1 log, and only after 72 h on non-treated textile were around 2 logs of infectivity lost (Fig. 5A). These time-course virus survival experiments demonstrate that, in general, antiviral compounds having an effect before 48 h of exposure could be considered usable. In the case of TGEV, antiviral treatments that are active at or after 72 h could also be used. Based on this result and suggestions in standard methods (e.g., ISO 18184), we exposed TGEV and SARS-CoV-2 viruses to the treated textiles for 1 and 24 h and compared the PFU counts on those test textiles with the PFU counts on non-treated textiles at those timepoints.

Fig. 4. Effect of NP solutions or solutions of compounds on two coronaviruses, TGEV (A) and SARS-CoV-2 (B). The average decrease in log plaque forming units (PFU)/mL from at least three independent experiments with standard deviation is shown. The dotted line represents the maximum decrease in log PFU/mL that could be reliably shown.
Antiviral effect data for the treated textiles after 0, 1, and 24 h of exposure are shown in Fig. 6. Clearly, the most antivirally effective was the Cu textile, which exhibited notable (>3 log decrease in PFU) antiviral activity already after 1 h of exposure, i.e., in the short-time exposure scenario. After 24 h of exposure, i.e., the long-term exposure scenario, the Cu textile decreased the infective viral counts by 4 logs. However, as discussed above, the results of the stability and leaching experiments (Table 3) clearly indicated very efficient release of copper from the Cu textile, and thus such textiles could be recommended only for short periods in relatively dry conditions.
Although Cu(NO3)2 and CeO2 showed relatively similar antiviral profiles when tested in solution (Fig. 4), their effects on textile were very different: the Cu textile showed very strong antiviral activity, whereas no notable antiviral activity was observed for the CeO2 textile. It is important to mention that the comparison of Cu and CeO2 textiles in terms of their antiviral activity is justified by their similar loadings on the textiles (7.5 μmol of Cu(NO3)2 and 8.7 μmol of CeO2 per piece; Table 3). We propose that the high antiviral effect of the Cu textile was driven by efficient leaching of copper from this textile, whereas the CeO2 textile did not release any traceable amounts of the active NPs. This suggests that effective antiviral textiles may be obtained only if a sufficient amount of the active agent is released from the textile surface, whereas the antiviral activity of surface-attached agents remains relatively low.
In addition to the Cu textile, CTAB@SiO2 was the other textile demonstrating antiviral activity, although at significant (>3 log decrease in PFU) levels only after 24 h of exposure, i.e., in the long-term exposure scenario. In the case of SARS-CoV-2, the CTAB@SiO2 textile showed a similar effect to the CTAB textile (Fig. 6B). Considering the similar levels of CTAB release from both the CTAB@SiO2 and CTAB textiles (Table 3), a similar antiviral efficacy of these textiles could indeed be expected. Interestingly, however, TGEV was less affected by the CTAB textile than by the CTAB@SiO2 textile (Fig. 6A). Therefore, considering the efficacy of the CTAB@SiO2 textile against both of the tested viruses and the relative stability of this textile due to the low level of SiO2 and CTAB leaching (Table 3), such a textile has clear potential for antiviral use.
Interestingly, earlier studies on antiviral textiles have not attempted to address the issue of the potential loss of antiviral activity of the active compounds after loading onto textiles. In our study, we were able to compare the efficacy of antiviral compounds before and after textile loading only broadly, due to several uncertainties, including potential variation in the contact area between virus and surface and spatial differences in textile loadings. However, as already indicated, our results showed that compounds that do not dissolve or release from textile surfaces generally have lower antiviral activity than those showing fast dissolution profiles. Another possible reason for the low antiviral activity of, e.g., the CeO2 textile could be the potential masking of the ultrasmall NPs by the textile and their non-availability to viruses. However, our results on the CTAB and CTAB@SiO2 textiles suggest that the change in antiviral activity is not simply partial "surface masking" but is more complicated and sensitive to the nature of the antiviral material. For example, in solution, CTAB was clearly more efficient towards both coronaviruses than after its loading into SiO2 nanocontainers in the form of CTAB@SiO2 (Fig. 4). On the other hand, on textile after 1 h of exposure the antiviral effects of CTAB and CTAB@SiO2 were comparable, while after 24 h of exposure CTAB@SiO2 showed even significantly higher antiviral activity towards SARS-CoV-2. This result requires further investigation; however, we may suggest that in semi-dry conditions the amount of intrinsic moisture in the antiviral material becomes important. The aqueous CTAB entrapped in the SiO2 mesoporous nanocontainers during their preparation (about 50% of the particle weight by our rough estimation) is irrelevant in colloidal experiments but becomes crucial on the textile surface, as it allows partial dissolution of CTAB under conditions of severe solvent deficiency. This hypothesis also explains the drastic decrease in the antiviral activity of CeO2 nanoparticles when moving from colloids to the textile surface: due to their high degree of crystallinity, these particles contain a very small amount of intrinsic moisture. Cu(NO3)2 shows reasonable activity even in semi-dry conditions, but this can be explained by its extremely good solubility and high hygroscopicity.
In a wider context, these results highlight an important problem for the antiviral treatment of textiles. As shown by us and by other authors earlier, viruses may survive for a rather long time in relatively dry conditions, and therefore antiviral compounds should be able to deactivate viruses even at very low humidity. However, many popular inorganic antiviral agents (copper and silver metal nanoparticles; zinc, titanium, and copper oxide nanoparticles; etc.) are efficient only in the presence of water but contain little if any intrinsic moisture. Thus, these compounds will most likely not be efficient in dry or semi-dry conditions. Nanocontainers, on the other hand, in the form of porous inorganic or polymeric nanoparticles, hydrogels, activated carbon, etc., are capable of containing a significant amount of internal moisture. This opens a perspective for their use in the antiviral treatment of textiles that are not in direct contact with water and are operated under normal or low humidity conditions.
Fig. 5. Infectivity of TGEV (A) and SARS-CoV-2 (B) expressed as log plaque forming units (PFU)/mL after semi-dry exposure to untreated textile, or in a similar exposure without textile, during 1–72 h. Lower case letters indicate statistically independent groups according to Tukey's range test with P-value <0.05.
Table 1. Concentrations of antiviral nanomaterials and compounds used to spray the textile.
Table 3. Quantities of active compounds on textiles and stability (leaching) of the treated textiles as determined by ICP-MS analysis. (a: per 4 cm² or 0.045 ± 0.008 g textile piece; b: Si in wash-off not detected, below limit of detection.)
Detailed Assessment of Embodied Carbon of HVAC Systems for a New Office Building Based on BIM
Abstract: The global shift towards embodied carbon reduction in the building sector has indicated the need for a detailed analysis of environmental impacts across the whole lifecycle of buildings. The environmental impact of heating, ventilation, and air conditioning (HVAC) systems has rarely been studied in detail. Most of the published studies are based on assumptions and rule of thumb techniques. In this study, the requirements and methods to perform a detailed life cycle assessment (LCA) for HVAC systems based on building information modelling (BIM) are assessed and framed for the first time. The approach of linking external product data information to objects using visual programming language (VPL) is tested, and its benefits over the existing workflows are presented. The detailed BIM model of a newly built office building in Switzerland is used as a case study. In addition, detailed project documentation is used to ensure the plausibility of the calculated impact. The LCA results show that the embodied impact of the HVAC systems is three times higher than the targets provided by the Swiss Energy Efficiency Path (SIA 2040). Furthermore, it is shown that the embodied impact of HVAC systems lies in the range of 15–36% of the total embodied impact of office buildings. Nevertheless, further research and similar case studies are needed to provide a robust picture of the embodied environmental impact of HVAC systems. The results could contribute to setting stricter targets in line with the vision of decarbonization of the building sector.
Introduction
In 2019, the unprecedented frequency of heatwaves and temperature extremes, together with rising sea levels and further biodiversity loss, brought climate change to the center of attention [1–3]. These alarming indicators require consistent effort and drastic changes towards the decarbonization of all sectors. The building sector is a key player in the fight against climate change, as it is responsible for nearly 40% of total global CO2 emissions [4]. So far, both global and European strategies have focused on reducing operational greenhouse gas (GHG) emissions through measures such as efficient building technologies and replacing fossil fuel-based energy carriers with renewable sources.
The global efforts towards zero operational GHG emissions shift the remaining emissions to so-called embodied carbon. Embodied carbon refers to GHG emissions that are released during the manufacturing, transportation, construction, and end-of-life phases of building materials. According to the UN Global Status Report, the embodied GHG emissions of the building sector are 11% of the total global emissions.

LCA has been used in many studies, mainly for residential and office buildings [11]. In recent years, BIM has increasingly been used to facilitate establishing a bill of quantities of materials for the life cycle inventory (LCI) phase of LCA and to reduce the amount of manual data input, as shown in several recent review papers [20,22,23]. The accuracy of the LCI depends on the level of development (LOD) of the BIM model. The LOD defines the minimum content requirements for each element of the BIM at five progressively detailed levels of completeness, from LOD 100 to LOD 500 [24]. However, most papers on BIM-based LCA methods do not declare the LOD used for the LCA [20].
Most studies examine the environmental impact of the structural elements and the building envelope [16]. Only a few studies focus on the impact of building services, including HVAC systems. However, there is increasing research interest in generating knowledge about the environmental impact of building services, as they play a significant role in embodied carbon reduction [6,17,18]. One of the biggest challenges is that existing studies generally cannot perform a detailed assessment, mainly due to the lack of data and methods to estimate the embodied environmental impact, especially for HVAC components that are composed of various raw materials [6].
The impact of HVAC systems is usually assessed together with other building services like electrical equipment and plumbing [25,26]. It has been estimated that embodied carbon related to building services represents up to 10-12% of the total embodied carbon of a typical building [8,27]. In Switzerland, the GHG emissions during the manufacturing and maintenance of HVAC systems (heating, ventilation, heat distribution) account for about 13% of the total emissions of new office buildings [28]. However, the numbers are based on average values.
To date, only a few studies measure the embodied impact of HVAC systems [17]. Moreover, even fewer studies use BIM to assess the impact of HVAC systems [18,29–31]. Only one of them [31] uses an as-built or nearly as-built BIM HVAC model. None of them implement such a high level of detail as this study.
In general, there are two types of LCA studies for HVAC systems. The first type of study conducts a comparison between two or more HVAC systems, to identify which system has a smaller environmental impact, or to assess how multiple parameters such as climate, energy mix, and local policies influence the resulting environmental impact [6–8,16,32,33]. These studies are not interested in a detailed assessment. In most cases, they are based on generic data and assumptions. Hence, the first type of study does not relate to the scope of this study and cannot be used for comparison reasons.
The second type of LCA for HVAC refers to the detailed assessment of an HVAC system and is aligned with the scope of this research. One of the main purposes of such studies is to provide transparency about the actual impact of HVAC systems and avoid simplifications that can lead to inaccurate and misleading environmental impact calculations. In the context of this research, only four studies [7,29–31] have been found to follow a similar approach, and these are discussed in more detail below.
The study by Borg et al. [29] conducts a cradle-to-grave LCA for the ventilation system of a six-story office building in Trondheim. It uses a BIM model for the material quantity extraction, which was the basis for construction and thus very accurate, according to the authors. In terms of BIM and LCA methodology, it follows a unidirectional and fragmented process: the material information must be exported whenever changes are applied to the model (and thus the LCA calculations must be redone). The embodied impact results are based on an optimistic scenario for the European electricity supply mix and show that the impact of materials, construction, and demolition for ventilation is 33.7 kgCO2eq/m². This information can be used for comparison reasons, although it represents only one aspect of the overall HVAC systems impact.
The second study, by Dokka et al. [30], investigates a typical four-story office building in Norway modeled with BIM in order to identify the most critical parameters in the design of a zero energy building (ZEB). In terms of methodology, the material inventories are exported from the BIM model, and the calculations are performed in the LCA tool SimaPro, which is again a unidirectional and fragmented process. The total heated floor area (HFA) is 1980 m², and the lifetime of the building is set to 60 years. The total embodied emissions are approximately 8.5 kgCO2eq/m² in annualized emissions, and the emissions due to ventilation are 0.4 kgCO2eq/m² per year. The study does not provide insights into the embodied impact of the cooling and heating system. Finally, the authors highlight that the technical systems used in the concept model have not been properly dimensioned, and only rough estimates have been used in the embodied emission analysis. Thus, the results might deviate from the actual impact.
The study conducted by Hollberg et al. [31] follows a cradle-to-grave BIM-based dynamic LCA similar to the one used in this research. It establishes automated and bidirectional links between the model and the LCA database, allowing for continuous and real-time LCA monitoring throughout the planning phase. The LCA is done for a new office building in Switzerland. The building has an HFA of 675 m², and the reference service life (RSL) is 60 years. The BIM model has a high level of geometry at the end of the planning phase. The results show that the interior (ceilings, doors, furniture, railings), with 43%, has the highest embodied impact on the environment, followed by 24% for technical equipment, 21% for structural parts, and 12% for the envelope. This study focuses more on the LCA methodology, and the accuracy of the data used for the environmental assessment of the technical installations is unclear.
The authors of [7] use a real case study of a new office building in Sweden to perform a detailed post-construction LCA using site-specific data provided by the contractors. This LCA has a similar scope, functional unit, and boundary conditions. Although it does not use a BIM model for material quantity extraction and the LCA data were adapted to Swedish conditions, this study can be used for benchmarking purposes. The results relevant to the global warming potential (GWP) of HVAC systems show that the production and operation phases contribute most to the overall HVAC carbon emissions, with production reaching up to 38 kgCO2eq/m² and operation up to 100 kgCO2eq/m². The study highlights the importance of assessing HVAC systems, as they were proven to have a considerable impact, especially due to replacements.
BIM-based LCA is used in many studies as an attempt to move away from conventional LCA (manual-based calculations) and provide designers with a tool that facilitates the environmental impact assessment of buildings during the design phase as well as for post-construction evaluation (i.e., for compliance with standards or for knowledge generation) [34–37].
Currently, a few commercial tools exist that combine LCA with BIM (e.g., Tally or OneClickLCA). The common practice is to extract information from the BIM model (usually in IFC or gbXML format) and later insert it into the LCA software where the assessment is performed. Another approach, often suggested by researchers [38–40], uses BIM with a visual programming language (VPL). This method enables the automatic extraction of information from the BIM model and the creation of updatable links to LCA databases.
Although these two approaches have been used considerably, they neglect or oversimplify HVAC systems. There is a research gap both in terms of the embodied environmental impact of HVAC systems and in terms of how to assess them using a BIM model and the existing tools or methods.
BIM and LCA Workflow for HVAC Systems
In general, three types of information are needed to perform the BIM-based LCA, namely geometrical data, material data (quantity and name), and LCA data. This information can be acquired from different sources. Geometrical information is extracted directly from the BIM model. Material information can be extracted directly from the BIM model. If this information is not available in the model, then product datasheets are used instead. In some cases, material quantity information can be directly retrieved from the BIM model or product datasheet. In other cases, the quantity needs to be calculated by combining mathematical formulas with geometrical information coming from the BIM model.
The integrated BIM and LCA workflow for the HVAC systems is described in Figure 1. The schematic diagram shows the different data sources that are linked to the VPL. Specifically, information from the BIM model (1), the product datasheets (2), and the LCA database (3) are combined in the VPL environment where the impact is calculated, and the results are exported in the desired format.
The BIM software used in this study is Revit 2019, and the VPL is Dynamo 2.0. The VPL serves as the basis and performs the main functions that are necessary to conduct the BIM-based LCA. These functions are:
• Directly extracting object and material data from the BIM model.
• Establishing bidirectional linking between the BIM objects and the product datasheets/catalogs and between materials and the LCA database.
• Performing LCA calculations and exporting results in an Excel file.
The flexibility of the tool makes it possible to customize the proposed workflow according to the desired (LCA) boundary conditions, the available data, as well as the data format and its structure.
Geometrical Data and Material Data
The BIM model is used as the main source for material and geometrical data. The analysis starts from the object level and refines calculations to the material level in three steps. Initially, an advanced grouping and sorting of the elements is done according to the level of complexity and the nature of the elements. For example, pipes are grouped together with the ducts because they are both linear elements and have a similar level of detail (complexity). At a second level, elements are further subdivided. For example, ducts are sub-grouped into rectangular and round ducts. The third and lowest level of grouping/sorting is done according to material type, for example, round steel ducts and round copper ducts. Four distinct groups are created according to common characteristics, namely geometry and level of complexity. These four groups are ducts and pipes, fittings, mechanical equipment and air terminals, and pipe and duct accessories.
Overall, three methods are applied for the material quantity calculations of the assessed groups. In the first method, the material and geometrical information are extracted from the BIM model, and the weight of the materials is calculated in Dynamo with the use of scientific formulas (BIM data combined with scientific formulas). In the second method, the material and geometrical information is extracted from the BIM model and combined in Dynamo with product datasheet information (BIM data combined with product datasheet data). In most cases, the weight information coming from the product datasheet is provided per object and not per material. Thus, a percentage share of the total weight to the various object materials must be assumed. The third method includes the direct mapping of objects to their weight information coming from product datasheets (BIM objects linked to product datasheets). Again, a percentage share of the total weight to the product materials must be assumed. If none of the abovementioned is applicable, as in the case of the fittings, then rule of thumb estimations are applied.
Ducts and Pipes
Pipes and ducts are linear elements that have width, length, and area as predefined parameters in Revit. For the material quantity calculation of the steel pipes, Equation (1) is used, while Equation (2) should be used in the case of different materials (i.e., aluminum, copper):

m = 0.02466 × W × (OD − W)    (1)

m = π × ρ × W × (OD − W) × 10⁻⁶    (2)

where OD = specified outside diameter (mm), W = specified wall thickness (mm), ρ = material density (kg/m³), and m = mass per unit length (kg/m); Equation (1) corresponds to Equation (2) with the density of steel (7850 kg/m³). The resulting value is later multiplied by the total length of the pipes to calculate their total material quantity. The calculation script is fully automated and generic in the sense that it can be used for any similar calculations.
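A minimal sketch of this pipe weight calculation (the densities are standard handbook values, and the DN50 example dimensions are illustrative, not taken from the case study model):

```python
import math

RHO = {"steel": 7850.0, "copper": 8960.0, "aluminum": 2700.0}  # kg/m^3

def pipe_kg_per_m(od_mm: float, wall_mm: float, material: str = "steel") -> float:
    """Mass per meter of a pipe wall: pi * rho * W * (OD - W), Eq. (1)/(2)."""
    return math.pi * RHO[material] * wall_mm * (od_mm - wall_mm) * 1e-6

# Example: a DN50 steel pipe (OD 60.3 mm, wall 3.9 mm):
print(f"{pipe_kg_per_m(60.3, 3.9):.2f} kg/m")  # ~5.42 kg/m
# Total material quantity = kg/m * total pipe length extracted from the BIM model.
```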
The ducts belong to the same group as the pipes, yet some properties, such as the wall thickness, which is required to calculate the material quantity, are not included in the predefined parameters of Revit (software-related restrictions). Thus, it is not possible to calculate the weight by solely extracting information from the BIM model. To calculate the material quantity of the ducts, information from a manufacturer's datasheet is extracted and mapped to the duct components. The mapping is performed based on the height and width of the duct opening. Similarly to the ducts, flex tubes are combined with the product datasheet information based on their diameter size.
The insulation type of the pipes and ducts and its volume are extracted directly from the BIM model, and the volume is multiplied by the density of the insulation material. The developed script initially extracts and groups the different types of insulation existing in the model. Then the volumes of each group are extracted and summed up to be combined with the density of the material.
Fittings
Fittings have complex geometry (polyhedrons). In this case, they were modelled as solid objects (no wall thickness, no openings). Furthermore, these objects had no length or area parameter, and no material was applied during the modelling phase. To calculate the weight of the fittings, first, a material is applied to their surface; this makes it possible to extract the surface area based on the material area. Second, the ratio of the openings to the total area is calculated for the different types of fittings (55 types). Finally, the weight of the fittings is calculated as a percentage of the total pipe and duct weight based on the area ratio, as shown in Equation (3). The resulting value was verified using empirical values.
duct area / duct fitting area = duct weight / duct fitting weight    (3)
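Rearranged, Equation (3) gives the fitting weight from the known duct weight and the two measured areas; a sketch with hypothetical values:

```python
def fitting_weight(duct_weight_kg: float, duct_area_m2: float,
                   fitting_area_m2: float) -> float:
    """Estimate fitting weight via the area ratio of Equation (3)."""
    return duct_weight_kg * fitting_area_m2 / duct_area_m2

# Hypothetical: 1200 kg of ducts exposing 800 m2 of surface; fittings expose 90 m2.
print(fitting_weight(1200.0, 800.0, 90.0))  # -> 135.0 kg
```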
Mechanical Equipment and Air Terminals
Mechanical equipment and air terminals are complex objects that can only be assessed if the necessary information from the manufacturer is provided or if the BIM object contains this information. When information is not available, similar equipment from other manufacturers is used instead.
Pipe and Duct Accessories
Pipe and duct accessories refer to all components that control air and water flow and pressure, such as valves and dampers. Most of these elements have a mechanical device (meter, actuator, sensor) that operates the equipment. Like the mechanical equipment and the air terminals, these components require information from the manufacturers, or the information should be included in the BIM object. Control devices and meters contain materials such as solder alloys, TBBA, and FR-4, which are mainly part of the printed circuit boards (PCB) that cannot be found in the existing LCA databases. Therefore, they are calculated based on primary materials such as steel and brass.
LCA Data
The LCA data are retrieved from the KBOB and Ecoinvent databases. The KBOB [42] is an LCA dataset provided by the Swiss Conference of the Construction and Real Estate Organs of Public Builders (KBOB) for typical building materials in Switzerland, and it is based on Ecoinvent. The linking of the LCA values for the calculation of the embodied carbon is based on material mapping. The reason why material-based mapping is used is twofold. First, the case study model consists of around 60,000 elements. The complexity of these elements varies widely, as does the number of materials that they are composed of. In many cases, mainly mechanical equipment and air terminals, this information was not available in the provided model, and the material information had to be retrieved from the producers and inputted manually into an Excel file. Second, the LCA databases still lack a great deal of product-specific data for construction products. Thus, in many cases, similar materials must be used instead of the actual ones, ending up with 16 materials that describe the whole HVAC systems model.
To perform the material-based mapping, an Excel file was created containing information about the materials existing in the model and their LCA values, which were retrieved from the LCA database. Once the materials from the model were extracted and grouped together for the defined assessment groups (i.e., pipes and ducts, fittings, air terminals), the mapping/linking with the customized Excel LCA database was made possible in Dynamo.
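The material-based mapping amounts to joining the extracted quantities with the per-material impact factors; a minimal sketch in Python/pandas of the same logic implemented in Dynamo (material names and GWP factors below are placeholders, not KBOB values):

```python
import pandas as pd

# Material quantities extracted from the BIM model (placeholder values).
quantities = pd.DataFrame({
    "material": ["galvanized steel", "aluminum", "mineral wool"],
    "mass_kg":  [250_000, 50_000, 38_000],
})

# Customized LCA lookup table (GWP factors are illustrative, not KBOB data).
lca = pd.DataFrame({
    "material":      ["galvanized steel", "aluminum", "mineral wool"],
    "gwp_kgco2e_kg": [2.0, 8.0, 1.2],
})

# Join quantities to factors on the material name and compute the impact.
merged = quantities.merge(lca, on="material")
merged["gwp_kgco2e"] = merged["mass_kg"] * merged["gwp_kgco2e_kg"]
print(merged[["material", "gwp_kgco2e"]])
```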
Case Study
The case study is the Siemens International Headquarters office building in Switzerland. The office building was completed in 2018 and is part of the new Siemens campus in Zug. It has a LEED platinum certification and the Swiss Minergie label, which focuses on the building shell and energy consumption. The building consists of seven floors and has a gross floor area (GFA) of 32,000 m 2 , including two underground floors, which are used mainly as a garage. In terms of HVAC systems, the building runs on water to water heat pumps and uses water from lake Zug for heating and free cooling purposes. This is one of the first Siemens construction projects to use BIM. The BIM model for the HVAC systems ( Figure 2) has undergone extensive revisions after the completion of the construction. It has a level of development (LOD) above 300, although it was observed that some elements were more detailed than others. Furthermore, the project was well documented including product datasheet and detailed information for most of the HVAC equipment. It can be said that the post-construction BIM model combined with manufacturer's information fulfils as-built requirements. In terms of LCA, a cradle-to-grave analysis is conducted for HVAC products and services, and the system boundary is based on the lifecycle modules defined in EN 15804 [43]. The modules assessed in the study are A1-A3 (fabrication), B4 (replacement), B6 (operation), and C1-C3 (disposal). The rest of the modules were kept out of the scope of this study due to a lack of relevant data. The reference unit was set to 1 m 2 AE to comply with the Swiss standards for planning and construction. According to SIA 380/1 2016 [44], the AE stands for the energy reference area, which is the sum of all In terms of LCA, a cradle-to-grave analysis is conducted for HVAC products and services, and the system boundary is based on the lifecycle modules defined in EN 15804 [43]. The modules assessed in the study are A1-A3 (fabrication), B4 (replacement), B6 (operation), and C1-C3 (disposal). The rest of the modules were kept out of the scope of this study due to a lack of relevant data. The reference unit was set to 1 m 2 A E to comply with the Swiss standards for planning and construction. According to SIA 380/1 2016 [44], the A E stands for the energy reference area, which is the sum of all over-and underground floor areas that lie within the thermal building envelope and are heated or air-conditioned. The energy reference area equals the heated floor area and is 22,000 m 2 . The reference study period is 60 years, as defined in SIA 2032 [45]. The replacement frequency of the equipment is estimated using combined information from manufacturers and the ASHRAE standard. These estimations are based on a most-likely scenario. This study focuses on the impact of HVAC systems with regards to climate change. For this reason, global warming potential (GWP) is used to quantify the GHG emissions in units of CO 2 -equivalent.
Material Quantities
The implemented customized workflows, as well as the various groups and the materials used to perform the BIM-based LCA, are summarized in Table 1. It is concluded that the more complex the BIM objects, the higher the dependency on the manufacturer's information. This is inversely proportional to the flexibility of the tool.
The results from the material quantity extraction show that galvanized steel (66%), aluminum (13%), and mineral wool (10%) are the prevailing materials. As expected, steel is the most common material found in HVAC equipment, coming primarily from the pipes and the ducts, which are heavy elements used for water and air distribution. The total amount of steel, including galvanized steel, stainless steel, and steel, is 356 tons, which corresponds to about 80% of the total material quantity (Figure 3).
LCA Results per Component
These results become more intriguing when the whole life cycle of the building is considered, as the replacement of ductwork and mechanical equipment takes place. The resulting GHG emissions in the replacement phase show that the impact of the mechanical equipment, the air terminals, and the insulation is increased compared to the manufacturing phase (Figure 4). The impact of the mechanical equipment is almost doubled during the use phase (35 kgCO2eq/m²) compared to its fabrication impact (15.3 kgCO2eq/m²). This increase is related to the frequency of equipment replacement during the building's use phase. For example, heat pumps are replaced every 20 years and are consequently replaced twice during the 60-year building lifetime. The quantity of the material has an immediate effect on the resulting environmental impact, which, in the case of the heat pump, would be double the fabrication impact. Overall, it is concluded that the mechanical equipment, together with the ducts and pipes, are the main contributors to the total lifecycle GHG emissions of the HVAC systems.
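The replacement effect can be made explicit: over the 60-year reference study period, a component with a 20-year service life is replaced twice, so its replacement impact (module B4) is about twice its fabrication impact; a sketch:

```python
import math

def n_replacements(rsp_years: int, service_life_years: int) -> int:
    """Number of replacements within the reference study period (module B4)."""
    return math.ceil(rsp_years / service_life_years) - 1

# Heat pumps with a 20-year service life over a 60-year study period:
n = n_replacements(60, 20)   # -> 2 replacements
fabrication = 15.3           # kgCO2eq/m2 (A1-A3, mechanical equipment)
print(n, n * fabrication)    # replacement impact if the whole equipment group
                             # followed the same replacement schedule
```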
At a higher level, the results for the investigated modules show that replacement (B4), with 1.70 kgCO2eq/m² annualized emissions, is the most carbon-intensive building lifecycle stage, while fabrication (A1–A3), with 1.32 kgCO2eq/m² annualized emissions, and operation (B6), with 1.25 kgCO2eq/m² annualized emissions, are of similar importance. The operational impact calculations were based on energy consumption data from the utility company combined with the as-planned energy distribution diagrams from the energy engineer. The energy consumption of HVAC systems accounts for 57% of total energy consumption. The disposal impact is very small (0.4 kgCO2eq/m²), coming mainly from insulation and secondary materials.

A critical finding of this study is that it is worth investigating the amount and the impact of the air filters existing in the HVAC equipment. According to the maintenance instructions for the case study building, the filters should be replaced at least once per year during the 60-year service life of the building. For the big air handling units (AHU), which are part of the mechanical equipment (Figure 5), this amount becomes considerable. Filters are responsible for 65% of the total replacement impact of the AHUs. When looking at the whole HVAC system, the total impact coming from the filters during the use phase of the building amounts to 11% of the total replacement impact.
The various components of the mechanical equipment and the air terminals are shown in Figure 5. Among the mechanical equipment components, the hybrid ceiling panel is the most recurrent component and has the highest impact of 23.2 kgCO2eq/m². Very close, with 21.9 kgCO2eq/m², are the AHUs, which are massive blocks of equipment containing, among other components, air filters, heat exchangers, fans, and air coolers. The rest of the elements have around 95–99% fewer emissions. It is noteworthy that the emissions of one chiller are comparable to the emissions of 18 small-sized heat pumps. The air terminal components have a minimal impact, with no significant variations among the impacts of the different component types. Accordingly, Figure 6 shows the components that belong to the pipe and duct accessories. It is shown that while the pipes have more types and a higher number of components, their overall GWP is only one-third of that of the ducts. This is mainly because the duct components are bigger and thus contain more material. Among the pipe components, the butterfly shut-off valve has the highest impact of 1.38 kgCO2eq/m². Among the duct accessories, the multi-leaf dampers and volume flow controllers are the most emission-intensive elements.
Comparison to Benchmarks
The resulting GWP due to fabrication is compared to a similar case study in the Swedish context [7]. The two case studies were both conducted for new office buildings and have a similar LCA scope and boundary conditions. Nevertheless, the calculations are adjusted to country-specific conditions. To account for uncertainty, two scenarios are developed investigating how country-specific parameters, namely LCA values and HFA, can affect the resulting values measured in kgCO2eq/m². The categories compared are ducts and pipes (ductwork and pipework), insulation (only mineral wool), and mechanical equipment and air terminals. In the first scenario (Scenario 1), the same LCA values as in the Swiss case study of the Siemens International Headquarters office building (HQ) are applied to the Swedish case study. It is shown that the LCA values have only a minor effect on the total fabrication impact. However, there are differences in the share of the impact among the different HVAC categories (Figure 7). In the second scenario (Scenario 2), the HFA for the Swedish case is assumed according to the Swiss building regulations. The Swedish HFA is defined as the total floor area that is heated to at least 10 degrees Celsius, excluding outer walls. Adjusting the HFA to Swiss conditions was done using the method and the factors proposed in [46]. It is shown that the adjusted HFA does not differ much from the Swedish HFA, and thus the difference between Scenario 2 and the HQ base case remains quite high. Specifically, there is a 40% deviation in the total emissions, with the pipes and ducts being the main contributors to this difference. This can be attributed to the use of different HVAC systems and distribution networks, as well as to quantity miscalculations and assumptions that were made due to a lack of data.
The results are also compared to the embodied carbon target values of SIA 2040 [47]. The embodied GHG emissions for the 2050 efficient scenario are 8.5 kgCO2eq/m2a. According to [28], which is based on the SIA 2040 technical book, the share of heating, ventilation, and heat distribution in the total embodied GHG of new office buildings is 13%. Therefore, it can be concluded that the SIA 2040 target for HVAC systems is about 1.1 kgCO2eq/m2a. The embodied GHG emissions of the HVAC systems for this case study are 3.05 kgCO2eq/m2a, which is almost three times higher than the SIA 2040 emissions target.
Finally, the results are framed in the context of whole-building lifecycle carbon emissions. For this reason, information from many studies on the embodied carbon emissions of office buildings was considered. The draft of the Royal Institution of Chartered Surveyors (RICS) "Practitioner's Guidance to Whole Life Carbon Assessments" [5] shows that the embodied impact of buildings can vary a lot (400-1000 kgCO2eq/m2) depending on the building type, even by more than a factor of two, with the highest values corresponding to highly serviced areas (i.e., hospitals, high-rise apartments). For the specific case of office buildings, the information paper [10] summarizes embodied carbon case studies that were conducted by different companies. It concludes that the embodied impact for offices can range between 500 kgCO2eq/m2 and 1200 kgCO2eq/m2. The total embodied impact for the HVAC systems of this case study is 183 kgCO2eq/m2. Compared to the existing knowledge of the total embodied impact of office buildings, the HVAC embodied impact would be in the range of 15-36%, which is significantly higher than previous studies and estimations.
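The quoted 15-36% share is a direct division of the HVAC total by the reported office-building bounds, as the following short check (using the values stated above) shows:

    # Share of whole-building embodied carbon attributable to HVAC.
    hvac_embodied = 183.0                      # kgCO2eq/m2, this case study
    office_low, office_high = 500.0, 1200.0    # kgCO2eq/m2, office range from [10]
    share_low = hvac_embodied / office_high    # lowest share, vs. the highest total
    share_high = hvac_embodied / office_low    # highest share, vs. the lowest total
    print(f"HVAC share: {share_low:.1%} to {share_high:.1%}")  # ~15% to ~37%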
Discussion
The proposed workflow allows automating the LCA by establishing links between the databases and the BIM model in the VPL environment. The responsive design of the tool enables real-time tracking of the environmental impact when changes are applied to the BIM model or the linked information. Furthermore, linking information instead of inputting data in the BIM objects solves the problem of creating "heavy" and inoperable BIM models.
The developed approach focuses on the material data collection for the LCA. Different methods for HVAC material quantity calculations are implemented based on the information existing in the BIM objects. It is shown that the flexibility of the tool depends on data availability and object geometrical complexity. Thus, the pipes and the ducts, which have a simple geometry, allow for multiple ways of calculating their weight and, subsequently, their environmental impact. On the contrary, for mechanical equipment such as heat pumps, which are complex objects consisting of multiple components, it is not possible to calculate their weight based on the BIM model unless the object is modelled as a 1:1 representation or this information is included as an object property.
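As an illustration of the geometry-based path, the following Python sketch computes the sheet-metal mass of a rectangular duct segment and the mass of a pipe segment from BIM dimensions; the density, wall thickness, and emission factor are assumed placeholder values, not database entries.

    import math

    STEEL_DENSITY = 7850.0  # kg/m3, sheet/pipe steel (assumed)

    def duct_mass_kg(width_m, height_m, length_m, thickness_m=0.001):
        """Sheet-metal mass of a rectangular duct segment from its BIM dimensions."""
        perimeter = 2 * (width_m + height_m)
        return perimeter * length_m * thickness_m * STEEL_DENSITY

    def pipe_mass_kg(outer_d_m, wall_m, length_m, density=STEEL_DENSITY):
        """Mass of a pipe segment from outer diameter, wall thickness and length."""
        inner_d = outer_d_m - 2 * wall_m
        area = math.pi / 4 * (outer_d_m**2 - inner_d**2)
        return area * length_m * density

    # Embodied GWP = mass x emission factor (factor is a placeholder, not a database value)
    GWP_STEEL = 2.3  # kgCO2eq per kg (assumed)
    print(duct_mass_kg(0.5, 0.3, 10.0) * GWP_STEEL, "kgCO2eq for one duct run")
    print(pipe_mass_kg(0.05, 0.003, 10.0) * GWP_STEEL, "kgCO2eq for one pipe run")

For complex equipment, by contrast, such a formula does not exist; the mass must come from the object properties or the manufacturer datasheet, which is exactly the vendor dependence discussed above.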
Realistic BIM objects are usually provided by the manufacturers and then imported into the model. When the BIM object is not provided, it can be modelled by the designer as a placeholder (rough geometry). A placeholder, although it is not an accurate geometrical representation of the product, can carry detailed information which can be shown as object properties. In this case study, the level of geometry of the equipment varied, as did the level of information attached to the BIM objects. The post-construction BIM model has a LOD of 300, which alone does not fulfil the requirements of a detailed LCA. Most objects had a generic name (i.e., shut-off valve with motor), the dimensions, and occasionally the material name and the name of the manufacturer. Hence, a lot of effort had to be put into looking through the project documentation to identify the products, or the information had to be acquired directly from the producers. It should be noted that the quality of the material data retrieved from the various data sources can affect the accuracy of the resulting impact. Data challenges other than the level of development of the BIM model include deficient or outdated LCA databases and the accuracy and granularity of the data provided by the manufacturers. A critical point for the success of the proposed method is to have the data in an appropriate format. Currently, most manufacturers provide their product information in PDF format. Moreover, very few give information about the impact of their products. Especially for highly sophisticated manufactured equipment such as HVAC systems, access to machine-readable data is vital; otherwise, the proposed process could never be truly automated.
In the context of this study, all the files were stored and processed locally (on a PC). Moreover, the data were extracted directly from the BIM model that was created in Revit software. In the future, this "isolated" approach would be very limiting and would not serve the complex tasks arising from AEC processes, which are characterized by high collaboration among all the involved parties (architects, engineers, contractors, etc.), process integration, and cross-platform flow of information.
The major limitation of extracting information directly from the BIM model or performing the LCA within the BIM software is that the language can only be read and understood by this specific software and its users. For example, Revit categories and families are not used outside the Revit environment. Hence, interoperability can only be achieved within the Revit environment and the relevant Autodesk solutions. Nevertheless, most BIM software offers the possibility to export data in IFC (Industry Foundation Classes) format, making it possible to share information across the entire BIM ecosystem in a consistent and repeatable way. LCA databases should also be represented in a common data format like IFC in order to be reusable and understood by other software and users. Using the proposed tool to link building data to product LCA data based on open standard formats can enable enormous possibilities for the future of LCA, such as generating consistent knowledge regarding buildings and building products, providing transparency of processes among all involved project parties, and finally contributing to the improvement of existing LCA databases.
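A minimal sketch of such open-format linking, assuming the open-source ifcopenshell library and an IFC4 export, is shown below; the entity selection and the per-element emission factors are hypothetical placeholders standing in for a real machine-readable LCA database.

    import ifcopenshell  # open-source IFC toolkit (assumed available)

    model = ifcopenshell.open("office_building.ifc")  # hypothetical IFC export

    # Placeholder lookup standing in for a machine-readable LCA/EPD database:
    # maps an IFC entity type to an assumed emission factor per element.
    epd_lookup = {
        "IfcDuctSegment": 12.0,   # kgCO2eq per element (assumed)
        "IfcPipeSegment": 4.0,
        "IfcAirTerminal": 6.5,
    }

    total = 0.0
    for entity_type, factor in epd_lookup.items():
        elements = model.by_type(entity_type)   # query the IFC model by entity type
        total += len(elements) * factor
        print(f"{entity_type}: {len(elements)} elements")

    print(f"Rough embodied GWP from linked factors: {total:.0f} kgCO2eq")

Because the query works on IFC entity types rather than Revit families, the same linking logic remains usable across any authoring tool that exports IFC, which is the interoperability argument made above.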
In terms of the LCA, the results show that the embodied impact of the case study is three times higher than the SIA 2040 embodied carbon target values for HVAC systems. This finding raises the question of whether SIA 2040 is underestimating the impact of HVAC systems or whether this difference is due to the high complexity of smart buildings. The former would imply that buildings should consider a higher embodied impact for HVAC systems, while the latter indicates that as buildings become "smarter" and thus more energy-efficient, their embodied impact increases. In this case, the GHG savings resulting from the increased energy efficiency should be compared to the added material impact to understand the trade-offs between them. Further research is required to adequately address this question and understand the contribution of the HVAC systems to the embodied impact of new office buildings.
The presented study assessed HVAC control devices wherever this was possible. However, these devices include components such as printed circuit boards and transducers, which consist of materials with unknown impact. This hinders the detailed assessment of these elements. Furthermore, for this study, only information about the devices (motors, meters, actuators) of the duct and pipe accessories were available. Therefore, an assessment that includes all the HVAC electronic components could reveal their actual contribution to the impact and would eventually shift the impact even more towards the material side.
Noteworthy is the impact of the HVAC filters. The replacement frequency of the filters is high compared to the rest of the equipment. Thus, it makes sense to study their environmental footprint over the lifetime of the building. In this study, only fiberglass was considered for the calculation of the filter impact, and only for the equipment for which this information was available. Despite data limitations, it is shown that the replacement impact of the filters is substantial compared to the total HVAC replacement impact. Until now, LCA studies have tended to overlook the filters' impact, which, according to the results of this study, is not insignificant, especially in the case of big commercial and office buildings.
Finally, it is crucial to define the data requirements for each BIM model even before the design phase and make sure that the requirements fulfil the project objectives. Pinpointing the end-use of the model is critical. For example, if the model is used for energy optimization, the data relevant for the optimization process must be inputted or linked to the model. One project might serve more than one purpose. This is not a problem if the requirements for each end-use are clearly defined and as long as the model can stay "lightweight" and operable. This can always be achieved by linking the information instead of inputting data at the BIM object level, as described in the proposed workflow.
Conclusions
This study focuses on the detailed environmental assessment of HVAC systems using BIM and proposes an integrated method to perform a complete LCA. The method is based on the following three functions: (a) Directly extracting object data from the BIM model; (b) establishing bidirectional links between BIM objects and product datasheets, and between materials and LCA databases; and (c) calculating the embodied carbon and exporting data. The application to a case study shows that the tool is flexible and can be adjusted/extended according to the available data. However, the more complex the objects, the less flexible and the more vendor dependent the tool becomes. It is concluded that using this or similar tools can enable transparency for the impact of HVAC systems in all stages of the building lifecycle.
The LCA results show that the embodied impact for the HVAC system is three times higher than the targets provided by SIA 2040. Although the SIA targets, which are based on a global factor and the heated floor area of the building, can serve as a tool to assess the impact of the HVAC system, they do not make it possible to reduce the impact by optimizing the HVAC system towards embodied carbon. The proposed approach allows, for the first time, different variants to be compared in detail, and empowers designers to perform product comparisons, identify hot spots in their HVAC design, and make environmentally conscious decisions.
Nevertheless, more research is needed to determine the environmental impact of different kinds of HVAC systems. Once this knowledge is generated, current embodied carbon targets should be reviewed, and the necessity of setting stricter targets should be assessed against the 2050 vision of net-zero embodied carbon.
|
2020-04-23T09:06:52.395Z
|
2020-04-21T00:00:00.000
|
{
"year": 2020,
"sha1": "7a85c81ba00ad3f855763fa6cffac42d182cd971",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/12/8/3372/pdf?version=1587465085",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "df24c14b34e1fcaa47c5c520f831d261d215d578",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
265347480
|
pes2o/s2orc
|
v3-fos-license
|
LncRNA AC125982.2 regulates apoptosis of cardiomyocytes through mir-450b-3p/ATG4B axis in a rat model with myocardial infarction
Background The occurrence and disability of myocardial infarction (MI) are on the rise globally, making it a significant contributor to cardiovascular mortality. Irreversible myocardial apoptosis plays a crucial role in causing MI. Long non-coding RNAs (LncRNAs) are key regulators of the cardiac remodeling process. Therefore, it is necessary to explore the effect of LncRNAs on cardiomyocyte apoptosis in MI. Methods A rat MI model was constructed, and LncRNA-seq and qPCR analyses were used to determine differentially expressed genes in heart tissue of rats in the MI and sham groups. The miRanda software was used to predict the binding sites of LncRNA-miRNA and miRNA-mRNA pairs, which were further verified by dual luciferase assay. The LncRNA-miRNA-apoptosis pathway was further validated using hypoxia-exposed primary cardiomyocytes. Results Compared to the sham group, 412 LncRNAs were upregulated and 501 LncRNAs were downregulated in MI-rat heart tissues. Among them, LncRNA AC125982.2 was most significantly upregulated in MI-rat heart tissues and hypoxic cardiomyocytes. Knockdown of AC125982.2 and ATG4B expression reversed hypoxia-induced apoptosis. In addition, transfection of miR-450b-3p inhibitor attenuated the protective effect of AC125982.2 knockdown. Moreover, we found that AC125982.2 modulated ATG4B expression by acting as a sponge for miR-450b-3p. Conclusion Upregulated AC125982.2 expression regulates ATG4B by sponging miR-450b-3p, promoting cardiomyocyte apoptosis and contributing to rat MI development.
Introduction
The prevalence of cardiovascular disease (CVD) remains high globally, and CVD is still the primary cause of mortality across the globe [1]. There is an urgent need to explore the etiology and develop targeted drugs for CVD from the root cause to reduce the burden of CVD on global healthcare [2]. Myocardial infarction (MI), the most common type of CVD, is the result of sudden and prolonged hypoxia and ischemia of cardiomyocytes, which eventually leads to myocardial cell death, apoptosis and inflammation, and is irreversible [3,4]. Cardiomyocytes are terminally differentiated cells that have lost the ability to undergo mitosis, so the regenerative capacity of the adult mammalian heart is limited, and failure to open occluded vessels as soon as possible can lead to serious consequences, even death [4,5]. Therefore, finding molecular targets at the onset of infarction to achieve targeted control and treatment may be an important part of reducing mortality in MI.
Long-chain noncoding RNAs (LncRNAs) are RNAs that do not encode proteins and are widely found in eukaryotes, where they have diverse biological roles. These roles involve gene imprinting, chromatin remodeling, cell cycle regulation, and competition with microRNAs for binding to endogenous RNAs [6-8]. It has been revealed that LncRNAs are involved in regulating physiological and pathological processes in the heart through a variety of molecular mechanisms and have become an important component of epigenetic and transcriptional regulatory pathways during cardiac development, as well as key to the initiation and progression of MI [9,10]. Previous studies have demonstrated that lncRNA expression is most altered during the acute phase of MI compared to controls and that these dysregulated LncRNAs are highly correlated with the pathways of MI initiation and progression [10]. LncRNA CAIF inhibited cardiac autophagy and attenuated MI by targeting and blocking p53-mediated myocardin transcription [11]. Increased expression of LncRNA Snhg1 in human and mouse fetal and MI hearts improved cardiac function after MI by forming a positive feedback loop with c-Myc to maintain activation of PI3K/Akt signaling and promote proliferation of cardiomyocytes [12]. More importantly, changes in LncRNA ZFAS1 independently predicted MI and were considered a novel biomarker of MI, and knockdown of LncRNA ZFAS1 alleviated ischemic systolic dysfunction in MI [13,14].
An increasing number of competing endogenous RNAs (ceRNAs) have emerged that regulate various diseases, including MI, through miRNA inhibition [15-17]. MiRNAs regulate gene expression by binding to complementary sequences at the 3′ end of mRNAs and inhibiting their translation or promoting their degradation, while LncRNAs can regulate gene expression by interacting with miRNAs as miRNA sponges to inversely regulate the expression of mRNAs [18]. It has been proposed that lncRNA Malat1 plays a vital role in the control of endothelial cell regeneration and can act as a ceRNA for miR-26b-5p, forming Mfn1 signaling to regulate mitochondrial dynamics and endothelial function and repair cardiac function after MI [19]. It was determined that exosomes can mediate intercellular communication after MI, and LncRNA KLF3-AS1 present in exosomes released by human mesenchymal stem cells can regulate Sirt1 by sponging miR-138-5p as a ceRNA, thereby inhibiting cell pyroptosis and attenuating the progression of MI [20]. In diabetic myocardial ischemic mice, silencing of lncRNA AK139328 upregulated the expression of miR-204-3p and suppressed autophagy, thereby attenuating myocardial ischemia in diabetic mice [21].
In this study, we constructed a rat MI model, took heart tissues for high-throughput sequencing of LncRNAs, screened differential LncRNAs by bioinformatics, and predicted miRNA action targets. The molecular mechanism of AC125982.2/miR-450b-3p/ATG4B regulation of myocardial infarction was validated in hypoxia-induced primary cardiomyocytes.
Construction of rat model of myocardial infarction (MI)
The MI rat model was constructed as previously described [22]. SD rats aged 7-8 weeks and weighing 250 g were purchased from Guangdong Medical Laboratory Animal Center (production license SCXK018-0002). Twelve SD rats were randomly divided into two groups: sham group (n = 6) and MI group (n = 6). SD rats in the MI group were administered 1.5% sodium pentobarbital (30 mg/kg) for anesthesia and were then artificially ventilated with a ventilator. Next, the chest was incised at the third rib space on the left side, and the left anterior descending (LAD) coronary artery was tied off near its origin to cause MI. The anterior wall of the left ventricle was observed in real time, and a successful infarct model was confirmed when the anterior wall changed color from red to white and became cyanotic and swollen. The sham group of SD rats underwent the identical procedure, except for the absence of ligation of the LAD coronary artery. After four weeks, all SD rats were anesthetized and heart tissue was collected. The Institutional Animal Care and Use Committee of the First Affiliated Hospital of Guangzhou Medical University approved all animal experiments.
LncRNA sequencing
LncRNA sequencing was performed by Yongnuo Biotechnology Co., Ltd (Guangzhou, China). Briefly, total RNA was extracted from sham- and MI-group rat heart tissues (n = 3) using Trizol reagent (Invitrogen, USA), and equal amounts of RNA extracted from the same groups were mixed. Then, ribosomal RNA was depleted with the Epicentre RiboMinus kit (Invitrogen). Paired-end read libraries were then prepared with the NEBNext® Ultra™ Directional RNA Library Prep Kit (NEB, Beverly, MA, USA) for Illumina® following the manufacturer's instructions. After library construction, the LncRNAs were sequenced on the Illumina HiSeq™ 3000 high-throughput sequencing platform. The sequencing depth for each sample was approximately 9 G. The raw reads were filtered to obtain clean reads, and the clean reads were mapped to the genome using HISAT2 (version 2.0.4). P-values were corrected for multiple hypothesis testing, and the p-value threshold was determined by controlling the FDR. Differentially expressed genes were defined by p-value ≤ 0.05 and fold change > 1.5. The AC125982.2-miRNA and miR-450b-3p-mRNA interactions were predicted using miRanda 3.3 software (http://www.microrna.org).
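For concreteness, the differential-expression filter described above can be sketched as follows; the table, column names, and values are hypothetical and do not reproduce the authors' pipeline output.

    import numpy as np
    import pandas as pd

    # Hypothetical DE results table (column names are assumptions):
    # log2 fold change of MI vs sham, and FDR-adjusted p-value.
    df = pd.DataFrame({
        "lncRNA": ["AC125982.2", "AABR07007026.1", "lnc_x"],
        "log2fc": [1.3, -0.9, 0.2],
        "padj": [0.003, 0.02, 0.40],
    })

    # Paper's thresholds: FDR-adjusted p-value <= 0.05 and fold change > 1.5,
    # i.e. |log2FC| > log2(1.5).
    mask = (df["padj"] <= 0.05) & (df["log2fc"].abs() > np.log2(1.5))
    deg = df[mask]
    print("up:", deg[deg["log2fc"] > 0]["lncRNA"].tolist())
    print("down:", deg[deg["log2fc"] < 0]["lncRNA"].tolist())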
M-mode echocardiography
SD rats in the sham and MI groups were anesthetized using 2% isoflurane gas. The left ventricular ejection fraction (LVEF) and left ventricular fractional shortening (LVFS) were determined by transthoracic echocardiography (SonixTOUCH, Canada).
Isolation of primary cardiomyocytes
Neonatal rat hearts were collected within 1-2 days after birth. After rinsing, hearts were cut into small pieces and digested using 1 mg/mL collagenase IA and 0.12% trypsin at 37 °C for 30 min. Subsequently, the digestion process was stopped with DMEM/F12 medium (Gibco, USA) containing 10% calf serum (Gibco, USA), and single cells were generated by filtering through a 40 μm filter. After centrifugation, cells were resuspended in DMEM/F12 medium with 10% calf serum, 100 IU/mL penicillin, and 100 μg/mL streptomycin (Gibco, USA) and were cultured at 37 °C with 5% CO2 in a humidified cell incubator.
Establishment of in vitro model of MI
The in vitro model of MI was established by hypoxia [23]. Briefly, primary cardiomyocytes were placed at 37 °C, 5% CO2 in the incubator as the normal control group, and primary cardiomyocytes were placed at 37 °C, 3% O2 in the incubator as the hypoxic group.
RNA extraction and quantitative PCR (qPCR) analysis
Total RNA was extracted according to the RNA-easy Isolation Reagent Kit (R701-01, Vazyme, China). For lncRNA and mRNA, cDNA synthesis was performed using the HiScript II Q RT SuperMix for qPCR kit (R222-01, Vazyme, China). For miRNA, stem-loop RT primers were used to produce cDNA for specific mature miRNAs. Primers are shown in Table 1. An ABI QuantStudio 5 instrument was used to conduct qPCR analysis with qPCR SYBR Green Master Mix (Q121-02, Vazyme, China). The 2^−ΔΔCt method was used to obtain expression values, with lncRNA and mRNA expression normalized to GAPDH and miRNA expression normalized to U6.
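The 2^−ΔΔCt calculation itself is a few lines of arithmetic; the sketch below uses made-up Ct values for illustration only.

    def ddct_fold_change(ct_target_treated, ct_ref_treated,
                         ct_target_control, ct_ref_control):
        """Relative expression by the 2^-ddCt method.

        dCt = Ct(target) - Ct(reference gene, e.g. GAPDH or U6);
        ddCt = dCt(treated) - dCt(control); fold change = 2 ** -ddCt.
        """
        dct_treated = ct_target_treated - ct_ref_treated
        dct_control = ct_target_control - ct_ref_control
        return 2.0 ** -(dct_treated - dct_control)

    # Made-up Ct values for illustration only:
    fc = ddct_fold_change(ct_target_treated=24.1, ct_ref_treated=18.0,
                          ct_target_control=25.4, ct_ref_control=18.1)
    print(f"fold change vs control: {fc:.2f}")  # ~2.3-fold in this toy example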
TUNEL staining
Primary cardiomyocyte apoptosis assay was performed using the TUNEL kit (G3250, Promega, USA). The cell culture medium was removed and the cells were rinsed once with PBS. Permeabilization was achieved by adding 0.2% Triton X-100 for 5 min. In the meantime, the equilibration solution was made and subsequently introduced to the cells, followed by incubation at room temperature for 10 min. Next, the cells were placed in a moist chamber at 37 °C and subjected to incubation with the TdT reaction mixture for 60 min. Following three rounds of PBS washing, the cell nuclei were labeled by adding Hoechst solution. Olympus fluorescence microscopy was used to examine the cells, and 3-5 fields of view were recorded in order to determine the quantity of apoptotic cells.

Table 1. The list of primer sequences for qPCR (primer names and sequences, 5′-3′).
Protein extraction and western blot experiments
Primary cardiomyocytes were cultured in cell culture dishes and treated under different conditions. After adding RIPA lysis solution (P0013B, Beyotime, China), the cells were scraped off with a scraper and transferred to 1.5 mL EP tubes for lysis on ice for 20 min, then centrifuged at 13,300 rpm for 20 min. The supernatant was carefully collected, and the protein concentration was determined using the Bradford kit (Pierce, USA). Protein samples of 30 μg were electrophoresed on SDS-PAGE gels and subsequently transferred onto PVDF membranes. Non-specific sites were blocked by incubation for 1 h with 5% bovine serum albumin (Sigma). The transferred membrane was cut and probed separately with antibodies overnight at 4 °C. The primary antibody information was as follows: anti-ATG4B (ab154843, Abcam), anti-ATG4D (ab64734, Abcam), anti-Bax (23959, Santa Cruz Biotechnology), anti-caspase3 (9661, Cell Signaling Technology), anti-Bcl2 (3498S, CST). Equal protein loading was verified by probing with anti-GAPDH antibodies (25778, Santa Cruz Biotechnology). After washing with PBS, the corresponding secondary antibody was added and incubated at 37 °C for 1 h. Protein bands were visualized with ultrasensitive chemiluminescence reagents, and band densities were analyzed using ImageJ software.
Statistical analysis
All experimental data are presented as mean ± standard deviation (SD) with at least three biological replicates for each experiment. Statistical analysis was performed using GraphPad Prism 8.0. Differences between groups were analyzed using Student's t-test or one-way ANOVA, and p values < 0.05 were considered statistically significant.
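As a sketch of the comparisons described above, the following Python lines run a Student's t-test and a one-way ANOVA with SciPy on hypothetical triplicate values (not the study's data).

    from scipy import stats

    # Hypothetical triplicate measurements (not the study's data):
    sham = [62.1, 60.4, 63.0]   # e.g. LVEF (%) in sham rats
    mi = [31.5, 29.8, 33.2]     # e.g. LVEF (%) in MI rats

    # Two-group comparison: Student's t-test
    t, p = stats.ttest_ind(sham, mi)
    print(f"t = {t:.2f}, p = {p:.4f}  (significant if p < 0.05)")

    # More than two groups: one-way ANOVA
    group_c = [1.0, 1.1, 0.9]    # e.g. relative expression, control
    group_h = [2.3, 2.5, 2.4]    # hypoxia
    group_si = [1.4, 1.3, 1.5]   # hypoxia + siRNA
    f, p_anova = stats.f_oneway(group_c, group_h, group_si)
    print(f"F = {f:.2f}, p = {p_anova:.4f}")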
MI-rat model was successfully constructed
First, we established the MI animal model in SD rats by ligation of the proximal LAD coronary artery. As shown in Fig. 1A, the cardiac function of rats in the MI and sham groups was examined by M-mode echocardiography. The results showed normal rhythmic systolic/diastolic motion in the sham group, whereas the MI group showed a significant lack of contraction. We evaluated the area of cardiac fibrosis with Masson-trichrome staining, which showed that the area of fibrosis in the hearts of rats in the MI group was greater than in the sham group (Fig. 1B). In comparison to the sham group, the MI rats exhibited notably reduced LVEF and LVFS (Fig. 1C and D). These data suggest that we successfully constructed the MI-rat model.
Screening and clustering of differentially expressed LncRNAs in MI-rat heart
Then we performed high-throughput sequencing of LncRNAs in heart tissues of MI and sham group rats. The obtained LncRNAs are presented as heat and scatter plots; 412 LncRNAs were up-regulated and 501 LncRNAs were down-regulated in the MI group compared with the sham group (Fig. 2A and B, Supplementary Table 1). We validated the expression of the top five upregulated LncRNAs in the heart tissue of sham and MI rats using qPCR experiments. Notably, AABR07001555.1, AABR07007026.1, AABR07052523.2, and AC125982.2 showed significant elevation in the heart tissues of MI rats (Fig. 2C).
LncRNA AC125982.2 was upregulated in hypoxia-induced primary cardiomyocytes
In order to thoroughly examine the role of LncRNAs in MI, primary cardiomyocytes were obtained from neonatal rats for investigation. TUNEL staining revealed that hypoxia induced an increase in apoptotic primary cardiomyocytes, which was more pronounced after 36 h of hypoxia than after 24 h (Fig. 3A and B). Thus, we chose 36 h of hypoxia in primary cardiomyocytes as the in vitro model of MI. The qPCR assay was then used to validate the expression of the five up-regulated LncRNAs in primary cardiomyocytes after 36 h of hypoxia, where the expression of AABR07007026.1 was down-regulated, the expression of AABR07052523.2 was elevated 1.8-fold, and the expression of AC125982.2 was elevated 2.4-fold (Fig. 3C). These results suggest that some deregulated lncRNAs in the animal model could be recapitulated in the cell model. AABR07052523.2 and AC125982.2 may play important roles in MI. Since AC125982.2 was the most significantly altered lncRNA between the two groups, we selected it for further study.
The LncRNA AC125982.2-miR-450b-3p-ATG4B axis plays an important role in MI cells
We predicted the target genes of miR-450b-3p by bioinformatics methods, and the results are shown in Supplementary Table 2. Among them, we found that Atg4b and Atg4d, which were up-regulated in the sequencing results of myocardial tissue from rats with myocardial infarction, may be miR-450b-3p target genes. We used western blot techniques to detect ATG4B and ATG4D protein levels. The results showed that ATG4B expression was elevated and ATG4D expression was decreased in hypoxic primary cardiomyocytes. After interfering with AC125982.2 under both normoxic and hypoxic conditions, ATG4B was down-regulated. However, ATG4D was only down-regulated after interfering with AC125982.2 in normoxia and was unchanged after interfering with AC125982.2 in hypoxia (Fig. 5A, Supplementary Fig. 1). The alteration in ATG4B levels met our expectations. Thus, we further examined the expression of ATG4B in the myocardial tissues of MI rats and found that it was significantly up-regulated in the MI group (Fig. 5B). These results suggest that ATG4B may be a target gene downstream of AC125982.2. In addition, we carried out a dual luciferase reporter gene experiment, and the results confirmed that miR-450b-3p directly modulates ATG4B (Fig. 5C). We also examined the effect of interfering with AC125982.2 on hypoxia-induced apoptosis by detecting the expression of apoptosis-related proteins. The results indicated that the expression of Bax and cleaved caspase3 was elevated, while the expression of Bcl-2 was decreased in hypoxic primary cardiomyocytes, indicating hypoxia-induced apoptosis in cardiomyocytes. After interfering with the expression of AC125982.2, the expression of Bax and cleaved caspase3 was down-regulated and the expression of Bcl2 was restored, which suggests that interference with AC125982.2 rescued hypoxia-induced apoptosis (Fig. 5D, Supplementary Fig. 2). We further investigated the effects of AC125982.2 siRNA and miR-450b-3p inhibitor on each other's expression in hypoxic primary cardiomyocytes by qPCR. The results revealed that the expression of AC125982.2 and miR-450b-3p was significantly downregulated by the siRNA and the inhibitor, respectively, and the expression of the two was negatively correlated (Fig. 5E). The effect of AC125982.2-miR-450b-3p on ATG4B was further probed in hypoxic primary cardiomyocytes, where the expression of ATG4B was down-regulated after interference with AC125982.2 and up-regulated after inhibition of miR-450b-3p, and the expression of ATG4B was suppressed after interference with both AC125982.2 and miR-450b-3p compared with inhibition of miR-450b-3p alone (Fig. 5F, Supplementary Fig. 3). TUNEL results demonstrated that in hypoxic primary cardiomyocytes, the number of apoptotic cells decreased significantly after interference with AC125982.2, while the number of apoptotic cells increased after inhibition of miR-450b-3p, and the number of apoptotic cells decreased with simultaneous inhibition of AC125982.2 and miR-450b-3p compared with inhibition of miR-450b-3p alone (Fig. 5G). The above results suggest that LncRNA AC125982.2 mediates cardiomyocyte apoptosis during MI via miR-450b-3p/ATG4B.
Interference with ATG4B attenuates hypoxia-induced apoptosis of cardiomyocytes
We interfered with ATG4B expression in hypoxic cardiomyocytes. Western blot assay revealed that hypoxia resulted in upregulation of ATG4B, Bax and cleaved caspase3 expression and downregulation of Bcl-2 expression, and that interfering with ATG4B partially reversed the effect of hypoxia on the expression of apoptosis-related proteins (Fig. 6, Supplementary Fig. 4). The above results suggest that ATG4B plays an important role in cardiomyocyte apoptosis.
Discussion
Ischemic heart disease is a serious heart disease that poses a grave threat to human life, and MI caused by coronary artery obstruction is its most common form [24]. MI occurs when there is a decrease or stoppage of blood flow to the heart, resulting in cardiac fibrosis, reduced cardiac function, and ultimately death due to heart failure [25]. It has been demonstrated that ligation of the proximal LAD coronary artery in rats to induce MI resulted in a distinct decrease in cardiac function four weeks post-myocardial infarction compared to the sham-operated group, along with myocardial fibrosis [22,26]. In the present study, which continues this MI-rat modeling approach, we found that after four weeks of ligation of the proximal LAD coronary artery, MI rats showed significantly reduced cardiac function along with myocardial fibrosis.

The regulatory mechanisms of many functional LncRNAs have now been widely elucidated. In MI, LncRNAs have been shown to be involved in disease regulatory processes such as apoptosis and autophagy through multiple pathways [27]. In hypoxia-induced cardiomyocytes, LncRNA ANRIL knockdown regulates IL-33/ST2-mediated apoptosis in cardiomyocytes [28]. AC125982.2 is an identified 353 bp lincRNA located on chromosome 7 (rn6 chr7:130473912-130474067), consisting of two exons, and has no homology to known human sequences. We performed high-throughput LncRNA sequencing of MI-rat heart tissues and confirmed by validation that AC125982.2 was significantly elevated in MI-rat heart tissues and in hypoxia-induced primary cardiomyocytes. In hypoxia-induced primary cardiomyocytes, the expression of the pro-apoptotic factor Bax was down-regulated and the anti-apoptotic factor Bcl-2 was up-regulated after interference with AC125982.2, exerting a protective effect against myocardial injury.
Extensive evidence suggests that LncRNAs play a crucial role in numerous diseases by acting as sponges, thereby diminishing miRNA expression levels and consequently elevating the levels of target genes [29]. Here, we used bioinformatics to predict miRNAs bound to AC125982.2 and their downstream target genes. It was shown that miR-450b-3p expression was down-regulated in hypoxia-induced primary cardiomyocytes, while interference with AC125982.2 up-regulated miR-450b-3p levels, which is consistent with the ceRNA hypothesis. The involvement of miR-450b-3p in different diseases has been verified. In gastric cancer (GC) tissues, miR-450b-3p was found to have low expression, and its upregulation hindered the malignant progression of GC by controlling KLF7 [30]. MiR-450b-3p exhibited decreased expression in hepatocellular carcinoma (HCC) tissues, correlating with unfavorable overall survival and disease-free survival among HCC patients [31].
To investigate the downstream molecular mechanisms of miR-450b-3p in MI, we predicted the target genes of miR-450b-3p by bioinformatics. Among them, Atg4b and Atg4d were upregulated in the sequencing results of myocardial tissues from rats with myocardial infarction. Their changes were consistent with the ceRNA hypothesis, and so they were chosen for further study. Atg4 is a crucial macroautophagy/autophagy-related cysteine protease family that either cleaves Atg8 homologs for their subsequent lipidation or delipidates Atg8 homologs from the autophagosome to regulate autophagy. There are four homologs: Atg4A, Atg4B, Atg4C, and Atg4D. Among them, mounting evidence points to Atg4B's better catalytic effectiveness toward the Atg8 substrate, its regulation of the autophagic process, and its significance in the emergence of a number of human malignancies [32,33]. Overexpression of ATG4B promoted cisplatin-induced autophagy and inhibited cell apoptosis in osteosarcoma [34]. Inhibition of miR-490-3p could promote autophagy to reduce myocardial ischemia-reperfusion injury by upregulating ATG4B [35]. MiR-139-5p significantly promoted cell apoptosis and inhibited cell autophagy by targeting ATG4D in myocardial ischemia and reperfusion [36]. However, in a number of studies, excessive activation of autophagic processes resulted in apoptotic cell death [37]. Atg4D overexpression induces autophagy and apoptosis in HeLa cells treated with hydrogen peroxide [38]. Our results show that ATG4B expression was elevated and ATG4D expression was decreased in hypoxic primary cardiomyocytes. This suggests that ATG4B and ATG4D may play different roles in hypoxia-induced cardiomyocyte death. Both ATG4B and ATG4D expression were down-regulated after interfering with AC125982.2 expression under normoxic conditions, but ATG4B expression was down-regulated and ATG4D expression was unchanged after interfering with AC125982.2 expression under hypoxic conditions. This means that AC125982.2 regulates ATG4B in the same way but regulates ATG4D differently under normoxic and hypoxic conditions. Dual luciferase assays and functional rescue experiments confirmed that miR-450b-3p directly binds ATG4B and mediates the regulatory effect of AC125982.2 on ATG4B. Unfortunately, AC125982.2 is a novel lncRNA in rats without any homology to human sequences, restricting its direct usefulness in explaining human MI. Nevertheless, it is still important to study it. This will not only help to better understand the role of lncRNAs in mammalian biology, but also provide useful references for the study of human disease.
Fig. 1. Pathological characteristics of rats with myocardial infarction (MI). Rats were subjected to sham operation or myocardial infarction (MI) for four weeks. (A) M-mode ultrasound image of the heart in sham and MI rats. (B) Masson-trichrome staining and heart infarct area (%) in the hearts of rats in the sham and MI groups. (C) Left ventricular ejection fraction (LVEF, %) in the hearts of rats in the sham and MI groups. (D) Left ventricular shortening fraction (LVFS, %) in the hearts of rats in the sham and MI groups. **p < 0.001, n = 3/group.
Fig. 2. Screening and clustering of differentially expressed LncRNAs in MI-rat heart. After establishment of the rat model, heart tissues were collected, followed by lncRNA sequencing and qPCR validation. Heat map (A) and scatter plot (B) of differential lncRNA expression in the sham and MI groups. Fold change > 1.5, p < 0.05. (C) Upregulation of the top 5 differentially expressed LncRNAs verified by qPCR in heart tissues of rats in the sham and MI groups. *p < 0.05, **p < 0.01, n = 3/group.
Fig. 3. Validation of differentially expressed LncRNAs in hypoxia-treated primary cardiomyocytes. Rat primary cardiomyocytes were treated under a hypoxic environment in vitro for different times, mimicking hypoxia after MI in vivo. (A) TUNEL staining to detect apoptosis in primary cardiomyocytes under hypoxia. (B) Counting of the number of apoptotic cells in (A). Versus control group, **p < 0.01, n = 3/group. (C) The expression of the top five up-regulated LncRNAs was verified by qPCR in cardiomyocytes subjected to 36 h of hypoxia. *p < 0.05, **p < 0.01, n = 3/group.
Fig. 6. Effect of interfering with ATG4B expression on cell apoptosis. Western blot assay for ATG4B, Bax, cleaved caspase3 and Bcl-2 protein levels in primary cardiomyocytes treated with hypoxia for 36 h after 24 h of interference with ATG4B. **p < 0.01, ns represents no significant difference, n = 3/group.
|
2023-11-22T16:42:56.905Z
|
2023-11-01T00:00:00.000
|
{
"year": 2023,
"sha1": "1b09730aaf873c197150156fbf92c7897a3d3155",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844023096755/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31d8309736605e15d47e7e1b0ed9bbfab75e91d1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
18366518
|
pes2o/s2orc
|
v3-fos-license
|
Supersymmetry and Dark Matter
We examine supergravity models with grand unification at M_G possessing R parity invariance. Current data has begun to significantly constrain the parameter space. Thus for mSUGRA, accelerator data places a lower bound on m_1/2 of m_1/2 ≳ 300 GeV, while astronomical data on the amount of relic dark matter narrowly determines m_0 in terms of m_1/2 (for fixed values of tan β and A_0) due to co-annihilation effects. Additional new data could fix the parameters further. Thus the parameter space is sensitive to the muon magnetic moment anomaly, δa_µ, and if δa_µ lies 1σ above its current central value, it would exclude mSUGRA, while if it lies 1σ below (but is still positive) it pushes the SUSY spectrum into the TeV domain. The B_s → µ+µ− decay is seen to be accessible to the Tevatron Run II with branching ratio sensitivity of Br[B_s → µ+µ−] > 6.5 × 10^−9 with 15 fb^−1/detector, and a value of 7(14) × 10^−8 obtainable with 2 fb^−1 would be sufficient to exclude mSUGRA for tan β < 50(55). Measurements of B_s → µ+µ− can cover the full mSUGRA parameter space for tan β > 40 if δa_µ > 11 × 10^−10, and combined measurements of B_s → µ+µ−, a_µ and m_h (or alternately the gluino mass) would effectively determine the mSUGRA parameters for µ > 0. Detector cross sections are then within the range of planned future dark matter experiments. Non-universal models are also discussed, and it is seen that detector cross sections there can be much larger, and can be in the DAMA data region.
Introduction
It is generally expected that the Standard Model will break down at energies above LEP, and signals of new physics will occur for energies ≳ 100 GeV - 1 TeV. The nature of this new physics is one of the crucial questions of particle physics. Simultaneously, astronomical data has determined with good accuracy the amount of dark matter in the universe, though the nature of that dark matter remains one of the crucial questions of astronomy. Supersymmetric theories with R parity invariance offer an explanation to both puzzles as well as a window on the cosmology of the very early universe at times t ≈ 10^−7 s.
Unfortunately, supersymmetric models depend upon a large number of parameters, and even the simplest model, mSUGRA, depends on four parameters and one sign. But fortunately, supersymmetry applies to a wide number of phenomena, and it is now becoming possible to significantly restrict the parameter space. The general MSSM, with over 100 free parameters (63 real parameters) is not very predictive. We consider here therefore, models, based on supergravity grand unification at M G ≃ 2 × 10 16 GeV (which have both theoretical and experimental motivation). We examine first the current status of the simplest model, mSUGRA [1,2], and what might be obtained from future measurements of the muon magnetic moment, g µ − 2, the Higgs mass, and the B decay B s → µ − µ + . We will then look at non-universal models with non-universality in the Higgs or third generation of squarks and sleptons. For all these cases, the lightest neutralino,χ 0 1 , is the dark matter candidate, and this will also strongly constrain the SUGRA parameter space.
mSUGRA Model
We briefly review the mSUGRA model, which depends on four parameters and one sign, and thus is the most predictive of the SUSY models. We take these parameters to be the following: m_0 (the universal scalar soft breaking mass at M_G); m_1/2 (the universal gaugino mass at M_G); A_0 (the universal cubic soft breaking mass at M_G); and tan β = <H_2>/<H_1> (the ratio of the two Higgs VEVs at the electroweak scale). The sign of the Higgs mixing parameter µ (appearing in the superpotential as µH_1H_2) is the remaining parameter. (We note at the electroweak scale that the χ̃⁰₁ and lightest chargino, χ̃±₁, masses are related to m_1/2 by m_χ̃⁰₁ ≅ 0.4 m_1/2 and m_χ̃±₁ ≅ 0.8 m_1/2.) We examine this model with the following parameter ranges: m_0 ≤ 1 TeV; m_1/2 ≤ 1 TeV; 2 ≤ tan β ≤ 55; and A_0 ≤ 4 m_1/2. The above bound on m_1/2 corresponds to a gluino mass range of m_g̃ ≤ 2.5 TeV, which is also the upper mass reach of the LHC.
In the early universe, the neutralino can annihilate via s-channel Z, and h, H, and A neutral Higgs bosons (h is the light Higgs, and H(A) are the heavy CP even (odd) Higgs), and also through t-channel sfermion diagrams. However, if a second particle becomes nearly degenerate with theχ 0 1 , one must include it in the early universe annihilation processes. This leads to the co-annihilation phenomena. In SUGRA models with gaugino grand unification, this accidental near degeneracy occurs naturally for the light stau,τ 1 .
One can see this analytically for low and intermediate tan β, where the renormalization group equations (RGE) can be solved analytically. One finds for ẽ_R, the right selectron, and the χ̃⁰₁ at the electroweak scale the results m²_ẽR = m²_0 + 0.15 m²_1/2 − sin²θ_W M²_W cos 2β (1) and m²_χ̃⁰₁ ≅ 0.16 m²_1/2 (2), where the last term of Eq. (1) is approximately +(37 GeV)². Thus for m_0 = 0, ẽ_R becomes degenerate with χ̃⁰₁ at m_1/2 ≅ 370 GeV, and co-annihilation thus begins at m_1/2 ≅ (350 GeV - 400 GeV). As m_1/2 increases, m_0 must be raised in lock step (to keep m_ẽR > m_χ̃⁰₁). More precisely, it is the τ̃₁ which is the lightest slepton, and this particle dominates the co-annihilation phenomena. In general, co-annihilation implies that one ends up with relatively narrow allowed corridors in the m_0-m_1/2 plane, with m_0 closely correlated with m_1/2, increasing as m_1/2 increases.
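A few lines of arithmetic make the onset of co-annihilation concrete. The sketch below uses the approximate mass relations quoted above, so it is an illustration of the corridor logic rather than a full spectrum calculation.

    import numpy as np

    # Approximate electroweak-scale mass relations from the text
    # (valid for low/intermediate tan beta); all masses in GeV.
    def m_selectron_sq(m0, m_half):
        return m0**2 + 0.15 * m_half**2 + 37.0**2   # last term ~ +(37 GeV)^2

    def m_neutralino_sq(m_half):
        return 0.16 * m_half**2

    # For m0 = 0, degeneracy requires 0.15 m^2 + 37^2 = 0.16 m^2:
    m_half_degenerate = 37.0 / np.sqrt(0.16 - 0.15)
    print(f"degeneracy at m_1/2 ~ {m_half_degenerate:.0f} GeV")  # ~370 GeV

    # For larger m_1/2, m0 must rise in lock step to keep m_selectron > m_chi:
    for m_half in (400, 600, 800, 1000):
        m0_max = np.sqrt(m_neutralino_sq(m_half) - 0.15 * m_half**2 - 37.0**2)
        print(f"m_1/2 = {m_half:4d} GeV -> corridor near m0 ~ {m0_max:.0f} GeV")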
Dark matter detection of Milky Way neutralinos incident on the Earth depends upon the neutralino -proton cross section. For detectors with nuclear targets containing heavy nuclei, this is dominated by the spin independent cross section. The basic quark diagrams involve s-channel squarks, and t-channel h and H diagrams. Thus σχ0 1 −p decreases with increasing m 1/2 and m 0 (which as we have seen above increase together), and also increases with tan β (due to the Higgs couplings to the d quark). Thus the maximum cross section will occur at high tan β, and low m 1/2 , m 0 .
In order to carry out calculations in SUGRA models accurately, it is necessary to take into account a number of corrections, and we list the important ones here: We use two-loop gauge and one-loop Yukawa RGE from M_G to the electroweak scale (which we take as (m_t̃1 m_t̃2)^1/2, the geometric mean of the stop masses), and QCD RGE below it for the light quark contributions. Two-loop and pole mass corrections are included in the calculation of m_h. One-loop corrections to m_b and m_τ [3,4] are included, which are important for large tan β. Large tan β NLO SUSY corrections to b → sγ [5,6] are included. All stau-neutralino co-annihilation channels are included in the relic density calculation, with the analysis valid for the large tan β regime [7,8,9].
Note that we do not include Yukawa unification or proton decay constraints in the analysis as these depend sensitively on post-GUT physics, about which little is known. In fact in string or M-theory analyses with grand unification, while the unification of the coupling constants occur as in SUGRA models, the Yukawa unification or proton decay constraints do not generally hold [10].
Current Experimental Constraints
In order to see what the currently allowed parameter space is, one must impose all the present experimental constraints. However, three of these acting together produce significant limitations, and we mention these here: (1) Higgs mass. The current LEP bound on the light Higgs is m h > 113.5 GeV [11]. However, the theoretical calculation of m h [12] may have a 2 -3 GeV error, and so we will conservatively interpret this bound to mean m h (theory) > 111 GeV.
(2) b → sγ decay. There is some model dependence in extracting the b → sγ branching ratio from the CLEO data, and so we will take a relatively broad range around the current CLEO central value [13]. (3) χ̃⁰₁ relic density. The relic density is measured in terms of Ω = ρ/ρ_c, where ρ is the mass density, ρ_c = 3H²/8πG_N, and H = (100 km/s Mpc) h is the Hubble constant. Analyses of the cosmic microwave background now give a fairly accurate measurement of the amount of CDM, i.e. Ω_CDM h² = 0.139 ± 0.026 [14]. We take a 2σ range around the central value: 0.087 ≤ Ω_χ̃⁰₁ h² ≤ 0.191. These three constraints now combine to greatly restrict the mSUGRA parameter space. Thus the m_h bound for low tan β and the b → sγ constraint for higher tan β produce a lower bound of m_1/2 ≳ (300-400) GeV, and consequently m_χ̃⁰₁ ≳ (120-160) GeV. This means that most of the parameter space is in the τ̃₁-χ̃⁰₁ co-annihilation domain in the relic density calculation, and thus to satisfy the relic density bound, m_0 is approximately determined by m_1/2 (for fixed tan β, A_0). This implies that as m_1/2 increases, so does m_0, and so generally one has that σ_χ̃⁰₁−p is a decreasing function of m_1/2.
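The 2σ window quoted above follows directly from the stated central value and error, and the m_1/2 bound translates into a neutralino mass bound through m_χ̃⁰₁ ≅ 0.4 m_1/2; a small check in Python:

    # Derive the 2-sigma relic density window from the stated CMB measurement.
    omega_cdm, sigma = 0.139, 0.026
    lo, hi = omega_cdm - 2 * sigma, omega_cdm + 2 * sigma
    print(f"2-sigma window: {lo:.3f} <= Omega h^2 <= {hi:.3f}")  # 0.087 to 0.191

    # Translate the accelerator bound on m_1/2 into a neutralino-mass bound
    # using the gaugino-unification relation m_chi ~= 0.4 * m_1/2.
    for m_half in (300.0, 400.0):
        print(f"m_1/2 = {m_half:.0f} GeV -> m_chi ~= {0.4 * m_half:.0f} GeV")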
We consider first µ > 0. Figs. 1 and 2 exhibit the effects discussed above. Thus in Fig. 1, for tan β = 10, A_0 = 0, one sees that the Higgs mass bound requires m_1/2 ≳ 300 GeV, and one sees the narrow m_0 band allowed by the co-annihilation effects. The short vertical lines show the expected dark matter detection cross sections of σ_χ̃⁰₁−p = 5 × 10^−9 pb (left) and 1 × 10^−9 pb (right). Thus m_0 is determined by m_1/2 to within ∼ 40 GeV. For µ < 0, an accidental cancellation in σ_χ̃⁰₁−p occurs over a wide range of tan β [15,7], giving large regions where σ_χ̃⁰₁−p < 10^−10 pb and hence probably inaccessible to future detectors. This is exhibited in Fig. 3 [7], where the minimum cross sections are plotted as one scans the allowed parameter space, for tan β = 6 (dashed), 8 (dotted), 10 (solid), 25 (large dash). In this case the spin independent cross section can fall below the very small spin dependent cross section, where such cancellations do not occur [16].
There now exist two other experiments that might restrict the parameter space even more: the BNL E821 [17] g_µ − 2 experiment measuring the muon magnetic moment, and the decay B_s → µ+µ−, which may be observable at the Tevatron Run IIB (or possibly at B-factories), and we turn to consider these next.
Muon Magnetic Moment Anomaly
The BNL E821 experiment has measured the muon magnetic moment with exceedingly high accuracy. When compared with the calculations of a_µ expected from the Standard Model (with corrected sign of the hadronic light-by-light scattering contribution [18]), there remains a small discrepancy: δa_µ = a_µ(exp) − a_µ(SM) = (26 ± 16) × 10^−10 (5), which is a 1.6σ effect. While this is not enough to presume that a real effect has been discovered, it is still interesting to examine the effects such an anomaly would have, for two reasons: first, the errors will shortly be significantly reduced, and second, SUGRA models imply the existence of an anomaly of just this size. Much of the uncertainty in the calculation of a_µ comes from the hadronic part, a_µ(had). However, it is possible to express this by a dispersion relation in terms of integrals over the experimental cross section σ(e+e− → hadrons). Recent experiments from CMD-2, VEPP-2M and Beijing [19,20,21] have re-measured these cross sections with greatly improved accuracy. Further, the BNL E821 experiment has about six times more data, which they are currently analyzing, and which should greatly reduce the statistical part of the error. Thus one may expect the error in Eq. (5) to be reduced by a factor of two or more in the near future. If the central value of the anomaly were to remain unchanged, the effect would become more statistically significant.
From the theoretical side, it has been known for some time that mSUGRA predicts an important contribution to a_µ [22,23]. This is illustrated in Fig. 4, which shows the allowed regions in the m_0-m_1/2 plane and assumes that the entire a_µ anomaly is due to SUSY. The upper right (blue) region corresponds to a_µ less than 1σ below the current central value, while the diagonal line corresponds to the central value, which falls at the lower edge of the allowed part of the parameter space. The current central value thus corresponds to a relatively low mass SUSY spectrum, easily accessible to the LHC. In fact, too big a value of δa_µ (i.e. δa_µ ≳ 40 × 10^−10) would be sufficient to exclude mSUGRA [24]. On the other hand, an a_µ less than 1σ below the current central value would imply a heavy SUSY mass spectrum. Further, if δa_µ is positive, then µ > 0 [25,26], and this would eliminate the very low dark matter detection cross sections shown in Fig. 3. Thus the final value of δa_µ will play an important role in deciphering the SUSY parameter space.

Figure 5: Illustrated 95% C.L. limits on the branching ratio for B_s → µ+µ− at CDF in Run II as a function of integrated luminosity. Solid (Case A) and dashed (Case B) curves are based on different assumptions on the signal selection efficiency and the background rejection power [30].
B s → µ + µ −
The B_s → µ+µ− decay offers an additional window for investigating the mSUGRA parameter space. This process has been examined within the MSSM framework [27,28] and more recently using mSUGRA [29]. We consider here predictions for this decay in mSUGRA, but include all the current experimental constraints listed in Sec. 3 (which are necessary to see what predictions occur [30]). The B_s → µ+µ− decay is of interest since the Standard Model prediction for the branching ratio is quite small [28] (Br[B_s → µ+µ−] = (3.1 ± 1.4) × 10^−9), while the SUSY contribution can become quite large for large tan β. This is because the leading diagrams grow as (tan β)³, and hence the branching ratio as (tan β)⁶. What further makes this decay interesting is that it is possible to find a set of cuts so that CDF (and probably also D0) may be able to observe it in Run 2B (with 15 fb^−1 of data). Fig. 5 shows the CDF limit on the branching fraction as a function of the luminosity [30]. (The solid curve is a conservative estimate [30], and the dotted curve a more optimistic possibility.) One sees that CDF will be sensitive to a branching ratio of Br > 1.2 × 10^−8 (and the combined CDF and D0 data to Br > 6.5 × 10^−9). mSUGRA analysis then shows that CDF would be sensitive to this decay for tan β ≳ 30. We can now examine the effects of the combination of all data. Figs. 6 and 7 [30] show the parameter space for tan β = 50 and tan β = 40, respectively, for A_0 = 0 and µ > 0. One sees that there is an upper bound on Br[B_s → µ+µ−] to be consistent with mSUGRA, and a branching ratio > 7(14) × 10^−8 would be able to exclude mSUGRA for tan β < 50(55). As can be seen from Fig. 5, such a branching ratio could be seen with only 2 fb^−1. More generally, Fig. 6 shows that the entire parameter space can be covered for tan β = 50, A_0 = 0 if a_µ > 11 × 10^−10, and Fig. 7 shows the same for tan β = 40 using the combined CDF and D0 data. The effect of varying A_0 is shown in Fig. 8, where A_0 = −2m_1/2 and tan β = 40. Here again the full parameter space can be covered.
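The steep tan β dependence is what drives the Run II reach: the amplitude grows as (tan β)³, the rate as (tan β)⁶. The toy scaling below, normalized to an assumed (placeholder) reference point, illustrates the growth; it is not a prediction of the analysis.

    # Toy illustration of the (tan beta)^6 growth of Br[Bs -> mu mu] in SUSY.
    # The reference branching ratio below is an assumed placeholder.
    BR_REF, TANB_REF = 1.0e-8, 30.0   # assumed: Br = 1e-8 at tan beta = 30

    def br_scaled(tan_beta):
        return BR_REF * (tan_beta / TANB_REF) ** 6

    for tb in (30, 40, 50, 55):
        print(f"tan beta = {tb:2d}: Br ~ {br_scaled(tb):.1e}")
    # The rate grows by ~x6 from tan beta 30 -> 40 and ~x21 from 30 -> 50,
    # which is why large tan beta is accessible at Run II sensitivities.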
One sees from the above graphs that future measurements should be able to determine the basic parameters of the mSUGRA model. Since the measured values of a_µ, Br[B_s → µ+µ−] and m_h intersect the allowed dark matter band at a single "point" for a given A_0, this will determine m_0 (approximately), m_1/2 and tan β. (It may be better to use the gluino mass in place of m_h, since the parameter space is very sensitive to m_h.) Predictions of the SUSY spectrum and dark matter detection rates would then follow.
Non-Universal Models
One can generalize mSUGRA by allowing non-universal soft breaking at M_G in the third generation of squarks and sleptons and also in the Higgs sector. If universality of the gaugino masses is maintained, then stau-neutralino co-annihilation will still play an important role. However, new effects can occur, since the non-universality affects the size of the µ parameter. The µ parameter governs the Higgsino content of the χ̃⁰₁, and as µ² decreases (increases), the Higgsino content increases (decreases). Since σ_χ̃⁰₁−p depends on the interference between the Higgsino and gaugino parts of the χ̃⁰₁, σ_χ̃⁰₁−p will correspondingly increase (decrease). A second effect also occurs. As the Higgsino content of the χ̃⁰₁ increases, the χ̃⁰₁-χ̃⁰₁-Z coupling is strengthened, allowing a new annihilation channel to become important (in addition to the τ̃₁-χ̃⁰₁ co-annihilation channel). As a simple example we consider the case where at M_G one chooses m²_H2 = m²_0 (1 + δ_2), with all other soft breaking masses universal. Fig. 9 shows the allowed region in the m_0-m_1/2 plane for tan β = 40, δ_2 = 1, A_0 = 0, µ > 0. One sees the usual narrow stau-neutralino co-annihilation band at relatively low m_0, but in addition there is a higher m_0 (and low m_1/2) region satisfying all constraints due to the new Z-channel annihilation process. Fig. 10 shows σ_χ̃⁰₁−p as a function of m_1/2 for tan β = 40, A_0 = m_1/2, µ > 0. The Z-channel corridor now reaches up to the DAMA data region [31] for low m_1/2 (m_χ̃⁰₁ ≃ 0.4 m_1/2), and so if the DAMA results are confirmed, it would point to a non-universality of this type.
Conclusions
We have considered here SUGRA models with R-parity invariance and grand unification at M_G ≃ 2 × 10^16 GeV. Current data have begun to constrain these models significantly. For mSUGRA (and many non-universal models), accelerator data (m_h and b → sγ) place lower bounds on m_1/2 such that m_1/2 ≳ 300 GeV (or m_χ̃_1^0 ≳ 120 GeV), while astronomical data on the amount of dark matter narrowly determine m_0 in terms of m_1/2 (for each A_0 and tan β).
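The neutralino bound quoted here follows directly from the approximate mass relation m_χ̃_1^0 ≃ 0.4 m_1/2 used above; a one-line Python check, purely for arithmetic transparency:

def neutralino_mass_gev(m_half_gev: float) -> float:
    """Approximate lightest-neutralino mass from the gaugino mass parameter."""
    return 0.4 * m_half_gev

print(neutralino_mass_gev(300.0))  # 120.0 GeV, matching the quoted bound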
Thus additional new data could begin to fix the mSUGRA parameters, and so determine the mass spectrum expected at the LHC, dark matter detection rates, etc. In particular, the muon magnetic moment anomaly a_µ and the branching ratio Br[B_s → µµ] could combine with m_h or a measurement of the gluino mass to fix the parameters completely. A more accurate determination of a_µ should be available shortly, and we have seen that the Tevatron Run II should be sensitive to the B_s decay for Br > 1.2 (0.65) × 10^-8 for the CDF (or combined CDF and D0) data, which would cover a large part of the parameter space for tan β > 30. Non-universal models in the Higgs and third-generation soft breaking masses offer the possibility of a new Z boson s-channel annihilation in the early universe. Such possibilities can significantly increase the neutralino-proton detection cross section up to the DAMA region of 10^-6 pb.

[Figure 10 caption: σ(χ̃_1^0-p) (with m_χ̃_1^0 ≃ 0.4 m_1/2) for tan β = 40, µ > 0, m_h > 114 GeV, A_0 = m_1/2, δ_2 = 1. The lower curve is for the τ̃_1-χ̃_1^0 co-annihilation channel, and the dashed band is for the Z s-channel annihilation allowed by non-universal soft breaking. The curves terminate at low m_1/2 due to the b → sγ constraint. The vertical lines show the termination at high m_1/2 for δa_µ > 11 × 10^-10 [24].]
Acknowledgement
This work was supported in part by National Science Foundation Grant PHY-0101015.
Current Use of Steroids in Critical Care
The integrity of the hypothalamic-pituitary-adrenal (HPA) axis is a major factor in the host's response to stress. During sepsis the HPA axis is activated, and the ACTH released from the pituitary gland enhances adrenal activity, resulting in a high plasma cortisol level. This state of continuous adrenal secretory activity leads to relative adrenal insufficiency (RAI). In view of the complexity of the stress response, the level of plasma cortisol present in the body is often insufficient to meet the demands of the inflammatory response, and the resulting cortisol levels may be high, normal or low. This state is also known as critical illness-related corticosteroid insufficiency (CIRCI). Effects of CIRCI include alteration of the systemic inflammatory response and altered cardiovascular function. Cortisol plays an important role in controlling vascular tone, as it increases sensitivity to vasopressors. A number of neuropeptides also act on the HPA axis. The most commonly known is vasopressin, which has been shown to increase endogenous adrenal ACTH secretion. Apelin and copeptin are two other neuropeptides that act on the vasopressin precursor molecule. They decrease the production of vasopressin, thereby contributing to the host's response to stress, and may account for the variable plasma cortisol level [1-6]. The altered HPA axis leads to a biphasic pattern during critical illness, with uncoupling of ACTH and cortisol that may reflect an alternative pathway not mediated by ACTH. Cortisol activates the steroid receptor complex at the cellular level of many different genes and inhibits the synthesis of mediators of inflammation such as cytokines, interleukins, cell adhesion molecules, TNF-α, and NF-κB. In critical illness these pathways are impaired. Some proposed mechanisms include decreased production at all levels of the HPA axis, dysfunction of the glucocorticoid receptors, and structural damage to the adrenal gland from hemorrhage or infarction. Failure of clinical improvement in sepsis has been associated with failure to activate the steroid receptor complexes that downregulate the transcription of inflammatory cytokines. Systemic inflammation induces corticosteroid resistance or tissue resistance, despite adequate cytoplasmic and serum cortisol levels [1,7-10].
RAI is defined as a baseline cortisol level < 20 mcg/dL. Since baseline cortisol levels in patients with severe sepsis can be high, an incremental change in cortisol of < 9 mcg/dL after stimulation may be more useful than the baseline measurement alone.
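Purely to make the two numerical criteria above easy to scan (the stimulation test that yields the increment is described in the next paragraph), they can be written as a few lines of Python; the function is an illustrative restatement of the text, not a clinical tool:

def suggests_rai(baseline_cortisol: float, stimulated_cortisol: float) -> bool:
    """Screen for relative adrenal insufficiency (illustrative only).

    baseline_cortisol: plasma cortisol before corticotropin (mcg/dL)
    stimulated_cortisol: highest post-corticotropin value (mcg/dL)
    Criteria as summarized in this review:
      - absolute criterion: baseline < 20 mcg/dL
      - incremental criterion: rise of < 9 mcg/dL over baseline
    """
    increment = stimulated_cortisol - baseline_cortisol
    return baseline_cortisol < 20 or increment < 9

print(suggests_rai(15, 30))  # True  (low baseline)
print(suggests_rai(36, 42))  # True  (increment of 6 < 9)
print(suggests_rai(25, 40))  # False (adequate baseline and response)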
A corticotropin stimulation test is used to assess RAI by administering 250 mcg of corticotropin and obtaining a basal plasma cortisol level. Samples are then drawn 30 and 60 minutes after the dose to measure plasma cortisol levels. A 3-level prognostic classification system (described below) is used to evaluate RAI.

Steroids have been studied for many decades. In the 1980s, some studies reported steroid replacement therapy as having a beneficial outcome in treating septic patients, particularly when routine high-dose methylprednisolone (up to 30 mg/kg) was administered.
Abstract
Septic shock is characterized by an uncontrolled systemic inflammatory response that contributes to organ dysfunction, failure and eventually death. The importance of the adrenal glands for survival under conditions of physiologic stress has been known since the early 20th century. Clinical studies explored the potential therapeutic role of corticosteroids in the treatment of sepsis and septic shock. Despite controversies on the benefit-to-risk ratio, they are widely used. The longstanding adoption of corticosteroids in the treatment of severe sepsis likely relies on the prompt reversal of septic shock often seen at the bedside. This review was designed to provide readers with a clear understanding and rationale for using corticosteroids, while presenting a review of the Surviving Sepsis Guidelines and the results from the implementation of the Surviving Sepsis Campaign.
In the early 1990s, there was renewed interest in studying the effect of a prolonged course of low-dose steroids, the incidence of RAI, and factors associated with mortality. There was also special interest in evaluating cortisol levels and the cortisol response to the corticotropin test. Two studies by Annane sought to answer these questions. The first study based the prognostic value of cortisol levels on a short corticotropin stimulation test in patients with septic shock. In this study patients were randomized within 8 hours of the onset of septic shock. Based on the study results, a 3-level prognostic classification system was created. The combination of basal cortisol level (< or > 34 mcg/dL) and the highest value of the cortisol response to corticotropin (< or > 9 mcg/dL) defined three different patterns of HPA axis activation in septic shock. These patterns were associated with three different outcomes:

1. 30% of patients had adequate HPA axis activation, with a basal cortisol level < 34 and a cortisol response > 9: lowest risk of death and median survival > 28 days.
2. 20% of patients had a basal cortisol level > 34 with occult adrenal insufficiency (cortisol response < 9): highest risk of death and a median survival time of 5 days.
3. 50% of patients had a basal cortisol level < 34 and a cortisol response < 9: intermediate risk of death and median survival of 12 days.
This study concluded that, at the onset of septic shock, the basal plasma cortisol level and the plasma response to corticotropin were independent predictors of 28-day mortality [10].
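To make the three patterns easier to compare at a glance, the two cutoffs (basal cortisol 34 mcg/dL, corticotropin response 9 mcg/dL) and the reported outcomes can be restated as a small Python sketch; the cutoffs and labels come from the study summary above, and the fourth logical combination (high basal level with an adequate response) is not assigned a group in that summary:

def annane_prognostic_group(basal: float, response: float) -> str:
    """Map cortisol values at septic shock onset to the reported risk groups.

    basal: basal plasma cortisol (mcg/dL)
    response: rise in cortisol after corticotropin (mcg/dL)
    """
    if basal < 34 and response > 9:
        return "adequate HPA activation: lowest risk, median survival > 28 days"
    if basal > 34 and response < 9:
        return "occult adrenal insufficiency: highest risk, median survival 5 days"
    if basal < 34 and response < 9:
        return "intermediate risk, median survival 12 days"
    return "pattern not assigned a group in the study summary"

print(annane_prognostic_group(basal=40, response=4))
# occult adrenal insufficiency: highest risk, median survival 5 days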
The second study addressed the use of a 7-day treatment course of low-dose hydrocortisone and fludrocortisone. The cortisol response was defined as the difference between the baseline concentration and the highest concentration after the test. In this study, RAI (i.e., non-responders) was defined by a response of ≤ 9 mcg/dL. This trial found a significant reduction in the risk of death in patients with septic shock and RAI, without an increase in adverse events; there were no differences in adverse events between the two groups [13].
In a multicenter, double-blind, placebo-controlled trial, the Corticus study randomized patients to hydrocortisone or placebo within 72 hours of the onset of septic shock. The Corticus study found that hydrocortisone treatment did not decrease mortality or time to shock reversal. In addition, hydrocortisone treatment was associated with an increased incidence of superinfections, including new episodes of sepsis or septic shock. Adverse events such as hyperglycemia and hypernatremia were also noted. A difference in study design may explain the variation in results between the two trials [3,9,14].
In the Annane study, patients with a primary source of infection in the lungs were enrolled within 8 hours of the onset of septic shock. Patients remained hypotensive despite fluid resuscitation and vasopressor therapy. In contrast, the Corticus study enrolled patients with a primary source of infection in the GI tract within 72 hours of the onset of septic shock. For these patients, septic shock was manifested by hypotension or a vasopressor requirement for at least one hour. This led to a disparity in the severity of illness between the two trials, with the Annane group having the sicker patients as measured by SAPS II scores and by the mortality in the control group. These observations not only raised the issue of timing, but also questioned whether sicker patients are more likely to benefit from steroid therapy overall and from earlier administration of steroid therapy [2,4,15,16].
In a published review of the risks and benefits of corticosteroid therapy, a prolonged course (> 5 days) of low-dose corticosteroid treatment was favored. It demonstrated a reduction in 28-day mortality and an overall reduction in hospital mortality. The length of stay in the Intensive Care Unit (ICU) was shorter; however, there was no difference in hospital length of stay. These patients also showed no evidence of an increased risk of gastrointestinal bleeding (GIB), superinfections or neuromuscular weakness; however, there was an increase in the incidence of hyperglycemia and hypernatremia [1,17].
Another study, addressing a 3-day versus 7-day course of low-dose hydrocortisone in patients with septic shock and relative adrenal insufficiency, found no difference between the two groups in the primary endpoint of 28-day mortality [18]. Following these trials, a multi-disciplinary task force developed a consensus statement on the diagnosis and management of corticosteroid insufficiency in critical illness. This statement was incorporated into the Surviving Sepsis Campaign Treatment Guidelines in 2008. It concluded that hydrocortisone may be given to adult septic shock patients who remain unresponsive to fluids and vasopressors after one hour, with the following recommendations:

1. An ACTH test is not necessary to identify the subset of adults with septic shock who should receive hydrocortisone.

2. Dexamethasone should not be used.

3. Fludrocortisone may be given if hydrocortisone is not available.

4. Wean steroids when vasopressors are no longer required.
Since its implementation, the effectiveness of the guidelines and treatment bundles has been evaluated in a prognostic manner. One study implemented the two-part sepsis bundle in a community hospital and found a positive impact on clinical outcomes in the number of days on vasopressor therapy, dialysis and mortality [26]. It is important to note that corticosteroid treatment is just one part of the Surviving Sepsis Campaign guidelines. Therefore, we cannot conclude that corticosteroid treatment alone is beneficial for survival, but rather that the implementation of the whole package is.
In 2012, the Surviving Sepsis Campaign published its results since the implementation of its guidelines. The adjusted hospital mortality was significantly higher in the group that received low-dose steroids for septic shock compared to those who did not. This increase in mortality was consistently higher even in patients who received corticosteroids within 8 hours of the onset of septic shock [16,27,28]. Reports from two large international sepsis registries, EDUSEPSIS and PROGRESS, have not supported the use of low-dose steroids in sepsis and septic shock [29]. Moreover, a Bayesian analysis (a statistical methodology used to address clinical controversies) supports these findings and likewise does not support the use of low-dose steroids in sepsis [30].
Practice regarding the use of low-dose corticosteroids in the treatment of septic shock continues to vary. Many studies have reported a significant improvement in time to shock reversal after treatment with corticosteroids, despite evidence showing no benefit in decreasing mortality [8,16,31]. Several meta-analyses have failed to reproduce the positive results seen in previous studies. The use of corticosteroids in the treatment of sepsis and septic shock in the setting of RAI remains controversial [3,15,27,32]. Despite these observations, low-dose steroids continue to be incorporated in the management of sepsis and septic shock, as corticosteroids have shown a faster time to shock reversal [8,33-35]. These large sepsis registries provide a description of management practices and outcomes based on the guidelines and are therefore secondary analyses [27,31]. The ambiguity remains. Large randomized controlled trials are needed to define the role of low-dose steroids, identify patients who would benefit, and define a recommended dose and duration in treating sepsis and septic shock. The ADRENAL trial is under way and aims to address various prescribing practices of corticosteroids, as well as outcome at 90 days, the patient's length of ICU stay, and quality of life at 6 months [36]. The collaboration of new trials and the Surviving Sepsis Campaign will provide new clinical evidence to incorporate into current guidelines.
A new chapter for a better Bioscience Reports
Abstract As Bioscience Reports enters its fifth decade of continuous multidisciplinary life science publishing, here we present a timely overview of the journal. In addition to introducing ourselves and new Associate Editors for 2021, we reflect on the challenges the new Editorial Board has faced and overcome since we took over the editorial leadership in June of 2020, and detail some key strategies on how we plan to encourage more submissions and broader readership for a better and stronger journal in the coming years.
The Biochemical Society is one of the U.K.'s largest single-discipline learned societies, promoting the advancement of the molecular biosciences since 1911. Bioscience Reports is published on behalf of the Biochemical Society by Portland Press and is committed to the Society ideals by publishing sound science, providing a suitable home for valid data and findings in the life sciences. We welcome reproducible, appropriately replicated and controlled experiments, with conclusions adequately supported by the presented results. We encourage submissions in all areas of the molecular life sciences, both basic and applied.
Bioscience Reports is committed to the Biochemical Society aims of disseminating and sharing scientific knowledge, encouraging discourse and debate amongst scientists. To fully democratise our published research, Bioscience Reports has also been committed to full open access since 2012, with all papers published under the most permissive CC BY licence. All journal profits are returned to the Biochemical Society, supporting Society grant-funding and educational charitable endeavours. Since 2020, Portland Press has offered subscribing institutions a combined transformative 'Read & Publish' option, facilitating institutions towards full open access publishing [1].
Although Bioscience Reports covers a broad range of fields, the journal has maintained an impact factor of 2.942 (issued in 2020), with a 5-year impact factor of 3.112. We feel this puts the journal in an excellent position moving forwards, hopefully allowing us to continue to encourage quality submissions, but to also grow and expand into areas that are currently under-represented, or are becoming of increased topicality.
Changes
The start of this decade has seen many changes in the scientific community, not least the global COVID-19 pandemic, which has impacted the ability of laboratories to conduct research and of scientists to travel internationally to disseminate findings, alongside the direct effects of SARS-CoV-2 on scientists themselves, their friends and families. This has also been reflected in research featured in the journal, with a number of studies on SARS-CoV-2 recently published [2-4], including one of our most accessed review articles [5].
Bioscience Reports itself has undergone significant changes in 2020, not least the retirement of our previous Editor-in-Chief Wanjin Hong. We thank Wanjin for his excellent service to the journal for over 10 years, and we take this opportunity to introduce ourselves as the incoming Editor-in-Chief (Weiping Han) and Deputy Editor-in-Chief (Christopher Cooper), with our expertise being in the molecular basis of metabolic disorders and associated complications, and in the biochemistry and structural molecular biology of genome stability, respectively. To assist in handling increasing numbers of submissions, 2020 also saw a significant expansion in the number of Associate Editors, with 11 recruited to Bioscience Reports. This has expanded the subject areas covered by the Editorial Board, including diverse topics such as stem cells, the tumour microenvironment, cell signalling and ADP-ribosylation, through to chemical biology and drug development. These appointments also diversify the Editorial Board's geographical distribution (Figure 1), reflecting the international readership of the journal. As the journal continues to grow, we hope to ultimately expand the Editorial Board to approximately 50 Associate Editors, increasing those from under-represented regions (particularly the Americas, Asia and Africa), alongside bringing the journal closer towards gender parity (currently male 65%, female 35%).
We are also broadening our journal scope by encouraging submissions in areas not frequently reported in Bioscience Reports, as we outline later. Whilst we encourage submissions reporting new data, we also welcome in silico studies. However, here we have recently refined our scope: to strengthen 'omics' studies analysing existing publicly available datasets, the Editorial Board may request experimental validation prior to peer review, further increasing the soundness of published material.
Challenges
A recent significant challenge faced by the journal has been the increased prevalence of falsified or fraudulent paper submissions, particularly involving manipulated images such as stock or invented images [6], or duplicated images found both within and between different papers, particularly those pertaining to Western blot, microscopy and flow cytometry data. A particularly worrying trend is the rising frequency of paper mill submissions [7]: the wholesale contract industrialisation or ghostwriting of fabricated papers, the systematic falsification of research data, or the sharing of once potentially valid (or falsified) data across multiple unrelated publications (exemplified by the so-called 'tadpole paper mill', named for the Western blot bands resembling the eponymous under-developed amphibians). Whilst paper mills and methods to detect them have been excellently reviewed by colleagues elsewhere [6,8,9], they remain a clear and present threat to journals in all fields [7]. We especially note trends of paper mill submissions in the field of cancer biomarkers, including studies on small RNAs, bioinformatics analyses of public datasets and single gene knockout studies, as observed elsewhere [10]. We (as Editor-in-Chief and Deputy Editor-in-Chief), the Editorial Board, Portland Press and the Biochemical Society firmly believe in an accurate scientific record, and ensuring the integrity of the material in the journal is of the highest priority. Hence, following a sudden increase in community-driven notifications from PubPeer (and other sources) of published potentially fraudulent and paper mill articles in 2020, we acted rapidly to assess the severity of cases and issued Expressions of Concern to papers where it was felt the issues were both significant and unlikely to have an immediate resolution. This further alerted readers to these papers under investigation, and we accordingly published corrections or retracted articles at the earliest timepoint, in accordance with our publishing policies and COPE guidelines. We believe Bioscience Reports has taken great steps towards combating such fraudulent submissions, with our other immediate responses including more stringent submission requirements, such as insisting on raw Western blot data and institutional email accounts. We continue to mandate ORCID identifiers for corresponding authors, and encourage all authors to include their ORCID ID.

[Figure caption: Paper acceptance rate versus submissions in Bioscience Reports. Data from 2020 include approximately 300 papers rejected before assignment to an Associate Editor, received by Bioscience Reports between June and December 2020. Prior to June 2020, papers rejected before assignment to an Associate Editor were not counted in the reported statistics.]
In order to better identify issues during peer review, the Editorial Board has also received guidance from experts in the field of image manipulation such as Elisabeth Bik, and we are continually reviewing and updating our editorial processes and policies, with additional checks performed by the Editorial Office upon submission of all articles and again prior to acceptance. Portland Press' new Data Policy also guides authors in transparent research and data presentation procedures, alongside strengthening our editorial and peer review processes. Whilst Bioscience Reports has seen significant growth in the last 5 years, with an over seven-fold increase in submissions compared with 2016 (Figure 2), we expect some of this growth reflects an increase in paper mill submissions. However, we feel the changes outlined here will help to maintain quality, with a current acceptance rate of < 25%. Such combined efforts will help to prevent the future publication of unsound research; concerns of this kind contributed to the rejection of 385 submitted papers with paper mill, image or authorship issues in the second half of 2020 alone, illustrating the scale of the problem.
Looking to the future, and why should authors choose Bioscience Reports?
Apart from continuing to ensure the validity and soundness of published articles and rising to counter increases in fraudulent submissions, the Editorial Board hopes to see a continued growth of submitted papers and increase the geographic spread of submissions, yet maintain a suitably high quality. In order to achieve this, we hope to grow the Editorial Board and increase representation and contacts in strategic locations, such as China and North America to handle increased submissions from those regions. Moreover, we hope to especially solicit submissions on a number of topic areas that have traditionally seen fewer submissions to date, to expand the subject base and further reflect the interests of our broad readership. Particular topic areas (with examples of recent published papers) are: protein biochemistry [11], basic molecular biology [12], plant biology [13], microbiology [14], neuroscience [15] and structural biology [16]. By way of approaching this aim we hope to propose a new short protein structural biology report format and we have initiated our first Collection in several years. This Collection will be guest edited by Sven Petersen (Karolinska Institutet, Sweden; Nanyang Technological University, Singapore) and Naama Geva-Zatorsky (Technion, Israel) focusing on the microbiome, with a number of review articles already commissioned.
We feel such broad research topics, potentially featuring interdisciplinary studies, are of interest to our wider readership, facilitating the exposure of authors' findings outside of their immediate research field compared with publishing in specialist journals. This, combined with our fully open access publishing model and the values of the journal outlined here, will we hope encourage authors to submit their articles to Bioscience Reports.
The journal would be nothing without the publishing team, and we thank all the staff at the Portland Press Editorial Office, particularly Niamh Lynch and Zara Manwaring, the outgoing and incoming Managing Editors, respectively. We also thank the reviewers and readers too for their support in helping to build Bioscience Reports, and we look forward to a successful (and post-COVID-19) future.
Metformin and 4SC‐202 synergistically promote intrinsic cell apoptosis by accelerating ΔNp63 ubiquitination and degradation in oral squamous cell carcinoma
Abstract Oral squamous cell carcinoma (OSCC) is the most common and aggressive epithelial tumor in the head and neck region with a rising incidence. Despite the advances in basic science and clinical research, the overall survival rate of OSCC remains low. Thus finding novel effective therapeutic agents for OSCC is necessary. In this study, we investigated the effects and mechanisms of combined metformin and 4SC‐202 in OSCC. Our results showed that metformin and 4SC‐202 synergistically suppressed the proliferation and promoted the intrinsic apoptosis of OSCC cells in vitro and in vivo. Importantly, the proteasome inhibitor MG132 impeded the ΔNp63‐decreasing effects after metformin and 4SC‐202 treatment, indicating that metformin and 4SC‐202 could promote the degradation of ΔNp63 protein. Moreover, ubiquitination level of ΔNp63 increased after metformin or/and 4SC‐202 administration. Furthermore, we revealed that ΔNp63 mediated anticancer effects of metformin and 4SC‐202, as overexpression or suppression of ΔNp63 could attenuate or facilitate the apoptosis rate of OSCC under metformin or/and 4SC‐202 treatment. Collectively, metformin and 4SC‐202 synergistically promote intrinsic apoptosis through accelerating ubiquitin‐mediated degradation of ΔNp63 in OSCC, and this co‐treatment can serve as a potential therapeutic scheme for OSCC.
| INTRODUCTION
Oral squamous cell carcinoma (OSCC) is the most common cancer of oral cavity, and it accounts for more than 90% of all oral tumors. 1 OSCC is a highly malignant tumor with a delayed clinical detection and poor prognosis. 2 Current therapeutic strategies for OSCC mainly include surgery, radiation therapy, and chemotherapy. However, despite advances in therapeutic strategies, survival rates of OSCC have not improved considerably in recent years. Therefore, it is necessary to identify novel effective therapeutic agents for OSCC treatment.
Protein acetylation plays a vital role in the epigenetic regulation of gene expression. Acetylation of histones generally results in gene activation, whereas deacetylation catalyzed by histone deacetylases (HDACs) results in chromatin condensation and downregulation of gene expression. 3 An imbalance between acetylation and deacetylation is responsible for the development and progression of a wide variety of cancers. 4,5 In OSCC, high expression of HDACs such as HDAC1, HDAC2 and HDAC6 has been shown to be associated with poor prognosis, advanced stage, larger tumor size, and lymph node metastasis in patients, [6-8] indicating that HDACs play a vital role in OSCC progression and could be potential therapeutic targets. Histone deacetylase inhibitors (HDACis) increase the level of acetylated lysine residues of core histones, which in turn reactivates the expression of silenced genes in cancerous cells. 9 HDACis such as suberoylanilide hydroxamic acid, apicidin, panobinostat, and valproic acid can inhibit growth and induce apoptosis in head and neck squamous cell carcinomas (HNSCC). [10-13] Moreover, as combined administration of chemotherapeutics can take advantage of each drug, further evaluation of HDAC inhibitors in combination with other chemotherapeutics or potential chemotherapy drugs in HNSCC may be justified.
Metformin, a low-cost antidiabetic drug, has been widely used to treat diabetes by inhibiting hepatic gluconeogenesis and enhancing glucose uptake in skeletal muscle. 14 Several studies have revealed that metformin treatment in diabetic patients is associated with lower cancer incidence. [15-17] Furthermore, metformin has been repurposed as an anticancer therapeutic, with low toxicity, for different types of cancers such as breast cancer, ovarian cancer, prostate cancer, bladder cancer and HNSCC. [18-22] Intriguingly, several studies have demonstrated that metformin can increase oral cancer cell sensitivity to chemotherapeutic drugs such as 5-FU and gefitinib, improving treatment efficacy and lowering doses and toxicity. 23,24 Collectively, metformin combined with other chemotherapeutics could be a potential candidate for the development of new treatment strategies for human OSCC.

4SC-202 is a novel selective class I histone deacetylase inhibitor. In vitro, 4SC-202 was found to inhibit survival and proliferation of several types of cancer cells, including hepatocellular carcinoma, colorectal cancer, medulloblastoma, and urothelial carcinoma cells; [25-28] and phase I clinical trials for the treatment of hematological malignancies revealed that 4SC-202 is safe and well tolerated, with signs of antitumor activity. 29 Thus, 4SC-202 seems to be a promising treatment strategy for oral cancer.
In this study, we evaluated the efficacy and mechanism of combined therapy with metformin and 4SC-202 in OSCC. Here, we found that metformin and 4SC-202 synergistically inhibited growth of OSCC in vitro and in vivo. Importantly, our results revealed that combined metformin and 4SC-202 treatment promoted intrinsic apoptosis by accelerating ΔNp63 ubiquitination and degradation in OSCC. These findings highlighted combined treatment of metformin and 4SC-202 as a promising potential therapeutic strategy for OSCC.
| Cell lines and cell culture
The human OSCC cell line HSC6 was kindly provided by J. Silvio Gutkind (NIH, Bethesda, MD), and HSC3 was obtained from Professor Qianming Chen (State Key Laboratory of Oral Diseases, Sichuan University, China). The cells had been tested and authenticated by DNA (STR) profiling. The HSC3 and HSC6 cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM, Gibco, Grand Island, NY) supplemented with 10% fetal bovine serum (FBS, Gibco). All cells were cultured at 37°C in a humidified atmosphere containing 5% CO2.
Information regarding reagents and antibodies are listed in Table S1.
| Cell proliferation assay
Cell proliferation was determined with the Cell Counting Kit-8 (CCK-8, Dojindo, Kumamoto, Japan). Briefly, 2 × 10^3 cells were seeded into 96-well plates and treated with different concentrations of metformin or/and 4SC-202 for 24, 48, or 72 hours. The absorbance was measured at 450 nm using a microplate reader (Genios TECAN, Männedorf, Switzerland). All experiments were performed in triplicate. The percentage of cell survival was calculated as follows: cell viability = OD (treated cells)/OD (control cells) × 100%.
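The viability calculation above amounts to a simple ratio of background-corrected absorbances; a minimal Python sketch with made-up OD values, purely for illustration:

def cell_viability_percent(od_treated: float, od_control: float) -> float:
    """CCK-8 viability as a percentage of the untreated control (A450)."""
    return od_treated / od_control * 100.0

print(cell_viability_percent(od_treated=0.62, od_control=1.10))  # ~56.4%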
The IC 50 values of the two cancer cell lines were calculated using sigmoidal dose response curve-fitting models (Graphpad Prism, La Jolla, CA). The effects of combination were estimated using the CalcuSyn software (Biosoft, Cambridge, UK). The combination index (CI) was the ratio of the combination dose to the sum of the single-agent doses at an isoeffective level. A CI value less than 1.0 indicates synergy, and a CI value equal to 1.0 defines additivity, whereas a CI value larger than 1.0 shows antagonism.
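The CI computation follows the standard Chou-Talalay (Loewe additivity) form, which the sentence above paraphrases: each drug's dose in the combination is divided by the dose of that drug alone producing the same effect, and the two ratios are summed. A minimal Python sketch, assuming the isoeffective doses have already been read off the fitted dose-response curves (the numbers are hypothetical):

def combination_index(d1: float, D1: float, d2: float, D2: float) -> float:
    """Chou-Talalay combination index at one effect level.

    d1, d2: doses of drugs 1 and 2 used together to reach the effect
    D1, D2: doses of drugs 1 and 2 alone reaching the same effect
    CI < 1 indicates synergy, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / D1 + d2 / D2

ci = combination_index(d1=6.0, D1=16.0, d2=0.15, D2=0.4)
print(f"CI = {ci:.2f}")  # CI = 0.75 -> synergy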
| Colony formation assay
For colony formation assays, 5 × 10^2 cells were seeded into 6-well plates and, 24 hours later, treated with 0.4 μmol/L 4SC-202 or/and 16 mmol/L metformin. After 10 days of culture, visible colonies were stained with crystal violet, and colonies with diameters above 1 mm were counted.
| Xenograft model
A total of 24 female BALB/c nude mice (Laboratory Animal Center of Sun Yat-sen University, Guangzhou, China), 4- to 6-week-old and weighing 14 to 16 g, were divided into four groups (control, metformin, 4SC-202, and combination; n = 6). For subcutaneous injection, 6 × 10^6 HSC6 cells were injected into the right forelimb of each nude mouse. Tumor volume (mm^3) was measured every 4 days with vernier calipers and calculated by the following formula: V = L × W^2/2, where L is the length and W the width. The mice were then sacrificed, and the tumors were collected and weighed 25 days after injection. The livers and kidneys were collected at the same time.
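As a quick check of the caliper formula above, a one-line Python helper (dimensions in mm; the example numbers are hypothetical):

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation for caliper measurements: V = L * W^2 / 2."""
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume_mm3(length_mm=10.0, width_mm=6.0))  # 180.0 mm^3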
All the animal procedures were conducted in accordance with the Guidelines for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee at Sun Yat-sen University.
| 4NQO-induced oral carcinogenesis mice model
A total of 24 female C57BL/6 mice (Nanjing Biomedical Research Institute of Nanjing University, Nanjing, China), 6-week-old and weighing 16 to 18 g, were divided into three groups (control, combination, and cisplatin; n = 8). Mice were given 50 μg/mL 4-nitroquinoline 1-oxide (4NQO, Sigma-Aldrich, Germany) daily in their drinking water for 16 weeks, and then distilled water for an additional 6 weeks. Fresh 4NQO or water was supplied every week. At week 18, mice with visible lesions of tongue dysplasia were treated for 4 weeks with one of the following: solvent as negative control; metformin (100 mg/kg, intraperitoneal injection) plus 4SC-202 (80 mg/kg, intragastric administration); or cisplatin (1 mg/kg, intraperitoneal injection) as positive control. All animals were euthanized at week 22, and tissue retrieval was done as described previously. All animals were monitored daily for general behavioral abnormalities, signs of toxicity, illness, or discomfort.
All the animal procedures were conducted in accordance with the Guidelines for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee at Sun Yat-sen University.
| Cellular apoptosis assay
For the apoptosis assay, the Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) double-staining apoptosis detection kit (Roche Diagnostics GmbH, Mannheim, Germany) was used. The OSCC cells were collected after treatment with 0.4 μmol/L 4SC-202 or/and 16 mmol/L metformin for 24 and 48 hours and stained with 5 μL Annexin V-FITC and 5 μL PI, according to the manufacturer's instructions. The acquisition and analysis of the apoptosis data were performed on a flow cytometer (FACS Calibur, BD Biosciences, USA). Basal apoptosis was determined using the same method in control cells.
| Terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling assay
Terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) assays were performed to identify apoptotic cells using the FragEL™ DNA Fragmentation Detection kit (Calbiochem, EMD Chemicals Inc, Gibbstown, NJ) according to the manufacturer's instructions. The sections were then stained with DAB solution and counterstained with hematoxylin. For the evaluation of the slides, 100 tumor or epithelial cells were counted per high-power field (original magnification, ×400).
| Western blot
The cells were lysed with RIPA buffer supplemented with protease inhibitor (Abcam, Cambridge, UK). A BCA protein assay kit (CWbiotech, Beijing, China) was used to quantify the protein concentrations of the lysates. The lysates were then mixed with loading buffer (4:1; CWbiotech) and denatured at 99°C for 5 minutes. The lysates (40 μg/lane) were separated on 10%-12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) gels and transferred onto PVDF membranes (Millipore, Billerica, MA). The membranes were blocked in 5% non-fat milk for 1 hour at room temperature and incubated with primary antibodies overnight at 4°C. Subsequently, the membranes were washed three times in 0.1% TBST and incubated with HRP-conjugated secondary antibody for 1 hour at room temperature. A highly sensitive chemiluminescence detection system (Millipore) was used to visualize the immunoreactive bands, and ImageJ (Bethesda, MD) was used to analyze the bands by densitometry. Similar results were obtained from three independent experiments.
| Co-immunoprecipitation
HSC3 and HSC6 cells were lysed in low-salt buffer (20 mmol/L Tris-HCl, pH 8; 137 mmol/L NaCl; 2 mmol/L EDTA; 1% NP40) supplemented with protease inhibitor cocktail (Abcam). The total protein concentrations were measured with the BCA protein assay kit. Equal amounts of protein lysate were immunoprecipitated with ΔNp63 antibody or control IgG overnight at 4°C, and then with 40 μL of protein A/G-agarose mix (Millipore) at 4°C for 16 hours with gentle rotation. Immunoprecipitates were washed three times with wash buffer, subjected to SDS-PAGE, and detected with ubiquitin antibodies.
| Statistical analysis
All statistical analyses were undertaken with SPSS 20.0 (SPSS, Chicago, IL) or GraphPad Prism 6.0 (La Jolla, CA). All results represent the means ± standard deviation from triplicate experiments performed in parallel, unless otherwise indicated. Statistical analyses were performed using one-way ANOVA or the Kruskal-Wallis test, where appropriate. A two-tailed value of P < 0.05 was considered statistically significant.
| Metformin and 4SC-202 synergistically suppressed OSCC proliferation and colony formation in vitro and in vivo
To determine the effects of metformin or 4SC-202 on cell viability, the CCK-8 assay was performed. The results showed that metformin and 4SC-202 suppressed the viability of OSCC cells in a time- and concentration-dependent manner (Figure 1A,B). The IC30-40 of HSC3 and HSC6 at 24 hours (metformin, 16 mmol/L; 4SC-202, 0.4 μmol/L) was selected for subsequent experiments in consideration of toxicity. Subsequently, the CI was calculated to determine the combination effect of 4SC-202 and metformin, revealing that metformin and 4SC-202 synergistically suppress OSCC proliferation, as CI < 1 (Figure 1C,D). Furthermore, colony-forming efficiency in OSCC was reduced (P < 0.05) after metformin and 4SC-202 treatment compared with single treatment, especially in the combination group (Figure 1E). Nude mice with HSC6 tumor xenografts were used to examine the antitumor activity of 4SC-202 or/and metformin treatment in vivo. The combination treatment showed a significant reduction in tumor volume and tumor weight (P < 0.05) compared to single treatment (Figure 1F). In addition, the body weight of mice treated with metformin or/and 4SC-202 remained unperturbed compared to the control group (Figure S1A), and no obvious pathological alteration in the liver or kidney was observed (Figure S1B). Overall, metformin and 4SC-202 synergistically suppress OSCC growth in vitro and in vivo.
| Metformin and 4SC-202 synergistically promoted intrinsic cell apoptosis in OSCC
Subsequently, we investigated whether apoptosis was induced by metformin and 4SC-202 in OSCC. Flow cytometry analysis showed that both metformin and 4SC-202 significantly increased the number of apoptotic cells compared to untreated cells. Intriguingly, the combination of metformin and 4SC-202 yielded the largest number of apoptotic cells (P < 0.05): the HSC3 apoptosis rate increased by 3.22 ± 0.05- and 23.32 ± 3.71-fold after 24 and 48 hours (Figure 2A,B), and the HSC6 apoptosis rate increased by 1.82 ± 0.12- and 5.88 ± 0.76-fold after 24 and 48 hours (Figure 2D,E). These results were further confirmed by western blot analysis. Combined treatment significantly increased the levels of intrinsic apoptosis markers such as P53, Bax, cleaved caspase-9, cleaved caspase-3 and cleaved PARP, and decreased the protein level of Bcl-2 in both HSC3 and HSC6 cells, compared to single treatment (Figure 2C,F). However, the key extrinsic apoptosis component caspase-8 showed no significant alteration after either drug treatment. In addition, TUNEL staining of the tumor xenograft model further confirmed that metformin or/and 4SC-202 treatment increased the cell apoptosis rate, with the combined treatment (10.19 ± 1.84%) showing the most dramatic increase (P < 0.01) compared to control (1.49 ± 0.68%) (Figure 2G). Thus, metformin and 4SC-202 synergistically promote intrinsic apoptosis in OSCC in vitro and in vivo.

[Figure 1 caption, part: Nude mice received injections of HSC6 cells and were treated with metformin (100 mg/kg) or/and 4SC-202 (80 mg/kg) for 25 days; tumor volume was then measured and tumors weighed. Data are shown as means ± SD from three independent experiments. * P < 0.05, ** P < 0.01, *** P < 0.001 vs control (one-way ANOVA).]
| Combined metformin and 4SC-202 treatment inhibited oral carcinogenesis in vivo
The 4NQO-induced mouse OSCC model was used to investigate the effect of metformin and 4SC-202 treatment on the development of oral cancer (Figure 3A). Our results revealed that metformin plus 4SC-202 or cisplatin reduced the lesion area of the tongue (P < 0.001) compared to the control, and metformin plus 4SC-202 had a stronger inhibitory effect (P < 0.01) than cisplatin (Figure 3B-D). Importantly, histological results indicated that the numbers of dysplasia and squamous cell carcinoma (SCC) lesions decreased significantly (P < 0.05) under metformin plus 4SC-202 or cisplatin treatment (Figure 3E). Interestingly, there was no significant body weight loss in either the combination group or the cisplatin group compared to the control group, although the cisplatin group tended to have more weight loss (P < 0.05) than the combination group (Figure S1C). Meanwhile, H&E staining indicated no obvious histopathological alteration in the liver and kidney tissues after drug treatment (Figure S1D). Overall, these results indicate that metformin and 4SC-202 suppressed the progression of oral carcinoma in the 4NQO-induced mouse model.
| Metformin and 4SC-202 combination promoted ΔNp63 degradation via ubiquitination
Overexpression of ΔNp63 isoforms of TP63 is observed in the majority of HNSCCs. 30 ΔNp63 acts as an oncogene that suppresses apoptosis while sustaining proliferation, and aberrant expression of ΔNp63 is associated with poor prognosis in OSCC patients. [30-32] Here, we detected the expression of ΔNp63 by western blot and RT-PCR after treatment with metformin or/and 4SC-202. The protein level of ΔNp63 decreased remarkably under 4SC-202 or metformin administration, especially in the combined group, but the mRNA level of ΔNp63 remained unperturbed (P > 0.05) (Figure 4A). Similarly, metformin plus 4SC-202 treatment reduced the level of ΔNp63 (P < 0.001) in vivo (Figure 4B,C). The results also revealed that cisplatin, like metformin and 4SC-202, could reduce ΔNp63 in vivo (Figure 4B). Moreover, cisplatin was found to reduce the mRNA and protein levels of ΔNp63 in HSC3 and HSC6 (Figure S2). However, compared to metformin plus 4SC-202, cisplatin led to a smaller decrease in ΔNp63 protein (Figure S2B). Subsequently, we examined whether ΔNp63 protein stability is regulated by proteasome-mediated degradation, using the proteasome inhibitor MG132. As the results showed, the decrease in ΔNp63 protein under metformin or/and 4SC-202 treatment was attenuated by MG132 administration (Figure 4D). Furthermore, the ubiquitination level of ΔNp63 under metformin or/and 4SC-202 treatment was determined by co-immunoprecipitation (Co-IP) analysis. Metformin or 4SC-202 alone increased the ubiquitination level of ΔNp63, while the combination produced the maximum increase (Figure 4E). Moreover, the ubiquitination level of ΔNp63 was increased in cells given MG132 under metformin plus 4SC-202 treatment (Figure 4F). Taken together, these data indicate that metformin and 4SC-202 increase ΔNp63 ubiquitination and thereby decrease its stability and protein level.
| ΔNp63 mediated the apoptosispromoting effects of metformin and 4SC-202
To explore the role of ΔNp63 in the antitumor effects of metformin and 4SC-202, ΔNp63 was overexpressed or knocked down under metformin or/and 4SC-202 treatment.
| DISCUSSION
In this study, the efficacy of the metformin and 4SC-202 combination in OSCC was evaluated in vitro and in vivo. Our results indicated that metformin and 4SC-202 synergistically suppressed growth and promoted intrinsic apoptosis in OSCC. In addition, combined 4SC-202 and metformin inhibited oral carcinogenesis in vivo. Importantly, metformin or/and 4SC-202 triggered apoptosis of OSCC through accelerating the degradation of ΔNp63. These findings highlight this combination as a potential therapeutic scheme for OSCC. Current chemotherapy treatments for OSCC are unsatisfactory because of drug resistance and side effects, which has become a challenge in the clinic. As a promising approach to overcome these problems in cancer therapy, combinations of chemotherapeutics can take advantage of each drug and lower the dose and toxicity. Here, we evaluated the effects of metformin, 4SC-202, and their combination in OSCC, and showed a dose- and time-dependent growth-inhibitory effect of 4SC-202 and metformin. Notably, the combination of metformin and 4SC-202 showed synergistic growth-inhibitory effects in OSCC cells. We applied relatively low doses of metformin (16 mmol/L, IC30-40) and 4SC-202 (0.4 μmol/L, IC30-40), which inhibited tumor growth effectively. For the xenograft and 4NQO mouse models, we chose relatively low doses of metformin (100 mg/kg) and 4SC-202 (80 mg/kg) to minimize possible toxicity. The tumor volumes and weights of nude mice that received metformin and 4SC-202 were smaller compared to the control group, without apparent body weight loss or liver and kidney impairment, indicating that metformin and 4SC-202 synergistically inhibited tumor growth and were relatively safe in vivo. We were aware of the limitation of using only one cell line (HSC6); we had tried the other cell line, HSC3, in nude mice, but found that the tumors broke easily and formed ulcerations, which made it hard to measure volume and weight, as the ulceration usually caused loss of tumor cells. Meanwhile, 4NQO-induced mouse oral carcinogenesis further confirmed the inhibitory effect of combined metformin and 4SC-202 treatment in vivo, which may work better than the traditional drug cisplatin. Collectively, our results indicate that metformin and 4SC-202 synergistically suppressed tumor growth in vitro and in vivo. In most cases, anticancer therapies eventually result in activation of apoptosis. In mammals, there are two major apoptotic pathways: the extrinsic pathway (death receptor-mediated) and the intrinsic pathway (mitochondria-mediated). Caspase activation is usually initiated from two main entry points, at the death receptor (extrinsic pathway) or at the mitochondria (intrinsic pathway). 33 In previous work, metformin was found to induce the intrinsic apoptotic pathway in oral cancer cells. 34,35 However, the effects of 4SC-202 in OSCC remained unclear. In hepatocellular carcinoma cells, the intrinsic apoptotic pathway was activated under 4SC-202 treatment. 26 Our results show that combined metformin and 4SC-202 treatment synergistically induced cell apoptosis compared to single treatment in OSCC. The mitochondrial pathway proteins changed significantly in the combination group, while the key extrinsic apoptotic component caspase-8 showed no significant change, suggesting that the intrinsic apoptotic pathway is regulated by the combination treatment.
Metformin or/and 4SC-202 promoted the expression of P53 and Bax but reduced the expression of Bcl-2, with metformin plus 4SC-202 having the most dramatic effects. In addition, previous work showed that P53 negatively regulates Bcl-2 and positively regulates Bax by directly binding to its promoter; 36,37 therefore, the decreased level of Bcl-2 and the increased level of Bax may result from the increased level of P53. Metformin was found to activate AMPK signaling, 21,38 and AMPK can phosphorylate SIRT1 or MDMX to stabilize and activate P53. 39,40 HDACis were found to activate P53 and promote P53 acetylation, which results in P53 stabilization and activation. [41-43] Thus, in OSCC cells, we speculate that metformin and 4SC-202 promoted the expression of P53 through AMPK and the promotion of P53 acetylation. Moreover, the apoptosis-promoting effects were further confirmed in the xenograft model. Overall, our results indicate that metformin or/and 4SC-202 triggered intrinsic apoptosis of OSCC cells in vitro and in vivo.
Ubiquitination is among the most common forms of posttranslational protein modification. Proteins modified with ubiquitin, a small 8.5 kDa protein, are targeted for degradation by the proteasome. 44,45 This process is executed by three classes of enzymes, designated E-1, E-2, and E-3. 44 E-1 activating enzymes activate ubiquitin in an ATP-dependent manner, attaching it to a cysteine residue of an E-2 conjugating enzyme. 45 The E-2 conjugating enzyme coordinates with an E-3 ligase to attach ubiquitin to a lysine residue of a target substrate. 46 E-3 ligases are substrate-specific and recognize the target protein. 45 Our results revealed that the ΔNp63 protein level decreased without significant alteration in mRNA, indicating that ΔNp63 might be regulated at the posttranslational level, for example by ubiquitination.

[Figure 5 caption: ΔNp63 mediated the antitumor effects of the combination of metformin and 4SC-202. HSC3 or HSC6 cells were treated with metformin (16 mmol/L) or/and 4SC-202 (0.4 μmol/L) for 24 hours after overexpression of ΔNp63 for 24 hours. (A-B) Apoptosis of HSC3 or HSC6 cells evaluated by Annexin V-FITC/PI staining. (C) Expression levels of Bcl-2 and cleaved caspase-3 detected by western blot analysis; β-actin was used as an internal control. Data are shown as means ± SD from three independent experiments. * P < 0.05, ** P < 0.01, *** P < 0.001 vs control (pcDNA3.1) (one-way ANOVA).]

ΔNp63 is targeted
by multiple E-3 ligases, such as WWP1, HDM2, FBXW7, Itch, and Pirh2, for ubiquitination and proteasome-mediated degradation; these ligases act as key regulators of the P63 protein. 47,48 In our results, MG132 attenuated the downregulation of ΔNp63, and the ubiquitination level of ΔNp63 increased significantly when OSCC cells were treated with metformin or/and 4SC-202. Furthermore, we examined the ubiquitination level of ΔNp63 under metformin and 4SC-202 treatment with or without MG132; the results revealed that MG132 treatment increased the ubiquitination level of ΔNp63 under metformin and 4SC-202 treatment. Taken together, metformin and 4SC-202 accelerate the ubiquitination and proteasome-mediated degradation of ΔNp63.
Previous studies have identified ΔNp63 as an oncogene that suppresses apoptosis and sustains proliferation, and aberrant expression of ΔNp63 is associated with poor prognosis in patients with OSCC. [30-32,49] ΔNp63 acts primarily in a dominant-negative fashion: while the full-length TA (transactivation) isoform of P63 is similar in structure and function to wild-type P53, ΔNp63 acts against nearly all members of the P53 family. 50 Overexpression of ΔNp63 isoforms is observed in the majority of HNSCCs. 30 In addition, high expression of ΔNp63 contributes to chemoresistance. 51,52 As our results showed, the ΔNp63 protein level decreased after metformin or/and 4SC-202 treatment, and combined treatment led to a significant decrease of ΔNp63 compared to single treatment. The ΔNp63-reducing effect of metformin and 4SC-202 treatment was further confirmed in the 4NQO mouse model. Intriguingly, we found that cisplatin treatment also led to a decrease of ΔNp63, observed both in vitro and in vivo. As alterations in both mRNA and protein levels were observed, we speculate that cisplatin may reduce ΔNp63 through transcriptional regulation, in contrast to metformin and 4SC-202. In addition, compared to cisplatin, metformin plus 4SC-202 led to a greater decrease of ΔNp63 protein in OSCC, indicating that metformin plus 4SC-202 has a stronger inhibitory effect on ΔNp63. Notably, ΔNp63 overexpression eliminated the proapoptotic effects of metformin and 4SC-202, while knockdown of ΔNp63 facilitated them. Collectively, ΔNp63 is a major target of metformin and 4SC-202 in their facilitation of apoptosis.
In conclusion, combined treatment with metformin and 4SC-202 synergistically inhibits cancer cell growth and induces intrinsic apoptosis by increasing ΔNp63 ubiquitination and degradation in vitro and in vivo. Combined metformin and 4SC-202 treatment could be a promising therapeutic strategy for OSCC.
Rapid Evaluation Methods for Quality of Trout (Oncorhynchus mykiss) Fresh Fillet Preserved in an Active Edible Coating
In this study different methods were used to evaluate the effectiveness of a carrageenan coating and a carrageenan coating incorporating lemon essential oil (ELO) in preserving the physicochemical and olfactory characteristics of trout fillets stored at 4 °C for up to 12 days. The fillet morphological structure was analyzed by histological and immunological methods; lipid peroxidation was assessed with the peroxide and thiobarbituric acid reactive substances (TBARS) tests. At the same time, two less time-consuming methods, Attenuated Total Reflectance-Fourier Transform Infrared (ATR-FTIR) spectroscopy and the electronic nose, were used. Uncoated trout fillets (UTF) showed a less compact tissue structure than carrageenan-coated fillets (CTF) and fillets coated with ELO-containing (active) carrageenan (ACTF), probably due to the degradation of collagen, as indicated by optical microscopy and ATR-FTIR. UTF showed greater lipid oxidation compared to CTF and ACTF, as indicated by the peroxide and TBARS tests and by ATR-FTIR spectroscopy. The carrageenan coating containing ELO preserved the olfactory characteristics of the trout fillets better than the carrageenan coating alone, as indicated by the electronic nose analysis. This study confirms that both the carrageenan coating and the ELO-containing carrageenan coating slow down the decay of the physicochemical and olfactory characteristics of fresh trout fillets stored at 4 °C, although the latter is more effective.
Introduction
Fish is among the most perishable food commodities, whose quality declines as a result of a complex mix of biochemical, chemical, physical, and microbiological phenomena. It is estimated that ~10% of fishery and aquaculture products is lost due to degradation (FAO report, 2018). Of primary concern is the development of off-flavors and odors caused by the production of ammonia, trimethylamine, dimethylamine, and other volatile amines, whose high levels lead to undesirable organoleptic characteristics [1]. Other volatile molecules mainly produced during the spoilage process are hydrogen (H2, odorless), methane (CH4, odorless), ammonia (NH3, quaint pungent), hydrogen sulfide (H2S, rotten eggs), and phosphane (PH3, rotten fish). The electronic nose has been efficiently used to detect the molecules arising during the decomposition of fish flesh, even when odorless [2,3]. No less critical is the oxidation of fat, one of the most important mechanisms leading to food spoilage, causing changes in taste and odor and deterioration of muscle texture [4]. Tenderization begins within hours after death and continues during storage [5].

Sample Preparation

ELO was supplied by the Station of Essential Oil and Citrus Products-Reggio Calabria (Italy); its characterization has been reported in Socaciu et al. [21]. The carrageenan solution was prepared by mixing 1 g of carrageenan with 100 mL of distilled water and stirring at a temperature of 100 °C until the mixture became clear. Before gelation (at ~30 °C), 1% of ELO was mixed with the prepared carrageenan solution and stirred thoroughly. Trout fillets were divided into three groups and underwent the following treatments. Group 1: uncoated trout fillets (UTF). Group 2: trout fillets coated with 1% carrageenan (coated trout fillets, CTF). Group 3: trout fillets coated with 1% carrageenan containing 1% ELO (active coated trout fillets, ACTF). Fillets were then stored in a refrigerator at 0-4 °C. During the 12-day storage, samples were randomly taken every 3 days for analyses.
Peroxide Value Measurement
Peroxide value (PV), expressed as meq of free iodine per kg of lipids, was determined according to method Cd 8-53 of the American Oil Chemists' Society (AOCS, 1998). Briefly, 1.0 g of lipid sample was treated with 25 mL of a solvent mixture (chloroform/acetic acid, 2/3). After shaking the mixture, 1 mL of saturated potassium iodide solution was added. The mixture was kept in the dark for 5 min, after which 75 mL of distilled water was added under stirring. Starch solution (0.5 mL, 1% w/v) was added as an indicator. The peroxide value was determined by titrating the iodine liberated from the potassium iodide with a standardized 0.01 N sodium thiosulfate solution.
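For readers implementing the titration arithmetic, a minimal sketch of the standard AOCS-style calculation of PV from the net thiosulfate volume is given below; the numbers in the example are illustrative, not taken from this study.

```python
def peroxide_value(v_sample_ml: float, v_blank_ml: float,
                   thiosulfate_normality: float, sample_mass_g: float) -> float:
    """Peroxide value in meq of active oxygen per kg of lipid (AOCS Cd 8-53 style).

    v_sample_ml / v_blank_ml -- thiosulfate titration volumes for sample and blank (mL)
    thiosulfate_normality    -- normality of the thiosulfate solution (e.g., 0.01 N)
    sample_mass_g            -- mass of the lipid sample (g)
    """
    return (v_sample_ml - v_blank_ml) * thiosulfate_normality * 1000.0 / sample_mass_g

# Illustrative numbers only: a 0.86-mL net titrant on a 1.0-g sample gives 8.6 meq/kg
print(peroxide_value(0.90, 0.04, 0.01, 1.0))
```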
Thiobarbituric Acid Reactive Substances (TBARS) Determination
TBARS were evaluated according to Thiansilakul et al. [25]. The ground sample (0.5 g) was homogenized with 2.5 mL of a solution containing 0.375% thiobarbituric acid (w/v), 15% trichloroacetic acid (w/v), and 0.25 M HCl. The mixture was heated in a boiling water bath (95-100 °C) for 10 min until it turned pink, cooled with running tap water, and centrifuged at 3600× g at 25 °C for 20 min. The absorbance of the supernatant was measured at 532 nm. A standard curve was prepared using 1,1,3,3-tetramethoxypropane at concentrations ranging from 0 to 6 ppm. TBARS were calculated and expressed as mg of malonaldehyde per kg of sample.
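A minimal sketch of the standard-curve conversion is given below, assuming a linear calibration and the 0.5-g/2.5-mL homogenate described above; the calibration absorbances and the helper name tbars_mg_per_kg are illustrative placeholders, not values from this study.

```python
import numpy as np

# Illustrative calibration: 1,1,3,3-tetramethoxypropane standards, 0-6 ppm (mg/L)
std_conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
std_abs = np.array([0.00, 0.11, 0.22, 0.34, 0.45, 0.55, 0.67])  # A532 readings

slope, intercept = np.polyfit(std_abs, std_conc, 1)  # concentration as f(absorbance)

def tbars_mg_per_kg(a532: float, sample_mass_g: float = 0.5,
                    extract_volume_ml: float = 2.5) -> float:
    """A532 -> mg malonaldehyde-equivalents per kg of sample (0.5 g in 2.5 mL)."""
    mda_mg_per_l = slope * a532 + intercept              # mg/L from the standard curve
    mda_mg = mda_mg_per_l * extract_volume_ml / 1000.0   # mg in the extract volume
    return mda_mg / (sample_mass_g / 1000.0)             # per kg of sample

print(round(tbars_mg_per_kg(0.30), 2))
```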
Histology
Samples of 1 cm in length were taken from the central area of the fillet, placed in extra-pure 2-methylbutane (Acros Organics, Fair Lawn, NJ, USA) for 5 s, and then frozen in liquid nitrogen. At least three fillets were analyzed for each group. Frozen samples were serially cut on a cryostat (Leica, Wetzlar, Germany) in transversal and longitudinal sections of 10 µm. The sections were placed on slides and stained with common hematoxylin-eosin histochemical dyes. Hematoxylin staining (3 min) was followed by rinsing with deionized water and tap water (to allow the stain to develop); acid ethanol was used to destain. Then, eosin staining (30 s) was carried out, followed by rinsing with ethanol and xylene. Coverslips were mounted using Permount and allowed to dry. Slides were observed with an optical microscope (Leica DMRA2, Leica, Wetzlar, Germany) and images were acquired using a DC300F digital camera.
Electronic Nose (EN) Analysis
The EN (PEN 3), including the WinMuster software for data analysis (Airsense Analytics Inc., Schwerin, Germany), was used to analyze the olfactory characteristics of the trout fillets as previously reported [26]. For sample withdrawal, the coating was gently removed (except for uncoated samples), and three cube-shaped pieces of 1 g were placed in an airtight 45-mL glass vial right before analysis.
ATR-FTIR Spectroscopy
Trout fillets were frozen, lyophilized, minced, and placed directly on the germanium crystal of the infrared spectrometer with constant pressure applied. In the case of coated trout fillets, the coating was removed before lyophilization. The pressure of the ATR-FTIR acquisition was 80 ± 2 psi. The FTIR spectra were recorded in the mid-IR region (4000-650 cm−1) at a resolution of 4 cm−1 with 32 scans, using a Perkin Elmer FTIR Frontier coupled with a DTGS (deuterated tri-glycine sulphate) detector (Perkin-Elmer Inc., Norwalk, CT, USA). An air background spectrum was recorded before each sample. Three samples for each group were analyzed, each in triplicate. The spectra were baseline corrected and normalized to amide I.
Statistical Analysis
Values were expressed as mean ± standard deviation (SD), calculated using MS Excel. One-way repeated measures analysis of variance (ANOVA) was used to estimate significant differences (p < 0.05) during storage. All statistical analyses were performed using the STATISTICA 10.0 statistical package (Statsoft Inc., Tulsa, OK, USA). To isolate the group or groups differing from the others, multiple comparisons versus the control group (Holm-Sidak method) were used. For the electronic nose, six independent measures were performed for each sample, with n = 3. The correlation matrix (CM) of the data was computed using the WinMuster software; the CM gives a quantitative assessment of class separability. Values of the discrimination index (DI) range between 0 and 1: values lower than 0.5 indicate poor separability of classes, while higher values indicate good separability [27]; values of DI ≥ 0.95 were considered significant.
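As a rough illustration of this analysis pipeline, the sketch below runs a one-way repeated-measures ANOVA followed by Holm-Sidak-corrected comparisons versus the control; it uses statsmodels/scipy rather than STATISTICA, and all values other than the day-0 and day-12 peroxide figures quoted later are invented for the example.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# Long-format table: peroxide value per treatment and storage day. Only the
# day-0 and day-12 figures match the text; the intermediate values are invented.
df = pd.DataFrame({
    "day":       [0, 0, 0, 3, 3, 3, 6, 6, 6, 9, 9, 9, 12, 12, 12],
    "treatment": ["UTF", "CTF", "ACTF"] * 5,
    "pv":        [0.8, 0.8, 0.8, 3.1, 2.0, 1.6, 5.9, 3.4, 2.7,
                  9.32, 4.9, 4.2, 8.6, 5.3, 4.0],
})

# One-way repeated-measures ANOVA, treatments compared across storage days
print(AnovaRM(df, depvar="pv", subject="day", within=["treatment"]).fit())

# Pairwise comparisons versus the control (UTF) with Holm-Sidak correction
pvals = [stats.ttest_rel(df.loc[df.treatment == t, "pv"].values,
                         df.loc[df.treatment == "UTF", "pv"].values).pvalue
         for t in ("CTF", "ACTF")]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
print(dict(zip(("CTF vs UTF", "ACTF vs UTF"), p_adj.round(4))), reject)
```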
Lipid Peroxidation
The peroxide values in the uncoated and coated trout fillets are shown in Table 1.
One-way repeated measures analysis of variance showed a significant difference between treatments (F = 13.055 with two degrees of freedom, p = 0.003). Multiple comparisons versus the control group (Holm-Sidak method) showed a significant difference in the comparisons ACTF vs. UTF (p = 0.002) and CTF vs. UTF (p = 0.021). Trout muscle tissue is rich in lipids, especially polyunsaturated fatty acids [19]. Among lipids, both free fatty acids and triglycerides are subject to oxidation, although the former are oxidized more readily. Considering that the lipid content of fresh trout is ~2.121 ± 0.06 g, the peroxide content is equal to 0.8 meq/kg of lipids at time zero, while the peroxide values after 12 days of preservation were 8.6 meq/kg for UTF, 5.3 meq/kg for CTF, and 4.0 meq/kg for ACTF. Peroxide values in the uncoated trout fillets increased from 0.8 to 8.6 meq/kg of lipids, with a maximum of 9.32 meq/kg at 9 days.
Peroxide values in CTF fillets ranged from 0.8 to 5.3 meq/kg of lipids, while in ACTF fillets they ranged from 0.8 to 4.2 meq/kg of lipids. Peroxide values were significantly lower in CTF and ACTF than in UTF. This outcome shows that the active coating slows down the development of lipid peroxidation in trout fillets stored at 4 °C. These results are in agreement with previous studies [28,29], reporting that a chitosan coating was able to reduce the content of primary lipid oxidation products in herring fillets stored at about 4 °C.
The oxidation of free fatty acids produces unstable lipid hydroperoxides that readily decompose into shorter-chain products such as aldehydes, which can be detected as TBARS [30].
The thiobarbituric acid reactive substances (TBARS) values in the uncoated and coated trout fillets are shown in Table 2. One-way repeated measures analysis of variance showed a significant difference between treatments (F = 12.781 with 2 degrees of freedom, p = 0.003). Multiple comparisons versus control group (Holm-Sidak method) showed a significant difference in the comparison ACTF vs. UTF (p = 0.003) and CTF vs. UTF (p = 0.005).
A substantial increase in TBARS was observed in UTF samples with respect to CTF and ACTF trout fillets. The higher efficacy of ACTF with respect to CTF in slowing down the production of TBARS was probably due to the antioxidant and antimicrobial activity of ELO [19]. Thus, the incorporation of ELO into the carrageenan coating improved the antioxidant and antimicrobial properties of the resulting coating solution. Ahmad et al. [1] reported that the incorporation of ELO into a gelatin film could strengthen the antimicrobial and antioxidative characteristics of the film, resulting in an increase of the quality and shelf-life of refrigerated sea bass fillets. It has also been reported that ELO is effective as a free radical scavenger and metal chelating agent. The antioxidant properties of essential oils have been ascribed to different mechanisms: impediment of radical chain initiation, binding of transition metal ion catalysts, decomposition of peroxides, and interaction with free radicals [31].
In sea bream (Sparus aurata) and Atlantic salmon (Salmo salar), natural plant extracts have been successfully employed to prevent lipid oxidation [32,33]. Similarly, lipid damage was slowed down by natural antioxidants derived from barley husks in Atlantic salmon [34]. In cold-smoked sardine (Sardina pilchardus), a coating enriched with oregano or rosemary extracts lowered the lipid oxidation rate [35].
Histological and Western Blot Analysis
Fish fillets are composed of myomeres separated by connective and adipose tissues [36,37]. It has been proposed that, in fish, the post mortem modifications of muscle structure and consistency are mainly due to the degradation of the tissues between myomeres rather than of the muscle tissue itself. Ando and coworkers [5,38] demonstrated by light and electron microscopy that the postmortem tenderization of rainbow trout muscle is mainly due to the disintegration of collagen fibers and of the extracellular matrix in the connective tissues. In this study, both transversal and longitudinal sections of rainbow trout fillets were prepared in order to evaluate the muscle morphology. Three samples, belonging to different trout, were analyzed for each treatment (UTF, CTF, and ACTF) up to 12 days of storage at 4 °C. Transversal sections showed a compact structure, with the muscle fibers firmly associated with the connective tissue, at the beginning of the experiment (Figure 1A). The progression of the storage period was accompanied by modification of the fillet structure, with consequent muscle fiber disorganization in both the control and coated (CTF and ACTF) samples. In particular, the muscle fibers gradually detached from the myocommata and the distance between myofibers increased, giving the tissue a loose aspect. However, looseness was more pronounced in the control (Figure 1B) than in the coated samples (CTF and ACTF) (Figure 1C); the fillet texture was better conserved during storage in the fillets with coating and coating plus ELO. Moreover, the longitudinal sections showed preservation of the myofibrillar structure, with the alternate dark and light bands, in both the control (Figure 2A) and coated samples (CTF and ACTF gave similar results) (Figure 2B). This outcome is sustained by the evidence that the expression of titin was stable up to 12 days of storage at 4 °C (Figure 3). Titin is an elastic protein, which joins the thick myosin filaments from their ends to the Z disc, stabilizing the myosin in the center of the sarcomere [39]. Densitometric analysis (Figure 3) of the immunoreactive bands of titin was performed, and β-actin (molecular mass of about 42 kDa) was used as an internal marker to normalize the optical density.

We observed neither variations of titin expression nor degradation during the storage period, in both the control and coated fillets (CTF and ACTF). Our results are in agreement with a previous study by Hernandez-Herrero et al. [40], reporting a progressive degradation of titin in cod (Gadus morhua) during ice storage only when the fish was in advanced decomposition. The evidence that trout fillets coated with carrageenan and carrageenan plus ELO were well preserved suggests that the presence of the coating, with or without ELO, delays the degenerative processes. Such an outcome may be related to the oxygen barrier properties of edible films and coatings. Carbohydrates, such as carrageenan, are indeed excellent barriers to oxygen because of their tightly packed, ordered, hydrogen-bonded network structure. Moreover, the addition of antioxidants, such as vitamins and essential oils, can provide further protection due to the enhancement of the oxygen barrier properties of the film and coating [41]. Meyer et al. [42] found that carrageenan coatings extended the shelf life of poultry pieces by acting as an oxygen barrier. Thus, it can be hypothesized that the coating used in this study protected the muscle from the oxidative processes that induce the production of free radicals, which are in turn responsible for muscle and intramuscular tissue susceptibility to proteases, with consequent postmortem tenderization of fish muscle [43].
Olfactory Analysis
The smell is an important sensorial attribute of a food, not only because it is a sign of pleasantness, but also because odor, usually unpleasant, results from microbial and biochemical alterations during food storage [44]. It is therefore important to define the olfactory characteristics of a food as an indication of microbial and biochemical deterioration. The electronic nose is a quick and reliable method for measuring the olfactory footprint of a food. It is made up of electronic sensors capable of detecting volatile chemicals, and is able to translate these substances into recognizable and classifiable patterns capable of discriminating different types of samples [45-47]. The relationship between the freshness of fish and the olfactory imprint determined with the electronic nose was established by Di Natale et al. [48]. Olafsdottir et al. [49] highlighted the correlation between the olfactory footprint and the bacteriological composition during the deterioration of cold-smoked Atlantic salmon. In this study, we detected the olfactory footprint of trout fillets stored at 4 °C for up to 12 days, without coating, with carrageenan coating, and with carrageenan coating plus ELO. Figures 4-6 show the PCA of the responses of the 10-sensor array to the headspace of the samples. All the clusters appear distinct in Figure 4. On the contrary, Figures 5 and 6 show overlapping clusters, corresponding to the low values of the discrimination indexes between classes observed in Tables 4 and 5.

The value of the first component is higher than 91.6% in the three PCAs, showing that most of the variance is expressed along the x-axis. It is worth noting that the shift along the x-axis of the clusters representing subsequent days of sample preservation corresponds to increasing values of the discrimination indexes (DIs) of days 3, 6, 9, and 12 with respect to day 0 in the correlation matrixes (CMs) (Tables 3-5).
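To illustrate the PCA step (the WinMuster discrimination index itself is proprietary and is not reproduced here), a minimal sketch with synthetic 10-sensor data might look as follows; the drift model and noise level are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic PEN3-like data: 6 replicate measurements x 10 sensors per storage day,
# with a spoilage-related drift in the sensor responses (assumed, for illustration)
days = [0, 3, 6, 9, 12]
X, labels = [], []
for i, day in enumerate(days):
    base = np.linspace(1.0, 2.0, 10) * (1.0 + 0.15 * i)
    X.append(base + rng.normal(scale=0.02, size=(6, 10)))
    labels += [day] * 6
X = np.vstack(X)
labels = np.array(labels)

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Xs)
pcs = pca.transform(Xs)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))

# The shift of each day's cluster along PC1 mirrors the behavior described above
for day in days:
    print(day, pcs[labels == day, 0].mean().round(2))
```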
The DIs of the samples UTF 6d, UTF 9d, and UTF 12d correlated to UTF 0d are significant, whereas the CMs of the CTF and ACTF treated samples did not show any significant value. It is possible to argue that the CTF- and ACTF-preserved samples retained their olfactory characteristics better than the UTF samples during the evaluation period. The values of the DIs of ACTF 3d, ACTF 6d, and ACTF 12d correlated to ACTF 0d were lower than those of CTF 3d, CTF 6d, and CTF 12d correlated to CTF 0d.
Altogether, the data suggest that ACTF was the best gel coating formulation to preserve the olfactory characteristics of trout fillets, although samples treated with the CTF formulation also showed a good performance with respect to UTF samples.
ATR-FTIR Analysis
The ATR-FTIR analysis was carried out in order to obtain molecular information on the biochemical modifications occurring during preservation of the trout fillets. Figure 7 shows a representative spectrum of a trout fillet in the region of 650-4000 cm−1.
Figure 7. A typical ATR-FTIR absorption spectrum of the trout fillet in the 650-4000 cm−1 region. The spectrum was baseline corrected and normalized for amide I. The peak assignment is reported in Table 6.
The analysis of this region provides information on vibrational modes associated with the molecular composition of different functional groups belonging to lipids, proteins, and carbohydrates [50]. In this study, the contribution provided by carbohydrates was not taken into consideration due to the negligible carbohydrate content of trout muscle tissue [51]. The peak assignment is reported in Table 6; only the main peaks within the lipid and protein ranges are reported. To analyze lipid content and structure, particular attention was given to the spectral region of 2800-3100 cm−1 and, in particular, to the variation of the peak absorbance at 3011, 1743, 1451, and 1305 cm−1. The peak at 3011 cm−1 is usually considered a marker of peroxidative processes [57,58], and therefore its increase is indicative of a higher amount of peroxidized fatty acyl chains. Figure 8 reports representative spectra of the peak absorbance in UTF up to 12 days; spectra are normalized for amide I. It can be seen that the 3011 cm−1 peak is barely visible at day 0, while it becomes more pronounced with the progression of the storage time. Similarly, the peak at 1743 cm−1, associated with peroxidation of fatty acid chains [59], increased over the storage time. Figure 9 reports representative spectra of the peak absorbance of UTF, CTF, and ACTF at 12 days of storage, compared to the control at day 0. It can be seen that the increase in the absorbance of the 3011 and 1743 cm−1 peaks was lower in CTF, and especially in ACTF, with respect to UTF, suggesting that the carrageenan coating and the carrageenan plus ELO coating slowed down lipid peroxidation, as also indicated by the TBARS analysis.
As reported before, the postmortem tenderization of rainbow trout muscle is likely due to the degradation of the extracellular matrix around the myomeres [5,38]. Collagen is the major component of the extracellular matrix and improves strength and resistance [60]. As reported by Botta et al. [56], the integrity of the collagen triple helix can be monitored by analyzing the ratio of the absorbance of the amide III band to the peak corresponding to the stereochemistry of the pyrrolidine rings. The amide III band is indeed related to CN stretching and NH bending, and is involved with the triple helical structure of collagen [61]. The integrity of the collagen secondary structure may be verified when the value of the ratio is greater than or equal to unity; changes in this absorption ratio indicate significant structural alterations in the collagen triple helix. In this study, the value of the ratio between the amide III peak (1305 cm−1) and the pyrrolidine rings peak (1451 cm−1) found at day 0 was 1.00. The ratio decreased in UTF over time, reaching a value of 0.65 after 12 days of storage. In the CTF and ACTF trout fillets the decrease in the ratio was less pronounced, reaching values of 0.76 and 0.85, respectively, after 12 days (Figure 10).

FTIR spectroscopy is a tool used to study the secondary structure of proteins [62]. In fact, derivative analysis of the amide I region, between 1600 and 1700 cm−1, provides information about the α and β structures of proteins [63]. In particular, vibrational components in the area of 1620-1640 cm−1 are indicative of a β-sheet structure. The antiparallel β-sheet structure can also be identified by the presence of vibrational components in the area of 1670-1695 cm−1, while the α-helical conformation gives rise to infrared absorption in the range of 1650 to 1658 cm−1 [64]. However, a good discrimination of the peaks in the amide I area requires the calculation of the second derivative, which amplifies the separation of the vibrational components [65]. The alterations in the frequency and intensity of the vibrational components or peaks can provide valuable information on the secondary structure and may reveal conformational changes deriving from the interaction of the protein with other molecules and with the surrounding chemical environment (pH, temperature, solvents, detergents, etc.) [63]. In this study, the second derivative of the trout fillet spectra at day 0 highlights three peaks in the amide I region, at 1628, 1652, and 1687 cm−1 (Figure 11).
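A minimal sketch of these two spectral computations, assuming a toy spectrum on a 4 cm−1 grid, is given below: the band-ratio helper reproduces the amide III/pyrrolidine index, and the Savitzky-Golay second derivative is one common way to resolve the amide I components (the smoothing window is an assumption; the text does not state which derivative algorithm was used).

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy spectrum on a 4 cm^-1 grid (as acquired here); the single Gaussian band
# centered on amide I is purely illustrative
wn = np.arange(650.0, 4000.0, 4.0)
absorbance = np.exp(-((wn - 1652.0) / 18.0) ** 2)

def collagen_ratio(wn, a, amide_iii=1305.0, pyrrolidine=1451.0):
    """Amide III / pyrrolidine absorbance ratio used as a collagen-integrity index
    (meaningful only on real spectra, not on the toy band above)."""
    pick = lambda target: a[np.argmin(np.abs(wn - target))]
    return pick(amide_iii) / pick(pyrrolidine)

# Savitzky-Golay second derivative to resolve amide I components; minima of the
# second derivative mark component positions
d2 = savgol_filter(absorbance, window_length=11, polyorder=3, deriv=2,
                   delta=wn[1] - wn[0])
region = (wn >= 1600) & (wn <= 1700)
print("strongest amide I component near", wn[region][np.argmin(d2[region])], "cm^-1")
```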
The storage of the trout fillets at 4 °C in the absence of coating shows a decrease in the intensity of the peak at 1628 cm−1. The peak at 1652 cm−1 shows an increase in intensity at 6 and 9 days, and a decrease at 3 days of storage. The 1687 cm−1 peak shows limited variations in intensity, while a shift from 1687 to 1685 cm−1 was detected. In the presence of carrageenan, there were variations in both intensity and wavenumber of the peaks at 1628 and 1652 cm−1. In the trout fillets coated with carrageenan and ELO, the peaks at 1628 and 1652 cm−1 appeared, at all days of storage, very similar to the trout fillets at day 0, with the exception of the fillets at 12 days of storage, when both peaks appeared less intense in comparison to day 0. The peak at 1687 cm−1 of the trout fillets at 9 and 12 days of storage showed a shift with respect to the fillets at day 0 and at 3 and 6 days of storage. The comparison of the trout fillet spectra at day 0 and after 6 days of storage at 4 °C showed that, in the presence of coating and coating plus ELO, the profiles were superposable to day 0, while the profile of the untreated trout fillets clearly differed: the peak at 1628 cm−1 showed a decrease in intensity, the peak at 1652 cm−1 an increase in intensity, and the peak at 1687 cm−1 a shift to 1685 cm−1. Variations in intensity and shift of the peaks related to the secondary structure of fish tissue proteins have been reported during surimi gelation [66]. ATR-FTIR spectroscopy showed a significant decrease in the α-helix/β-sheet ratio in surimi after 2 years of storage at −20 °C [67]. Recently, a rearrangement of protein hydrogen bonding has been reported during surimi gelation, involving a partial change of the α-helix of myosin into β-sheet, β-turn, and random coil [68]. In this study, the decrease in the 1628 cm−1 peak intensity during the prolonged storage at 4 °C, in both uncoated and coated trout fillets, may indicate modifications in the β-sheet structure, also confirmed by the shift of the 1687 cm−1 peak. The 1652 cm−1 peak seems to be more stable, indicating substantial maintenance of the α-helix structure.
Figure 11. ATR-FTIR second derivative of the absorption spectra in the region 1700-1600 cm−1 of uncoated (UTF) (A), coated (CTF) (B), and coated with ELO (ACTF) (C) trout fillets, after 12 days of storage at 4 °C. The comparison of the spectrum of the trout fillet at day 0 and after 6 days of storage is shown in (D). 1 = 1628 cm−1 peak; 2 = 1652 cm−1 peak; 3 = 1687 cm−1 peak. Spectra are representative of three samples. d = day.
Conclusions
The employment of the carrageenan coating and of the carrageenan coating enriched with ELO extended the shelf life of trout fillets stored at 4 °C. In particular, trout fillets coated with the carrageenan coating enriched with essential lemon oil were preserved better than uncoated fillets and fillets coated with carrageenan alone. Uncoated fillets showed a more disaggregated muscle structure due to the increase of the inter-muscle-fiber space. The peroxide value and thiobarbituric acid reactive substances increased more slowly in coated samples than in uncoated samples during the storage period. The electronic nose analysis showed that trout fillets coated with the carrageenan coating maintained the olfactory characteristics better than the uncoated ones. Altogether, the carrageenan coating enriched with ELO was the best at preserving the morphological, physicochemical, and olfactory characteristics of the fresh trout fillet. The obtained results are of major interest for the processing and storage of a highly perishable food such as fresh fish.
Numerical Simulation of an Aluminum Container including a Phase Change Material for Cooling Energy Storage
Thermal energy storage systems can be determinant for an effective use of solar energy, as they allow the thermal energy production of the solar source to be decoupled from the thermal loads, thus allowing solar energy to be exploited also during nighttime and cloudy periods. The current study deals with the modelling and simulation of a cooling thermal energy storage unit consisting of an aluminum container partially filled with a phase change material (PCM). Two unsteady models are implemented and discussed, namely a conduction-based model and a conduction-convection-based one. The equation systems relative to both models are solved by means of the COMSOL Multiphysics finite element solver, and results are presented in terms of the temporal variation of the temperature in different points inside the PCM, of the volume average liquid fraction, and of the cooling energy stored and released through the aluminum container external surface during the charge and discharge, respectively. Moreover, the numerical results obtained by the implementation of the above models are compared with experimental ones obtained with a climatic chamber. The comparison between numerical and experimental results indicates that, for the considered cooling energy storage unit, free convection plays a crucial role in the heat transfer inside the liquid PCM and cannot be neglected.
Introduction
A properly designed thermal energy storage system can improve the exploitation and profitability of many renewable and conventional energy sources. For instance, in solar thermal systems, thermal storage can allow to overcome the mismatch between supply and demand. In conventional natural gas-fueled cogeneration systems, thermal storage can be used to produce electricity when it is more economically convenient, namely for self-consumption or when the selling price is high, without wasting thermal energy, which is instead accumulated for a later use. As concerns the storage materials, water is the most used, mainly because water has a high specific heat, is not toxic, and has practically no cost. However, in the last years phase change materials (PCMs) used as thermal energy storage materials have attracted great attention, essentially because, in general, they are characterized by high thermal energy storage densities, and permit to store thermal energy in a narrow temperature range.
Many works have addressed the use of PCMs for storing thermal energy from the solar source for various applications, ranging from solar water heating to solar cooling by absorption or adsorption refrigeration systems [1-6]. Charvát et al. [7] analyzed the use of a paraffin-based PCM as a thermal energy storage material in a solar air-based thermal system. Kabeel et al. [8] investigated the effects of the presence of a paraffin wax in the bottom plate of a solar still for water desalination. Allouhi et al. [9] performed numerical simulations to characterize the melting and solidification processes of a PCM integrated in a solar collector. Zhao et al. [10] developed a control strategy and implemented different models to simulate different operation modes of a solar heating system, including a PCM-based storage tank, over the entire heating season. Moreover, many applications of PCMs have considered cold thermal energy storage [11]. Aljehani et al. [12] simulated a phase change composite consisting of a paraffin wax and expanded graphite for cold thermal energy storage in air conditioning applications; they also performed an experimental validation of the numerical results. Bejarano et al. [13] modeled and simulated a novel cold energy storage system based on PCMs. Cheng and Zhai [14] modeled and simulated a cold thermal energy storage system consisting of a packed bed with multiple PCMs; in this work, an experimental validation of numerical results is also reported.
Various models have been developed for the numerical simulation of PCM-based thermal energy storage systems, most of which have been reported in reviews [15-17]. Farid et al. [18] successfully applied the effective heat capacity (EHC) method for simulating 2D heat transfer with phase change. Lacroix [19] developed a model to simulate a shell-and-tube thermal energy storage unit with the PCM on the shell side. Ng et al. [20] employed the finite element method to simulate the convection-dominated melting of a PCM in a cylindrical-horizontal annulus. Lamberg et al. [21] implemented both the effective heat capacity method and the enthalpy method to simulate the melting and solidification processes of a PCM; they also compared the numerical results, which were obtained using the FEMLAB solver, with experimental ones. Esapour et al. [22] implemented the enthalpy method to perform 3D simulations of a PCM in multi-tube heat exchanger units. Allouche et al. [23] developed and validated a computational fluid-dynamic (CFD) model for the numerical simulation of a PCM slurry in a horizontal tank. Niyas et al. [24] developed a numerical tool to simulate a lab-scale PCM-based shell-and-tube thermal energy storage system by employing the EHC method. Neumann et al. [25] proposed and validated a simplified modelling approach for the numerical simulation of PCM-based fin-and-tube heat exchangers. Li et al. [26] proposed a numerical model to simulate the heat transfer inside an open-cell metallic foam filled with PCM.
The current study focuses on the simulation of an aluminum container partially filled with a phase change material. A conduction-based model and a conduction-convection-based one are implemented for the purpose, and the numerical results are compared with experimental ones obtained with a climatic chamber. The main contribution of this manuscript is that it presents an experimental validation of two different modelling approaches implemented to simulate the cooling energy charge and discharge of a real PCM-based cooling energy storage unit.
The description of the experimental apparatus and the experimental results are presented in Section 2. The systems of balance equations relative to both models are detailed in Section 3. The numerical results and the comparison with the experimental ones are discussed in Section 4, and the main conclusions are reported in Section 5.
Experimental Apparatus and Results
Figure 1 shows the aluminum cylindrical container used in the experimental test. Its height is 25.0 cm and its internal radius is 6.9 cm. Furthermore, it is partially filled with 2.4 kg of a commercial bio-based phase change material, whose characteristics are shown in Table 1.
Temperature measurements inside the phase change material are all done on the same horizontal section, at a distance of 9 cm from the container bottom, by five T-type thermocouples of class 1. One measuring point is located on the container axis, while the other four points are located at a distance of 3.45 cm from the axis, arranged to form a cross as shown in Figure 1b. Temperature data acquisition is done with a sample time of 1 s, by means of the National Instruments NI 9213 module, using the NI cRIO 9066 controller. Figure 2 shows the climatic chamber used to realize the experimental test. The container is put in the climatic chamber on a 2-cm thick rigid sheet of polyurethane foam for the thermal insulation of the container bottom side, with all the PCM in the liquid state and at a uniform temperature equal to 23.8 °C (room temperature). Then, the following four steps are applied sequentially (a sketch of the resulting chamber air temperature profile is given after this list):

1. a one-hour temperature ramp is applied to bring the internal temperature of the climatic chamber to the cooling energy charge temperature Tc = 7 °C;
2. the climatic chamber internal temperature is kept at Tc for 72 h;
3. a one-hour temperature ramp is applied to bring the internal temperature of the climatic chamber to the cooling energy discharge temperature Td = 23 °C;
4. the temperature inside the climatic chamber is kept at Td until all the measured temperatures inside the PCM are well above the phase change temperature (15 °C).
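A minimal sketch of the chamber air temperature profile T∞(t) implied by these four steps, as used later as a boundary condition, might look as follows; the function name and the piecewise-linear ramps are assumptions consistent with the protocol above.

```python
def chamber_setpoint(t_h: float, t_room: float = 23.8, t_c: float = 7.0,
                     t_d: float = 23.0, ramp_h: float = 1.0,
                     hold_c_h: float = 72.0) -> float:
    """Climatic-chamber air temperature (deg C) versus time in hours, following
    the four-step protocol above (piecewise-linear ramps are an assumption)."""
    if t_h < ramp_h:                                  # step 1: ramp down to Tc
        return t_room + (t_c - t_room) * t_h / ramp_h
    if t_h < ramp_h + hold_c_h:                       # step 2: hold at Tc for 72 h
        return t_c
    if t_h < 2.0 * ramp_h + hold_c_h:                 # step 3: ramp up to Td
        return t_c + (t_d - t_c) * (t_h - ramp_h - hold_c_h) / ramp_h
    return t_d                                        # step 4: hold at Td

for t in (0.0, 0.5, 1.0, 40.0, 73.5, 80.0):
    print(t, round(chamber_setpoint(t), 1))
```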
As can be argued, the main contributions to the cooling energy charge and discharge are represented by steps 2 and 4, respectively. Nonetheless, in the numerical simulation of the PCM thermal behavior, steps 1 and 3 are also simulated.
Experimental Results
Figures 3 and 4 show the temporal profile of the temperature relative to the measuring point on the container axis, indicated as T_A, and the temporal profile of the average of the temperatures relative to the mid-radius measuring points, indicated as T_MR,average, during the cooling energy charge and discharge, respectively.
It can be noticed that, as expected, the charge phase is much slower than the discharge one. This is due to the formation of solid PCM on the internal wall of the aluminum container during the cooling energy charge, which acts as a thermal insulation layer for the heat transfer between the liquid PCM and the external cooled air (the internal environment of the climatic chamber). Conversely, during the discharge phase, the convective mechanisms inside the liquid PCM, which forms on the internal wall of the container, accelerate the heat transfer towards the internal solid PCM. Moreover, Figure 3 shows that the effective solidification temperature is slightly lower than 15 °C, namely the value given by the PCM manufacturer reported in Table 1.
Simulation Models
The axial symmetry of the PCM container and of the boundary conditions permits the implementation of 2D axisymmetric models, and thus a relatively low computational cost of the numerical simulations. Therefore, two unsteady 2D axisymmetric numerical models are developed for simulating the cooling energy charge and discharge of the phase change material: a conduction-based model and a conduction-convection-based model.
Conduction-Based Model
This model is based on the following main assumptions: (i) the phase change material is homogenous and isotropic; (ii) the thermo-physical properties of the phase change material are considered to be constant and equal to the average values between the liquid and solid phases; (iii) the volume expansion/reduction during phase change is ignored; (iv) phase change during solidification/melting occurs in a temperature range; (v) convective mechanisms are negligible. The energy balance equation is given by:

$$\rho_{PCM}\, c'_{p,PCM}\, \frac{\partial T}{\partial t} = \nabla \cdot \left( k_{PCM} \nabla T \right) \quad (1)$$

where T is the temperature, t is the time variable, ρ_PCM is the density of the PCM, and k_PCM is the PCM thermal conductivity. The phase change is simulated by means of the effective heat capacity method (EHC). According to the EHC, the material effective heat capacity c'_p,PCM is expressed as a function of the latent heat of fusion of the PCM, L_h, as follows:

$$c'_{p,PCM} = c_{p,PCM} + L_h \frac{d\varphi(T)}{dT} \quad (2)$$

where c_p,PCM is the average PCM specific heat, and φ(T) is a non-dimensional parameter, which is 0 in the solid phase, 1 in the liquid phase, and between 0 and 1 in the transition zone. The latter can be expressed as:

$$\varphi(T) = \begin{cases} 0 & T < T_M - \Delta T_M \\[4pt] \dfrac{T - (T_M - \Delta T_M)}{2\,\Delta T_M} & T_M - \Delta T_M \le T \le T_M + \Delta T_M \\[4pt] 1 & T > T_M + \Delta T_M \end{cases} \quad (3)$$

where T_M is the melting temperature, and ∆T_M is half the temperature phase change range, which goes from (T_M − ∆T_M) to (T_M + ∆T_M).
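A minimal numerical sketch of Equations (2) and (3) is given below, with placeholder property values (the actual T_M and ∆T_M used in the simulations are those of Table 2).

```python
import numpy as np

T_M, dT_M = 15.0, 1.0     # deg C; illustrative values standing in for Table 2
c_p, L_h = 2000.0, 200e3  # J/(kg K) and J/kg; placeholder PCM properties

def liquid_fraction(T):
    """phi(T): 0 below T_M - dT_M, 1 above T_M + dT_M, linear in between (Eq. 3)."""
    return np.clip((T - (T_M - dT_M)) / (2.0 * dT_M), 0.0, 1.0)

def effective_heat_capacity(T):
    """c'_p = c_p + L_h * dphi/dT (Eq. 2); dphi/dT = 1/(2 dT_M) in the mushy zone."""
    in_mushy = (T >= T_M - dT_M) & (T <= T_M + dT_M)
    return c_p + L_h * in_mushy / (2.0 * dT_M)

T = np.array([10.0, 14.5, 15.0, 15.5, 20.0])
print(liquid_fraction(T))          # [0.   0.25 0.5  0.75 1.  ]
print(effective_heat_capacity(T))  # latent spike inside the transition range
```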
Conduction-Convection-Based Model
In this case, two further assumptions are made as concerns the modelling of the liquid PCM flow, namely that the liquid PCM is Newtonian and the flow is laminar. The continuity, momentum, and energy balance equations are written as follows:

$$\nabla \cdot \mathbf{v} = 0 \quad (4)$$

$$\rho_{PCM} \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\,\mathbf{v} \right) = -\nabla p + \nabla \cdot \left( \mu'_{PCM} \nabla \mathbf{v} \right) + \mathbf{F} \quad (5)$$

$$\rho_{PCM}\, c'_{p,PCM} \left( \frac{\partial T}{\partial t} + \mathbf{v} \cdot \nabla T \right) = \nabla \cdot \left( k_{PCM} \nabla T \right) \quad (6)$$

where p is the pressure, µ'_PCM is the modified dynamic viscosity, and v is the velocity vector.
In Equation (5), F represents the Boussinesq buoyancy term, which is added to the momentum equation to include the buoyancy effects, and it is evaluated according to Equation (7):

$$\mathbf{F} = \rho_{PCM}\, \mathbf{g}\, \beta \left( T - T_{ref} \right) \quad (7)$$

where g and β are the gravitational acceleration and the isobaric thermal expansion coefficient, respectively, and T_ref is a reference temperature. The effective heat capacity c'_p,PCM is calculated as previously described in Section 3.1, while the modified dynamic viscosity µ'_PCM is evaluated according to Equation (8), in order to force zero velocity in the solid PCM:

$$\mu'_{PCM} = \mu_{PCM} \left( 1 + S \right) \quad (8)$$
where µ_PCM is the dynamic viscosity of the liquid PCM. The variable S is given by:

$$S = \frac{C \left( 1 - \varphi \right)^2}{\varphi^3 + \delta} \quad (9)$$

In Equation (9), the constant δ, typically fixed to 10−3, serves to prevent a null denominator, while the constant C affects the PCM flow in the phase transition zone, and it is usually between 10^3 and 10^10. Table 2 reports the values of the parameters T_M, ∆T_M, and C used in this work. This combination of values, chosen among different tested ones, is the one presenting the best match between experimental and numerical results.
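The following sketch illustrates how the modified viscosity behaves across the phase change under the reconstructed form of Equations (8) and (9); the numerical values of C, δ, and µ_PCM are placeholders, not the ones of Table 2.

```python
C, delta = 1.0e5, 1.0e-3  # C in the usual 1e3-1e10 range; delta avoids division by zero
mu_liquid = 5.0e-3        # Pa s; placeholder liquid-PCM viscosity

def modified_viscosity(phi: float) -> float:
    """mu' = mu (1 + S), with S = C (1 - phi)^2 / (phi^3 + delta) (Eqs. 8-9).

    For phi -> 0 (solid), S ~ C/delta makes mu' huge, effectively freezing the
    velocity field; for phi = 1 (liquid), S = 0 and mu' reduces to mu."""
    S = C * (1.0 - phi) ** 2 / (phi ** 3 + delta)
    return mu_liquid * (1.0 + S)

for f in (0.0, 0.5, 1.0):
    print(f, f"{modified_viscosity(f):.3e}", "Pa s")
```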
Initial and Boundary Conditions
The simulations of cooling energy charge and discharge of the PCM are performed separately. With reference to the experimental test described in Section 2, as regards the implementation of the conduction-based model, the PCM initial temperature for the charge simulation is fixed equal to the temperature measured at the start-up of the experimental test, while, for the discharge simulation, the PCM initial temperature is fixed equal to the measured temperature at the start-up of step 3. The boundary conditions are set according to the experimental test. In particular, the bottom and top surfaces of the cylindrical computational domain relative to the PCM are considered to be adiabatic, while the boundary condition relative to the lateral surface is set according to Equation (10):

$$q_l = h_l \left[ T(R, z, t) - T_\infty \right] \quad (10)$$

where q_l is the heat flux through the lateral surface, h_l is the heat transfer coefficient relative to the lateral surface, T_∞ is equal to the air temperature inside the climatic chamber, R is the container internal radius, and r and z are the radial and axial coordinates, respectively. The convective heat transfer coefficient h_l is fixed to 30.2 W/(m² K) for the cooling energy charge, and to 29.1 W/(m² K) for the discharge. These values of h_l were calculated by means of a correlation for cylinders subjected to a transverse external forced flow [27], using a measured average air velocity inside the climatic chamber of 3.3 m/s. The above conditions are also applied for the implementation of the conduction-convection-based model. In this case, the initial velocity is set to zero in both charge and discharge, while a no-slip wall boundary condition is applied to all the external surfaces of the computational domain delimiting the phase change material.
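Since the specific correlation of [27] is not named in the text, the sketch below uses the Churchill-Bernstein cross-flow correlation as one plausible choice of this family; with assumed air properties it yields a coefficient of the same order as, but not identical to, the values quoted above.

```python
def cylinder_crossflow_h(u: float, d: float, nu: float = 1.45e-5,
                         k_air: float = 0.025, pr: float = 0.71) -> float:
    """Average h (W/m^2 K) for a cylinder in cross-flow via Churchill-Bernstein.

    Air properties (nu, k_air, pr) are assumed at roughly 15 deg C; the
    correlation actually used in [27] is not specified in the text, so the
    result differs from the quoted 30.2 / 29.1 W/(m^2 K)."""
    re = u * d / nu
    nu_d = 0.3 + (0.62 * re ** 0.5 * pr ** (1.0 / 3.0)) \
        / (1.0 + (0.4 / pr) ** (2.0 / 3.0)) ** 0.25 \
        * (1.0 + (re / 282000.0) ** 0.625) ** 0.8
    return nu_d * k_air / d

# Measured mean air speed 3.3 m/s, container diameter 2 x 6.9 cm = 0.138 m
print(round(cylinder_crossflow_h(3.3, 0.138), 1))
```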
Numerical Solver
For both the implemented models, the governing equations are solved with the finite element simulation software COMSOL Multiphysics 5.3a. The non-linearities are resolved through a segregated approach. The backward differentiation formula is adopted for the time stepping, with the initial time step fixed to 10−4 s and no fixed maximum time step. Since the PCM volume variations during phase change are not simulated in either model, the 2D computational domain, evaluated by means of the PCM weight and average density, remains fixed. It consists of a rectangle with a height of 17.73 cm and a width of 6.90 cm. Physics-controlled meshes are used, and grid independence of the results is ensured for both models. The simulations are performed with a Dell Precision T7610 workstation, equipped with two Intel Xeon E5-2687W v2 processors and 64 GB of 1866-MHz RAM.
Results
Figure 5 shows a comparison between the temporal variation of the experimental temperatures inside the PCM, relative to the measuring points indicated in Section 2, and the corresponding numerical temperatures during the cooling energy charge. It can be seen that there is a good agreement between the numerical temperatures relative to the conduction-convection-based model and the experimental temperatures, while the temperature profiles resulting from the implementation of the conduction-based model fail to match the experimental ones during the first and last parts of the charging process. This is essentially because the conduction-based model does not permit simulating the mixing of liquid PCM inside the aluminum container in the initial part of the cooling energy charge; thus, the resulting temperature profiles present a slower decrease. Of course, this behavior in the initial part of the charge influences the entire charge process simulated by the conduction-based model. Indeed, in the last part of the charge, the simulated temperature T_A,cond presents a noticeable deviation from the corresponding experimental temperature.
Figure 6 shows the comparison between the temporal variation of the experimental and numerical temperatures inside the PCM relative to the cooling energy discharge. It can be seen that, in the initial part of the discharge, the simulated temperatures relative to both the implemented models are in good agreement with the experimental ones. This is because conduction is the dominant heat transfer mechanism in the first part of the discharge, when a great part of the PCM is in the solid state and the melted PCM is limited to a narrow layer close to the container internal wall. Heat transfer by free convection inside the PCM becomes stronger as the melted layer thickness increases. Indeed, Figure 6 shows that, in the last part of the discharge, the temperature profiles relative to the conduction-based model are very far from the experimental ones, differently from those relative to the conduction-convection-based model, which present a better behavior in the last part of the discharge.
From the above, it can be stated that the conduction-based model is not suitable for the present application. For this reason, only the main results obtained with the conduction-convection-based model are reported and discussed in the following. Figure 7a,b shows the temporal variation of the average liquid fraction of the PCM volume during cooling energy charge and discharge, respectively, obtained by the conduction-convection-based model. Figure 7a clearly shows that the PCM solidification rate is relatively high in the first part of the charge, before it slows down as the thickness of the solid layer at the container wall increases. Indeed, from the beginning of step 1, the PCM volume average liquid fraction reaches 0.5 after 23 hours, whereas the complete solidification of the PCM is reached after about 63 hours. Conversely, Figure 7b shows that, during the cooling energy discharge, the melting rate is initially relatively low, before it becomes higher as the melted fraction increases, or, in other words, as heat transfer by free convection inside the PCM becomes stronger. In Figure 7b, it can be noted that free convection becomes decisive from the seventh hour, and also that the PCM is not completely melted at the end of the discharge simulation. This last result is not in contrast with the experimental observations, since the PCM was actually not completely melted at the end of the experimental test. However, the real liquid fraction at the end of the experimental test was not measured, and it was probably higher than the simulated one obtained with the conduction-convection-based model, since the model underestimates the temperatures at the end of the discharge process, as can be seen in Figure 6.
Similar considerations to those made for Figure 7a,b can be made for Figure 8a,b, which reports the temporal variation of the total cooling energy stored by the PCM during the cooling energy charge and released during the discharge, respectively.
Finally, Table 3 reports the sensible and latent contributions of the total cooling energy stored and discharged at the end of charge and discharge, respectively.
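To make the sensible/latent decomposition concrete, the short sketch below shows one way such contributions can be post-processed from a simulated PCM temperature field. The property values, the melting half-width, and the linear liquid-fraction law are illustrative assumptions, not the values or formulation used in the present work.

```python
import numpy as np

T_M = 15.0     # phase change temperature [deg C], as given in the test protocol
dT_M = 1.0     # assumed half-width of the melting range [deg C]
rho = 800.0    # assumed PCM density [kg/m^3]
cp = 2000.0    # assumed PCM specific heat [J/(kg K)]
L_f = 180e3    # assumed latent heat of fusion [J/kg]

def liquid_fraction(T):
    """Assumed linear melting law: 0 (solid) to 1 (liquid) across T_M +/- dT_M."""
    return np.clip((T - (T_M - dT_M)) / (2.0 * dT_M), 0.0, 1.0)

def stored_cooling_energy(T_field, T_init, cell_volumes):
    """Sensible and latent parts of the cooling energy stored when the PCM
    is cooled from a uniform T_init to the simulated temperature field T_field."""
    m = rho * cell_volumes                              # mass of each mesh cell [kg]
    sensible = np.sum(m * cp * (T_init - T_field))      # [J]
    latent = np.sum(m * L_f * (liquid_fraction(T_init) - liquid_fraction(T_field)))
    return sensible, latent

# Example: 1000 equal mesh cells cooled from 23 deg C to 7 deg C
vols = np.full(1000, 1e-6)                              # cell volumes [m^3]
sens, lat = stored_cooling_energy(np.full(1000, 7.0), 23.0, vols)
```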
Conclusions
In the present work, two different unsteady models are implemented in order to simulate the cooling energy charge and discharge of a cooling thermal energy storage unit consisting of an aluminum container partially filled with a phase change material: a conduction-based model and a conduction-convection-based one. The numerical results obtained by the implementation of the above models are compared with experimental ones obtained with a climatic chamber. The main conclusions of the present work, argued by comparing the numerical and experimental results, are:
- The conduction-based model is not appropriate for the considered cooling energy storage application, since free convection plays a crucial role in the heat transfer inside the liquid PCM and thus cannot be neglected;
- The numerical results obtained by the implementation of the conduction-convection-based model are in good accordance with the experimental ones;
- The conduction-convection-based model underestimates the temperatures inside the PCM at the end of the cooling energy discharge phase.
Figure 1. Pictures of the cylindrical aluminum container: (a) Liquid PCM at room temperature inside the container; (b) Thermocouples arrangement.
Figure 2. Picture of the climatic chamber used for the experimental test.
The experimental test consists of the following steps:
1. A one-hour temperature ramp is applied to bring the internal temperature of the climatic chamber to the cooling energy charge temperature Tc = 7 °C;
2. The climatic chamber internal temperature is kept at Tc for 72 h;
3. A one-hour temperature ramp is applied to bring the internal temperature of the climatic chamber to the cooling energy discharge temperature Td = 23 °C;
4. The temperature inside the climatic chamber is kept at Td until all the measured temperatures inside the PCM are well above the phase change temperature (15 °C).
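As a minimal illustration, the four steps above can be encoded as a chamber set-point schedule. The ambient starting temperature T0 is an assumption; everything else follows the protocol as stated.

```python
def chamber_setpoint(t_hours, T0=23.0, Tc=7.0, Td=23.0, ramp=1.0, hold_c=72.0):
    """Climatic-chamber set-point temperature [deg C] at time t_hours from test start."""
    if t_hours < ramp:                            # step 1: one-hour ramp down to Tc
        return T0 + (Tc - T0) * t_hours / ramp
    if t_hours < ramp + hold_c:                   # step 2: 72 h charge hold at Tc
        return Tc
    if t_hours < 2 * ramp + hold_c:               # step 3: one-hour ramp up to Td
        frac = (t_hours - ramp - hold_c) / ramp
        return Tc + (Td - Tc) * frac
    return Td                                     # step 4: discharge hold at Td
```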
Table 2. Values of T_M, ∆T_M, and C employed in the numerical simulations.
Figure 5. Temporal variation of experimental and numerical temperatures during cooling energy charge.
Figure 6. Temporal variation of experimental and numerical temperatures during cooling energy discharge.
Table 3. Sensible and latent contributions of total cooling energy stored and discharged.
Theorizing Ecclesial Ecocriticism: Pathetic Fallacy in Ecclesiastical Literature on Climate Change
The objective of this paper is to show the Catholic Church's concern with ecology in its literature, its use of literary devices to enhance an effective response to the call for nature's protection, and to show to what extent one can hypothesize ecclesial ecocriticism as a theory different from its literary counterparts. The methodology used is that of ecocriticism or green studies; this paper is a stylistic investigation of the Catholic Church's discourse on climate change, namely Pope Benedict's encyclical letter Caritas in Veritate on Integral Human Development in Charity and in Truth (2009), and Pope Francis's encyclical letter Laudato Si on Care of our Common Home (2015). Reading these works, one becomes aware that Catholic Church leaders are engaged in a particular type of ecocriticism. How is this different from literary ecocriticism? And for what purpose do church leaders use literary figures in their discourse? These are the questions around which the discussion will be held. The paper will argue that there is an ecclesial ecocriticism endowed with its own special characteristics. Our hypothesis is that the use of personification and pathetic fallacy in the two popes' works on nature leads to two types of pathetic fallacies, namely, the humanization of nature and the naturalization of the human being, thus strengthening the conception of nature as God's creation and gift to humanity, and thus effectively pleading with the latter for nature's protection.
Introduction
Concern for the environment or nature is increasingly the preoccupation of governments, politicians, scientists, writers, critics, and religious leaders. Newspaper headlines of the last two centuries recurrently report "Oil spills, lead and asbestos poisoning, toxic waste contamination, extinction of species at an unprecedented rate, battles over public land use, protests over nuclear waste dumps, a growing hole in the ozone layer, predictions of global warming, acid rain, loss of topsoil, destruction of the tropical rain forest, controversy over the Spotted Owl in the Pacific Northwest, a wildfire in Yellowstone Park, medical syringes washing onto the shores of Atlantic beaches, boycotts on tuna, overtapped aquifers in the West, illegal dumping in the East, a nuclear reactor disaster in Chernobyl, new auto emissions standards, famines, droughts, floods, hurricanes, a United Nations special conference on environment and development, a U.S. President declaring the 1990s 'the decade of the environment,' and a world population that topped five billion" [1, p16].
This grim picture is timelessly relevant. Generations change but the pollution of nature remains identical or worsens. The solution to "the immediate problems of pollution, environmental decay and the depletion of natural resources," Pope Francis [2] says in Laudato Si (2015), a document on care for nature, demands a collaboration of many sciences and strategies, including "a distinctive way of looking at things, a way of thinking, policies, an educational program, a lifestyle and a spirituality which together generate resistance to the assault of the technocratic paradigm" [2, #111]. He is convinced that "to seek only a technical remedy to each environmental problem which comes up is to separate what is in reality interconnected and to mask the true and deepest problems of the global system" [2, #111]. His predecessor, Pope Benedict XVI, known as the 'Green Pope,' devoted a chapter to questions related to the environment in his Caritas in Veritate (2009). Prior to him, Pope John Paul II dealt with ecology in his encyclical letters Laborem Exercens (1981), Sollicitudo Rei Socialis (1987), and Centesimus Annus (1991). In recent years, many Catholic bishops and Episcopal conferences [see 12] have issued ecological exhortations, excerpts of which appear in Laudato Si. Rarely do ecocritics consider such works, simply because of their religious context. It is against this hostile background that Pope Francis says that it is not "reasonable and enlightened to dismiss certain writings simply because they arose in the context of religious belief" [2, #199]. But with which critical tools can we assess these particular works, as the tools of literary ecocriticism seem inadequate? This explains the necessity of theorizing an ecclesial theory of ecocriticism. It may help us get a deeper understanding of religious classics, or of literary texts written by religious people, than the general theory of ecocriticism does. On the other hand, it might also be useful when reading non-religious texts, as it valorizes ecological aspects in a way that secular ecocriticism doesn't.
In all of these ecologically minded religious works, literary devices, especially personification and pathetic fallacy, that is, "the attribution of human emotions to works of nature" [3, p297], play important roles. Using the theory of ecocriticism or green studies, which "is the study of the relationship between literature and the physical environment" [3, p18], this paper attempts to address these concerns, namely the inadequacy of secular ecocriticism for religious texts.
Ecocriticism "seeks to warn us of environmental threats emanating from governmental, industrial, commercial, and neo-colonial forces" [4, p4580]. Examining the pathetic fallacy has been a long-standing element of ecocriticism; indeed John Ruskin, who coined the term 'pathetic fallacy,' was, in Barry [4]'s words, "deeply eco-conscious, the first major British writer to record a sense that nature's powers of recovery might not be infinite, and that modern form of production and consumption have the potential to inflict fatal environmental damage" [p4667]. This secular ecocriticism and the discussion on the pathetic fallacy will help us not only define the contours and particularities of ecclesial ecocriticism applicable to Catholic writings but also show that the use of literary devices, especially that of pathetic fallacy, is the expression of the extension of the self into the surrounding nature.
Methodology
The discussion proceeds by textual analysis of ecclesial literature on climate change, informed by the literary theory of ecocriticism, in a comparative perspective. Such comparative analysis expands the theory of ecocriticism used for literary texts and argues for a new theory, ecclesial ecocriticism, that would be much more suitable for nonliterary texts, namely religious ones.
Results
General literary ecocriticism is concerned with literary writings. As literature traces its roots to the hermeneutics of religion [6, p78], there is not only a place but a necessity for a theory of ecclesial literary ecocriticism applicable to religion-related literature. The characteristic features of this new theory, upon examination and stylistic analysis of ecclesial literature on climate change, are the restoration of human nature within the other natures, the interdependence of natures, the attribution of the authorship of natures to God, and the drawing of an intrinsic relationship between nature's protection and one's belief in its origin. This theory of ecclesial ecocriticism then gives rise to two types of pathetic fallacies: one consisting in ascribing human traits to inanimate nature, and the other in the attribution of nature's potentials to human beings.
The Particularities of Ecclesial Ecocriticism as Theory
The American term 'Ecocriticism', or the British equivalent 'green studies', is a relatively new theory in literary criticism. It started in the USA in the late 1980s, and in the UK in the early 1990s. Lawrence Buell, Ursula K. Heise, and Karen Thornber write, "Ecocriticism started as an organized movement within literature studies in the early 1990s, a scholarly generation later than the first such movements within the environmental humanities (in history, ethics, and theology). Ecocriticism as a Library of Congress subject heading dates from 2002" [5, p433]. Yet, the ideas behind ecocriticism had been in circulation for much longer; three major nineteenth-century American writers-Ralph Waldo Emerson (1803-1882), Margaret Fuller (1810-1850), and Henry D. Thoreau (1817-1862)-could be seen as the founders of ecocriticism [4, p4542].
Ecclesial ecocriticism has been mainly developed by the last three popes (John Paul II, Benedict, and Francis), whose encyclical letters and exhortations, some of which are under consideration in this paper, are ecologically focused. These popes' writings are essentially essays in literary terms, and as such, may help us theorize an ecclesial ecocriticism as a method for analyzing Christian writings, because even though religion alongside other disciplines in the humanities has been 'greening' since the 1970s [1, p16], no formal ecologically-informed criticism has been developed as ecclesial ecocriticism, with its own principles and criteria.
Ecocriticism, as opposed to structuralist, post-structuralist, and historicist theories that usually perceive the external world as linguistically and socially constructed, calls this traditionally established perception into question. Defining ecocriticism as a literary theory, the critic Peter Barry [4] asserts that ecocriticism "repudiates the fundamental belief in 'constructedness' which is such an important aspect of literary theory" [p4600] in general. He explains that, "for the ecocritic, nature really exists, out there beyond ourselves, not needing to be ironized as a concept by enclosure within knowing inverted commas, but actually present as an entity which affects us, and which we can affect, perhaps fatally, if we mistreat it. Nature, then, isn't reducible to a concept which we conceive as part of our cultural practice (as we might conceive a deity, for instance, and project it out onto the universe)." [4, p4590] For example, social inequality can be 'naturalized', that is, disguised as natural or given. The ecocritic should not consider this kind of nature.
Sharing "the fundamental premise that human culture is connected to the physical world, affecting it and affected by it" [1, p19], one of the tasks of the ecocritic would be to demask such false natures and reveal the true one that is hidden under any culturally constructed nature. Hence, William Howarth defines the ecocritic from its Greek etymology (oikos=house and kritis=judge) as "a person who judges the merits and faults of writings that depict the effects of culture upon nature, with a view toward celebrating nature, berating its despoilers, and reversing their harm through political action" [6, p69].
The particularity of the Catholic Church's social teaching is that nature is not only "the setting for our life" (Benedict 48) but it includes human nature. While "Ecocriticism expands the notion of "the world" [from being synonymous with society] to include the entire ecosphere" [1, p19], the Church leaders expand nature to include human nature. Rather than considering the human being as separate from the world, the resources of which this latter uses, the Church deals with the individual as a component of it, that is, the individual in the environment. In nature we find human nature and many other natures that are different from each other. Pope Francis [2] lays a strong emphasis on the fact that "human beings too are creatures of this world" [#43] and that "nature cannot be regarded as something separate from ourselves or as a mere setting in which we live. We are part of nature, included in it and thus in constant interaction with it" [2, p139]. So, inclusion of human nature in nature can be considered as fundamental in ecclesial ecocriticism. This theory shows how the human might be seen as part of nature, without simply giving the human primacy.
The critic Evernden was critical of general ecocriticism for separating the human being from the environment. He says that "rather than thinking of an individual spaceman who must slurp up chunks of the world-'resources'-into his separate compartment, we must deal instead with the individual-in-environment, the individual as a component of, not something distinct from, the rest of the environment" [8, p18]. He further explains that it is only in the inclusion of human nature within the environment that one can account for literary metaphors and other literary devices such as personification and pathetic fallacy: "Once we engage in the extension of the boundary of the self into the 'environment' then of course we imbue it with life and can properly regard it as animate-it is animate because we are part of it. And following from this, all the metaphorical properties so favored by poets make perfect sense: the Pathetic Fallacy is a fallacy only to the ego clencher. Metaphoric language is an 'indicator' of 'place'-an indicator that the speaker has a place, feels part of a place." [8, p18].
The pathetic fallacy only seems wrong to those who want to see the ego, the individual, as completely autonomous and separate from the outside world. The Church leaders, in their writings, advocate a natural place of human nature within the environment, not a metaphorical one. The Church's teaching presents the human being as naturally part of nature. As an element among many others in the environment, and considering the interaction between natures, Pope Benedict XVI [7] could say that "the way humanity treats [other natures in] the environment influences the way it treats itself, and vice versa" [#51]. The protection of the environment and that of human life cannot be separated. Nature is cared for when man takes care of himself responsibly.
The second particularity of ecclesial ecocriticism is that of the interdependence of natures within the environment. Pope Francis [2] underlines many times in Laudato Si his conviction that "everything is interconnected" or interrelated [#16, 42, 70, 91, 117, 138, 240], and that "we are not disconnected from the rest of creatures, but joined in a splendid universal communion" [#220]. The Pope makes explicit here what was already said in the Catechism of the Catholic Church: "God wills the interdependence of creatures. The sun and the moon, the cedar and the little flower, the eagle and the sparrow: the spectacle of their countless diversities and inequalities tells us that no creature is self-sufficient. Creatures exist only in dependence on each other, to complete each other, in the service of each other" [2, #340]. Any human being lives in interaction with what Heidegger calls "the fourfold," that is, fellow mortals, the earth, the sky, and the divinities. Kate Rigby [9] explains that Heidegger's "fourfold comprises earth, understood as the land itself with its particular topography, waterways, and biotic community; sky, including the alternation of night and day, the rhythm of the seasons, and the vagaries of the weather; divinities, those emissaries or traces that yet remain of an absent God; and, last but not least, mortals, fellow humans" [p430]. Human life is interwoven with the earth, the sky, God, and other fellow humans. This interdependence of natures, according to the Creator's will, is one of the key components of ecclesial ecocriticism.
Interconnectivity within nature necessarily entails the presence of many different natures. Ecology then applies to the nature of any creature; hence, the ecology of the environment, of animals, of man and woman, and so on and so forth. It is in this perspective that Pope Benedict [7] speaks of an "ecology of man", grounded on the fact that "man too has a nature that he must respect and that he cannot manipulate at will" [#155].
A third constitutive element of ecclesial ecocriticism is the attribution of a common author, namely God, to all natures. It follows that if all natures have a common genitor, they are all related to each other. We can speak of 'universal fraternity' in this sense. This calls for fraternal love between all creatures. Observing that "fraternal love can only be gratuitous," Pope Francis [2] says that "this same gratuitousness inspires us to love and accept the wind, the sun and the clouds, even though we cannot control them" [#228]. Consequently, the idea of a common creator leads to a sense of interconnection that is outside of hierarchy or traditional power schema.
A fourth and final feature of the ecclesial theory of ecocriticism is linked to the theories around the origin of nature in general. In Caritas in Veritate, Pope Benedict XVI [7] makes the point that harm is done to nature as a result of the different conceptions people have about its origin. He includes the human being within nature, saying, "when nature, including the human being, is viewed as the result of mere chance or evolutionary determinism, our sense of responsibility wanes" [#48]. He establishes a relationship between belief in the origin of nature and one's protection of it. One's conception of the origin of nature determines one's handling of it. If one believes that nature is the result of evolution beginning with a big bang, one believes that whatever damage one inflicts will but contribute to the world's continuous evolution, whether it is positive or negative. If nature came to exist by chance, it may disappear by chance. Such conceptions lead to nature's destruction without somebody to blame. Pope Benedict XVI is convinced that the root of nature's destruction lies in the lack of faith in God as the author of creation, in "the notion that there are no indisputable truths to guide our lives, and hence human freedom is limitless. We have forgotten that man is not only a freedom which he creates for himself. Man does not create himself" [2, #6]. The antidote to this destructive vision of an anthropocentric world is one, Benedict argues, in which faith is embraced and the interconnectedness of humans and the non-human world is revealed.
At an audience Pope Benedict XVI gave to Priests of Brixen, Karl Golser, a professor of moral theology in Brixen, and also director of the institute for justice, peace, and the safeguarding of creation, asked the Pope the following ecology-related questions: "What can we do to bring a greater sense of responsibility toward creation into the life of the Christian communities? How can we arrive at seeing creation and redemption increasingly as a whole?" In his answer, the Pope explicitly and clearly asserted that "The brutal consumption of Creation begins where God is not, where matter is henceforth only material for us, where we ourselves are the ultimate demand, where the whole is merely our property and we consume it for ourselves alone … I think, therefore, that true and effective initiatives to prevent the waste and destruction of Creation can be implemented and developed, understood and lived, only where Creation is considered as beginning with God" [10, emphasis mine].
Protection of the environment is thus conditioned by one's belief in who or that which is at the beginning of creation. The misuse of creation begins when human beings no longer recognize any higher instance than themselves alone, and thus using everything simply as their property.
Ecclesial ecocriticism is founded on a communal father or creator of all natures. Ecclesial ecocritics believe that "the world came about as the result of a decision, not from chaos or chance, and this exalts it all the more." [2, #77]. They believe in the environment or nature as God's gift. Pope Benedict XVI says that "in nature, the believer recognizes the wonderful result of God's creative activity, which we may use responsibly to satisfy our legitimate needs, material or otherwise, while respecting the intrinsic balance of creation. If this vision is lost, we end up either considering nature an untouchable taboo or, on the contrary, abusing it." [7, #48]. The way one thinks or believes dictates one's behavior. If nature comes from God and is given to humanity, then in the way we use it we have a responsibility towards future generations and towards God. Expatiating on this responsibility, the Pope [7] writes: "Human beings legitimately exercise a responsible stewardship over nature, in order to protect it, to enjoy its fruits and to cultivate it in new ways, with the assistance of advanced technologies, so that it can worthily accommodate and feed the world's population. (…) We must recognize our grave duty to hand the earth on to future generations in such a condition that they too can worthily inhabit it and continue to cultivate it. This means being committed to making joint decisions 'after pondering responsibly the road to be taken, decisions aimed at strengthening that covenant between human beings and the environment, which should mirror the creative love of God, from whom we come and towards whom we are journeying'(120)" [7, #50, italics are the author's].
Faith in God as the Creator of nature is then the fourth characteristic to be taken into account in ecclesial ecocriticism. Nature, as pointed out earlier, is God's gift to his children. The entire human family must handle it with care and responsibility, finding, through hard work and creativity, the resources to live with dignity, through the help of nature itself.
This aspect is so important that, even though Pope Francis [2] is "well aware that in the areas of politics and philosophy there are those who firmly reject the idea of a Creator, or consider it irrelevant, and consequently dismiss as irrational the rich contribution which religions can make towards an integral ecology" [#62], addressing his document not only to members of the Church but "to all people" [#3], he deems it necessary to "include a chapter dealing with the convictions of believers" [#62]. Further observing that "the majority of people living on our planet profess to be believers" Pope Francis [2] says that "this should spur religions to dialogue among themselves for the sake of protecting nature" [#201]. It is also to this effect that literary devices such as pathetic fallacies are used.
Pathetic Fallacy
One of the above-mentioned characteristics of the theory of ecclesial ecocriticism indicates that it includes human nature within nature in its analysis. The extension of humanity into the 'environment' makes possible an interaction whereby human features are attributed to nature and natural realities to human beings. Metaphoric language such as pathetic fallacy and personification then indicates that human beings have a place in the universe. Indeed, the "motive for metaphor may be as Frye claims, 'a desire to associate, and finally to identify, the human mind with what goes on outside of it' " [8, p19].
There are two types of pathetic fallacies: one consists in the "ascription of human traits to inanimate nature" [11, p122] or any nature in the environment; the other in the attribution of nature's potentials to human beings. In other words, it is a matter of humanization of nature, on the one hand, and of the naturalization of human beings on the other hand. General literary theory of ecocriticism usually minds the first while that of ecclesial ecocriticism takes both into account.
As a follow-up, this excerpt from ecclesial literature on climate change displays a mind-arresting pathetic fallacy of the first category whereby Pope Benedict [7] attributes some human qualities to nature: "Nature expresses a design of love and truth. It is prior to us, and it has been given to us by God as the setting for our life. Nature speaks to us of the Creator.
[…] It is a wondrous work of the Creator containing a 'grammar' which sets forth ends and criteria for its wise use, not its reckless exploitation" [#48].
Nature is endowed with human senses: it speaks, loves, has a grammar, and a lot of other things, similar to human beings. It is not just matter for us to shape at will. It has a dignity of its own, which we must respect and submit to its directives. Its language should be listened to and obeyed.
In Laudato Si, Pope Francis [2] is more pathetic when he says that "the violence present in our hearts, wounded by sin, is also reflected in the symptoms of sickness evident in the soil, in the water, in the air and in all forms of life." [#2] All elements in the environment are in direct correlation and interaction with one another to the extent that one of them, namely human nature, causes the others to suffer. The earth is ailing as any human being. She is referred to as a mother, "groaning in travail" [Rom 8:22].
Owing to this attribution of a common father to all natures and the belief in creation in the explanation of the origin of the world, all other elements in mother earth become human beings' brothers and sisters. Thus, we have "brother sun, sister moon, brother river" [2, #92]. This fraternity is so strong that morality comes into account in human beings' relationships or interactions with natures in the environment. For instance, Pope Francis [2] says that, "for human beings to contaminate the earth's waters, its land, its air, and its life -these are sins" [#8]. The Pope sees climate change as a moral issue of burning importance that puts creation into danger and places more burdens on poor people, and thus compromises the common good of all. He invites human beings, within the scope of universal brotherhood, to "feel the desertification of the soil almost as a physical ailment, and the extinction of a species as a painful disfigurement" [2, #89]. If the suffering of other elements in nature is reflected in human nature's feelings, then the interconnection between natures in the environment (second characteristic of ecclesial ecocriticism) is reciprocal. A misbehavior of one affects all others dangerously. On the other hand, when all behave, all live in perfect harmony together.
Pathetic fallacy of the second category, namely the naturalization of human beings (which is the first characteristic of ecclesial ecocriticism), appears in ecclesial literature on climate change. In Laudato Si, for example, Pope Francis [2] brings the naturalization of human nature into prominent salience when he urges us to recognize that "the way natural ecosystems work is exemplary: plants synthesize nutrients which feed herbivores; these in turn become food for carnivores, which produce significant quantities of organic waste which give rise to new generations of plants" [#22]. The natural ecosystem should serve as an example for men and women. Our industrial system should try to draw inspiration from natural ecosystems by adopting a circular model at the end of its cycle of production and consumption, developing the capacity to absorb and re-use waste and by-products. This emerges as the new type of pathetic fallacy, i.e., the ascribing of nature's traits to human beings, that the theory of ecclesial ecocriticism sets forth.
The Church herself is an imitator of nature. For instance, with regard to family planning or population control, the Catholic Church advocates a natural method, as if to say "imitate nature, do as nature does," rejecting abortion, artificial contraception, and sterilization. The man-made methods can but lead to nature's destruction or the disturbance of its natural ecosystem, with glaring offshoots such as the ageing population in some European countries addicted to such methods, or the outnumbering of boys over girls in China, for example. Such artificial and widespread methods not only harm the environment in their processing, but also bring human nature close to chaos, mirroring the extinction of some species of animals or plants. The way out is to foster intimacy with nature, in other words, to become natural.
The two types of pathetic fallacies are interconnected, especially in the ecclesial literature on climate change. One can see this connection in the fact that the Catholic Church's ecocriticism demands that the reader be capable of hearing "both the cry of the earth and the cry of the poor" [2, #49], as they echo or call for one another. In fact, ecology applies to both the earth and man alike. Traditional ecological approaches and sociological approaches go hand in hand in ecclesial ecocriticism. Pope Francis [2] claims that when human beings fail to consider "the worth of a poor person, a human embryo, a person with disabilities (to offer just a few examples), it becomes difficult to hear the cry of nature itself; everything is connected" [#117]. Ecclesial ecocriticism is holistic.
Conclusions
Insofar as general literary "ecocriticism seeks to redirect humanistic ideology, not spurning the natural sciences but using their ideas to sustain viable readings, [as both] literature and science trace their roots to the hermeneutics of religion and law" [6, p78], there is a place for the theory of ecclesial literary ecocriticism. One is applicable to literary writings, and the other to religion-related literature. The specificities of the latter, upon examination of ecclesial literature on climate change, are the restoration of human nature within the other natures, the consideration of the interdependence of natures, the attribution of the authorship of natures to God, and the drawing of attention to the fact that nature's protection is intrinsically dependent on one's belief in its origin. The ecclesial theory of ecocriticism can be seen as one of the comprehensive solutions to nature's problems. In Pope Francis [2]'s analysis, as "we are faced not with two separate crises, one environmental and the other social, but rather with one complex crisis which is both social and environmental" [#139], only comprehensive solutions that take into account the interactions within social systems and natural systems themselves are salutary options. Therefore, this theory of ecclesial ecocriticism illuminates the Bible and religious classics, thus forming believers' consciousness to take care of the earth, our common home.
Latent functional diversity may accelerate microbial community responses to temperature fluctuations
How complex microbial communities respond to climatic fluctuations remains an open question. Due to their relatively short generation times and high functional diversity, microbial populations harbor great potential to respond as a community through a combination of strain-level phenotypic plasticity, adaptation, and species sorting. However, the relative importance of these mechanisms remains unclear. We conducted a laboratory experiment to investigate the degree to which bacterial communities can respond to changes in environmental temperature through a combination of phenotypic plasticity and species sorting alone. We grew replicate soil communities from a single location at six temperatures between 4°C and 50°C. We found that phylogenetically and functionally distinct communities emerge at each of these temperatures, with K-strategist taxa favored under cooler conditions and r-strategist taxa under warmer conditions. We show that this dynamic emergence of distinct communities across a wide range of temperatures (in essence, community-level adaptation) is driven by the resuscitation of latent functional diversity: the parent community harbors multiple strains pre-adapted to different temperatures that are able to ‘switch on’ at their preferred temperature without immigration or adaptation. Our findings suggest that microbial community function in nature is likely to respond rapidly to climatic temperature fluctuations through shifts in species composition by resuscitation of latent functional diversity.
eLife digest

Most ecosystems on Earth rely on dynamic communities of microorganisms which help to cycle nutrients in the environment. There is increasing concern that climate change may have a profound impact on these complex networks formed of large numbers of microbial species linked by intricate biochemical relationships.

Any species within a microbial community can acclimate to new temperatures by quickly tweaking their biological processes, for example by activating genes that are more suited to warmer conditions. Over time, a species may acclimate or adapt to new conditions. However, the community as a whole can also respond to these changes, and often much faster, by simply altering the abundance or presence of its members through a process known as species sorting. It remains unclear exactly how acclimation, adaptation and species sorting each contribute to the community's response to a temperature shift - an increasingly common scenario under global climate change.

To address this question, Smith et al. investigated how species sorting and acclimation may help whole soil bacterial communities to cope with lasting changes in temperature. To do so, soil samples from a single field site (and therefore featuring the same microbial community) were incubated for four weeks under six different temperatures. Genetic analyses revealed that, at the end of the experiments, distinct communities specific to a given temperature had emerged. They all differed in species composition and the types of biological functions they could perform.

Further experiments showed that each community had been taken over by strains of bacteria which grew best at the new temperature that they had been exposed to, including extreme warming scenarios never seen in their native environment. This suggests that these organisms were already present in the original community. They had persisted even under temperatures which were not optimal for them, acting as a slumbering ('latent') 'reservoir' of traits and functional abilities that allowed species sorting to produce distinct and functionally capable communities in each novel thermal environment. This suggests that species sorting could help bacterial communities to cope with dramatic changes in their thermal environment.

Smith et al.'s findings suggest that bacterial communities can cope with warming environments much better than has been previously thought. In the future, this work may help researchers to better predict how climate change could impact microbial community structure and functioning, and most crucially their contributions to the global carbon cycle.

Introduction
Microbes are drivers of key ecosystem processes. They are tightly linked to the wider ecosystem as pathogens, mutualists, and food sources for higher trophic levels, and also play a central role in ecosystem-level nutrient cycling, and therefore, ultimately in global biogeochemical cycles. Temperature has a pervasive influence on microbial communities because of its direct impact on microbial physiology and fitness (Oliverio et al., 2017;García et al., 2018;Smith et al., 2019). There is therefore great interest in understanding how temperature fluctuations impact microbial community dynamics and how those impacts affect the wider ecosystem (Bardgett et al., 2008).
Temperature varies at practically all biologically relevant timescales, from seconds (e.g., sun/shade), through daily and seasonal fluctuations, to longer-term changes, including anthropogenic climate warming and fluctuations over geological timescales. Whole microbial communities can respond to temperature changes over time and space through phenotypic (especially, physiological) plasticity (henceforth, 'acclimation'), as well as genetic adaptation in their component populations (Bennett et al., 1990; Kishimoto et al., 2010; Blaby et al., 2012; Kontopoulos et al., 2020a). Microbial thermal acclimation can occur relatively rapidly (timescales of minutes to days) through processes such as activation and up- or down-regulation of particular genes and alteration of the fatty acids used in building cell walls (Suutari and Laakso, 1994). Adaptation is a necessarily slower process (timescales of weeks or longer) occurring either through selection on standing genetic variation in the population or on variation arising through recombination and mutation (Bennett et al., 1990; Padfield et al., 2016; Barton et al., 2020).
In addition, a third key mechanism through which microbial communities can respond to changing temperatures is species sorting (Leibold et al., 2004; Wu et al., 2018): changes in community composition through species-level selection, where taxa maladapted to a new temperature are replaced by those that are pre-adapted to it. This can happen either relatively rapidly through the resuscitation or suppression of taxa that are already present (Lennon and Jones, 2011; Wisnoski and Lennon, 2021), or more slowly through immigration-extinction dynamics driven by dispersal from the regional species pool (Langenheder and Székely, 2011; Wu et al., 2018). Resuscitation may be an important mechanism driving species sorting in microbial communities in particular because many microbial taxa have the capacity to form environment-resistant spores when conditions are unfavorable, and then rapidly activate metabolic pathways and resuscitate in favorable conditions. This effectively widens their thermal niche to allow persistence in the face of temperature change (Lennon and Jones, 2011; Wisnoski and Lennon, 2021).

In order for rapid resuscitation of dormant taxa to allow species sorting to drive community-level adaptation, there must be a wide source pool of species to select from. Indeed, sequencing studies have revealed the presence of thousands of distinct microbial taxa in small environmental samples, most occurring at low abundance (Lynch and Neufeld, 2015; Sogin et al., 2006; Thompson et al., 2017). There is also strong evidence that bacteria are often found well outside of their thermal niche. For example, thermophilic taxa are often found in cold ocean beds and cool soils (Marchant et al., 2008; Hubert et al., 2009; Zeigler, 2014). Thus, a significant reservoir of latent microbial functional diversity may be commonly present for species sorting to act upon (Lennon and Jones, 2011; Wisnoski and Lennon, 2021).
Understanding the relative importance of acclimation, adaptation, and species sorting in the assembly and turnover (succession) of microbial communities is key to determining the rate at which they can respond to different regimes of temperature fluctuations. For example, a combination of acclimation and species sorting through resuscitation would enable rapid responses to sudden temperature changes, relative to adaptation. A number of past studies have investigated responses of microbial community composition and functioning to temperature changes, showing that composition can respond rapidly to warming (Allison and Martiny, 2008;Aydogan et al., 2018), often correlated with responses of ecosystem functioning (Karhu et al., 2014;Melillo et al., 2017;Yu et al., 2018). However, a mechanistic basis of these community-level responses remains elusive, both in terms of how individual taxa respond to changing temperatures in a community context and the relative importance of acclimation, adaptation, and species sorting. The community context of the responses of individual microbial populations is important because interactions between strains can constrain or accelerate acclimation as well as adaptive evolution (Scheuerl et al., 2020). Also, while the importance of species sorting in microbial communities per se has been studied ( Van der Gucht et al., 2007;Langenheder and Székely, 2011;Székely and Langenheder, 2014), work on this issue in the context of environmental temperature is practically nonexistent.
A further consideration is whether differing temperature conditions, such as the frequency and magnitude of temperature fluctuations, may influence the life history strategies of the taxa in the community (Gilchrist, 1995;Basan et al., 2020), which will in turn alter the relative importance of sorting, acclimation, and adaption. In order to identify the life history strategies of bacteria, we must quantify their phenotypic traits, such as growth rates and yield (Malik et al., 2020). Quantifying these traits can allow us to identify growth specialists ( r -strategists) and carrying-capacity specialists ( K -strategists) (Marshall, 1986), and thus test whether these strategies are differentially favored in different thermal environments. By identifying life history strategies, we can consider the ecosystem implications of any adaptation-, acclimation-, or sorting-driven changes in microbial communities (Malik et al., 2020).
Here, we investigate whether species sorting and latent functional diversity alone can influence the response of soil bacterial communities to changes in environmental temperature. To this end, we subject replicate communities, shielded from immigration, to a wide range of temperatures in the laboratory. In order to understand the mechanistic basis of observed community-level changes, we analyze the phylogenetic structure and functional traits of the resulting component taxa.
Materials and methods
We performed a species-sorting experiment to investigate how microbial communities respond to shifts in temperature (Figure 1). After each community incubation at a given temperature, we estimated the thermal optimum (Topt) for every isolated strain by measuring the thermal performance curve (TPC) of its maximal growth rate across several temperatures (Figure 1D). This allowed us to determine how strain-level thermal preferences and niche widths vary with community growth (isolation) temperature, and the presence of taxa pre-adapted to the new temperature. We also performed a phylogenetic analysis of the overall assemblage to identify whether deep evolutionary differences predict which taxa (and their associated traits) are favored by sorting at different temperatures. To quantify strain-level functional traits, we measured their available cellular metabolic energy (ATP), respiration rates, and biomass yield at population steady state (carrying capacity), which allowed us to identify r- vs. K-strategists as well as trade-offs between different strategies.

[Figure 1 caption, continued: (C) Soil washes from each core plated out onto agar and grown at both the sorting temperature and 22°C (standard temperature) to allow further species sorting and facilitate isolation (next step). (D) The six most abundant (morphologically different) colonies from each plate were picked, streaked, and isolated, and their physiological and life history traits measured. The curves represent each strain's unique unimodal response of growth rate to temperature.]
Species-sorting experiment
Soil cores were taken from a single site in Nash's Field (Silwood Park, Berkshire, UK, the site of a long-term field experiment [Macdonald et al., 2015]) in June 2016 (Figure 1A). Six cores were taken from the top 10 cm of soil, using a 1.5-cm-diameter sterile corer. Ambient soil temperature at the time of sampling was 19.4°C. The cores were maintained at different temperatures in the laboratory (4, 10, 21, 30, 40, and 50°C) for 4 weeks to allow species sorting to occur at those temperatures (Figure 1B). The soil was rehydrated periodically with sterile, deionized water during incubation. During this period, in each microcosm (incubated soil core), we expected some taxa would go extinct if the temperature was outside their thermal niche, and that survivors would acclimate to the new local thermal conditions. We also expected that the 4-week incubation period would be sufficient time for changes to species interactions due to changes in abundance or traits, and therefore that interaction-driven sorting would occur in addition to the immediate extinctions and acclimation. Because bacteria display higher growth rates at warmer temperatures (Smith et al., 2019), the different incubation conditions could result in differential generational turnover of species across the given timescale. However, we did not supplement the soil samples with any additional nutrients and thus expected any growth of bacteria during this time to be heavily restricted due to nutrient limitation. Therefore, environmental exclusion (elimination of taxa maladapted to the temperature conditions) was expected to be the dominant process affecting the bacterial taxa during this stage of the sorting experiment, rather than changes in abundances due to population growth. We then isolated bacterial strains by washing the soil with PBS, plating the soil wash onto R2 agar, and incubating the plates at both their 4-week incubation temperature treatments ('sorting temperature') and at 22°C ('standard temperature').
The sorting temperature allowed us to determine whether strains in each community tended to have thermal optima matching the experimental temperatures, while the standard temperature allowed us to determine whether the 4-week incubation resulted in a loss of taxa that were poorly adapted to 22°C. The appearance of strains with thermal optima matching the standard temperature would indicate incomplete species sorting, because the 4-week treatment at temperatures higher or lower than 22°C had not eliminated (or at least suppressed) them.
The plates were incubated until bacterial colonies formed, of which we isolated a single colony from each of the six most abundant morphologically distinct colony types on each plate (Figure 1C). Additional species sorting likely occurred during this plating-based isolation because strains with the highest growth rates at each temperature would be the first to form visible colonies and be selected. The time frame for colony appearance on the agar plates differed between temperature treatments, ranging from ∼10 days at 4°C to ∼1.5 days at 50°C. Morphologically distinct colonies were isolated from each of the six sorting-temperature and six standard-temperature plates on R2 agar by streak-plating, before being frozen as glycerol stocks (Figure 1), which were later revived for trait measurements (see below). In total, 74 strains were isolated in this way.
Taxonomic identification
16S rDNA sequences were used to identify the isolates. Raw sequences were first trimmed using Geneious 10.2.2 (https://www.geneious.com), and BLAST searches were then used to assign taxonomy to each trimmed sequence at the genus level. GenBank accession numbers of sequences are provided in Table 2.
Quantifying physiological and life history traits

Growth, respiration, and ATP content

We measured growth rate and respiration rate simultaneously across a range of temperatures for each isolate to construct its acute TPCs for these two traits. We henceforth denote the maximum growth rate across the temperature range by µmax, and the temperature at which this growth rate maximum occurs as Topt (optimal growth temperature or thermal optimum). The ATP content of the entire cell culture was also measured at the start and end of the growth assay. Strains were revived from glycerol stocks into fresh LB broth and incubated to carrying capacity at the temperature of the subsequent experiment. This growth to carrying capacity was an acclimation period, which typically took between 72 hr (warmest temperatures) and 500 hr (coldest temperature). Biomass abundance was determined by optical density measurements at 600 nm wavelength (OD600). Prior to each growth-respiration assay, the strains were diluted 1:100 in LB, pushing them into a short lag phase before exponential growth started again (also tracked by OD600 measurements). The exponentially growing cultures were subsequently centrifuged at 8000 rpm for 5 min to pellet the cells, which were then resuspended in fresh LB to obtain 400 µl of culture at a final OD600 of ∼0.2-0.3. This yielded cells primed for immediate exponential growth without a lag phase. These cultures were serially diluted in LB (50% dilutions) three times, producing a range of starting densities of growing cells (four biological replicates per strain/temperature combination). 100 µl subcultures of each replicate population were taken and OD600 was tracked in a Synergy HT microplate reader (BioTek Instruments, USA) to ensure that cells were indeed in exponential growth. Initial ATP measurements were made using the BacTiter-Glo assay (see below for details) and cell counts were taken using a BD Accuri C6 flow cytometer (BD Biosciences, USA). Cells were then incubated with a MicroResp plate to capture cumulative respiration (see below for details of the MicroResp system) at the experimental temperature and allowed to continue growing for a short period of time (typically 3-4 hr). After growth, the MicroResp plate was read, and final cell count and ATP measurements taken.
We estimated average cell volumes and calculated the cellular carbon per cell from the flow cytometry cell diameter measurements using the relationship between cell volume and carbon content from Romanova and Sazhin, 2010. Multiplying the resulting carbon content per cell by the cell counts gives an estimate of the carbon biomass of the culture at the starting and ending points.
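A hedged sketch of this bookkeeping is given below. The power-law coefficients converting cell volume to carbon content are placeholders standing in for the Romanova and Sazhin (2010) relationship, whose exact constants are not quoted here; the cell counts and diameters are likewise hypothetical.

```python
import numpy as np

def carbon_per_cell_fg(diameter_um, a=0.4, b=1.0):
    """Carbon content [fg C] of a spherical cell of given diameter [um].
    a and b are placeholder power-law coefficients, not the published constants."""
    volume = (np.pi / 6.0) * diameter_um**3   # sphere volume [um^3]
    return a * volume**b

def culture_biomass_fg(cell_count, mean_diameter_um):
    """Total carbon biomass of a culture [fg C] from a flow cytometry count."""
    return cell_count * carbon_per_cell_fg(mean_diameter_um)

C0 = culture_biomass_fg(5e5, 1.0)     # hypothetical initial sample
Ctot = culture_biomass_fg(4e6, 1.0)   # hypothetical final sample
```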
The difference between the initial biomass and the biomass at the end of the experiment gives the total carbon sequestered through growth. Given an initial biomass (C_0) that grows over time (t) to reach a final biomass (C_tot), assuming the population is in exponential growth, the mass-specific growth rate (µ) is given by

µ = ln(C_tot / C_0) / t.

Respiration rates of cultures were measured during growth using the MicroResp system (Campbell et al., 2003). This is a colorimetric assay initially developed to measure CO2 production from soil samples, which has since been used to measure respiration of bacterial cultures (Lawrence et al., 2012; Foster and Bell, 2012; Rivett et al., 2017). We calculate the biomass-specific respiration rate (R) using an equation that accounts for changes in biomass of the growing cultures over time (Smith et al., 2021):

R = µ · R_tot / (C_0 (e^{µt} − 1)).

Here, R_tot is the total mass of carbon produced according to the MicroResp measurements, C_0 is the initial population biomass, µ is the previously calculated growth rate, and t is the experiment duration. ATP content of the cultures was measured using the Promega BacTiter-Glo reagent, which produces luminescence in the presence of ATP, proportional to the concentration of ATP. 50 µl of culture (diluted 1:100) was incubated with 25 µl of reagent. Luminescence was measured over a 6 min period to allow the reaction to develop completely, with measurements of luminescence recorded at the 0, 2, 4, and 6 min timepoints. The highest relative light unit (RLU) measurement for each culture was used to calculate the quantity of ATP, using log(nM ATP) = 1.21 · log(RLU) − 4.69, derived from a calibration curve. This was then normalized by the flow-cytometry-derived biomass to give ATP content per unit biomass.
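As a concrete illustration of these calculations, here is a minimal Python sketch (the study's analyses were done in R; the function names and the numbers in the example are invented):

```python
# Sketch (not the authors' code): mass-specific growth rate mu, biomass-specific
# respiration rate R, and ATP content from the measurements described above.
import numpy as np

def growth_rate(C0, Ctot, t):
    """Mass-specific exponential growth rate: mu = ln(Ctot / C0) / t."""
    return np.log(Ctot / C0) / t

def respiration_rate(Rtot, C0, mu, t):
    """Biomass-specific respiration rate, correcting for exponential growth:
    R = mu * Rtot / (C0 * (exp(mu * t) - 1)) (cf. Smith et al., 2021)."""
    return mu * Rtot / (C0 * (np.exp(mu * t) - 1.0))

def atp_nM(rlu):
    """ATP concentration from the calibration log(nM ATP) = 1.21 log(RLU) - 4.69."""
    return 10 ** (1.21 * np.log10(rlu) - 4.69)

# Invented example: 0.5 ug C grows to 1.2 ug C over 3.5 hr, with 0.9 ug C
# respired (MicroResp) and a peak luminescence reading of 2e5 RLU.
mu = growth_rate(0.5, 1.2, 3.5)           # per hour
R = respiration_rate(0.9, 0.5, mu, 3.5)   # per hour, per unit biomass carbon
print(mu, R, atp_nM(2e5))
```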
Thermal performance curves
To quantify TPCs of individual isolates, we fitted the Sharpe-Schoolfield model with the temperature of peak performance (T_pk) as an explicit parameter (Schoolfield et al., 1981; Kontopoulos et al., 2020b) to the experimentally derived temperature-dependent growth rates and respiration rates of each isolate:

B(T) = B_0 · exp(−(E/k)(1/T − 1/T_ref)) / [1 + (E/(E_D − E)) · exp((E_D/k)(1/T_pk − 1/T))].   (Equation 1)

Here, T is the temperature in Kelvin (K), B is the biological rate (in this case, either growth rate, µ, or respiration rate, R), B_0 is the temperature-independent metabolic rate constant approximated at some (low) reference temperature T_ref, E is the activation energy in electron volts (eV) (a measure of 'thermal sensitivity'), k is the Boltzmann constant (8.617 × 10^−5 eV K^−1), T_pk is the temperature where the rate peaks, and E_D is the deactivation energy, which determines the rate of decline in the biological rate beyond T_pk. We then calculated the peak performance (i.e., R_max or µ_max) by solving Equation 1 for T = T_pk. This model was fitted to each dataset using a standard nonlinear least-squares procedure (Smith et al., 2021). The T_pk for growth rate was taken as the optimum growth temperature (i.e., T_opt) of each isolate. The operational niche width was then calculated as the difference between T_opt and the temperature below this value where µ_max (B(T) in Equation 1) reached 50% of its maximum (i.e., µ_max at T_opt). This measure of an organism's thermal niche width, relevant to typically experienced temperatures (Pawar et al., 2016; Kontopoulos et al., 2020a), was used to quantify the degree to which an isolate is a thermal generalist or specialist.
In most cases, Topt was derived directly from the Sharpe-Schoolfield flow cytometry growth rate fits. Four strains of Streptomyces were unsuitable for standard flow cytometry methods due to their formation of mycelial pellets (van Veluw et al., 2012). For these strains, growth rates derived from optical density measurements were used to estimate Topt instead.
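A sketch of the model fit in Python (the study used nonlinear least squares in R; the data points below are invented for illustration):

```python
# Sketch: fitting the Sharpe-Schoolfield model (Equation 1) to one isolate's
# growth rates, then deriving T_opt, mu_max, and the operational niche width.
import numpy as np
from scipy.optimize import curve_fit, brentq

k = 8.617e-5      # Boltzmann constant, eV / K
T_REF = 273.15    # reference temperature, K

def sharpe_schoolfield(T, B0, E, E_D, T_pk):
    """Equation 1: B(T) with T in Kelvin and E, E_D in eV."""
    return (B0 * np.exp(-E / k * (1.0 / T - 1.0 / T_REF))
            / (1.0 + (E / (E_D - E)) * np.exp(E_D / k * (1.0 / T_pk - 1.0 / T))))

temps_C = np.array([4.0, 10.0, 21.0, 30.0, 37.0, 40.0])    # invented data
rates = np.array([0.02, 0.06, 0.21, 0.45, 0.52, 0.30])     # growth rates, per hr
T = temps_C + 273.15

popt, _ = curve_fit(sharpe_schoolfield, T, rates,
                    p0=[0.05, 0.6, 3.0, 310.0], maxfev=20000)
T_pk = popt[3]
mu_max = sharpe_schoolfield(T_pk, *popt)   # peak growth rate, at T_opt = T_pk

# Operational niche width: T_opt minus the temperature below T_opt at which
# the fitted curve falls to 50% of mu_max.
f = lambda t: sharpe_schoolfield(t, *popt) - 0.5 * mu_max
T_half = brentq(f, T.min() - 20.0, T_pk)
print(T_pk - 273.15, mu_max, T_pk - T_half)
```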
Trade-offs between traits
To understand the trade-offs and collinearities between different life history and physiological traits, we performed a principal components analysis (PCA), with optimum growth temperature (T_opt), niche width, peak growth rate (µ_max), peak respiration rate (R_max), mean cellular ATP content (log-transformed), and carrying capacity (OD600) as input variables (scaled to have mean = 0 and SD = 1).
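The PCA itself amounts to the following (a numpy-only sketch for illustration; the study used R, and the trait matrix here is a random placeholder):

```python
# Sketch: PCA on the six scaled trait variables via singular value decomposition.
import numpy as np

rng = np.random.default_rng(0)
traits = rng.normal(size=(54, 6))   # placeholder for the (isolates x 6) traits

X = (traits - traits.mean(axis=0)) / traits.std(axis=0)   # scale: mean 0, SD 1
U, S, Vt = np.linalg.svd(X, full_matrices=False)

scores = U * S                    # isolate coordinates on the principal components
loadings = Vt.T                   # trait loadings on each component
explained = S**2 / np.sum(S**2)   # proportion of variance per component
print(explained[:2].sum())        # cf. 60.1% for PC1 + PC2 reported below
```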
All rate calculations, model fitting, and analyses were performed in R.
Comparison to alternative datasets
We additionally investigated phylum-level life history strategy differences in two previously collated meta-analysis datasets as a comparison to our findings. DeLong et al., 2010 compiled data on both active (growth phase) and passive (stationary phase) metabolic rates, as well as growth rates, across a range of bacteria (mainly from Makarieva et al., 2005), which were corrected to 20°C using an activation energy of 0.61 eV. We also investigated differences in the growth rates of bacteria compiled in Smith et al., 2019.
Phylogenetic trait mapping
We used 16S sequences to build a phylogeny in order to investigate the evolution of thermal performance across the isolated bacterial taxa. Sequences were aligned to the SILVA 16S reference database using the SILVA Incremental Aligner (SINA) (Pruesse et al., 2012). From this alignment, 100 trees were inferred in RAxML (v8.1.1) using a GTR-gamma nucleotide substitution model. The tree with the highest log-likelihood was taken and time-calibrated using PLL-DPPDiv, which estimates divergence times using a Dirichlet Process Prior (Heath et al., 2012). DPPDiv requires a rooted phylogeny with the nodes in the correct order; however, RAxML by default produces an unrooted tree. Therefore, we included an archaeal sequence in our 16S alignment (Methanospirillum hungatei, RefSeq accession NR_074177) and used this as an outgroup in our RAxML run. This gives a tree rooted at the outgroup, which we checked for correct topology using TimeTree (Kumar et al., 2017) as a reference. We derived calibration nodes from TimeTree (Kumar et al., 2017) and performed two DPPDiv runs for 1 million generations each, sampling from the posterior distribution every 100 generations. We ensured that the two runs had converged by verifying that each parameter had an effective sample size above 200 and a potential scale reduction factor below 1.1. We summarized the output of DPPDiv into a single tree using the TreeAnnotator program implemented in BEAST (Bouckaert et al., 2019). We then dropped the outgroup tip to give a time-calibrated phylogeny of our bacterial 16S sequences only, which was used for further analysis. Details of calibration nodes used are given in Table 1.
To test whether there was evidence of evolution of Topt , we calculated Pagel's λ (Pagel, 1999), which quantifies the strength of phylogenetic signal -the degree to which shared evolutionary history has driven trait distributions at the tips of a phylogenetic tree. λ = 0 implies no phylogenetic signal, that is, the signal expected if variation in trait values is independent of the phylogeny. λ = 1 implies strong phylogenetic signal, that is, that the trait has evolved gradually along the phylogenetic tree (approximated as Brownian motion [BM]). Intermediate values ( 0 < λ < 1 ) imply deviation from the BM model, and may be observed for different reasons, such as constrained trait evolution due to stabilizing selection, and variation in evolutionary rate over time (e.g., due to episodes of rapid niche adaptation). Pagel's λ requires that the trait be normally distributed. However, Topt in our dataset has a right-skewed distribution. Therefore, to test phylogenetic heritability we calculated λ for log(Topt) .
Blomberg's K is another metric that is also widely used to infer phylogenetic heritability (Blomberg et al., 2003;Münkemüller et al., 2012). Blomberg's K calculates the phylogenetic signal strength as the ratio of the mean squared error of the tip data and the mean squared error of the variance-covariance matrix of the given phylogeny under the assumption of BM (Münkemüller et al., 2012). K = 1 indicates taxa resembling each other as closely as would be expected under a BM model, K < 1 indicates less phylogenetic signal than expected under BM, and K > 1 indicates more phylogenetic signal than expected and thus a substantial degree of trait conservatism (Blomberg et al., 2003). Under a BM model of trait evolution, Pagel's λ is expected to perform better than K , which may itself be better utilized for simulation studies (Münkemüller et al., 2012). Previous work suggests that T pk is likely to evolve in a BM manner in prokaryotes (Kontopoulos et al., 2020a), making λ a more appropriate metric for these data than K . Furthermore, λ is potentially more robust to incompletely resolved phylogenies and is therefore likely to provide a better measure than K for ecological data in incomplete phylogenies (Molina-Venegas and Rodríguez, 2017). Therefore, we use λ as likely the more appropriate metric for our data; however, for the sake of completeness, we also test for phylogenetic heritability using K .
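For concreteness, a minimal sketch of the maximum-likelihood estimate of λ with a likelihood ratio test against λ = 0 (Python/numpy for illustration only; the study used ape and phytools in R, and C here stands for the phylogenetic variance-covariance matrix of the tips):

```python
# Sketch: ML estimation of Pagel's lambda for a trait y (e.g., log T_opt),
# given the tips' phylogenetic variance-covariance matrix C.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def lambda_loglik(lam, C, y):
    """Log-likelihood of a Brownian model with off-diagonal covariances
    scaled by lambda (diagonal left unchanged); mean and rate profiled out."""
    Cl = lam * C + (1.0 - lam) * np.diag(np.diag(C))
    n = len(y)
    Ci = np.linalg.inv(Cl)
    one = np.ones(n)
    mu = (one @ Ci @ y) / (one @ Ci @ one)   # ML estimate of the root state
    r = y - mu
    s2 = (r @ Ci @ r) / n                    # ML estimate of the BM rate
    _, logdet = np.linalg.slogdet(Cl)
    return -0.5 * (n * np.log(2 * np.pi * s2) + logdet + n)

def pagels_lambda(C, y):
    res = minimize_scalar(lambda l: -lambda_loglik(l, C, y),
                          bounds=(0.0, 1.0), method="bounded")
    lam_hat = res.x
    lr = 2.0 * (lambda_loglik(lam_hat, C, y) - lambda_loglik(0.0, C, y))
    return lam_hat, chi2.sf(lr, df=1)        # estimate and LR-test p-value

# Toy usage with a made-up 3-tip covariance matrix and trait vector:
C = np.array([[1.0, 0.6, 0.0], [0.6, 1.0, 0.0], [0.0, 0.0, 1.0]])
y = np.array([0.20, 0.25, 1.10])
print(pagels_lambda(C, y))
```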
We mapped the evolution of Topt onto our phylogeny using maximum likelihood to estimate the ancestral values at each internal node, assuming a BM model for trait evolution (an appropriate model, given the obtained λ value). Where possible, we used Topt estimated directly from the Sharpe-Schoolfield fits. For six isolates whose growth was recorded at too few temperatures to fit the Sharpe-Schoolfield model, the temperature with the highest directly measured growth rate was taken as an estimate of Topt .
The estimates of phylogenetic signal and the visualization of trait evolution were performed using tools from the R packages ape and phytools (Revell, 2012;Revell and Freckleton, 2013). The p-value for phylogenetic signal was based on a likelihood ratio test.
Species sorting
In total, 74 strains of bacteria were isolated; 6 from each incubation temperature with matching sorting isolation temperature and 6 from each incubation temperature followed by a standard isolation temperature, with the exception of the 30°C sorting temperature regime, from which we obtained eight isolates. Of these isolates, 60 could be reliably revived in liquid culture, from which 54 grew across a wide enough temperature range to produce enough data points for fitting the Sharpe-Schoolfield model (Equation 1). The 60 strains that could be revived were from 16 genera within three bacterial phyla ( Table 2).
Isolates were in general well adapted to their sorting temperature (Figure 2A). A quadratic regression model fitted the data well (p<0.0001, shown in Figure 2A) and was preferred to a straight-line regression model (ANOVA, p<0.0001). The deviation from a simple linear response arises because the T_opt values of isolates from the three lowest temperatures (4, 10, and 21°C) are significantly higher than their sorting and isolation temperature (Figure 2A). In comparison, the T_opt values of standard temperature isolates were largely independent of the temperatures that their community had been previously grown at (Figure 2B), indicating that species sorting over the 4-week period had been incomplete, that is, strains maladapted to those temperature treatments had not been eliminated and were able to be resuscitated.
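The model comparison reported here can be reproduced schematically as follows (Python sketch with invented data; the study performed the regressions in R):

```python
# Sketch: linear vs. quadratic regression of T_opt on sorting temperature,
# compared with an extra-sum-of-squares F-test (the ANOVA in the text).
import numpy as np
from scipy.stats import f as f_dist

sort_T = np.array([4, 4, 10, 10, 21, 21, 30, 30, 40, 40, 50, 50], float)
T_opt = np.array([22, 25, 24, 27, 29, 31, 33, 35, 41, 43, 50, 52], float)

def rss(x, y, degree):
    """Residual sum of squares of a polynomial fit of the given degree."""
    return np.sum((y - np.polyval(np.polyfit(x, y, degree), x)) ** 2)

n = len(T_opt)
rss_lin, rss_quad = rss(sort_T, T_opt, 1), rss(sort_T, T_opt, 2)
F = ((rss_lin - rss_quad) / 1) / (rss_quad / (n - 3))
p = f_dist.sf(F, 1, n - 3)
print(F, p)   # a small p favors the quadratic model, as found in the text
```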
Evolution of T_opt
Topt displays a strong signal of phylogenetic heritability, closely approximating a BM model of trait evolution (Pagel's λ = 0.97 , p<0.001, n = 60), that is, closely related species have more similar Topt than random pairs of species. Qualitatively the same result was obtained using Blomberg's K metric ( K = 0.71, p<0.001, n = 60). The estimated ancestral states of Topt were mapped onto a phylogeny, where it can be seen that colder-or hotter-adapted strains tend to cluster together ( Figure 3A). The inferred evolution of Topt through time indicates that a large amount of the trait space (cool to hot) is explored by Firmicutes, while Actinobacteria and Proteobacteria are constrained to a much narrower range of (relatively cool) optimal growth temperatures ( Figure 3B).
Functional traits and life history strategies
We investigated the level of association and trade-offs between different traits in the two major phyla isolated (Firmicutes and Proteobacteria) using PCA ( Figure 4A). Growth specialists (copiotrophs, r specialists) are expected to grow rapidly but wastefully, and therefore have high ATP content in combination with high growth rates, but low overall yield (carrying capacity). Yield specialists (oligotrophs, K specialists) are expected to grow more slowly but more efficiently, and should therefore display the opposite pattern, that is, relatively low growth rates and ATP content, but high yield. The first two principal components explained 60.1% of the cumulative variation in the data. Topt , carrying capacity, and respiration rate showed greatest loading on the first principal component (PC1), while growth rate and niche width load most strongly on PC2. The Firmicutes and Proteobacteria phyla are partitioned in this space. The positive loadings onto PC2 of growth rate and ATP content versus the negative loading of carrying capacity suggest an r vs. K growth strategy trade-off; Proteobacteria have traits associated with a K -selected life history strategy while Firmicutes tend to have traits associated with an r -selected strategy. Furthermore, thermal niche width loads positively on PC2 along with growth rate and ATP content, implying that thermal generalism is not traded off against growth rate in these taxa; that is, no thermal generalist-specialist trade-off in growth rates.
To further understand the partitioning of taxa into these life history strategies, we investigated the differences in accessible cellular energy (ATP) content between these two phyla. We found that across the entire dataset (all replicate measurements across all temperatures), respiration rate and ATP content display a power-law relationship in both phyla ( Figure 4B). While Firmicutes have generally higher ATP levels overall, they display a sublinear scaling relationship of ATP levels with respiration rate (scaling exponent = 0.60 ± 0.07, p<0.001, R 2 = 0.13, n = 1722). In comparison, while Proteobacteria have less standing ATP content on average, they show an approximately linear scaling relationship between ATP and respiration rate (scaling exponent = 0.99 ± 0.06, p<0.001, R 2 = 0.59, n = 710). This suggests that Proteobacteria are deriving ATP from aerobic respiration only, whereas Firmicutes may be utilizing alternative pathways.
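The scaling exponents quoted above are slopes of ordinary least-squares fits on log-log axes, which can be sketched as follows (Python for illustration; the data are simulated to mimic the reported exponents):

```python
# Sketch: per-phylum scaling exponent of ATP content vs. respiration rate,
# estimated as the slope of a log-log linear regression.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
resp = 10 ** rng.uniform(-2, 1, size=300)                     # respiration rates
atp_firm = 2.0 * resp ** 0.6 * 10 ** rng.normal(0, 0.3, 300)  # sublinear scaling
atp_prot = 0.5 * resp ** 1.0 * 10 ** rng.normal(0, 0.2, 300)  # ~linear scaling

for name, atp in [("Firmicutes", atp_firm), ("Proteobacteria", atp_prot)]:
    fit = linregress(np.log10(resp), np.log10(atp))
    print(name, round(fit.slope, 2), "+/-", round(fit.stderr, 2))
```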
Table 2. List of revivable strains and GenBank accession numbers. Strain codes follow the XX_YY_ZZ naming convention, where XX is the incubation temperature, YY is the isolation temperature, and ZZ is a numeric designator for the specific isolate. RT = room temperature (22°C, termed 'standard temperature'). All 16S sequences are archived on NCBI's GenBank with the accession numbers indicated.

Finally, to ask whether the higher growth rates and lower respiration rates of Firmicutes compared to Proteobacteria were a phenomenon confined to our small dataset, or a more general trend between the two phyla, we compared our results to data compiled in two meta-analyses (DeLong et al., 2010; Smith et al., 2019). In the DeLong et al., 2010 data, Proteobacteria have higher active and passive metabolic rates than Firmicutes (active rates Wilcoxon rank-sum test p=0.0017, n = 39; passive rates Wilcoxon rank-sum test p=0.0098, n = 108, Figure 5A), consistent with our findings; however, there is no significant difference between the growth rates of the two phyla in these data (Wilcoxon rank-sum test p=0.66, n = 31, Figure 5B). By comparison, the Smith et al., 2019 dataset does show a significant difference between the growth rates of these phyla, with Firmicutes on average higher than Proteobacteria (Wilcoxon rank-sum test p=0.00035, n = 135, Figure 5C). We also compared the distribution of T_opt for both phyla in the data from Smith et al., 2019 and find that Proteobacteria account for much more of the low-temperature strains, while Firmicutes are more associated with high temperatures (Figure 5D), which is consistent with our temperature isolation findings here (Figure 2).
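The phylum comparisons above are two-sided Wilcoxon rank-sum tests, as sketched below (Python for illustration; the rate values are placeholders, not the meta-analysis data):

```python
# Sketch: Wilcoxon rank-sum comparison of a rate variable between two phyla.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(2)
rates_proteobacteria = rng.lognormal(0.5, 1.0, size=20)   # placeholder values
rates_firmicutes = rng.lognormal(0.0, 1.0, size=19)

stat, p = ranksums(rates_proteobacteria, rates_firmicutes)
print(stat, p)
```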
Discussion
Here, using a novel species-sorting experiment, we have studied the extent to which species sorting and acclimation can influence the responses of soil bacterial communities to temperature change. We find that when replicate soil bacterial communities sampled from a temperate environment are subjected to a wide range of temperatures for 4 weeks, in microcosms where immigration is not possible, strains with thermal preferences matching the local conditions emerge consistently. The strong correspondence between strain-level optimal growth temperatures and isolation temperatures (Figure 2A) indicates that a pool of taxa with disparate thermal physiologies, including those maladapted to the ambient thermal conditions, persisted in the parent community. This result is reinforced by the fact that the T_opt values of standard temperature isolates were largely independent of the temperatures that their community had been previously grown at (Figure 2B): strains maladapted to those temperatures had not been eliminated and were able to be resuscitated. Therefore, we conclude that most 'sorting' occurred during the isolation step of our experiment rather than during the 4-week incubation period: the thermal optima of the taxa isolated reflect the isolation conditions. While a 4-week period is arguably too short for mutation- or recombination-driven thermal adaptation in environmental samples (as a significant degree of generational turnover is required [Bennett et al., 1990; Lenski, 2017; Chase et al., 2021]), it is worth considering the possibility that some of the community-level emergence of thermally adapted strains could have been driven by rapid evolution through selection on standing trait variation. Indeed, stochastic mapping of thermal physiological traits on the prokaryotic tree of life has shown that T_opt evolves relatively rapidly compared to other traits such as niche width or activation energy (thermal sensitivity) (Kontopoulos et al., 2020a). This is consistent with adaptive evolution experiments, which have shown that bacteria as well as archaea can rapidly adapt to new temperatures by shifting their T_opt (Bennett et al., 1990; Kishimoto et al., 2010; Blaby et al., 2012; Smith et al., 2019). The molecular mechanisms underlying such rapid evolution are still being investigated, but structural changes to enzymes that alter their melting temperatures appear to be a key mechanism when adaptation to relatively high temperatures is called for (Pucci and Rooman, 2017). While determining whether such mechanisms could be operationalized over the duration of our sorting experiment was beyond the scope of our study, this is still arguably a very short time frame for significant shifts in thermal optima due to selection on standing variation alone. Furthermore, the communities that remained after 4 weeks of growth at the six temperatures consisted of taxonomically distinct sets of strains, and the T_opt values of the overall set of taxa exhibited a significant phylogenetic signature (Figure 3). This indicates that the observed systematic differences in T_opt across the temperature-specific communities were driven by species sorting on preexisting physiological variation across strains rather than thermal adaptation of single strains. Overall, we therefore conclude that species sorting played a dominant role in determining the response of the parent community to abrupt changes in temperature, in the absence of immigration, and with negligible adaptation.
We also detected systematic turnover in functional traits that likely underpin the change in thermal optima with species sorting. There were differences in the taxa isolated at different temperatures, with more Proteobacteria at lower temperatures and more Firmicutes at higher temperatures (all T_opt > 35°C were Firmicutes). Furthermore, these phyla were partitioned in the r-K and thermal generalism-specialism trait spaces (Figure 4). Proteobacteria were found to be relatively K-selected thermal specialists and Firmicutes relatively r-selected thermal generalists. These findings are inconsistent with a generalist-specialist trade-off in which increasing thermal niche width is proposed to inevitably incur a metabolic cost, reducing maximal growth rates (Huey and Hertz, 1984; Angilletta, 2009). As with our findings, recent work on phytoplankton thermal performance traits also failed to detect a generalist-specialist trade-off (Kontopoulos et al., 2020a), questioning its universality in microbes. Since the existence of such a trade-off plays a key role in life history theory, there would be value in further experiments to confirm the generality of this finding. The increased growth rates and lower respiration rates of Firmicutes relative to Proteobacteria found here are also largely consistent with datasets from meta-analyses of bacterial rates (Figure 5). Additionally, previously reported values for cellular ATP content have generally been higher for Firmicutes than Proteobacteria, with more than tenfold greater intracellular ATP content reported for Bacillus versus Pseudomonas strains (Hattori et al., 2003), some of the major representatives of Firmicutes and Proteobacteria in this experiment, respectively. This suggests that these phyla tend to allocate resources to growth and respiration in fundamentally different ways. One explanation for these seemingly divergent strategies may be that Firmicutes derive extra energy through fermentation pathways. There is a mechanistic trade-off between growth rate and yield whereby bacteria may increase their rate of ATP production by supplementing aerobic respiration with fermentation (Pfeiffer et al., 2001). Fermentation pathways increase the rate of ATP production but result in lower total yield, allowing populations to reach higher growth rates but lower carrying capacity from the same resource input. This is consistent with the apparent r vs. K selection trade-off observed in our results. The differences in the scaling relationship between ATP content and respiration rate may provide further evidence of differences in the metabolic pathways utilized. Across Proteobacteria, ATP content has a scaling exponent of approximately 1 with respiration rate, indicating that these strains are deriving ATP solely from aerobic respiration (Figure 4B). The fact that Firmicutes have a lower scaling exponent (0.60 ± 0.07), that is, that they generate higher levels of ATP than expected at lower rates of respiration, may indicate that they derive ATP from alternative pathways alongside aerobic respiration. These differences in metabolic strategies reflect underlying differences in the efficiency of growth, that is, carbon use efficiency (CUE), between these taxa (Smith et al., 2021). Moreover, CUE varies systematically with temperature in a phylogenetically structured manner (Smith et al., 2021). Thus, community turnover due to temperature change is likely to have a profound impact on community-level functional traits, such as CUE.
In contrast to the strong association between T_opt and incubation temperature in the sorting temperature isolates, we did not observe any similar relationships in the standard temperature isolates, where mesophiles were consistently recovered regardless of prior incubation conditions. This indicates that species sorting was incomplete (in that maladapted taxa were not driven extinct), implying that bacterial communities can be resilient to temperature change at the community level. Taxa suited to different temperatures are able to 'switch on' as conditions become suitable, allowing community-level functional plasticity due to the latent functional diversity present within communities. Although mesophiles were recovered from all incubation temperatures in our standard temperature experiment, there was the same taxonomic bias as seen in the sorting temperature isolates: more Firmicutes were recovered from higher temperatures. This is probably a reflection of the propensity of Firmicutes to form endospores and remain dormant until conditions are favorable, upon which they invest resources into rapid growth to gain a competitive advantage over other taxa, consistent with our life history trait findings of r-specialism in Firmicutes (Lennon and Jones, 2011). In comparison, the Proteobacteria in this experiment were generally more suited to oligotrophic environments (e.g., Collimonas; Leveau et al., 2010), where constituent species are expected to exhibit low growth rates and high carrying capacities (K specialists; Fierer et al., 2007), as well as increased respiration (Keiblinger et al., 2010). This idea is supported by the observation that we isolated the strains from sandy, acidic (i.e., oligotrophic) soil (Fornara et al., 2013), in which sequencing studies have revealed Proteobacteria to be the most abundant phylum (Macdonald et al., 2015). We do not suggest that this adoption of r vs. K strategy is general to all Firmicutes and Proteobacteria. Indeed, meta-analysis reveals little consistency in the phyla associated with copiotrophy or oligotrophy (Ho et al., 2017). Nor do we suggest that warming is likely to result in selection for Firmicutes over Proteobacteria: community temperature responses are not likely to be consistent at coarse phylogenetic levels (Oliverio et al., 2017). However, the results presented here are consistent with phylum-specific traits for the majority of our isolates when compared to each other.
Patterns of microbial community succession in nature are driven by the differences in growth strategy between taxa that we report here. Studies have revealed taxonomic groups associated with different stages of microbial succession, with patterns broadly consistent across timescales of days (Noll et al., 2005; Shrestha et al., 2007; Rui et al., 2009) and years (Nemergut et al., 2007; Banning et al., 2011) and even over thousands and tens of thousands of years, as revealed through sequencing of soil sediments (Jangid et al., 2013). Generally, across these studies, the phyla Firmicutes and Bacteroidetes are associated with early succession, while other phyla such as Actinobacteria and Acidobacteria are more abundant at later stages of succession. Proteobacteria are less consistent at the phylum level, with Alphaproteobacteria associated with late succession, Betaproteobacteria associated with early succession and Gammaproteobacteria variously associated with different stages of succession in different studies. Isolated taxa reveal a strong association between early succession and high growth rates (Shrestha et al., 2007) as well as rRNA operon copy numbers, a key determinant of bacterial growth rate (Klappenbach et al., 2000). The K-selected taxa may therefore be thought of as general constituents of soil, associated with standard low turnover of carbon, while the r-selected taxa may be seen as more opportunistic from their involvement in early succession. Indeed, signatures of community-level differences in r- vs. K-selection have been observed in microbial communities at different successional stages (Pascual-García and Bell, 2020). Fluctuating temperatures may therefore drive repeated assembly dynamics via sorting on latent microbial diversity, leading to functional community changes through time. However, the frequency and magnitude of temperature fluctuations may also influence the life history strategies of the taxa in the community (Gilchrist, 1995; Basan et al., 2020).
Although we report patterns broadly consistent with previous findings at the phylum level, bacteria isolated from the environment will always represent only a small, incomplete subset of the overall diversity of the natural community. Previous 16S sequencing of the field site sampled here has revealed Proteobacteria to be the most abundant phylum, followed by Verrucomicrobia, Acidobacteria, Actinobacteria, and Firmicutes, respectively (Macdonald et al., 2015). That the majority of our isolates are from the Firmicutes and that we isolated no Acidobacteria or Verrucomicrobia, despite their expected relative abundances in these soils, is not surprising. Firmicutes are consistently overrepresented in culture libraries (Schloss et al., 2016;Floyd et al., 2005), while most members of the Acidobacteria and Verrucomicrobia are notoriously difficult to reliably culture (Kielak et al., 2010;Kalam et al., 2020). Therefore, caution should be taken when interpreting community responses from culture-based studies like ours.
In summary, we have found that resuscitation of latent functional diversity driven by phenotypically plastic responses of single taxa to temperature change can allow whole bacterial communities to track dramatic changes in temperature. Community function is expected to be driven by interactions between the most abundant taxa (Rivett and Bell, 2018), and therefore changes in the abundance of taxa with temperature variation are likely to drive profound changes in overall community functioning (mediated by community-level variation in traits such as CUE). In particular, r- vs. K-selection is likely to vary with temperature change at the community level, from daily to seasonal successional trajectories, driven by species sorting. Furthermore, climate change is expected to lead to increased temperature fluctuations (Vasseur et al., 2014), both in magnitude and frequency. This may potentially lead to more frequent species sorting effects over short timescales, further driving changes in community composition through time. Overall, these results show that latent diversity in thermal physiology, combined with temperature-induced species sorting, is likely to facilitate the responses of microbial community structure and functioning to climatic fluctuations.
On the moduli spaces of metrics with nonnegative sectional curvature
The Kreck-Stolz s invariant is used to distinguish connected components of the moduli space of positive scalar curvature metrics. We use a formula of Kreck and Stolz to calculate the s invariant for metrics on S^n bundles with nonnegative sectional curvature. We then apply it to show that the moduli spaces of metrics with nonnegative sectional curvature on certain 7-manifolds have infinitely many path components. These include the first non-homogeneous examples of this type and certain positively curved Eschenburg and Aloff-Wallach spaces.
It is also of note that all previous results concerning M_{sec>0} and M_{sec≥0} calculate the s invariant only for homogeneous metrics admitting an S¹ action with geodesic orbits.
We identify further 7-manifolds with M sec≥0 and M Ric>0 having infinitely many path components. As in previous examples, the manifolds are total spaces of n−sphere bundles. In our case, however, the metrics are not homogeneous and the S n fibers are not totally geodesic.
The first set of examples are total spaces M m,n of S 3 bundles over S 4 . Such bundles are classified by pairs of integers (m, n) ∈ π 3 (SO(4)) ∼ = Z ⊕ Z. The second set are total spaces S a,b of S 3 bundles over CP 2 which are not spin. They are classified by two integers a, b describing the first Pontryagin class and the Euler class. Grove and Ziller [GZ1,GZ2] showed that M m,n and S a,b admit metrics of nonnegative sectional curvature.
Theorem A. Let m, n, a, b ∈ Z with n ≠ 0 and a ≠ b. Then for M = M_{m,n} or S_{a,b} the moduli spaces M_{sec≥0}(M) and M_{Ric>0}(M) have infinitely many path components.
Note that the family M_{m,±1} includes S⁷ and the exotic Milnor spheres. By [EZ] Proposition 6.7 the manifold S_{−1,a(a−1)} is diffeomorphic to the Aloff-Wallach space W⁷_{a,1−a} discussed above. But in general M_{m,n} and S_{a,b} do not have the homotopy type of a 7-dimensional homogeneous space, e.g. when |H⁴(M_{m,n}, Z)| = |n| ∉ {1, 2, 10} or |H⁴(S_{a,b}, Z)| = |a − b| ≡ 2 mod 3 respectively. To describe the final set of manifolds, we start with S² bundles N̄_t and N_t over CP² which are spin, respectively not spin, and are classified by an integer t describing the first Pontryagin class. The 7-manifolds M̄^t_{a,b} and M^t_{a,b} are the total spaces of S¹ bundles over N̄_t and N_t respectively, classified by two additional integers a and b, with gcd(a, b) = 1, describing the Euler class. Escher and Ziller [EZ] showed that M^t_{a,b} and M̄^{2t}_{a,b} admit metrics of non-negative sectional curvature such that S¹ acts by isometries.
Theorem B. (a) Let a, b, t ∈ Z with gcd(a, b) = 1 and t(a + b)² ≠ ab. Then M_{sec≥0}(M^t_{a,b}) and M_{Ric>0}(M^t_{a,b}) have infinitely many path components. (b) Let a, b, t ∈ Z with gcd(a, 2b) = 1. Then M_{sec≥0}(M̄^{2t}_{a,2b}) and M_{Ric>0}(M̄^{2t}_{a,2b}) have infinitely many path components.
In [EZ] Corollary 6.4 it was shown that the manifold M^{−1}_{a,b} is the Eschenburg biquotient F_{a,b} = S¹_{a,b,a+b}\SU(3)/S¹_{0,0,2a+2b}. These are the only Eschenburg biquotients admitting free S¹ actions, and when ab > 0 they admit metrics of positive sectional curvature, see [Es]. Furthermore M¹_{a,b} is the Aloff-Wallach space W_{a,b}. We thus have as an immediate corollary:

Corollary. For M = W_{a,b} or M = F_{a,b} the moduli spaces M_{sec≥0}(M) and M_{Ric>0}(M) have infinitely many path components.
These are the first examples where M_{sec≥0}(M) has infinitely many components, some of which contain metrics with positive sectional curvature. We note that in [EZ] one finds further examples of positively curved Eschenburg spaces which are diffeomorphic to some of the manifolds S_{a,b} or M^t_{a,b}, and so the same conclusion holds. By Corollary 7.8 of [EZ], M̄⁰_{a,2b} is diffeomorphic to the homogeneous space N⁷_{2b,a}, and hence Theorem B part (b) generalizes the Kreck-Stolz examples. But again, in general M̄^{2t}_{a,2b} and M^t_{a,b} do not have the homotopy type of a 7-dimensional homogeneous space, e.g. when |H⁴(M̄^{2t}_{a,2b}, Z)| = |a² − 8tb²| ≡ 2 mod 3 or |H⁴(M^t_{a,b}, Z)| = |t(a + b)² − ab| ≡ 2 mod 3.

The strategy of the proof is as follows. We calculate the s invariant with topological data on the associated disc bundle of the sphere bundle. In Theorem 2.1 we extend the metric with sec ≥ 0 on each sphere bundle to a metric of positive scalar curvature on the associated disc bundle which is a product near the boundary. If the disc bundle is a spin manifold, Kreck and Stolz [KS3] obtained a formula for the s invariant in terms of the index of the Dirac operator, which vanishes since the scalar curvature is positive, and topological data on a bounding manifold, see Theorem 1.1. Theorem A follows easily: the manifolds M_{m,n} and S_{a,b} are classified up to diffeomorphism in [CE] and [EZ] respectively. In particular each sphere bundle is diffeomorphic to infinitely many others. Their computations easily yield the formula for the s invariant as well. Theorem A follows since s is a polynomial in the integers a, b, m and n, where m, n satisfy ma + nb = 1.
Theorem B is more involved. For part (b) we observe that M̄^t_{a,b} is a spin manifold if and only if b is even, and in this case the associated disc bundle is a spin manifold as well. The proof then proceeds as before, although the proof that the metrics have positive scalar curvature is more involved. We use the Kreck-Stolz invariants s₁, s₂, s₃ ∈ Q/Z of [KS2] to obtain infinitely many circle bundles diffeomorphic to each manifold. For part (a) the manifolds M^t_{a,b} are always spin but the disc bundles are not. Here we use another formula from [KS3], see Theorem 1.3, which does not require knowledge of a spin bounding manifold, but requires that the bundle be a circle bundle and that the fibers be geodesics. The latter condition does not hold for the metrics with sec ≥ 0, so we first deform the metrics, preserving positive scalar curvature, until the fibers are geodesics and such that S¹ still acts by isometries. Then the strategy proceeds in the same way.
We note that M̄^{2t}_{a,2b+1} and the spin S³ bundles over CP² also admit metrics with sec ≥ 0, but they are not spin manifolds, so the methods do not apply. The conditions a ≠ b and n ≠ 0 for S_{a,b} and M_{m,n}, as well as t(a + b)² ≠ ab for M^t_{a,b}, are required to ensure the manifolds have the correct cohomology ring for the diffeomorphism classifications.
We point out that the case of S 3 bundles over S 4 was obtained independently by A. Dessai in [De].
I would like to thank my Ph. D. advisor Wolfgang Ziller for endless ideas and support.
Preliminaries
Let (M, g) be a (4k − 1)-dimensional Riemannian spin manifold with vanishing rational Pontryagin classes. The Kreck-Stolz s invariant is defined in [KS3] intrinsically for a positive scalar curvature metric g on M, in terms of the following data. D is the Dirac operator on the spinor bundle, B is the signature operator on differential forms, and η is the spectral asymmetry invariant of a differential operator defined in [APS]. p_i(M, g) are the Pontryagin forms defined in terms of the curvature tensor of g. Â and L are the Hirzebruch polynomials and a_k = (2^{2k+1}(2^{2k−1} − 1))^{−1}. d^{−1} represents a form whose exterior derivative is the indicated form. Kreck and Stolz [KS3] showed that the existence of this form and the uniqueness of the integral follow from the vanishing of the rational Pontryagin classes. They further showed the invariant depends only on the connected component of g in M_{scal>0}.
Kreck and Stolz use the Atiyah-Patodi-Singer index theorem [APS1] and the Lichnerowicz theorem for manifolds with boundary ([APS2], p. 416) to prove

Theorem 1.1. [KS3] Let W be a spin (4k)-manifold with a metric h of positive scalar curvature which is a product metric on a collar neighborhood of ∂W. If ∂W = M has vanishing rational Pontryagin classes and g = h|_M has positive scalar curvature, then

s(M, g) = −⟨Â(j^{−1}p_i(W)) + a_k · L(j^{−1}p_i(W)), [W, ∂W]⟩ + a_k · sign(W),   (1.5)

where [W, ∂W] is the fundamental class and Â and L are Hirzebruch's polynomials. Furthermore, j^{−1}p_i(W) is any preimage of the i-th Pontryagin class of W in H^{4i}(W, ∂W; Q) and sign(W) is the signature of W.
In Theorem A and Theorem B part (b) the associated disc bundle to the sphere bundle is a spin manifold and hence we can apply Theorem 1.1. If however the disc bundle is not a spin manifold we use a different strategy. In the special case of an S 1 bundle with geodesic fibers, Kreck and Stolz use a cobordism argument to reduce to a case where another bounding manifold can be found and derive a correction term.
Theorem 1.3. [KS3] Let π : M → B be a principal S¹ bundle such that M is a spin (4k − 1)-manifold with vanishing rational Pontryagin classes. Suppose B is a spin manifold and M is given the spin structure induced by the vector bundle isomorphism TM ≅ π*(TB) ⊕ V, where V is the trivial vector bundle generated by the action field of the S¹ action. Let g be a metric with scal(g) > 0 on M such that S¹ acts by isometries and the S¹ orbits are geodesics. Then s(M, g) is given by an explicit formula in characteristic numbers of W, where W is the disc bundle associated to M; see [KS3].
Metrics on sphere and disc bundles
In [GZ1] and [GZ2] one finds many examples of metrics with nonnegative sectional curvature on principal SO(n) bundles such that SO(n) acts by isometries. Hence the associated sphere bundles admit such metrics as well. We will apply Theorem 1.1 to appropriate metrics constructed on the associated sphere and disc bundles.
Theorem 2.1. Let P be a principal SO(n + 1) bundle admitting a metric g_P, invariant under the SO(n + 1) action, with sec(g_P) ≥ 0. In the case n = 1 assume in addition that at each point x ∈ P there exists a 2-plane σ_x ⊂ T_xP with sec_{g_P}(σ_x) > 0 which is orthogonal to the orbit of SO(2). Then there exists a metric g_M on the associated sphere bundle M = P ×_{SO(n+1)} S^n with sec(g_M) ≥ 0 and scal(g_M) > 0 that extends to a metric g_W of positive scalar curvature on the associated disc bundle W = P ×_{SO(n+1)} D^{n+1} which is a product metric near the boundary with g_W|_{∂W} = g_M.

Proof. Let g_{S^n} be the standard metric on the sphere of radius 1/2. We define the metric g_M such that the product metric g_P + g_{S^n} and g_M make the projection ρ : P × S^n → P ×_{SO(n+1)} S^n = M into a Riemannian submersion. By the O'Neill formula g_M has nonnegative sectional curvature.
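For reference, the O'Neill formula invoked here and below states that for a Riemannian submersion π : (E, g_E) → (B, g_B) and orthonormal horizontal vectors X, Y,

sec_B(dπ(X), dπ(Y)) = sec_E(X, Y) + (3/4)|[X, Y]^V|² ≥ sec_E(X, Y),

where [X, Y]^V denotes the vertical component of the Lie bracket of horizontal extensions of X and Y; in particular, sectional curvatures of horizontal planes can only increase under the submersion.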
To show g_M has positive scalar curvature we must check that each point of M has a 2-plane of positive sectional curvature. First assume n > 1. Consider (p, x) ∈ P × S^n. Let X, Y ∈ so(n + 1) be such that the action fields X*, Y* ∈ T_xS^n are linearly independent. The vertical space of the SO(n + 1) action on P × S^n is the set of vectors (Z*, −Z*) for all Z ∈ so(n + 1), where we repeat notation for the action fields on P and S^n. It follows that the projections of (0, X*), (0, Y*) ∈ T_{(p,x)}P × S^n onto the horizontal space are A = (aX*, bX*) and B = (cY*, dY*) for some a, b, c, d ≠ 0. In the product metric, the unnormalized sectional curvature of the plane spanned by A and B is the sum of the corresponding curvatures of its P and S^n components; the S^n contribution b²d²⟨R_{S^n}(X*, Y*)Y*, X*⟩ is positive and the P contribution is nonnegative, so this plane has positive sectional curvature, and by the O'Neill formula so does its image in M. In the case of n = 1 we have by assumption a 2-plane σ_x ⊂ T_xP in the horizontal space of the SO(2) action on P. It follows that (σ_x, 0) lies in the horizontal space of the SO(2) action on P × S¹, and by the O'Neill formula sec_{g_M}(ρ_*(σ_x, 0)) ≥ sec_{g_P}(σ_x). So g_M has a 2-plane of positive sectional curvature at each point and hence scal(g_M) > 0.
We next show that g_M extends to a metric g_W on W with positive scalar curvature. Let f : [0, 1] → R be a smooth concave function with f(0) = 0, f′(0) = 1 and f ≡ 1/2 on [R, 1] for some 0 < R < 1, so that g_{D^{n+1}} = dr² + f(r)² ds_n² is a smooth metric on D^{n+1} with sec(g_{D^{n+1}}) ≥ 0, where ds_n² denotes the unit round metric on S^n. Define the metric g_W on W such that g_P + g_{D^{n+1}} and g_W make the projection P × D^{n+1} → P ×_{SO(n+1)} D^{n+1} = W into a Riemannian submersion. The assumptions that f is concave and f′(r) < 1 when r > 0 ensure sec(g_{D^{n+1}}) ≥ 0. Furthermore, when n > 1, planes tangent to the spheres of constant radius r will have positive sectional curvature, and we repeat the argument above to conclude scal(g_W) > 0. For n = 1, the same argument as for g_M implies scal(g_W) > 0.
For r ∈ [R, 1] the projection can be regarded as ρ × id : (P × S^n) × [R, 1] → M × [R, 1], whose image is a collar neighborhood of the boundary of W. Since f ≡ 1/2 in this region, the metric on the left is g_P + g_{S^n} + dr² and the metric induced on the quotient is g_M + dr². So g_W is a product metric near the boundary with g_W|_{∂W} = g_M.
We note that by replacing g_{S^n} by (1/λ)g_{S^n} in the proof and considering λ ∈ [0, 1], one sees that g_M lies in the same path component of M_{sec≥0}(M) as the metric induced by g_P under the submersion P → P/SO(n) ≅ M.
In the case of an S 1 bundle with totally geodesic fibers, Theorem 1.3 applies without requiring the disc bundle to be spin. The following theorem shows that some S 1 invariant metrics with nonnegative sectional curvature can be deformed to metrics with geodesic fibers while maintaining positive scalar curvature.
Theorem 2.2. Let M be a manifold admitting a free S¹ action and a metric g of nonnegative sectional curvature, invariant under that action. Suppose that for each x ∈ M there exists a 2-plane σ_x ⊂ T_xM orthogonal to the S¹ orbit through x with sec(σ_x) > 0. Then M admits a metric h of positive scalar curvature such that S¹ acts by isometries, the S¹ orbits are geodesics, and h and g are in the same path component of M_{scal>0}(M).
Proof. Since the set of 2-planes orthogonal to the S¹ orbits is compact, the maximum sectional curvature of such a plane at each point is a positive continuous function, and hence there exists C > 0 such that at each point we can choose σ_x with sec(σ_x) > C. Let X be the action field of the S¹ action on M and u = |X|_g. We fix 0 < ε < inf_M(u) sufficiently small and define, for λ ∈ [0, 1], a warping function v_λ and a warped product metric g_λ = g + v_λ² dθ² on M × S¹. Next define the metric h_λ on M such that g_λ and h_λ make the projection M × S¹ → (M × S¹)/S¹ ≅ M into a Riemannian submersion. The action of S¹ on the second factor of M × S¹ is by isometries of g_λ, commutes with the quotient action, and induces an action on (M × S¹)/S¹ which makes the diffeomorphism (M × S¹)/S¹ ≅ M equivariant. Thus S¹ acts on M by isometries of h_λ.
We now show that scal(h_λ) > 0 for all λ ∈ (0, 1]. For a point x ∈ M let {X_1, ..., X_{n−1}, X} be an orthogonal basis of T_xM with |X_i|_g = 1 and σ_x = span(X_1, X_2). Then we can find a, b ∈ R and Z = (aX, b∂_θ) such that {(X_1, 0), ..., (X_{n−1}, 0), Z} is an orthonormal basis of the horizontal space of π at (x, y) ∈ M × S¹. By the O'Neill formula, the sectional curvatures of h_λ on the planes spanned by the images of these vectors are bounded below by the corresponding sectional curvatures of the warped product g_λ (for details on the sectional curvatures of a warped product see [Be] Section 9J). Applying the definition of v_λ, one checks that each point of M has a 2-plane of positive sectional curvature, so that scal(h_λ) > 0. So h_λ, λ ∈ (0, 1], is a continuous path of metrics with positive scalar curvature. Each h_λ is identical to g on the orthogonal complement of X, while |X|_{h_λ} depends continuously on λ. Since X is a Killing vector field and |X|_{h_1} = ε is constant, the integral curves of X, which are the orbits of the S¹ action, are geodesics in h_1. Furthermore |X|_{h_0} = u, so h_0 = g. Thus h = h_1 and g are in the same path component of M_{scal>0}(M).
In Sections 3 and 4 we find, for each manifold M in Theorems A and B, a sequence of metrics on manifolds diffeomorphic to M such that no two metrics yield the same value of s. The following lemma shows that these sequences complete the proof of Theorems A and B.

Lemma 2.3. Let M be a closed simply connected manifold and let g_i be a sequence of metrics with sec ≥ 0 on manifolds diffeomorphic to M such that no two metrics yield the same value of s. Then M_{scal>0}(M), M_{sec≥0}(M), and M_{Ric>0}(M) each have infinitely many path components.

Proof. We pull back the metrics g_i to a sequence of metrics on M. By [KS3] Proposition 2.13, s is preserved under pullbacks. Since the values of s are distinct, these metrics lie in an infinite set of distinct path components of M_{scal>0}(M).
An argument as in [DKT, BKS] shows the metrics lie in different path components of M_{sec≥0} as well. Suppose two of the metrics g_0, g_1 (up to diffeomorphism) can be connected by a path g_t maintaining nonnegative sectional curvature. Böhm and Wilking [BW] showed that such metrics on a simply connected manifold immediately evolve to have positive Ricci curvature under the Ricci flow. Thus the path g_t evolves to a path maintaining positive Ricci curvature, and thus positive scalar curvature. g_0 and g_1 are connected to the new endpoints by their evolution under the Ricci flow, which similarly maintains positive scalar curvature. So g_0 and g_1 can be connected by a path maintaining positive scalar curvature, which is a contradiction. Furthermore, g_0 and g_1 evolve under the Ricci flow to metrics of positive Ricci curvature in distinct components of M_{scal>0}(M) and therefore of M_{Ric>0}(M).
S 3 bundles over S 4 and CP 2
In this section we prove Theorem A, starting with the simplest case.
3.1. S 3 bundles over S 4 . S 3 bundles over S 4 are classified by elements of π 3 (SO(4)) = Z ⊕ Z. We use the basis for π 3 (SO(4)) given by the maps µ(q)(v) = qvq −1 and ν(q)(v) = qv. Here v ∈ R 4 viewed as the quaternions and q ∈ S 3 viewed as the unit quaternions. Let M m,n be the bundle classified by mµ + nν ∈ π 3 (SO(4)). In [GZ1] it is shown that the SO(4) principal bundle of every S 3 bundle over S 4 admits an SO(4) invariant metric of nonnegative sectional curvature, and hence the sphere bundles do as well.
Assume n ≠ 0. From the homotopy long exact sequence one sees that H⁴(M_{m,n}, Z) = Z_n, so the rational Pontryagin classes of M_{m,n} vanish. Let W_{m,n} be the associated disc bundle. Then H²(W_{m,n}, Z₂) = H²(S⁴, Z₂) = 0 and hence W_{m,n} is a spin manifold. Theorem 2.1 and Theorem 1.1 imply that M_{m,n} has a metric g_{M_{m,n}} of nonnegative sectional and positive scalar curvature with s invariant given by (1.5). Crowley and Escher [CE] computed the invariants p₁²(W_{m,n}) = 4(n + 2m)²/n and sign(W_{m,n}) = 1. So

s(M_{m,n}, g_{M_{m,n}}) = (−(n + 2m)² + n) / (2⁵ · 7 · n).
Corollary 1.6 of [CE] shows that M_{m′,n} and M_{m,n} are diffeomorphic if m′ ≡ m mod 56n. So the manifolds in the sequence {M_{m+56ni,n}}_i are all diffeomorphic to M_{m,n}. Since n is constant in the sequence, the s invariant is a polynomial in i. It follows that there is an infinite subsequence of metrics with distinct s invariants. Lemma 2.3 completes the proof of the first part of Theorem A.
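As a quick illustrative check (a sketch only; the values m = 1, n = 3 are arbitrary), one can evaluate the displayed formula in exact rational arithmetic along the diffeomorphic family and confirm the values are pairwise distinct:

```python
# Sketch: exact evaluation of the s invariant along the family M_{m+56ni, n},
# all of whose members are diffeomorphic to M_{m,n}, using the formula above.
from fractions import Fraction

def s_invariant(m, n):
    """s(M_{m,n}) = (-(n + 2m)^2 + n) / (2^5 * 7 * n), as quoted in the text."""
    return Fraction(-(n + 2 * m) ** 2 + n, 2 ** 5 * 7 * n)

m, n = 1, 3  # arbitrary choice with n != 0
values = [s_invariant(m + 56 * n * i, n) for i in range(5)]
print(values)
assert len(set(values)) == len(values)  # pairwise distinct s invariants
```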
Remark 3.1. Comparison to the 7-dimensional homogeneous spaces in [N] shows that M m,n has the cohomology ring of such a space only when |n| = 1, 2 or 10. The homogeneous candidates are S 7 , T 1 S 4 and the Berger space SO(5)/SO(3) with H 4 = 0, Z 2 and Z 10 .
3.2. S³ bundles over CP². In [GZ2] it is shown that every principal SO(4) bundle over CP² with w₂ ≠ 0 admits an SO(4) invariant metric of nonnegative sectional curvature. Such bundles are classified by two integers a, b describing the first Pontryagin and Euler classes p₁ = (2a + 2b + 1)x² and e = (a − b)x², where x is the generator of H*(CP², Z). Let π : S_{a,b} → CP² be the S³ bundle over CP² with these characteristic classes. If a ≠ b then the Gysin sequence implies that H⁴(S_{a,b}, Z) = Z_{|a−b|}, so the rational Pontryagin classes vanish.
Let E⁴ → CP² be the 4-plane bundle associated to S_{a,b} and W_{a,b} ⊂ E⁴ the associated disc bundle with projection ρ : W_{a,b} → CP². Then TW_{a,b} ≅ ρ*(E⁴ ⊕ TCP²) and w₂(TW_{a,b}) = ρ*(w₂(E⁴) + w₂(TCP²)) = 0. So W_{a,b} is a spin manifold. Theorem 2.1 and Theorem 1.1 imply that S_{a,b} has a metric g_{S_{a,b}} of nonnegative sectional and positive scalar curvature with s invariant given by (1.5).
The characteristic numbers of W_{a,b} entering (1.5) are computed in [EZ] Proposition 4.3. Corollary 4.5 of [EZ] implies that S_{a,b} and S_{a′,b′} are diffeomorphic if a − b = a′ − b′ and a ≡ a′ mod λ = 2³ · 3 · 7 · |a − b|. Thus the manifolds in the sequence {S_{a+iλ,b+iλ}}_i are all diffeomorphic to S_{a,b}. Since a − b is constant for the sequence, the s invariant is a polynomial in i. So there is an infinite subsequence of metrics in this sequence with distinct s invariants. Lemma 2.3 completes the proof of the second part of Theorem A.
Remark 3.2. (a) The only 7-dimensional homogeneous spaces with the same cohomology ring as any S_{a,b} are the families N⁷_{k,l} and W⁷_{k,l} described in the introduction, see [N]. The quantities |H⁴(N⁷_{k,l}, Z)| = l² and |H⁴(W⁷_{k,l}, Z)| = k² + l² + kl are always congruent to 0 or 1 mod 3, so if |a − b| ≡ 2 mod 3, S_{a,b} does not have the homotopy type of a 7-dimensional homogeneous space.
(b) By [EZ] Proposition 6.7, S −1,a(a−1) is diffeomorphic to the homogeneous Aloff-Wallach space W 7 a,1−a . There also exist infinitely many positively curved Eschenburg spaces and many other Aloff-Wallach spaces which are diffeomorphic to S 3 bundles over CP 2 , see [EZ] Theorem 8.1.
S 1 bundles over S 2 bundles over CP 2
Escher and Ziller [EZ] defined two families of 7-manifolds as follows. Let x be the generator of H*(CP², Z). Define p : N_t → CP² as the S² bundle with Pontryagin and Stiefel-Whitney classes p₁(N_t) = (1 − 4t)x² and w₂(N_t) ≠ 0. They showed that N_t is diffeomorphic to the projectivization P(E_t) of the rank 2 complex vector bundle E_t over CP² with Chern classes c₁(E_t) = x and c₂(E_t) = tx². Furthermore, if P_t is the principal U(2) bundle corresponding to E_t, then N_t is diffeomorphic to P_t/T², where T² ⊂ U(2).
Let y be the first Chern class of the dual of the tautological line bundle over P(E_t). By the Leray-Hirsch theorem, H*(N_t) = Z[x, y]/(x³, y² + xy + tx²). For simplicity, we denote p*(x) again by x. Finally, define the principal S¹ bundle S¹ → M^t_{a,b} → N_t with Euler class e = ax + (a + b)y and gcd(a, b) = 1.
Proposition 6.1 in [EZ] gives an equivalent description of the bundle defining M^t_{a,b}. Since gcd(a, b) = 1, the total space is simply connected, and from the Gysin sequence it follows that the cohomology ring of M^t_{a,b} is of the form required by the diffeomorphism classification of [KS2] as long as 0 ≠ |t(a + b)² − ab| = |H⁴(M^t_{a,b}, Z)|. Next define p̄ : N̄_t → CP² as the S² bundle with Pontryagin and Stiefel-Whitney classes p₁(N̄_t) = 4tx² and w₂(N̄_t) = 0. In this case N̄_t is diffeomorphic to the projectivization P(Ē_t) of the rank 2 complex vector bundle Ē_t over CP² with Chern classes c₁(Ē_t) = 2x and c₂(Ē_t) = (1 − t)x². If P̄_t is the principal U(2) bundle associated to Ē_t, then N̄_t is diffeomorphic to P̄_t/T². Let y be the first Chern class of the dual of the tautological line bundle over P(Ē_t). Then H*(N̄_t) = Z[x, y]/(x³, y² + 2xy + (1 − t)x²).
Again we denote p̄*(x) by x. Finally, define the principal S¹ bundle S¹ → M̄^t_{a,b} → N̄_t with Euler class e = (a + b)x + by and gcd(a, b) = 1.
In this case, one sees that π₁(P̄_t) = Z₂ and P̄_t has a two-fold cover P̄′_t which is a principal S¹ × S³ bundle over CP². Furthermore N̄_t ≅ P̄′_t/T², with T² = {(e^{iθ}, e^{iφ})} ⊂ S¹ × S³. Proposition 7.5 in [EZ] gives an equivalent description of the bundle defining M̄^t_{a,b}. As before, M̄^t_{a,2b} is simply connected and has the cohomology necessary for the diffeomorphism classification of [KS2], since a is odd and so |H⁴(M̄^t_{a,2b})| = |a² − 4tb²| ≠ 0. Escher and Ziller showed that M^t_{a,b} and M̄^{2t}_{a,b} admit S¹ invariant metrics g^t_{a,b} and ḡ^{2t}_{a,b} respectively with nonnegative sectional curvature. In order to apply Theorem 2.1 and Theorem 2.2 we prove the following lemma.
Lemma 4.1. At each point x of (M^t_{a,b}, g^t_{a,b}) and (M̄^{2t}_{a,b}, ḡ^{2t}_{a,b}) there exists a 2-plane σ_x orthogonal to the S¹ orbit with sec(σ_x) > 0.
Proof. The metrics are constructed using cohomogeneity one actions, and we first recall the general description of such manifolds. We consider actions of a compact Lie group G on a manifold M such that the orbit space is the interval [−1, 1]. Let π : M → [−1, 1] be the projection onto the orbit space. Let H ⊂ G be the isotropy subgroup of a point in the principal orbit π⁻¹(0) and K_± the isotropy groups of points in the singular orbits π⁻¹(±1). The slice theorem implies that π⁻¹([−1, 0]) is equivariantly diffeomorphic to the disc bundle D_− = G ×_{K_−} D^{d_−}. Here d_− is the codimension of the singular orbit. Furthermore, the boundary of D_− is G/H, diffeomorphic to the principal orbit π⁻¹(0). D_+ is described equivalently, with the same boundary. Then M is diffeomorphic to the union D_− ∪_{G/H} D_+. Conversely, given Lie groups H ⊂ K_± ⊂ G with K_±/H ≅ S^{d_± − 1}, the action of K_± on S^{d_± − 1} extends to a linear action on D^{d_±}. We can then define M = D_− ∪_{G/H} D_+ as above, and M will admit a cohomogeneity one action by G with isotropy groups H ⊂ K_±.
If d_± = 2, it is shown in [GZ1] that one can define a metric with sec ≥ 0 on M as follows. Let g, k, h be the Lie algebras of G, K_−, H respectively and Q a biinvariant metric on G. Choose g = m ⊕ k and k = h ⊕ p to be Q-orthogonal decompositions and Q_a the left invariant metric on G defined by Q_a = Q|_{m⊕h} + aQ|_p.
Let f(r) be a concave function with f(0) = 0, f′(0) = 1 and f(r) = al²/(a − 1) for r near the boundary of D², where 2πl is the length of K_−/H with respect to Q. In [GZ1] it is shown that the metric g = Q_a + dr² + f(r)dθ² on G × D² has nonnegative curvature as long as 1 < a ≤ 4/3 and hence induces a G invariant metric g_− of nonnegative curvature on the quotient D_−. Furthermore, g_− is a product near the boundary G/H, with the induced metric on G/H the same as that induced by Q. A similar metric can be put on D_+, and because of the boundary condition the two can be glued to form a smooth G invariant metric g of nonnegative sectional curvature on D_− ∪_{G/H} D_+ ≅ M.
In order to prove the claim, we need to describe the manifolds and metrics in a slightly different way than in [GZ2]. For p_−, p_+, q ∈ Z, with p_+ odd and p_− ≢ q mod 2, let P_{p_−,p_+,q} be the cohomogeneity one manifold defined by a choice of Lie groups H ⊂ K_± ⊂ G = U(2) × S³, where R(φ) represents a 2 × 2 rotation matrix and the sign in H is chosen to make H a subgroup of K_+. One easily sees that U(2) acts freely on P_{p_−,p_+,q}. Since U(2) commutes with S³, the quotient P_{p_−,p_+,q}/U(2) admits an action by S³ which is cohomogeneity one with the same isotropy groups as the action of S³ on CP² (see [GZ2] Figure 2.2). Thus P_{p_−,p_+,q} is the total space of a principal U(2) bundle over CP². Suppose P is a principal U(2) bundle over CP². From the spectral sequence of the fibration U(2) → P → CP², one sees that H²(P, Z) ≅ Z_{|c₁|}, where c₁ denotes the coefficient of x in the first Chern class c₁(P) ∈ H²(CP², Z). Applying the Seifert-Van Kampen theorem to P_{p_−,p_+,q} = D_− ∪ D_+, one shows that π₁(P_{p_−,p_+,q}) = Z_q. By the universal coefficient theorem we conclude that H²(P_{p_−,p_+,q}, Z) = Z_q and hence c₁(P_{p_−,p_+,q}) = qx.
Let Z be the center of U(2). Since U(2)/Z ≅ SO(3), P/Z is a principal SO(3) bundle over CP² with first Pontryagin class p₁(P/Z) = c₁(P)² − 4c₂(P), see [EZ], 2.5, 2.6. In particular, P_{p_−,p_+,q}/Z admits a cohomogeneity one action by SO(3) × S³, and one easily shows that the isotropy groups are H ⊂ K_± with H = Z₄ generated by (R_{1,3}(π), j), where R_{n,m} ∈ SO(3) is the rotation in the (n, m) plane of R³. By [GZ2] Theorem 4.7, this bundle has first Pontryagin class p₁(P_{p_−,p_+,q}/Z) = (p_+² − p_−²)x². It follows that c₂(P_{p_−,p_+,q}) = (1/4)(q² − p_+² + p_−²)x². The description of the action on P_{p_−,p_+,q} has d_± = 2, so we can construct a U(2)-invariant metric g̃ with sec ≥ 0 as above. We check that g̃ has a 2-plane with sec > 0 orthogonal to the orbit of T² ⊂ U(2) at each point. We do this on each half D_± = G ×_{K_±} D² separately. By the O'Neill formula it is necessary to find such a 2-plane orthogonal to the orbit of T² × K_± at each point of G × D². For D_− we have g = u(2) ⊕ su(2), k = p = span{(p_− i, i)} and h = {0}.
Here $\{i, j, k\}$ is the standard basis of $\mathfrak{su}(2)$ and $\{l, i, j, k\}$ is the standard basis of $\mathfrak{u}(2)$ with $l$ the generator of the center.
Since $T^2$ and $K_-$ act on $G$ on the left and right respectively, the tangent space to the orbit at each point $(y, z) \in G \times D^2$ is contained in $dR_y(\mathfrak{t}^2 \oplus 0) + dL_y(\mathfrak{k}) + T_z(K_- \cdot z)$. Here $L_y$ and $R_y$ designate left and right translation on $G$. Since $(0, j)$ and $(0, k)$ are orthogonal to $\mathfrak{k}$ and $\mathfrak{u}(2) \oplus \{0\}$ with respect to the left invariant metric $Q_a$ and $\mathfrak{u}(2) \oplus \{0\}$ is Ad-invariant, $dL_y(0, j)$ and $dL_y(0, k)$ are orthogonal to the orbit of $T^2 \times K_\pm$. Choose $\tau_{[y,z]}$ to be the image of $dL_y(0, j) \wedge dL_y(0, k)$. By the O'Neill formula, $\sec_g(\tau_{[y,z]})$ is bounded below by a positive multiple of $|dL_y(0, i)^V|^2$, where $dL_y(0, i)^V$ is the projection of $dL_y(0, i)$ onto $dL_y(\mathfrak{k})$. The same argument can be made on $D_+$ using $dL_y(0, i) \wedge dL_y(0, k)$.
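For reference, the version of O'Neill's formula used repeatedly here is the standard one for a Riemannian submersion $\pi : (M, g) \to (B, g_B)$ and horizontal vector fields $X, Y$:
$$\sec_B(\pi_* X, \pi_* Y)\,|X \wedge Y|^2 = \sec_M(X, Y)\,|X \wedge Y|^2 + \tfrac{3}{4}\,\big|[X, Y]^{\mathcal{V}}\big|^2,$$
where $[X, Y]^{\mathcal{V}}$ is the vertical part of the bracket. In particular, sectional curvature can only increase under a submersion, which is why positivity of $\sec(\tau)$ on $G \times D^2$ descends to the quotients.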
To summarize, $P_{p_-,p_+,q}$ is the $U(2)$ principal bundle over $\mathbb{CP}^2$ with Chern classes $c_1 = qx$ and $c_2 = \frac{1}{4}(q^2 - p_+^2 + p_-^2)x^2$, and it admits a $U(2)$-invariant metric $g$ and a 2-plane $\tau_x$ at each point $x \in P_{p_-,p_+,q}$ with $\tau_x \perp T^2 \cdot x$ and $\sec_g(\tau_x) > 0$. In particular, $P_{2t,1-2t,1} = P_t$ and $P_{2t-1,2t+1,2} = \bar{P}_{2t}$. The metric $g^t_{a,b}$ is defined such that $g$ and $g^t_{a,b}$ make $P_t \to P_t/S^1_{a,b}$ into a Riemannian submersion. Let $g'$ be the locally isometric lift of $g$ to the universal cover $\bar{P}'_{2t}$ of $\bar{P}_{2t}$. Note that the $T^2 \subset U(2)$ action on $\bar{P}_{2t}$ lifts to the $T^2 \subset S^1 \times S^3$ action on $\bar{P}'_{2t}$. $\bar{g}^{2t}_{a,b}$ is defined such that $g'$ and $\bar{g}^{2t}_{a,b}$ make $\bar{P}'_{2t} \to \bar{P}'_{2t}/S^1_{-b,a}$ into a Riemannian submersion. On each manifold, the image $\sigma_x$ of $\tau_x$ under the $S^1$ quotient will be orthogonal to the orbits of $T^2/S^1$. Using the O'Neill formula once more it follows that $\sec(\sigma_x) > 0$ with respect to $g^t_{a,b}$ and $\bar{g}^{2t}_{a,b}$. We note that these metrics are invariant under the centralizer of $S^1$, which is isomorphic to $S^1 \times S^3$ in each case. The groups acting effectively by isometries are $S^1 \times SO(3)$ and $U(2)$ respectively.
Lemma 4.1 yields the metrics required to calculate the $s$ invariant for the two families of $S^1$ bundles.
Next let $E^2$ be the 2-plane bundle associated to $\bar{M}^t_{a,b}$ and $\bar{W}^t_{a,b} \subset E^2$ the disc bundle with projection $\sigma : \bar{W}^t_{a,b} \to \bar{N}^t$. We have the bundle isomorphism $T\bar{W}^t_{a,b} \cong \sigma^*(E^2 \oplus T\bar{N}^t)$, and the second Stiefel-Whitney class is computed from this splitting; here the notation $\sigma^*$ is suppressed since it is an isomorphism on cohomology. Thus when $b$ is even, $a$ is odd, and $\bar{W}^t_{a,b}$ is a spin manifold. From the Gysin sequence one sees that $H^4(\bar{M}^t_{a,b}, \mathbb{Z})$ is torsion so all the rational Pontryagin classes vanish.
We see that $a_k^2 - 8t_k b_k^2 = r$, $m_k a_k + 2n_k b_k = 1$, and each of $a_k, b_k, m_k, n_k, t_k$ is equal to the corresponding $a, b, m, n, t$ mod $\lambda$. When $r < 0$ we have $t, t_k > 0$, so $2b_k(1 + 2t_k)$ has the same sign as $2b(1 + 2t)$. It follows that $\mathrm{sign}(\bar{W}^{2t_k}_{a_k,2b_k}) = \mathrm{sign}(\bar{W}^{2t}_{a,2b})$. This is enough to ensure the numerators of $s_i(\bar{M}^{2t}_{a,2b})$ and $s_i(\bar{M}^{2t_k}_{a_k,2b_k})$ are equal modulo the denominators, so $s_i(\bar{M}^{2t}_{a,2b}) - s_i(\bar{M}^{2t_k}_{a_k,2b_k}) \in \mathbb{Z}$. Thus the invariants $s_i \in \mathbb{Q}/\mathbb{Z}$ and $|H^4(M, \mathbb{Z})|$ are equal and $\bar{M}^{2t_k}_{a_k,2b_k}$ is diffeomorphic to $\bar{M}^{2t}_{a,2b}$ by [KS2] Theorem 3.1. Since $a_k^2 - 8t_k b_k^2$ and $\mathrm{sign}(\bar{W}^{2t_k}_{a_k,2b_k})$ are constant for the sequence $\{\bar{M}^{2t_k}_{a_k,2b_k}\}_k$, the $s$ invariant is a polynomial in $k$, and there is an infinite subsequence of metrics with distinct $s$ invariants. Lemma 2.3 completes the proof of Theorem B part (b).
Thus we can give $M^t_{a,b}$ the spin structure induced from the bundle isomorphism $TM^t_{a,b} \cong \rho^* TN^t \oplus V'$, where $V'$ is the bundle generated by the $S^1$ action field and $\rho : M^t_{a,b} \to N^t$. From the Gysin sequence one sees that $H^4(M^t_{a,b}, \mathbb{Z}) = \mathbb{Z}_{|t(a+b)^2 - ab|}$, so the rational Pontryagin classes vanish when $t(a+b)^2 - ab \neq 0$.
By Lemma 4.1, $g^t_{a,b}$ satisfies the conditions of Theorem 2.2. It follows that $s(M^t_{a,b}, g^t_{a,b}) = s(M^t_{a,b}, h)$ for an $S^1$ invariant metric $h$ with geodesic fibers. Then the circle bundle $M^t_{a,b} \to N^t$ and $h$ satisfy the hypotheses of Theorem 1.3 and $s(M^t_{a,b}, g^t_{a,b})$ is given by (1.6). In [EZ] the terms $p_1^2$, $p_1 e^2$ and $e^4$ are calculated for $W^t_{a,b}$ and we have
$$s(M^t_{a,b}, g^t_{a,b}) = \frac{(a+b)(1-t)^2}{2^3 \cdot 7 \cdot (t(a+b)^2 - ab)} + \frac{1}{2^5 \cdot 3 \cdot 7}\big(-3ab + (1-t)(8 + (a+b)^2)\big) + \frac{1}{2^5 \cdot 7}\,\mathrm{sign}(W^t_{a,b}).$$
When $t(a+b)^2 \neq ab$, $M^t_{a,b}$ also has the cohomology ring necessary to define the diffeomorphism invariants $s_i$. They are calculated in [EZ] Proposition 5.2. Just as for $\bar{M}^t_{a,b}$ they are given by rational functions with numerators depending on $a, b, m, n, t$ and $\mathrm{sign}(W^t_{a,b})$, where $m, n$ are such that $ma + nb = 1$. The denominators divide $2^5 \cdot 3 \cdot 7 \cdot |t(a+b)^2 - ab|$. As these are the only relevant details, we omit the equations for brevity.
One checks that $t_k(a_k + b_k)^2 - a_k b_k = r$, $m_k a_k + n_k b_k = 1$, $a_k + b_k = a + b$ and each of $a_k, b_k, m_k, n_k, t_k$ is equal to the corresponding $a, b, m, n, t$ mod $\lambda$. It follows that $M^{t_k}_{a_k,b_k}$ is diffeomorphic to $M^t_{a,b}$ while $s(M^{t_k}_{a_k,b_k}, g^{t_k}_{a_k,b_k})$ is a polynomial in $k$. This completes the proof of Theorem B.
Remark 4.3. (a) One easily sees that $W^7_{a,b}$ is diffeomorphic to only finitely many other $W^7_{k,l}$, and to no other homogeneous spaces. By [ST] Proposition 1.1 the space of $G$ invariant metrics with nonnegative sectional curvature on a homogeneous space $G/H$ is connected. Thus $\mathcal{M}_{\sec \ge 0}(W^7_{a,b})$ has infinitely many components by the corollary, but only finitely many of them contain homogeneous metrics. Each of those in turn contains a positively curved metric, except in the case of $W_{1,0}$. There are examples due to [KS3] where one has two components containing metrics with $\sec > 0$.
(b) One sees from the diffeomorphism invariants that no two of the Eschenburg spaces $F_{a,b}$ are diffeomorphic, so we cannot use this set of metrics to prove that any $\mathcal{M}_{\sec > 0}(F_{a,b})$ is not path connected.
(c) We saw in the proof of Lemma 4.1 that $S^1 \times SO(3)$ and $U(2)$ respectively act by isometries on $g^t_{a,b}$ and $\bar{g}^{2t}_{a,b}$, and we suspect each is the full identity component of the isometry group. (d) The same argument as in Remark 3.2 shows that $M^t_{a,b}$ and $\bar{M}^{2t}_{a,b}$ do not have the homotopy type of a 7-dimensional homogeneous space if $|t(a+b)^2 - ab|$ or $|a^2 - 2tb^2| = 2 \bmod 3$.
Comment on tc-2021-167
1) The manuscript is well written and presents a sound classification schema of glaciers based on their sensitivity to environmental change. The methodology has solid support in the literature, and the consideration of this new classification is likely to be a valuable contribution to the management of glaciers, especially concerning their hydrological services.
2) The work is presented as a contribution towards the development of GPL, particularly as a solution to inadequate definitions in the Chilean GPL projects. In that sense, there is no clear articulation between the proposed classification and the GPLs. Both Chilean and Argentinean GPLs avoid conflict and ambiguities by protecting all glaciers equally, regardless of their type, size, location or debris cover. In that context, it is hard to understand how this classification schema can help the design of a law proposal of consensus and without the "legal issues" mentioned for the Argentinean GPL. If the authors propose a type-dependent level of protection (as stated in lines 224-225, 261), that should be clearly stated and followed with well-elaborated reasoning to support that proposal. Arguably, a type-dependent level of protection will only complicate things, especially given that the classification is sometimes ambiguous (lines 115-116) and changes with time (lines 159-164). In some sections, it even seems that the authors suggest a case-by-case assignation of the level of protection (lines 274-278). Much emphasis was put on the usefulness of the proposed classification for glacier management. However, the Argentinean GPL and Chilean GPL proposals aim to protect glaciers, not to manage them.
If the authors are mainly suggesting a type-dependent monitoring program or the addition of this classification to national inventory fields (as stated in line 271), this should be clearly stated from the start. In such a case, they should also include a more detailed explanation of how this classification will help water resources management and a thorough motivation of the methodology. For example, why a classification is better than a "sensitivity index" or case-by-case modelling.
3) Following the facts detailed in lines 31-32, it seems inaccurate to refer to glaciers/landforms as sensitive/insensitive. The differences seem to be related only to the timescales of their response to environmental changes. Perhaps fast/slow response would be better terminology.
4) The main proxy to assess glacier change is mass balance, which depends on accumulation and ablation. However, this work seems to focus entirely on the ablation part of the equation.
In avalanche-fed glaciers, which is often the case for categories 2 or 3 (semi-sensitive and insulated), there could be a high climatic sensitivity associated with the snow accumulation on surrounding slopes that are not even part of the glacier. While such glaciers would "melt away slowly" due to their debris cover, their mass gain mechanism might have a very high sensitivity to environmental changes. In these cases, their water storage capability at inter-annual timescales would also have a high sensitivity to environmental change.
5) The use of the term "landform" makes the manuscript very confusing. While it can refer to anything (a glacier, a ridge, a mountain), it is often used to refer to a glacier, where the direct use of the term "glacier" would make the text much clearer. In some cases, for the same glacier the text says that it is a landform composed of multiple glacier types, and that it is a glacier composed of multiple landform types (line 118: "Where a landform is made up of multiple glacier types (Fig. 1a [Tapado Glacier])", lines 125-126: "Tapado Glacier [Fig. 1a] is made up of the three distinct landform types..."). Other sections use the concept of "glacier morphology" (line 161). More consistent use of the terminology is necessary: "glacier" and "surface type" could be better concepts to use (instead of randomly interchanging either of them with "landform").

6) In the context of GPLs and glacier inventories, it seems that the authors propose the use of their methodology nationwide or throughout the Andes. However, the examples presented in the figures and Table 1 are biased toward the semi-arid Andes; the same is true of the accuracy check proposed in line 234. All examples are within four degrees of latitude. It must be made clear what geographical area this methodology has been designed for. If the application area is the whole of the Andes, the authors should address the different challenges posed by tropical and Patagonian glaciers.
Specific comments (numbers refer to manuscript version 2):
7: In the context of this paragraph and in particular the GPLs, "landform types" have a very different and more specific meaning than in the rest of the text, as the most controversial definitions that have hindered consensus on the Chilean GPL are the definitions of glacier, periglacial, and permafrost. However, "landform types" in the manuscript refers interchangeably to glaciers or to parts of a glacier with a distinct surface type (based on debris cover). This difference gives the reader the impression that this work offers a direct solution to the definitions controversy that has been, in part, the cause of the lack of consensus, which is wrong.

21-22: Given that the authors seem to be opening the discussion over the idea of not protecting all glaciers equally but differentiating protection by hydrological behaviour, it seems very important to elaborate on what legal issues have hindered the application of the Argentinean GPL, or at least to give a reference for that affirmation.
23-24: This requires further elaboration. It is unclear how distinguishing between glacier types can reduce the legal ambiguity. In general, one would think that the current approach of the Chilean and Argentinean GPLs (protecting all glaciers regardless of type) is less ambiguous than differential protection based on a glacier classification schema.

39: The switch between the "glacier" terminology and the use of "landform" should be explained here. Otherwise, simply keep using "glacier."

77-80: It seems against the objectives of this work to base the threshold of debris thickness on a single glacier. Arguably, debris type can have a significant influence, as can the partitioning of the different melt processes affecting a glacier. In areas where sublimation is the primary melt process, a thin layer of debris might be enough to reduce melting significantly. In other cases, such as the temperate glaciers of New Zealand and Patagonia, a large amount of the melting is due to rain, and perhaps a much thicker debris cover is required to reduce melt rates. Pirámide Glacier might be representative only of glaciers where shortwave radiation is the dominant melting process.

121-123: Again, it seems against the objectives of this work to include ambiguous criteria like this (what is "very minor"?). See Table 1.

144-149: It is confusing to use the term "landform" when you mean "glacier", unless the authors want to refer to different sections of a glacier with different surface types; however, if that is the case, it does not make sense to say that the insulated part of Tapado Glacier is insensitive to environmental change while its accumulation area is a sensitive "landform".
159: "It is likely" seems a euphemism for something that unquestionably will happen. Table 1: What is the point of comparing this article classification with DGA/IANIGLA classification? Each of these is classifying completely different attributes of the glacier: Glacier sensitivity to environmental change in this article, glacier shape/main characterizing feature for DGA, and glacier debris cover for IANIGLA. 214: Which are the distinct hydrological roles? The authors only point to differences in the timescales and the degree to which these glacier types play a role as water reservoirs. 227-229: While that might be more objective, it seems a nightmare from a legal point of view. One can picture a development project affecting a sensitive glacier because a logistic regression happens to assign it to the wrong category. 256-257: As for line 214, it seems that "role" is not the best word to distinguish between the hydrological effects of different types of glaciers.
Research on the Sustainable Development of Tourism Industry in Xisuo Village from the Perspective of Communication and Exchange
Xisuo Village, as the center of the famous Jiarong chieftain culture in the Tibetan area of Sichuan, has a history of more than 600 years and shows the characteristics of multi-ethnic cohabitation and multi-cultural symbiosis; the association, communication and integration of various ethnic groups have played an important role in the economic, cultural and social development of Xisuo Village. Based on an analysis of the characteristics of ethnic association, communication and integration in the Zhuokeji area, combined with the local culture embodied in folk houses, population and material resources, as well as an introduction to Xisuo Village and its ethnic culture, this paper points out the existing problems of local tourism in the aspects of uneven development factors, a rigid publicity mode and poor management of scenic spots, and proposes corresponding solutions. At the same time, ideas such as enhancing multicultural sustainability and using artificial intelligence for sustainable tourism development are put forward.
Introduction
In recent years, with the rapid development of rural tourism, many ethnic villages have ushered in new opportunities for tourism development by virtue of their own ethnic characteristics. The scale of tourism development and construction continues to expand, and the efficiency of the tourism industry continues to improve and upgrade. The tourism industry has brought certain dividends in terms of data, but there are also problems and risks. Excessive protection of ethnic villages makes it difficult to meet the needs of development, while excessive development destroys them. Ethnic villages are not only the common production and living homes of compatriots of all ethnic groups, but also the physical space for the association, communication and integration of various ethnic groups over the course of history. Xisuo Village in Ma'erkang City, Sichuan Province, as a key protected ethnic village, finds its tourism industry standing at the forefront, facing both opportunities and challenges. Based on the historical development and ethnic characteristics of Xisuo Village, this paper takes the association, communication and integration of various ethnic groups as the main line of research, analyzes the problems faced by its tourism development, and explores strategies for the sustainable development of tourism.
Introduction to Xisuo Village
Xisuo Village is located in Ma'erkang Town in the southeast of Ma'erkang City (the administrative area of Zhuokeji Town was assigned to the jurisdiction of Ma'erkang Town in 2019), about 7 kilometers from the center of Ma'erkang City, at an average altitude of about 2700 meters, with an area of about 95 square kilometers and a population of 349 people [1]. The main ethnic group in Xisuo Village is Tibetan, chiefly Jiarong Tibetans, and the village is the main site of Jiarong Tibetan chieftain culture. The village contains a national key cultural relics protection unit, the Zhuokeji Tusi (chieftain) official village, and a provincial cultural relics protection unit, the Xisuo folk houses. At present, there are 63 households in the village, of which about 36 have reception capacity.
Introduction to Jiarong Tibetan Chieftain Culture
As for the definition of chieftain culture, Chinese scholars generally agree that it is the product of the integration of the chieftain system and ethnicity. Through a review of the relevant literature, Zhuokeji chieftain culture can be summarized as follows: it is the sum of the material and spiritual culture formed and developed by the Jiarong Tibetans and other ethnic groups living in the Zhuokeji area under the chieftain system over 600 years of history. Zhuokeji chieftain culture is an important component of Jiarong Tibetan culture and a major feature that distinguishes it from other Tibetan cultures. Based on this understanding of its cultural connotation, Zhuokeji chieftain culture can be subdivided into the following three parts: first, the spatial carrier of chieftain culture, the chieftain architectural complex; second, the mode of transmission of chieftain culture, religious and folk custom activities; and third, the natural resources of the Zhuokeji area.
Chieftain Architectural Complex Culture
The national cultural relics protection unit in Xisuo Village, known as "a pearl in the history of oriental architecture", the Zhuokeji Chieftain Official Village, has long been a research object for many architectural scholars, and it reflects the architectural characteristics of Jiarong Tibetan culture: bluestone, yellow mud and wood as the main building materials; a hierarchical use of space, with ground-floor corrals, middle-floor dwellings and top-floor sutra halls; and corridors and cloisters that connect each functional area while meeting lighting needs. Its Han cultural characteristics are also very significant, such as the imitation of Chinese railings and lattice windows, which fully display Han architectural characteristics of the Qing Dynasty [2]. The blending of Han and Tibetan architectural culture in the Zhuokeji official village is evident, and the folk houses in Xisuo Village also show this characteristic. Fortress-like buildings, well arranged and with distinct heights, are the main feature of the Xisuo folk houses, and they constitute the physical space of multi-ethnic association, communication and integration.
Religious and Folk Activities
As an important carrier of Jiarong Tibetan culture, the religious culture in Xisuo Village also presents characteristics of pluralism and symbiosis, with Tibetan Buddhism, Taoism, the Benbo (Bon) religion and other religious sects coexisting. In many temples in the Zhuokeji area, including the Dandalun Temple in Xisuo Village, the annual ten-day "Skurynchimpo" ritual (a festival commemorating the emperor's conferral of the chieftain title) is very grand: chieftains and headmen wear the official robes and hats given by the emperor and celebrate with the common people through acrobatics, lion dances, circle dances, etc. [3], which strongly reflects the association, communication and integration between Han and Tibetan culture.
Abundant Material Conditions
Ma'erkang City is located on the southern edge of the Qinghai-Tibet Plateau, to the northwest of the Sichuan Basin. At an altitude of about 2700 meters, Xisuo Village experiences little extreme weather and is known as a "natural oxygen bar". Rich in natural resources, the area contains one of the largest coniferous forests in China and abounds in medicinal plants such as Fritillaria, Notopterygium root, Gastrodia elata and Cordyceps sinensis. Its grassland is abundant and can be grazed in all seasons [4]. These rich material resources constitute the material basis of production and life in Xisuo Village, the necessary condition for attracting the Han and Hui ethnic groups to trade and exchange, and the realistic basis for the formation of exchange and integration among various ethnic groups.
Introduction to Tourism Resources
The Xisuo folk houses in Xisuo Village, as a cultural relics protection unit of Sichuan Province, have become the primary attraction for tourists with their scattered and colorful buildings. The turning bridge connecting the two sides of the Xisuo folk houses and the Zhuokeji chieftain official village on the other side of the river are also essential viewing spots. The 600-year-old Dandalun Temple in the village is a holy place where many tourists and local residents worship and pray for blessings [5]. With the development of new types of tourism in recent years, interactive tourism experiences such as Buddha painting, textile learning and Tibetan brewing have emerged, which is of great significance for realizing the sustainable development of Jiarong Tibetan culture.
The Development of Catering and Accommodation and Handicraft Industry
Xisuo Village currently has about 30 lodging hotels with reception capacity, of which about 24 are listed on online booking platforms, including lodgings that blend Han and Tibetan cultures, such as Xayang and Xiaoguanzhai, as well as lodgings with a strong Jiarong Tibetan cultural flavor, such as Ajiana, Alang Tibetan and Mosibu. There are about 20 catering enterprises with reception capacity, dominated by Jiarong Tibetan cuisine, including Kangba hot pot and Tibetan morning tea. There are about 5 ethnic cultural goods stores in Xisuo Village, mainly dealing in Tibetan Buddhist jewelry, small souvenirs of Jiarong Tibetan culture, Tibetan clothing, Tibetan cloth and so on. With the increase of outside tourists and the arrival of outside businesses and self-employed people in recent years, two cafes and bistros have also added new content to the dining scene in Xisuo Village.
Outdated Supporting Infrastructure in Xisuo Village
With the upgrading of tourism consumption, the completeness of local living facilities and infrastructure has become one of the main considerations for travelers choosing a destination. Xisuo Village, as an important part of tourism in Ma'erkang City, still has much room for infrastructural development. Among the issues, fire safety is the most prominent: fire facilities are outdated, and fire escapes are strewn with debris and private cars. As the first floor of the Xisuo folk houses is mainly used for livestock breeding, the pollutants and excrement of livestock are directly exposed, which affects the tourist experience and the respiratory health of local residents and tourists, and the project of separating humans and animals is progressing slowly. The public toilets in Xisuo Village are in poor condition and cleaned infrequently, making the tourist experience worse. These problems affect not only the vast majority of tourists but also the quality of life of local residents.
Lack of Professional Talents
In the era of the digital information explosion, it is necessary to use the Internet to update services, brands and content in a timely manner. However, most tourism practitioners in Xisuo Village have limited education, and some even have difficulty communicating in Chinese, let alone operating self-media accounts. Although the Ma'erkang municipal government has held labor skills training for tourism practitioners many times, the short training time, the limited number of trainees and the narrow training content are difficult to balance, which leaves the improvement of talent skills at a superficial stage. Although there are outside operators, the core elements of the business are mastered by them, while local people are mostly engaged in basic catering services, residential cleaning and other basic work, with low income and high substitutability. There are cases of young people returning home, but the lack of talent remains a long-term problem for scaling up tourism development.
Insufficient Capital Volume of Tourism Development
The capital cost of tourism investment and development is high and the return cycle is long, so it is difficult to achieve breakthrough development by relying only on the government and local residents. Only with the participation of multiple elements, such as social capital and large tourism enterprises, can the upgrading and development of tourism be realized. At present, the homestays, restaurants and handicraft shops in Xisuo Village are small and simple, with weak resistance to risk. If development relies only on government planning and the owners' own funds, the risk is high and the return small, which will not only make it difficult to recover funds but may also lead to poverty.
Solidified Publicity Model of National Culture and Chieftain Culture
The tourism development of Xisuo Village is mainly advertised through highway advertisements and the Ma'erkang government. Although it has also been promoted on social platforms thanks to the publicity of Xiaohongshu and Douyin influencer accounts, in the era of the self-media boom the traditional publicity model can no longer meet the needs of the new tourism model, and it is obviously not enough to rely only on the recommendations of a few platforms.
Incomplete Management of Tourist Areas
The hygiene standards, safety standards and industry standards of accommodation and catering in Xisuo Village all lack unification and are rather arbitrary. Large fluctuations in accommodation prices, big differences between online booking prices and offline prices, and the poor service attitude of some practitioners have a negative impact on the standardized management of tourist attractions and the healthy development of the tourism industry. The lack of institutional and industry norms for the scenic spot industry on the part of the government and other actors will eventually lead to passive situations such as chaos in the tourism industry, price wars and poor service.
Increasing Infrastructure and Supporting Facilities
First, the government should strengthen infrastructure construction and improve the reception capacity of Xisuo Village. Investment in fire safety and the transformation of lighting engineering should be increased, and the "human-animal separation" project and the renovation of toilets should be continuously followed up in order to provide a comfortable, safe and healthy space for local residents and tourists. Second, the government should continuously improve the policies related to the demolition and construction of residential buildings and protect traditional buildings. Third, the level of modern facilities and the optical fiber data network in Xisuo Village should be improved, the rules and regulations of the village tourism industry should be optimized, and the medical conditions of the village health center and the capacity for handling emergencies should be upgraded. Fourth, the skill level of tourism operators needs to be improved; skills training in service quality, language communication and other areas can be carried out regularly, with completion certificates, certificates of qualification or other credentials issued to enterprises or individuals.
Introduction of Capital and Enterprises
With the continuous development of the rural tourism market, industries such as catering and accommodation in ethnic villages have gradually shifted from individual management to enterprise management. The introduction of tourism companies with professional technology, operation and development capabilities is not only conducive to expanding local tourism, but also provides new ideas for the diversification of local industries. Various modes of cooperation, such as "government + enterprise + individual", "enterprise + individual" and "enterprise + collective economy + individual", will provide more options for achieving the sustainable development of tourism.
Upgrade of Publicity Mode
The richness and diversity of self-media categories are enough to prove the high returns brought by self-media. Through a multi-platform approach combining WeChat, Douyin, Twitter and Kuaishou accounts, potential customers on different platforms can be tapped, potential traffic cultivated, users guided to follow relevant short videos, and consumers attracted to offline tourism through those videos.
Enhancing Multicultural Sustainability
The discussion of the sustainability of ethnic culture is first based on its physical space, and the protection of ethnic architecture is the focus of cultural sustainable development. When emphasizing the dividends brought by tourism development, we should also emphasize how much our impact on tourist destinations can be reduced. Protecting ethnic buildings, optimizing the residential space management of the Xisuo folk houses with the concept of management units, and carrying out appropriate planning and development are important steps toward realizing the sustainable development of architectural culture [6]. Intangible cultural heritage is not only the core of local culture, but also the core of the development of local tourism. Systematically sorting out the cultural heritage of the various ethnic groups in the form of song and dance dramas, poems, film and television scripts, etc., and classifying and managing it accordingly, is an important part of realizing the sustainable development of intangible cultural heritage.
Digitalization of Tourism Promotes Association and Communication and Integration Among Ethnic Groups
The rapid development of artificial intelligence has brought many opportunities and challenges, and making good use of it can provide more convenience and reference for the development of tourism. Through AI analysis, visitors can be provided with differentiated itineraries and convenient experiences. At the same time, tourism practitioners can adjust their operations in a timely manner based on data analysis, update and upgrade tourism products, and thereby facilitate more convenient exchange and integration among the various ethnic groups.
Conclusion
The sustainable development of the tourism industry in Xisuo Village is necessary not only for the preservation and inheritance of the village's traditional buildings and temples, but also for the sustainable development of its intangible cultural heritage. Empowering the tourism industry of Xisuo Village through digitalization, modernization and capital investment is not only the path to local economic growth, but also a realistic way for local residents to increase their income.
Longitudinal and Transverse Zeeman Ladders in the Ising-Like Chain Antiferromagnet BaCo2V2O8
We explore the spin dynamics emerging from the Néel phase of the chain compound antiferromagnet BaCo2V2O8. Our inelastic neutron scattering study reveals unconventional discrete spin excitations, so-called Zeeman ladders, understood in terms of spinon confinement, due to the interchain attractive linear potential. These excitations consist of two interlaced series of modes, respectively with transverse and longitudinal polarization. The latter have no classical counterpart and are related to the zero-point fluctuations that weaken the ordered moment in weakly coupled quantum chains. Our analysis reveals that BaCo2V2O8, with moderate Ising anisotropy and sizable interchain interactions, remarkably fulfills the conditions necessary for the observation of these longitudinal excitations. The nature of the excitations in spin-1/2 antiferromagnets is a topic of considerable current interest in the field of quantum magnetism. In three dimensions (3D), the Néel state is a very good approximation of the ground state. It is characterized by staggered long-range magnetic order and its excitation spectrum is dominated by single-particle states, so-called spin waves or magnons, that correspond to a precession of the ordered moment around its equilibrium direction. These quasi-particles carry a total spin of unity and correspond to transverse excitations. Quantum fluctuations, usually resulting in minor corrections in two and three dimensions, become especially relevant in the one-dimensional (1D) case, destroying the long-range order as well as the precession modes, even at T = 0. The spin excitation spectrum is instead a continuum composed of pairs of S = 1/2 excitations called spinons that are created or destroyed only in pairs, like domain walls in an Ising magnet.
Physical realizations of 1D systems, however, eventually order at very low temperature, owing to small couplings between chains. This dimensional cross-over, from the continuum of spinons towards the classical picture of a 3D Néel state dressed with spin waves, is an appealing issue [1]. More precisely, in the ordered state of quasi-1D systems, each chain experiences an effective staggered molecular field. As a first consequence, a linear attractive potential between the spinons appears, which competes with their propagating character, and finally leads to their confinement in bound states. A spectacular manifestation of this effect in the case of Ising spins, initially described by Shiba [2], is the quantization of the excitation continuum in a series of discrete lines below the Néel temperature. This effect, called a Zeeman ladder, was proposed to explain the discretization of the excitations observed in the ordered phase of CsCoCl3 and CsCoBr3 with Raman spectroscopy [2,3]. Recently, a similar series of modes was also observed in the Ising ferromagnetic chain compound CoNb2O6 [4,6]. It should be pointed out, however, that since there are three possible spin states for a pair of spinons, three types of bound states are expected. Besides the two transverse modes, a third bound state type, corresponding to fluctuations parallel to the direction of the ordered moment, hence a longitudinal mode, is also expected to accompany the crossover from 1D to 3D physics. Its observation has, however, so far been established only in the quasi-1D Heisenberg spin-1/2 antiferromagnet KCuF3 [6].
In this article, we introduce a new focus on this physics. We examine the excitations of BaCo2V2O8, which realizes a quasi-1D spin-1/2 antiferromagnet intermediate between the Ising and Heisenberg cases. By means of inelastic neutron scattering, we describe, below the ordering temperature, the emergence of transverse and longitudinal excitations in the form of two well-defined Zeeman ladders. This remarkable material thus displays, through the dimensional cross-over, both signatures of the quantum fluctuations mentioned above.
BaCo2V2O8 consists of screw chains of Co²⁺ running along the fourfold c-axis of the body-centered tetragonal structure [1]. These chains are weakly coupled, yielding an antiferromagnetic (AF) ordering (propagation vector k_AF = (1, 0, 0) [8-10]) in zero field below T_N ≈ 5.5 K [10-12]. The magnetic moment in the distorted octahedral environment is described by a highly anisotropic effective spin S = 1/2 [13] with g_xy = 2.95 and g_z = 6.2 [14], thus allowing quantum fluctuations [15]. The validity of this description is sustained by the observation of the first crystal field level at 30 meV [10]. This physics is described by the XXZ Hamiltonian:

H = J Σ_i [ S_i^z S_{i+1}^z + ε (S_i^x S_{i+1}^x + S_i^y S_{i+1}^y) ]   (1)

where, according to the analysis of the magnetization curve [15], the intrachain AF interaction is J = 5.6 meV and the anisotropy parameter is ε = 0.46. The neutron experiment was performed on the JCNS/CEA-CRG cold neutron three-axis spectrometer IN12 at the Institut Laue-Langevin high-flux reactor, Grenoble, France. A series of energy scans at constant scattering vector Q was measured in the Néel phase to obtain the spin dispersion perpendicular (along a) and parallel (along c) to the chains.
Direct evidence for the emergence in the ordered phase of unconventional dispersive excitations is shown in Fig. 1. At the zone center Q = (2, 0, 2) and at T = 1.6 K, a series of sharp modes ranging between about 1.5 and 6 meV, with decreasing intensities as the energy increases, is observed [see Fig. 1(b)]. These sharp modes show a sizable dispersion along the chain direction, as can be seen in Fig. 1(a). The presence of an intense peak dispersing between 6 and 7 meV with a weaker, out-of-phase dispersion along the c-axis can also be noticed. As expected for magnetic excitations, all these modes disappear above T_N [see Fig. 1(b)]. The relative Q dependence of their intensities suggests that the peak around 7 meV can be interpreted as an optical mode, whereas the series of low energy excitations is acoustic-like. The existence of both types of excitations is indeed expected considering the 16 Co²⁺ ions per unit cell in a classical picture. Yet, other interpretations for this intense mode could be considered, such as a kinetic bound state of spinons or a bound state of pairs of spinons [4,6].
In the remainder, however, we shall focus on the low energy series, and first investigate their polarization relative to the direction of the staggered moment. A neutron scattering experiment is indeed only sensitive to the spin components perpendicular to Q. Since the ordered moment is along the c-axis, measurements with Q ∥ c reveal transverse excitations (along a and b) while measurements with Q ∥ a disclose the superposition of transverse (along b) and longitudinal (along c) excitations. Energy scans were thus measured at T = 1.6 K at various Q positions (Fig. 2). For Q = (0, 0, 2), a single series is observed with the lowest energy mode around 1.8 meV. As the scattering vector rotates towards the a direction, a twin series of modes, shifted to slightly higher energies, rises progressively with an intensity that increases with respect to the first series. The lowest mode of this second series is gapped with a minimum at about 2 meV. These results evidence unambiguously the transverse (T) nature of the first series of discrete modes and the longitudinal (L) nature of the second one.
An important characteristic of this quasi-1D chain system is the strength of the interchain interactions. It can be evidenced from the dispersion of the excitations in the direction perpendicular to the chain axis. Although not visible for l = 1, a sizable dispersion, of the order of 0.1 meV, is observed for l = 2 with the expected minimum of the gapped mode at the AF points. Because of the complexity of the unit cell, there are likely several relevant and competing interchain interactions in BaCo2V2O8. They may add to or compensate each other, resulting in this peculiar l dependence of the dispersion [16].
Last, we extracted the position of the modes in order to investigate the binding mechanism of the spinons. The modes in the energy range 1.5-6 meV at Q = (0, 0, 2) and at Q = (3, 0, 1) were fitted by a series of Gaussian functions (see Fig. 4). Their full width at half maximum was obtained from a fit of the lowest energy T and L modes and held constant at the same value (0.2 meV) for the subsequent modes of the series. It was necessary to add to the model a background increasing with energy, very probably due to a continuum of excitations. For Q = (0, 0, 2), eight sharp T modes could be extracted. For Q = (3, 0, 1), five T modes and five L modes could be separated. The sixth and seventh modes of the series were fitted by a unique Gaussian function including the T and L modes too close in energy to be separated. This analysis shows that the spacing between the modes follows a very nontrivial sequence. In order to interpret these results, a good starting point is the pure 1D quasi-Ising limit (ε ≪ 1 in Eq. (1)). A state containing two spinons is created by reversing one spin from one of the 2 degenerate Néel states. Two AF bonds are broken, leading to a state with energy J, degenerate with all states resulting from reversing an arbitrary number of subsequent spins. These states carry a spin S^z = ±1 for an odd number of reversed spins and S^z = 0 for an even number. As soon as ε ≠ 0, the excitation spectrum becomes a continuum composed of such two domain wall states which propagate independently. It is worth noting that in this picture, the S^z = ±1 states form transverse excitations, while the S^z = 0 states form longitudinal ones. This 1D domain wall picture, as well as the existence of a continuum with an energy gap, were first described by Villain [17]. Shiba then showed that the introduction of interchain couplings J′, acting as a molecular field, gives the two domain wall states an additional potential energy proportional to the distance between them. This causes the above mentioned quantization of the excitation continuum, leading to a series of discrete dispersing lines below the 3D ordering temperature [2-4].
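The discreteness can be made quantitative with the standard two-spinon confinement picture (a textbook reduction, sketched here for convenience; λ denotes the effective string tension set by the molecular field and μ the reduced mass of the spinon pair): the relative coordinate $x$ of the two domain walls obeys
$$\left[-\frac{\hbar^2}{2\mu}\frac{d^2}{dx^2} + \lambda|x|\right]\psi(x) = (E - 2E_o)\,\psi(x),$$
whose bound states are Airy functions, with energies (for the odd-parity sector)
$$E_j = 2E_o + z_j\left(\frac{\hbar^2\lambda^2}{2\mu}\right)^{1/3}, \qquad \mathrm{Ai}(-z_j) = 0.$$
This is the origin of the form of Eq. (2) below, with $\alpha = (\hbar^2\lambda^2/2\mu)^{1/3}$; even-parity levels involve the zeros of $\mathrm{Ai}'$ instead.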
Following [4,18-20], we propose first to analyze, at the bound state dispersion minima, the sequence of their energies with:

E_j = 2E_o + α z_j   (2)

The prefactors z_j are the negative zeros of the Airy function, Ai(−z_j) = 0, z_j = 2.34, 4.09, 5.52, 6.79, 7.94, etc., and α ≈ (h²J)^{1/3} where h is the interchain molecular field [4]. As shown in the insets of Fig. 4, the energies of the T and L modes were satisfactorily fitted to Eq. (2) for various Q, yielding α ≈ 0.42 ± 0.03 meV, 2E_o^T ≈ 0.85 ± 0.15 meV, and 2E_o^L ≈ 1.08 ± 0.05 meV [10]. In the absence of a model taking into account both an arbitrary ε and the interchain interaction, we then assume that the dispersion along c of the first bound state E_1^T is roughly similar to the dispersion of the lower boundary of the two-spinon continuum in the pure 1D case, namely 2E_o^T. For any J and ε, this boundary is given in [21]; comparing it with the fitted 2E_o^T yields the values of J and ε [15,22]. Note that J is twice smaller than the estimation given in Refs. [14,15]. Last, assuming h ∼ J′ (each Co having only one Co neighbor in the 'diagonal' direction of the (a, b) plane [10]), a quite strong interchain interaction J′ ∼ 0.3 meV can be inferred from the determination of α, consistent with the dispersion along a* at l = 2 (Fig. 3). BaCo2V2O8 is then rather far from the perfect 1D regime, with a J′/J ∼ 0.1 ratio comparable to the one used to explain ESR measurements [15], but at variance with the small J′ value obtained from phase diagram calculations [22]. Coming back to the mass difference between the T and L modes, it is worth noting that in the ε ≪ 1 limit, the distinguishing feature of longitudinal excitations, compared to transverse excitations, is the existence of a specific coupling with the Néel states. As the ε term exchanges two neighboring spins, the Néel state is directly coupled to S^z = 0 excited states containing 2 reversed spins. This makes the longitudinal modes more massive (at higher energy) than their transverse counterparts, as we observe in BaCo2V2O8. The ground state is then an admixture of S^z = 0 two domain wall states added to the Néel state, producing a weakening of the ordered moment. Note that in the ε ≪ 1 limit, the intensity of the longitudinal modes should scale with ε². Indeed, longitudinal excitations were hardly observed in systems close to the Ising limit such as CsCoCl3 and CsCoBr3 [2,23]. The somewhat more isotropic character of BaCo2V2O8 (larger ε) is however expected to enhance the longitudinal modes.
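As an illustration of the Eq. (2) analysis, the following minimal Python sketch fits a set of mode energies to E_j = 2E_o + α z_j using the Airy zeros from SciPy. The energies below are placeholders chosen to be consistent with the quoted fit (α ≈ 0.42 meV, 2E_o^T ≈ 0.85 meV), not measured data.

```python
# Minimal sketch of the Eq. (2) fit: E_j = 2*Eo + alpha * z_j,
# with z_j the negative zeros of the Airy function Ai.
import numpy as np
from scipy.special import ai_zeros

z = -ai_zeros(5)[0]  # z_j = 2.338, 4.088, 5.521, 6.787, 7.944

# Placeholder transverse-mode energies in meV (illustrative, not data)
E_T = np.array([1.83, 2.57, 3.17, 3.70, 4.19])

# Linear least squares: slope = alpha, intercept = 2*Eo
alpha, twoEo = np.polyfit(z, E_T, 1)
print(f"alpha = {alpha:.2f} meV, 2*Eo = {twoEo:.2f} meV")
```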
In the limit of purely isotropic Heisenberg spins (ε = 1), a longitudinal mode is also expected. This situation has been illustrated in a neutron study carried out on KCuF3 [6]. The spectrum near the AF zone center consists of a doubly degenerate, well-defined, gapless transverse spin-wave mode, plus a damped longitudinal mode characterized by a finite energy gap. This longitudinal mode could, however, not be resolved in another 1D material, namely BaCu2Si2O7, which has a much weaker interchain coupling [24]. It was suggested that a sufficiently strong dispersion perpendicular to the chains is probably necessary in order to stabilize a longitudinal mode, which otherwise could decay into a pair of gapless transverse spin waves. In BaCo2V2O8, we have determined sizable interchain couplings. Moreover, in contrast to the experimental observation in KCuF3, the BaCo2V2O8 longitudinal modes are remarkably intense and resolution limited. The reason is probably that these longitudinal modes cannot decay into transverse modes, since the latter have a large gap due to the spin anisotropy.
It is finally very instructive to recall that BaCo2V2O8 has also raised much recent interest for its field-induced behavior, which is describable in terms of Tomonaga-Luttinger physics [8,9,25]. An exotic magnetically ordered phase, unknown in classical systems, is induced by a magnetic field applied parallel to the chain axis. A longitudinal incommensurate spin density wave (with the amplitude of the moments modulated along the field direction) is actually stabilized thanks to the particular values of J and ε [22]. Those ingredients, i.e., sizable interchain interactions and an intermediate anisotropic character, are the same as the ones we have invoked to account for the quantized transverse and longitudinal magnetic excitations observed in BaCo2V2O8. This material is thus a rare example of a spin-1/2 system displaying longitudinal spin modes, of pure quantum origin, in both the dynamical and the field-induced static regimes.
We would like to thank R. Ballou and J. Robert for fruitful discussions and B. Vettard for his technical support. This work was partly supported by the French ANR project NEMSICOM.
Crystalline and magnetic structures of BaCo2V2O8
BaCo2V2O8 crystallizes in the centrosymmetric tetragonal body-centered I4₁/acd (No. 142) space group, with a = 12.444 Å, c = 8.415 Å, and eight chemical formulas per unit cell [1]. The 16 magnetic Co²⁺ ions of the unit cell are equivalent (Wyckoff site 16f). The spin-3/2 Co²⁺ ions (effective spin-1/2) are arranged in edge-sharing CoO₆ octahedra forming screw chains, running along the c-axis, and separated by non-magnetic V⁵⁺ and Ba²⁺ ions (see Fig. 1 in Ref. [2]). Figure 5 shows one of the two domains of the antiferromagnetic (AF) structure determined in a previous single-crystal neutron diffraction experiment at H = 0 and T = 1.8 K [2]. The two types of chains are plotted in projection along the c-axis using two different colours: red for the chains described by a 4₁ screw axis, blue for those described by a 4₃ axis (the arrows indicate the sense of rotation on increasing z). For each Co²⁺ ion of the unit cell, the direction of the spin, '+' or '-', along the c-axis is indicated, as well as the z atomic coordinate. This figure presents one of the two magnetic domains; the other domain is simply obtained by reverting all spins in one type of chain, e.g., the blue ones. Notice the 'diagonal' interchain AF coupling between the chains of the same type (e.g., between the 2 Co²⁺ ions located at z = 3/8 in the two labelled red chains, and at z = 7/8 in the two blue ones).
The dominant interaction is the intrachain nearest-neighbor AF exchange coupling (occurring between two Co²⁺ ions of the same chain located at z = n/8 and z = n/8 + 1/4, with n an integer). This interaction imposes an AF ordering along the chains with the spins parallel to the chain c-axis. Looking at the crystalline and AF structures, the dominant interchain interaction is very probably AF along the 'diagonal' direction a ± b, that is, between two Co atoms of the same type of chain (blue or red chains) located at the same z. This explains the stabilization of the two observed magnetic domains. The various exchange interactions occurring between the two types of chains have been described in detail in Ref. [3] and were shown to yield an effective 'parallel' (i.e., along the a and b directions) interchain coupling of negligible weight compared to that of the 'diagonal' interaction.
Sample and additional neutron scattering data
The BaCo2V2O8 single crystal used in the inelastic neutron scattering (INS) experiments was grown at Institut Néel (Grenoble, France) by the floating zone method [4]. A 5 cm long cylindrical crystal rod, about 3 mm in diameter, was obtained, with the growth axis at about 60° from the c-axis. A slice about 1 cm thick was cut perpendicular to the c-axis.
For the neutron experiment performed on the IN12 spectrometer and described in the article, the sample was mounted in a standard cryostat with the b-axis vertical. The final wave vector k_f was fixed at 1.5 Å⁻¹ and the higher order contamination was removed using a velocity selector placed before the monochromator.
Additional INS data are presented in Fig. 6. This figure reports measurements obtained on the CEA-CRG thermal neutron three-axis spectrometer IN22 at the Institut Laue-Langevin high-flux reactor, Grenoble, France. The sample was mounted in a standard cryostat with the b-axis vertical and the final wave vector k_f was fixed at 3.84 Å⁻¹. A pyrolytic graphite (002) monochromator and analyzer were used, while the λ/2 contamination was suppressed using a graphite filter on the incident neutron beam. These measurements show a nondispersive mode at 30 meV whose intensity decreases with |Q| and which broadens dramatically at high temperature. It is ascribable to the first crystal field level of the Co²⁺ ions. Note that an alternative explanation of the intense longitudinal modes observed in BaCo2V2O8 could be associated with the true S = 3/2 nature of the Co²⁺ spin with large anisotropy, as described in Ref. [5]. This explanation is however rather unlikely in view of the high energy of the first crystal field level.
Additional details about the data analysis
The magnetic Bragg peaks corresponding to the antiferromagnetic structure of BaCo2V2O8 with k = (0, 0, 1) appear at Q = (h + 1, k, l) with h + k + l even (a condition due to the I type of the lattice). Table I gives the threshold energies, the coefficient α, and the agreement factor r² for the transverse (T) and longitudinal (L) modes at four different Bragg positions. As the result of the fit slightly depends on the number of modes considered, this number n_modes is specified [see Figs. 4(a,b) for instance]. The number of modes included in the fits (4 to 8, starting from the lowest energy ones) was varied in order to estimate the error bars. The small dispersion of the results comes from the fact that, as in CoNb2O6 [see Fig. 3(b) in Ref. [6]], the energies of the modes do not vary perfectly linearly with the negative zeros of the Airy functions. Note that the threshold energies 2E_o^T and 2E_o^L, as well as the coefficient α, do not depend on the Bragg position. The fitted values, averaged over the various fits, are: 2E_o^T = 0.85 ± 0.15 meV, 2E_o^L = 1.08 ± 0.05 meV, and α = 0.42 ± 0.03 meV.
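The error-bar procedure described above (varying the number of modes included in the fit) can likewise be sketched in a few lines; the energies here are again synthetic placeholders, not the measured peak positions.

```python
# Repeat the Eq. (2) fit with 4 to 8 modes and take the spread of the
# fitted parameters as the uncertainty, as described in the text.
import numpy as np
from scipy.special import ai_zeros

z = -ai_zeros(8)[0]                 # first 8 negative Airy zeros
E = 0.85 + 0.42 * z                 # idealized mode energies (meV)
E = E + np.random.default_rng(0).normal(0.0, 0.02, 8)  # mimic scatter

fits = np.array([np.polyfit(z[:n], E[:n], 1) for n in range(4, 9)])
alphas, offsets = fits[:, 0], fits[:, 1]
print(f"alpha = {alphas.mean():.2f} +/- {np.ptp(alphas)/2:.2f} meV")
print(f"2*Eo  = {offsets.mean():.2f} +/- {np.ptp(offsets)/2:.2f} meV")
```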
Polypyrimidine Tract-binding Protein (PTB) Differentially Affects Malignancy in a Cell Line-dependent Manner*
RNA processing is altered during malignant transformation, and expression of the polypyrimidine tract-binding protein (PTB) is often increased in cancer cells. Although some data support that PTB promotes cancer, the functional contribution of PTB to the malignant phenotype remains to be clarified. Here we report that although PTB levels are generally increased in cancer cell lines from multiple origins and in endometrial adenocarcinoma tumors, there appears to be no correlation between PTB levels and disease severity or metastatic capacity. The three isoforms of PTB increase heterogeneously among different tumor cells. PTB knockdown in transformed cells by small interfering RNA decreases cellular growth in monolayer culture and to a greater extent in semi-solid media without inducing apoptosis. Down-regulation of PTB expression in a normal cell line reduces proliferation even more significantly. Reduction of PTB inhibits the invasive behavior of two cancer cell lines in Matrigel invasion assays but enhances the invasive behavior of another. At the molecular level, PTB in various cell lines differentially affects the alternative splicing pattern of the same substrates, such as caspase 2. Furthermore, overexpression of PTB does not enhance proliferation, anchorage-independent growth, or invasion in immortalized or normal cells. These data demonstrate that PTB is not oncogenic and can either promote or antagonize a malignant trait dependent upon the specific intra-cellular environment.
The polypyrimidine tract-binding protein (PTB), 3 also termed heterogeneous nuclear ribonucleoprotein I, is a 57-kDa RNA-binding protein that binds preferentially to pyrimidine-rich sequences (1-3). PTB contains four RNA recognition motifs (RRMs). RRM 1 and 2 at the N terminus of the protein are involved in the dimerization of PTB, whereas RRM 3 and 4 are responsible for high affinity interactions with RNA (4,5). PTB has been shown to be involved in many aspects of pre-mRNA and mRNA metabolism. PTB participates in pre-mRNA splicing (6) and acts as a splicing repressor in alternative splicing of pre-mRNA (5, 7-12). PTB is also involved in 3′ end polyadenylation of pre-mRNA (13-15) and is important for translational regulation of certain RNA transcripts through internal ribosome entry sites (16-20). In addition, PTB shuttles between the nucleus and the cytoplasm (21), which is regulated through phosphorylation by 3′,5′-cAMP-dependent protein kinase (22).
Alternative splicing is a process that allows multiple different proteins to be made from the same pre-mRNA by either including or excluding particular exons during pre-mRNA splicing. PTB plays a key role in alternative site selection for many gene products by acting as a splicing repressor that prevents the inclusion of target exons (11, 23-25). Changes in alternative splicing sites have been previously correlated with malignant transformation (26-29), and the expression level of PTB has been found elevated in transformed cells. Such an elevation is responsible for the increases in fibroblast growth factor receptor-1 α-exon skipping in glioblastoma multiforme tumors (26). Increases in PTB expression are also associated with changes in alternative splicing of multidrug resistance protein 1, which contributes to the drug-resistant phenotype associated with many cancers (28). In addition to PTB, changes in the expression levels of other factors involved in alternative splicing, such as SR proteins, have been found to impact the metastatic phenotype. A classic example demonstrated that a CD44 splice variant, CD44 v6, confers metastatic potential when expressed in nonmetastatic cells (30). Therefore, changes in alternative splicing dynamics in tumor cells likely modify gene expression, in which the expression of functionally altered proteins may directly contribute to the malignant phenotype (31). Being a splice repressor, PTB may influence the transformed phenotype through changing alternative splicing patterns.
PTB itself also undergoes alternative splicing and has three splicing isoforms. PTB1 is the smallest, whereas PTB2 and -4 have an additional 19 or 26 amino acids, respectively, between RRM 2 and 3 as a result of exon 9 inclusion (1,2). These isoforms are differentially effective in the alternative splicing of α-tropomyosin. PTB4 has the strongest influence and PTB1 the weakest on exon 3 skipping in vivo and in vitro (32). However, differential splicing efficiency of the individual PTB isoforms is not observed for all PTB substrates, as demonstrated by the equal efficiency of α-actinin exon skipping by all isoforms (32). Therefore, the differential expression of PTB isoforms may enormously influence gene expression because of the large number of PTB substrates present in the cell and thus differentially affect cellular behavior based on the amount of certain PTB substrates that are expressed in a given cell type.
Although several studies have demonstrated altered PTB expression in cancer cells (31,33), fundamental questions regarding the role of PTB in cancer cells remain unresolved. It is not clear whether increased PTB expression is a phenomenon common to cancers of all origins, whether PTB isoform expression changes similarly among cancers from different origins, or whether increased PTB expression is important for the transformed phenotype. Recently, a study using the siRNA knockdown technique demonstrated that PTB promotes the malignant phenotype in ovarian tumor cell lines (34). To further address these issues, we investigated the changes in PTB expression in cancer cells from multiple origins and compared the PTB isoform profiles in these cells. We examined the role of PTB in malignant transformation by knocking down PTB expression in cultured tumor and nontransformed cells. We found that PTB levels generally increase in cancer cells from a variety of tissues; however, the expression levels of the three individual PTB isoforms are heterogeneous among different cell types. PTB knockdown by siRNA significantly reduces the growth rate of both cancer and normal cell lines and reduces anchorage-independent growth in tumor cells to a greater extent than growth in monolayer culture. In addition, PTB knockdown inhibits the invasive capacity of two cancer cell lines but increases invasion in another. However, overexpression of PTB in normal and immortalized cells does not increase proliferation or induce traits associated with transformation in vitro. Our findings suggest that PTB itself is not transforming but may support or interfere with malignancy depending on the specific cellular environment, as it can promote transformed phenotypes in some cells while antagonizing them in others.
MATERIALS AND METHODS
Cell Culture and Tissue Specimens-HeLa (human cervical cancer), Wacar and Homa (normal human skin fibroblasts), HEK-293 (human embryonic kidney transformed with adenovirus 5 DNA), and NIH-3T3 (Mus musculus, fibroblasts) were maintained in Dulbecco's modified Eagle's medium. PC-3, PC-3M, PC-3M Pro4, and PC-3M LN4 prostate cancer cell lines were generous gifts from the laboratories of Dr. Zhou Wang and Dr. Chung Lee (Northwestern University) and cultured in RPMI 1640 medium. WI-38 normal lung fibroblasts were grown in minimal essential medium. All media were supplemented with 10% fetal bovine serum (Atlanta Biologicals) and 100 units/ml penicillin and streptomycin unless otherwise noted. CG cells (human neuroblastoma), T84 cells (human colon carcinoma), and SAOS-2 (osteosarcoma) were cultured according to the protocols provided by ATCC. Stably expressing NIH-3T3 cells were created by transfecting ~2 × 10^6 cells with 2 µg of either PTB1-GFP vector (39) or constitutively active K-Ras vector (courtesy of Dr. William Hahn; Addgene plasmid 9051), or 2 µg of each at the same time, and then selecting with 500 µg/ml G-418 (PTB-GFP) and/or 5 µg/ml puromycin (K-Ras) for 2 weeks. The resulting stable, nonclonal cell lines were utilized for assays within 1 month of creation. Human endometrial tissue samples were obtained by surgical resection, trypsinized, and seeded in culture (Robert H. Lurie Comprehensive Cancer Center of Northwestern University). Histopathological examination allowed the samples to be classified as benign, grade-1, grade-2, or grade-3 endometrial tumors. All cell culture products were obtained from Invitrogen, and all other reagents mentioned under "Materials and Methods" were obtained from Sigma unless otherwise noted.
Immunostaining-Cells were fixed with 4% paraformaldehyde in PBS for 10 min followed by 5 min of permeabilization with 0.5% w/v Triton X-100 in PBS at room temperature. Primary antibody was applied for 1 h, and cells were washed with PBS three times for 10 min. The primary antibody, SH54 (anti-PTB) (35), was used at a 1:300 dilution in PBS, and secondary anti-mouse antibodies conjugated to fluorescein isothiocyanate or Texas Red were used at a 1:200 dilution (Jackson ImmunoResearch Laboratories). Coverslips were analyzed with a Nikon Eclipse E800 microscope equipped with a SenSys cooled CCD camera (Photometrics). Images were captured using Metamorph image acquisition software (Universal Imaging).
Protein Electrophoresis and Immunoblotting-Protein extracts were prepared by sonicating tissue or cells in RIPA buffer containing 1% Nonidet P-40, 1% deoxycholic acid (sodium salt), 0.1% SDS, 10 mM Tris-HCl, pH 7.4, and 150 mM NaCl. Protein concentrations were determined with the BCA protein assay kit (Pierce). Equal amounts of each protein sample were separated on a 10% SDS-polyacrylamide gel and transferred to a nitrocellulose membrane. Antibodies used for Western blot analysis were rabbit or mouse anti-PTB (SH54) at a 1:800 dilution, rabbit anti-actin (Sigma) at a 1:2000 dilution, mouse anti-GFP (BD Biosciences) at a 1:1000 dilution, rabbit anti-K-Ras (Santa Cruz Biotechnology) at a 1:500 dilution, and horseradish peroxidase-conjugated goat anti-rabbit or goat anti-mouse IgG secondary antibodies (Jackson ImmunoResearch) at a 1:10,000 dilution. SuperSignal West Pico Chemiluminescent Substrate (Pierce) detection reagents were used to detect immunoreactive bands.
Northern Blotting-To determine the RNA expression level of PTB in different cell lines by Northern analysis, total RNAs were extracted from the different cell lines with TRIzol reagent (Invitrogen) according to the manufacturer's instructions. Total RNAs from the different cell lines were loaded (5 µg/lane) and run on a 1% agarose gel and subsequently transferred onto GeneScreenPlus membranes (PerkinElmer Life Sciences) by capillary action with a high salt solution. Hybridization and washing conditions were standard as described previously (36). A 32P-labeled PTB probe was used to detect the expression level of the RNA, with a labeled glyceraldehyde-3-phosphate dehydrogenase probe used as a loading control. 32P was obtained from Amersham Biosciences.
RT-PCR-RNA was converted to cDNA, and the DNA was amplified by PCR with a forward primer (5′-ACCAGCCTCAACGTCAAGTA) and a reverse primer (5′-GGGTTGAGGTTGCTGACCAG) in a single reaction. These primers were designed to include the alternatively spliced region of PTB so that the ratios of the isoforms could be directly compared among cell lines. Reverse transcription was performed with Moloney murine leukemia virus reverse transcriptase (Invitrogen) on total RNA obtained via TRIzol isolation from the cell lines. PCR was carried out with 30 cycles at 95 °C for 30 s, 57 °C for 1 min, and 72 °C for 90 s. The PCR products were resolved on a 2% agarose gel. The intensities of the isoform bands were measured with Kodak MI software, which allowed for determination of the isoform ratios.
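As a small worked example of this last quantification step (the band intensities below are hypothetical placeholders standing in for the Kodak MI readout, not data from this study):

```python
# Hypothetical densitometry readings (arbitrary units) for one lane;
# in the study these values came from Kodak MI software.
bands = {"PTB1": 1320.0, "PTB2": 1000.0, "PTB4": 610.0}

# Because all isoforms are amplified in a single reaction, each lane is
# its own internal control, so only within-lane ratios are compared.
ptb1_to_ptb2 = bands["PTB1"] / bands["PTB2"]
print(f"PTB1:PTB2 = {ptb1_to_ptb2:.2f}")
```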
Alternative Splicing Efficiency Assay-A caspase 2 minigene was used as described previously (37). HEK-293 cells were plated onto 6-well culture plates at 60-70% confluence. After 24 h, 4 µg of GFP-tagged PTB expression vectors and 1 µg of reporter minigene were introduced into cells by a standard calcium phosphate precipitation protocol. Total RNA was purified with the RNeasy mini kit (Qiagen) from the 6-well culture plates 36 h after transfection. Alternative splicing products of the caspase 2 minigene were detected using RT-PCR in the presence of [32P]dCTP (GE Healthcare) as described previously (37,38). PCR products were fractionated on a 6% polyacrylamide gel containing 1× TBE buffer and then detected and quantified using a PhosphorImager BAS-1800II (Fuji Film).
RNA Interference-Double-stranded RNA was chemically synthesized, deprotected, and purified by Dharmacon Research, Inc. One strand of the double-stranded RNA was homologous to the PTB mRNA sequence 5′-UGACAAGAGCCGUGACUAC(dTdT)-3′. The scramble control siRNA was from Ambion, Inc. (Silencer negative control 2 siRNA). Transfection of siRNA duplexes into the various cell lines was conducted as described previously according to the manufacturer's instructions (Oligofectamine reagent, Invitrogen) (39). Cells were utilized for subsequent experiments 72 h post-transfection.
Anchorage-independent Growth Assay-Seventy-two hours after transfection, cells from the PTB siRNA and the control siRNA-treated dishes were trypsinized and counted using a hemocytometer. The same number of cells (about 5 × 10^4) from all experimental conditions was added into 2 ml of 1.5% (w/v) methylcellulose media, seeded onto 1% agarose-coated 35-mm Petri dishes, and allowed to grow for 10 days. At this point, pictures were taken using phase microscopy to show the colony formation. Then the media containing the cells were removed from the dish, put in a 15-ml tube, vigorously pipetted and vortexed to break up the colonies, allowed to sit for 5 min, and then gently mixed so as to suspend the cells homogeneously while avoiding air bubbles. The cell number relative to control was determined by measuring the scattering at 650 nm using a spectrophotometer (Beckman DU-64).
Invasion Assay-Invasive activity was determined via the transwell Matrigel invasion (Boyden chamber) assay. Transwell inserts (0.8 µm; BD Biosciences) were coated with Matrigel (100 µg in 100 µl, for 1 h at room temperature), and the coated inserts were then washed with PBS and used immediately. Seventy-two hours after siRNA transfection, 2 × 10^5 cells from each experimental condition were added to the upper chamber in 500 µl of serum-free medium. Twenty-four hours after incubation at 37 °C, the noninvading cells were removed from the upper chamber with a cotton swab, and the invading cells adherent to the bottom of the membrane were fixed and stained using a Diff-Quick staining kit (DADE AG). Invading cells were counted by tallying the number of cells in 10 random fields under a ×20 objective using an ocular micrometer. Data were expressed as the average relative (compared with control) number of migrating cells in 10 fields from six experiments (40).
Plasminogen Activator Assay-Net plasminogen activator activity in conditioned media was quantified using a coupled assay to monitor plasminogen activation and the resulting plasmin hydrolysis of a colorimetric substrate (D-Val-Leu-Lys-p-nitroanilide; Sigma) as described previously (41,42).
RESULTS

Transformed Cell Lines and Cancer Tissues Express Increased Levels of PTB-To evaluate the expression level of PTB in tumor cell lines and human tissue samples, we used both immunofluorescent staining and Western blotting. For the immunofluorescent staining, tumor and normal cells were immunolabeled in parallel, and the images were captured under the same image acquisition settings. The results show a significant increase in nuclear labeling intensity in tumor cells over that in normal cells, as exemplified by PC-3M (a human prostate cancer cell line), HeLa (human cervical cancer), CG cells (human neuroblastoma), and SAOS-2 (osteosarcoma) versus WI-38 (a normal human lung fibroblast cell line) (Fig. 1A). Many malignant cells also contain the perinucleolar compartment (Fig. 1A, arrowheads), a nuclear structure that is highly enriched with PTB (43). To quantify the expression of total PTB protein in cancer cell lines from various tissue origins, we performed Western blotting using the anti-PTB antibody SH54 (35). The panel of human cancer cells examined (HeLa, CG, T84, SAOS-2, HEK-293, PC-3, PC-3M, PC-3M LN4, and PC-3M Pro4) was derived from a broad spectrum of tissue types and represents cells of varying degrees of malignancy. The normal cell lines evaluated were WI-38 and Homa, which are human fibroblasts. Western blotting demonstrates that the level of PTB protein is generally increased in the transformed cell lines examined when compared with the normal cell lines (Fig. 2B). The increase in protein expression is consistent with the increased level of steady state PTB mRNA in tumor cells as measured by Northern blotting (Fig. 1B). Densitometry quantification of the PTB protein levels shows that most tumor cell lines express PTB at a level 2-fold or greater than that of normal cells (data not shown). To evaluate whether the increases in PTB levels are correlated with the degree of malignancy, we compared PTB expression in three prostate cancer cell lines of varying levels of malignancy with the parental PC-3 line. PC-3 cells were originally isolated from a human prostate cancer, and PC-3M was created by implanting a PC-3 xenograft tumor into a nude mouse, allowing distant metastases to form, and subsequently removing a metastatic lesion to culture (44). PC-3M LN4 is enriched with highly metastatic cells through four iterations of inoculating PC-3M cells into the mouse prostate and isolating the metastatic tumor cells. In contrast, PC-3M Pro4 is highly concentrated with nonmetastatic tumor cells through four iterations of inoculating PC-3M cells into the mouse prostate and isolating tumor cells localized to the prostate (45). If the PTB level were directly related to the metastatic capacity, we would expect PTB expression in the PC-3 panel of cell lines to correlate with their metastatic capacity; however, PTB is expressed at a higher level in the PC-3M cells compared with PC-3 cells, and there is little difference among the three PC-3M derivatives (Fig. 2B).
To evaluate the expression of PTB in human tumor tissues, we examined the levels of PTB expression in freshly isolated normal endometrial and endometrial adenocarcinoma cells. PTB expression in cells isolated from five adenocarcinoma tissues of varying grades was compared with cells from two normal endometrial tissues by Western blotting, and the results show that the total level of PTB is increased in tumor tissues, which is consistent with the observations in cancer cell lines (Fig. 1C). However, there is no obvious correlation between the expression level of PTB and tumor grade, as exemplified by the grade 3 tumor samples, in which one shows a substantial increase in PTB expression whereas the other is comparable with or slightly less than the grade 1 tumor (Fig. 1C). The lack of correlation between the levels of PTB and the severity of the disease is consistent with the findings from the prostate cancer PC-3 cell line derivatives (Fig. 2B). Together, these data demonstrate that PTB levels generally increase in cancer cells independent of the tissue origin or degree of malignancy, as characterized by metastatic capacity or histological grading.
Differential PTB Isoform Expression in Cancer Cells-Although PTB levels are generally elevated in cancer cells, the ratios of the three PTB isoforms are heterogeneous among the cancer cell lines (Fig. 2B). The shortest isoform, PTB1, shows significant increases in all cancer cell lines tested compared with normal cells, in which PTB1 is often below the level of detection (Fig. 2B). Because PTB2 and PTB4 cannot be resolved on SDS gels, we evaluated the isoform expression by RT-PCR, which also ensures that the different bands observed on the Western blot are indeed because of alternative splicing rather than post-translational modifications. A set of primers was designed to include the alternatively spliced region and amplify all three variants (Fig. 2A). The amplification of all variants in a single reaction provides an internal control for quantification of the proportion of each RNA isoform from a given cell line and thus allows comparisons between unrelated cell lines. Because the ratio of PTB1:PTB4 is relatively unchanged in all tested samples, we focused on the ratio of PTB1:PTB2 (Fig. 2C). In normal cells (WI-38 and Homa), PTB2 and -4 are the predominant isoforms, so that the ratio of PTB1:PTB2 in these cells is less than 0.4 (Fig. 2C). This is consistent with the findings by Western blot (Fig. 2B) (the top PTB band represents PTB2 and -4), in which PTB1 is not detected. In comparison, cancer cell lines have heterogeneous ratios of PTB isoforms. Some of the cell lines, including the PC-3M, PC-3M Pro4, PC-3M LN4, HEK-293, and T84 cell lines (Fig. 2B) and endometrial adenocarcinoma cells (Fig. 1C), show increased expression of the small isoform PTB1 over the other isoforms (Fig. 2B and Fig. 1C). The changes at the protein level are consistent with the findings at the RNA level as detected by RT-PCR (Fig. 2C), in which the ratio of PTB1:PTB2 significantly increases in the corresponding cancer cells, reaching as high as 1.32 for PC-3M cells. In contrast, the other cancer cell lines, PC-3, HeLa, CG, and SAOS-2, maintain an isoform ratio of PTB1:PTB2 that is closer to that of the normal cells (Fig. 2, B and C). The heterogeneity of PTB isoform expression profiles in tumor cells from various origins and of varying levels of malignancy suggests that isoform switching toward PTB1 is not directly correlated with malignant transformation. Our findings are consistent with another report in which two prostate cancer cell lines predominantly increased the expression of PTB1, whereas HeLa cells predominantly expressed PTB2/4 (46).
A previous study (32) showed that PTB isoforms have different effects on exon 3 exclusion of α-tropomyosin but have the same efficiency in excluding both the NW and SM exons of α-actinin pre-mRNA. These findings indicate substrate-dependent splicing activity for the different PTB isoforms. To further evaluate the impact of each isoform on alternative splicing, we examined the effects of PTB isoforms on the inclusion of caspase 2 exon 9 (47). Although the full-length caspase 2 (caspase 2L) functionally promotes apoptosis, inclusion of exon 9 generates a truncated protein product (caspase 2S) by frameshift, which inhibits programmed cell death (48). HEK-293 cells were transfected either with a construct expressing GFP alone or with a construct expressing GFP-tagged PTB isoforms. The transfection efficiency and expression levels of these proteins were very similar among the three isoforms as measured by Western blot (Fig. 3A). The expression of all three GFP-PTB variants significantly shifts the caspase 2 minigene from the short (caspase 2S) to the long (caspase 2L) form (Fig. 3B). Although GFP-PTB4 appears to be slightly less efficient than the other isoforms, the differences are not significant (Fig. 3C). Therefore, the three PTB isoforms have similar influence on the splice site selection for caspase 2 and promote the formation of caspase 2L. Together with a previous study (32), these results suggest that PTB isoforms may have distinct or similar splicing regulatory efficacy, depending on the splice substrate. The similar or differential influence of PTB isoforms on a large number of PTB substrates could generate a very complex expression pattern of different protein products in different cell populations.
PTB siRNA Down-regulates PTB Expression without Impacting Global Cellular Transcription or Inducing Cell Death-To evaluate the functional significance of increased expression of PTB in the malignant behavior of cells, we knocked down PTB in both malignant cells and nontransformed cells by siRNA. The PTB siRNA oligo used in these experiments targets the 5′ end of the mRNA and effectively eliminates the majority of PTB mRNAs (39). PTB siRNA and control oligos were transfected into PC-3M and HeLa cells. Seventy-two hours after transfection, the expression of PTB was evaluated by immunofluorescence labeling and by Western blotting (Fig. 4). Immunolabeling of siRNA-treated HeLa cells demonstrated a significant reduction of PTB expression when compared with cells transfected with the control oligo (Fig. 4A), with a transfection efficiency generally over 50% (data not shown). Western blot analyses show that PTB expression in siRNA-transfected cells can be reduced by ~95% in PC-3M cells (Fig. 4B).

FIGURE 2. Differential expression patterns of PTB isoforms. A, schematic illustration of PTB isoforms and PCR primers used to detect all PTB splicing isoforms. B, Western blotting shows PTB levels are generally higher in transformed cell lines when compared with normal cell lines. The shortest isoform, PTB1, shows significant increases in all cancer cell lines tested compared with normal cells, in which PTB1 is often below the level of detection. The normal cell lines used were WI-38 (lung fibroblasts) and Homa (human skin fibroblasts). C, quantitative RT-PCR shows that the ratio of PTB1:PTB2 is substantially increased in some tumor cell lines but not in all.

FIGURE 3. Effects of PTB isoforms on caspase 2 alternative splicing. A, expression of PTB isoforms in transfected cells was confirmed by Western blot. B, overexpression of PTB isoforms resulted in decreases in caspase 2S (Casp2S) and increases in caspase 2L (Casp2L) as detected by RT-PCR. C, ratio of caspase 2S/caspase 2L was measured by densitometry from the RT-PCR experiments. The ratio of caspase 2S to 2L was significantly different from the control (GFP), but there was no difference among different PTB isoform groups.
Prior to evaluating how PTB reduction impacts the transformed phenotype, it is important to exclude the possibility that PTB knockdown is detrimental to cells. To do so, we examined the transfected cells for transcriptional activity and apoptotic indices. Our previous studies have shown that PTB knockdown did not significantly change the intranuclear distribution of pre-mRNA splicing factors or the nucleolar localization of pre-rRNA processing factors (39). Because the localization of these factors is generally sensitive to transcriptional inhibition, those findings suggested that cells with reduced PTB expression remain transcriptionally active and structurally intact in terms of subcellular compartments (39). To directly assess the influence of PTB on cellular transcription, we performed BrU incorporation assays in HeLa cells transfected with PTB siRNA oligos and compared them with the adjacent cells with normal PTB expression. Cells were pulse-labeled with BrU for 5 min, and the newly synthesized RNA incorporated with BrU was detected using a specific antibody recognizing BrU. Cells with severely decreased PTB expression maintain very similar BrU incorporation levels and patterns both in the nucleolus (pol I transcription) and in the nucleoplasm (pol II and pol III transcription) (Fig. 4C), demonstrating that PTB knockdown indeed does not significantly impact global transcriptional activity. To determine whether decreases in PTB levels might induce apoptosis, we compared the apoptotic index in cells treated with PTB siRNA or control oligos. Three days after transfection, the number of cells undergoing apoptosis was evaluated using 4′,6-diamidino-2-phenylindole staining. The apoptotic index (the percentage of apoptotic cells per 100 nonmitotic cells) was not significantly different between cells transfected with PTB siRNA or control oligos (data not shown), which demonstrates that reduction of PTB by siRNA does not induce apoptosis.
PTB Knockdown Reduces Cellular Proliferation and Anchorage-independent Growth-PTB is involved in the RNA metabolism of a large number of transcripts in several different capacities, including polyadenylation, RNA stability, alternative splicing, and translational regulation of mRNA. Therefore, knockdown of PTB expression may have significant impacts on many fundamental cellular activities. To determine the impact of PTB knockdown on cell proliferation, we compared the growth rates of cells transfected with PTB siRNA and control oligo. Seventy-two hours after transfection, cells were trypsinized, and an equal number of cells from both groups were reseeded and allowed to grow for an additional 5 days. PTB knockdown significantly reduced the growth of two tumor cell lines (HeLa and PC-3M) and a normal cell line (WI-38) (Fig. 5A). The reduction appears to be more severe in normal cells than in the two tumor cell lines. These findings demonstrate that PTB is likely ubiquitously important for maintaining cell proliferation.
To evaluate whether the increased PTB expression observed in tumor cells contributes to their malignant behavior, we examined the impact of PTB knockdown on the growth of cancer cells in semi-solid media and compared it with the results from monolayer culture. The semi-solid media assay is indicative of the ability of cells to grow (form colonies) in an anchorage-independent manner, a trait unique to malignantly transformed cells. Three days following transfection with either PTB siRNA or control oligo, an equal number of cells were seeded in media with 1.5% methylcellulose and cultured for 10 days. The total cell number for the two experimental groups was measured, and the results demonstrate that cells transfected with PTB siRNA show a significant growth reduction in methylcellulose as compared with those transfected with control oligos in the two tumor cell lines tested (HeLa and PC-3M) (Fig. 5B). To determine whether the growth reduction is specifically because of suppression of anchorage-independent growth rather than the general proliferation reduction observed in monolayer culture, we standardized the data to control values for each condition to allow for comparisons. When compared with controls, the growth rate of cells transfected with PTB siRNA was decreased to a greater extent in methylcellulose media than in monolayer culture, demonstrating that PTB reduction not only reduces proliferation in general but also reduces anchorage-independent growth, which is an in vitro indicator of transformation.
PTB Knockdown Differentially Affects the Invasive Behavior of Cancer Cells-To further evaluate the role of PTB in malignancy, we examined the ability of PTB siRNA to inhibit the invasive behavior of cancer cells. The Matrigel invasion (Boyden chamber) assay is a well accepted in vitro assay that determines the ability of tumor cells to penetrate a proteinaceous matrix resembling the tumor basement membrane. Equal numbers of cells transfected with siRNA against PTB or control oligo were seeded onto the upper chamber of a transwell and incubated at 37 °C for 24 h. Cells remaining in the upper chamber were removed, whereas cells that penetrated into the lower chamber were fixed, stained, quantified, and compared among the experimental groups. The results from these experiments (n = 6) consistently show that PTB knockdown in both PC-3M (Fig. 6A) and T84 cells (data not shown) significantly reduces the number of cells that are capable of invading through the Matrigel. To further examine the mechanism by which PTB might contribute to the invasive behavior of these cells, we compared the extracellular protease activities in PC-3M cells transfected with either PTB siRNA or control oligos. Plasminogen activators are serine proteases that cleave the inactive zymogen, plasminogen, into active plasmin. Once activated, plasmin can cleave almost all extracellular matrix proteins either directly or indirectly by activating zymogens belonging to other protease classes, such as the matrix metalloproteases. Plasminogen activators play a critical role in invasion and metastasis of virtually all cancer types studied. In this study we analyzed the level of plasminogen activator activity in the supernatant of cells treated with PTB siRNA or control oligos using a published protocol (49). PC-3M cells transfected with PTB siRNA produce significantly less active extracellular protease than cells transfected with control oligos (Fig. 6C). This suggests that PTB may play a role in the production of active extracellular proteases in these cells. In contrast to the inhibition of invasion by PTB knockdown in PC-3M and T84 cells, PTB knockdown in HeLa cells increased the invasive behavior of these cells (Fig. 6B). Correspondingly, the production of extracellular plasmin is not reduced in these cells (Fig. 6C). The finding is reproducible and consistent in several independent experiments. Together, these findings demonstrate that PTB differentially influences malignant traits depending on the cell line.
PTB Knockdown Differentially Affects the Alternative Splicing of the Same Substrates in Different Cells-To begin to address the mechanism behind the differential role PTB plays in various cancer cell lines, we examined the influence of PTB knockdown on the same alternatively spliced substrate in HeLa and PC-3M cells. Caspase 2 exon 9 inclusion (47) was assayed as described in Fig. 3. As shown in Fig. 3, overexpression of PTB reduces the inclusion of exon 9 in HEK-293 cells, leading to increases in the caspase 2L form. Correspondingly, PTB reduction dramatically increases exon 9 inclusion in PC-3M cells, resulting in increases in the caspase 2S form (Fig. 7, A and B). However, PTB reduction has little effect on the alternative splicing pattern of this substrate in HeLa cells (Fig. 7, B and C). These observations further demonstrate that PTB acts differentially upon the same cellular function, probably dependent on the genetic and epigenetic background of the specific cells.
Overexpression of PTB Is Not Sufficient to Induce Transformation-To further characterize the role of PTB in malignancy and to determine whether PTB is oncogenic, we examined whether increasing PTB levels can induce transformation in NIH-3T3 cells, an immortalized cell line classically used for in vitro transformation assays. Cells were transfected with GFP-PTB1 (the small isoform that is overexpressed in cancer cells), oncogenic K-Ras, or both. Fusion to GFP did not affect PTB1 function, as demonstrated in Fig. 3, in which overexpression of GFP-PTB effectively shifts the alternative splicing pattern of caspase 2. After transfection, nonclonal stably expressing populations were created using the appropriate selection. The results show that overexpression of GFP-PTB1 did not significantly affect the cellular proliferation rate (Fig. 8A), but the Ras-transformed cells did proliferate slightly more rapidly than the mock-transfected cells (Fig. 8A). In addition, co-expression of PTB1-GFP with Ras slightly decreases the proliferation rate compared with the Ras-transformed cells (Fig. 8A), indicating that a high level of PTB1 protein alone is not sufficient to stimulate cellular growth and may even antagonize Ras-stimulated growth. Furthermore, overexpression of GFP-PTB1 does not induce anchorage-independent growth in a semi-solid media assay. In contrast, cells expressing oncogenic K-Ras formed large colonies. Expression of PTB in the co-transfected 3T3 cells did not prevent colony formation (Fig. 8B); however, it did decrease the overall number of colonies when the cell number was determined (data not shown), which further suggests that PTB may be antagonistic to transformation by oncogenic Ras. Additionally, the effect of PTB overexpression on the invasive behavior of 3T3 cells was determined with the Matrigel invasion assay. The results show a significant (p < 0.002) decrease in invasive capacity in GFP-PTB1-expressing cells as compared with mock-transfected cells (Fig. 8C). These findings demonstrate that PTB does not cause malignant transformation in immortalized cells and even antagonizes the transformed phenotype by these in vitro criteria. To further test whether PTB overexpression could influence the growth of normal human primary cells, WI-38 cells were transfected with GFP-PTB, and their growth in monolayer and methylcellulose media was examined. There is no significant change in cell proliferation in monolayer culture between the control and PTB-transfected cells (Fig. 8D). In addition, PTB overexpression did not induce growth in semi-solid media (Fig. 8D). Therefore, PTB overexpression alone does not stimulate cellular growth or induce a transformed phenotype in vitro.
DISCUSSION
RNA processing is one of the key regulatory mechanisms that control gene expression in normal and cancer cells. PTB plays critical roles in several aspects of RNA processing, particularly alternative splicing, which can significantly alter gene expression patterns in given cells. As the human genome contains a surprisingly small number of genes, isoforms derived from the same genes through alternative splicing offer diversity in gene function and provide a mechanism to specify cellular activities based on their roles in multicellular organisms. Including or excluding specific exons generates functionally different proteins that cater to specific requirements of cells depending on their function (8,10,51,52). There is growing evidence demonstrating the correlation between specific alternatively spliced variants and the malignant phenotype as well as drug resistance to chemotherapy in cancer patients (31). Comprehensive mapping of cancer-specific alternatively spliced genes is underway to resolve the complex patterns of spliced variants in cancer cells (53). PTB is one of over 100 known splicing factors and regulators that influence alternative splicing decisions. Although some evidence indicates that the fine balance of different factors is important for splice site choices, how these factors generate the final protein variant pattern in each cell population remains unclear. In this study, we provide evidence that echoes the complexity of RNA processing and alternative splicing decisions in cancer cells by demonstrating the different phenotypic changes resulting from the alteration of PTB expression level in different cell lines.
PTB has been shown to be a repressive splicing regulator that generally leads to the exclusion of targeted exons (8, 10, 12). As a splicing repressor, the level of PTB expression can potentially influence the splicing of a large number of substrates and impact malignancy in transformed cells through a variety of cellular pathways. In fact, PTB levels have been shown to increase in transformed cell lines and ovarian cancer (27,28,34). Our findings that PTB is significantly increased in endometrial cancer tissues and in transformed and cancer cell lines of various origins are consistent with these previous observations (26-29), further demonstrating that increases in PTB expression are generally associated with malignant transformation and are not specific to certain tumor types.
Although the elevated levels of PTB in tumor cells have been documented, its role in malignancy was not examined until recently. The increased expression of PTB is thought to potentiate the malignant behaviors in cancer cells. Two examples include the PTB-based alternative splicing of the fibroblast growth factor receptor 1 and the multidrug resistance protein 1/ATP-binding cassette transporter (MDR1). These alternatively spliced proteins promote malignant growth of tumor cells (27) and confer drug resistance (28), respectively. A recent report showed that knockdown of PTB suppresses ovarian tumor cell growth and invasiveness in vitro (34), which supports the promoting role of PTB in the transformed phenotype and led to the consideration of PTB as a drug target for cancer chemotherapy. However, our observations paint a more complex picture when looking into more cell lines, in which PTB is not always a promoter for the transformed phenotype in vitro.
Our studies show that although down-regulation of PTB significantly reduces the invasive capacity of a prostate cancer cell line, PC-3M, and a colon cancer cell line, T84, it consistently increases the invasion capacity of HeLa cells. Interestingly, overexpression of PTB1 in NIH-3T3 cells reduces their invasive capacity. These differential changes in invasive capacity upon alteration of PTB expression in different cell lines demonstrate that PTB can promote or antagonize the malignant behavior of cells dependent upon the specific intracellular environment. Molecularly, the differential alternative splicing pattern of caspase 2 in various cell lines induced by PTB siRNA knockdown mechanistically supports the idea that the intracellular environment directly affects the role of PTB in functional cellular processes. Although PTB knockdown leads to inhibition of cell growth, it is not selective for the transformed cells. Rather, the inhibition of growth is more profound in normal human fibroblasts than in tumor cells. These observations together support the hypothesis that PTB acts in concert with complex cellular mechanisms to regulate gene expression. Alternative splicing of specific pre-mRNAs has been shown previously to be dependent upon the concentration of splicing promoters and splicing repressors. For example, an excess of ASF/SF2 over heterogeneous nuclear ribonucleoprotein A1 prevents improper exon skipping of β-tropomyosin pre-mRNA (50). Thus, we speculate that our data can be explained by the idea that there is differential expression of endogenous PTB (including isoform ratios), differential expression of PTB substrate mRNAs, and differential expression of factors that promote or antagonize PTB function among different cell lines, which leads to different phenotypic changes upon alterations in PTB expression.
Although there is a general increase of PTB expression in malignant cells, the levels are highly variable among different types of cancer cells and are not directly correlated with the degree of malignancy in the cells tested. For example, PTB expression in a grade 3 endometrial carcinoma is equivalent to that of a grade 1 tumor. Moreover, the PC-3M cell line and its derivatives, which are from the same origin but selected for different metastatic capacity in vivo (45), all show a very similar expression level of PTB. In addition, overexpression of PTB1,
which is elevated in many cancer cell lines, does not enhance cellular proliferation, anchorage-independent growth, or invasion capacity in NIH-3T3 cells, indicating that PTB alone is not sufficient to induce transformation in vitro. Together with the findings summarized in the previous paragraph, we conclude that PTB itself is not sufficient to induce a transformed phenotype and that it may support or antagonize the malignant phenotype dependent upon a specific cellular environment. Thus, we believe it is premature to consider PTB as a ubiquitous or general potentiator of malignancy, particularly as an anti-cancer drug target.
Although PTB is generally increased in malignant cells, there is great heterogeneity of PTB isoform expression among tumor cell lines and tissues of different origins. PTB1 increases greatly in most transformed cells and tissues examined, although it is hardly detectable in normal tissues, causing a switch in the ratio of PTB1:PTB2 in some of the cancer cells; however, other cancer cells maintain greater expression of the larger isoforms. A previous study showed that the three PTB isoforms have differential activities in α-tropomyosin exon 3 skipping but are equally active in α-actinin exon skipping, suggesting substrate-specific splice site selection by the three isoforms (32). We evaluated the activity of the three isoforms on another substrate, the caspase 2 minigene (47). Overexpression of each of the three PTB isoforms individually represses exon 9 inclusion. All three PTB isoforms show similar effects on caspase 2 alternative splicing. Thus far, only a few of the splicing substrates of PTB have been examined for the regulatory effects of the different PTB isoforms, and a comprehensive splicing substrate list that details the effect of all three PTB isoforms on alternative splicing of all PTB target genes remains to be established. The alternative splicing patterns that are influenced by the three PTB isoforms could potentially provide profiles that differentiate one cell population from another.
In summary, we have shown that altering PTB expression in various cancer and nontransformed cell lines has differential effects, either promoting or suppressing a malignant trait in vitro. In addition, reduction of PTB has a differential impact on the alternative splicing selection of the same substrate in various cell lines, which provides a mechanistic explanation of the differences observed in the functional assays. PTB alone is not sufficient to induce transformation in NIH-3T3 or normal human cells, and there is no clear correlation between PTB levels or isoform ratios and the degree of malignancy. These findings demonstrate the complex role of PTB in cancer cells and lead to our working model: the differential effects of PTB upon cellular behavior observed here are due to the tremendous number of possible gene expression outcomes affected by the overall PTB expression level, relative isoform ratios, expression levels of PTB substrates, and concentrations of factors that influence PTB function in different cell populations. Therefore, PTB likely contributes to the malignant phenotype as an integrated component of a complex mechanism.
|
2018-04-03T00:55:10.905Z
|
2008-07-18T00:00:00.000
|
{
"year": 2008,
"sha1": "6ffa081a14129e041b26486bf4d58beba9dad98f",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/283/29/20277.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "04f3233a559337f5459c20f5061249de5cefc4d9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
9300646
|
pes2o/s2orc
|
v3-fos-license
|
Correlations between pressure and bandwidth effects in metal-insulator transitions in manganites
The effect of pressure on the metal-insulator transition in manganites with a broad range of bandwidths is investigated. A critical pressure is found at which the metal-insulator transition temperature, T$_{MI}$, reaches a maximum value in every sample studied. The origin of this universal pressure and the relation between the pressure effect and the bandwidth on the metal-insulator transition are discussed.
Manganites have been the focus of intense studies in recent years since the observation of large magnetoresistance sparked interest in these materials for use as magnetoresistance sensors. The metal-insulator transition (MIT) observed in this kind of material is crucial to the colossal magnetoresistance effect. Metal-insulator transitions occur in manganites in two cases: first, a metallic ground state exists in the low temperature range in some doping systems at certain doping concentrations, such as La_{1-x}Sr_xMnO_3 (x ∼ 0.16-0.50), La_{1-x}Ca_xMnO_3 (x ∼ 0.18-0.50), and Nd_{1-x}Sr_xMnO_3 (x ∼ 0.25-0.50); second, metallic states can be induced by other factors, such as magnetic fields, photons, pressure, and electric fields. Pr_{1-x}Ca_xMnO_3 at x ∼ 0.3 is typical of the latter class. In most of the manganites, the metallic state is coupled to the ferromagnetic state, so that the very large magnetoresistance can be explained by double exchange theory.
Among the parameters determining the complicated properties of the manganites, the e_g electron bandwidth W is particularly important for the metal-insulator transition temperature T_{MI}, and for the appearance of the metallic state under external factors. In manganites, the Mn 3d orbital is split into t_{2g} and e_g orbitals by the octahedral crystal field. The conduction band electrons are of e_g symmetry. Because the e_g orbital is Jahn-Teller active, Jahn-Teller distortion (JTD) can further split the two-fold degenerate e_g orbital to trap the conduction band electrons. Consequently, the bandwidth is highly correlated with the local atomic structure of the MnO_6 octahedra: cooperative tilting (Mn-O-Mn bond angle), Jahn-Teller distortion (Mn-O distances), and coherence of the JTD. The bandwidth is characterized by the overlap between the Mn 3d orbital and the O 2p orbital and can be described empirically by the equation

W ∝ cos[(π − β)/2] / d_{Mn−O}^{3.5},    (1)

where W is the bandwidth, β is the Mn-O-Mn bond angle, and d_{Mn−O} is the Mn-O bond length. In double exchange theory, it is described as the electron hopping rate or the transfer integral, t_{ij} = t^0_{ij} cos(θ_{ij}/2), where t^0_{ij} is the transfer integral that depends on the spatial wave function overlaps and θ_{ij} is the relative angle between two neighboring Mn ion t_{2g} core spins. Generally, the structure can be tuned in two ways: chemical doping and external pressure. In chemical doping, by selecting different doping elements and doping concentrations, the average A-site atom size <r_A> in the AMnO_3 system is changed. Because of the mismatch between <r_A> and the Mn-site ion size, the local atomic structure of the MnO_6 octahedra can be modified. Therefore, the bandwidth is tuned by chemical doping, so that complicated electronic and magnetic phase diagrams have been observed. 2 The external pressure method is a "clean" method that only modifies the lattice structure without introducing chemical complexity. To date, in studies on manganites, the effects of external pressure on charge ordering, the metal-insulator transition, and magnetic states have been observed. Currently, most of the high pressure studies on manganites concern the metal-insulator transition. In the low pressure range, this electronic transition is coupled to the ferromagnetic transition, which can be explained qualitatively by double exchange theory. 3,4 It is also found that hydrostatic pressure has effects similar to those of chemical doping with larger atoms and higher doping concentrations. Both can increase the Mn-O-Mn bond angle and compress the Mn-O bond length and, hence, lead to a larger bandwidth. Correspondingly, T_C (or T_{MI}) increases, or, in some manganites originally in the insulating state, an MIT is induced. The effect of chemical doping and pressure can be scaled to each other with a conversion factor of 3.75 × 10^{-4} Å/kbar. 5 However, most pressure experiments were conducted below 2 GPa.
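As a quick numerical illustration of equation (1) (a sketch with made-up structural values, not the refined values of Fig. 1):

```python
import numpy as np

def bandwidth(beta_deg, d_mn_o):
    """Empirical e_g bandwidth of equation (1), up to a constant prefactor:
    W ~ cos((pi - beta)/2) / d**3.5, with beta the Mn-O-Mn bond angle in
    degrees and d_mn_o the Mn-O bond length in Angstrom."""
    beta = np.radians(beta_deg)
    return np.cos((np.pi - beta) / 2.0) / d_mn_o**3.5

# Illustrative values only: straightening the bond angle and compressing
# the bond, as pressure or doping with larger A-site atoms does, both
# increase the bandwidth.
w_low = bandwidth(158.0, 1.97)
w_high = bandwidth(160.0, 1.96)
print(f"relative bandwidth increase: {100 * (w_high / w_low - 1):.1f}%")
```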
By applying pressures up to ∼6 GPa to manganite systems with a broad range of bandwidths, the effect of pressure on the MIT and the correlation between the pressure effect and chemical doping were observed. It is found that T_C and/or T_{MI} do not change monotonically with pressure 6 and that these two transitions do not always couple. 7 A universal pressure may exist for the metal-insulator transition in manganites. With an increase of the bandwidth, the change in the metal-insulator transition temperature with pressure may vanish.
To systematically explore the external pressure effect, the chemical doping effect, and the correlation between the two, samples spanning a broad range of bandwidths were studied.

Note to Table I: t is the tolerance factor calculated with the data in Ref. 10; T_{MI} is the metal-insulator transition temperature at ambient pressure; dT_{MI}/dP is the rate of change of T_{MI} at P ∼ 0 extracted by fitting the data with a third-order polynomial (the numbers in brackets are the errors in the last one or two digits); P* is the pressure where the increasing trend of T_{MI} reverses. (a) The resistivity in the paramagnetic phase at ∼316 K gives a P* of 3.8 ± 0.3 GPa. 9 See text for details.

The samples were prepared by solid-state reaction. The procedure and details of making the samples were described elsewhere. 6,8,9 All the samples were characterized with x-ray diffraction and magnetization measurements. The details of the high pressure resistivity measurement method and the error analysis were described previously. 6 The metal-insulator transition temperature, whenever present, is defined as the temperature at the resistivity peak. Because of the lower temperature stability of our system in the cooling cycle, the data were taken only while warming up.
In all the samples studied, there is an MIT at ambient pressure or an MIT can be induced by applying pressure. According to the bandwidth phase diagram in Ref. 5, the Nd_{1-x}Sr_xMnO_3 (x = 0.45, 0.50) system has a large bandwidth; La_{0.60}Y_{0.07}Ca_{0.33}MnO_3 has a medium bandwidth; and the Pr_{1-x}Ca_xMnO_3 (x = 0.25, 0.30, 0.35) system has a small bandwidth.
In Table I, <r_A>, the tolerance factor t, and the metal-insulator transition temperature at ambient pressure, which corresponds to the bandwidth, are listed. The average Mn-O bond length and Mn-O-Mn bond angle of all samples, determined from Rietveld refinement of the x-ray diffraction patterns, are shown in Fig. 1. According to equation (1), with increasing <r_A> or t, the decreasing bond length and increasing bond angle lead to an increasing bandwidth W and, hence, an increasing T_{MI}.
With the application of pressure, the metal-insulator transition temperatures of the samples which have an MIT at ambient pressure increase. In the narrow bandwidth Pr_{1-x}Ca_xMnO_3 system, the samples are insulating at ambient pressure; under pressure, metal-insulator transitions are induced, and with increasing pressure the behavior of T_{MI} is similar to that of the samples with larger bandwidth. When the pressure is above a certain point, the increasing trend of T_{MI} of all samples is reversed. The evolution of the transition temperatures of all samples is shown in Fig. 2. The most salient feature in Fig. 2 is that a critical pressure P* exists in each sample: with increasing pressure, T_{MI} increases below P* and decreases above P*. By fitting the T_{MI} vs. P plots with a third-order polynomial, P* for each sample can be extracted; the values are listed in Table I. Within the fitting errors, the samples have the same critical pressure P*. (One exception is the NSMO45 sample, for which the P* determined from T_{MI} is smaller. However, judging from the resistivity changes with pressure, its critical pressure is the same as that of the other samples. The discrepancy may come from the upper temperature limit of the instruments, which prevents T_{MI} from being determined when it approaches this limit in the intermediate pressure range. 9)
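A minimal sketch of this extraction step (the P and Tmi arrays are placeholders standing in for one sample's measured points, not the actual data):

```python
import numpy as np

def extract_pstar(P, Tmi):
    """Fit T_MI(P) with a third-order polynomial and return dT_MI/dP
    at P = 0 together with the stationary pressure P* at which the
    fitted T_MI(P) reaches its maximum inside the measured range."""
    c = np.polyfit(P, Tmi, 3)           # c[0] P^3 + c[1] P^2 + c[2] P + c[3]
    dTdP_at_0 = c[2]                    # slope of the fit at P = 0
    roots = np.roots(np.polyder(c))     # zeros of the fitted derivative
    real = roots[np.isreal(roots)].real
    inside = [r for r in real if P.min() < r < P.max()]
    pstar = max(inside, key=lambda r: np.polyval(c, r)) if inside else None
    return dTdP_at_0, pstar

# Placeholder pressure (GPa) and T_MI (K) values, for illustration only:
P = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Tmi = np.array([150.0, 175.0, 192.0, 200.0, 198.0, 186.0, 165.0])
print(extract_pstar(P, Tmi))
```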
In large bandwidth samples, the change of T_{MI} with pressure is slower than in narrow bandwidth ones, indicating that the large bandwidth samples are more stable under pressure. The samples studied were selected with different doping concentrations and from different doping systems, so the bandwidths span a large range. The samples also have very different ground state electronic and magnetic properties at ambient conditions. Nevertheless, the metal-insulator transitions in these samples all follow a similar behavior; therefore, it is reasonable to speculate that the critical pressure P* is universal for the metal-insulator transitions in manganites. From structural measurements on manganites, 6,11,12 the behavior of T_{MI} under pressure could possibly be ascribed to a local atomic structure transformation of the MnO_6 octahedra.
Under pressure, the smaller bandwidth samples seem to have a narrower pressure range in which they are metallic at low temperature; outside this range they are insulating. On the other hand, the samples with large bandwidth are more stable under pressure: the variation of T_{MI} is small, and they do not become insulating over a larger pressure range. The lower stability of T_{MI} in small bandwidth samples may come from the small A-site atoms, which leave more space between the octahedra for them to rotate and, accordingly, a smaller pressure window for the metallic state.
In Fig. 2, the large bandwidth samples have higher T_{MI}. The only exception is that, in the Nd_{1-x}Sr_xMnO_3 system, the x = 0.5 compound nominally has a larger bandwidth than the x = 0.45 compound but has a lower T_{MI}. This possibly results from the strong charge ordering effect in Nd_{0.5}Sr_{0.5}MnO_3.
The rate of change of the metal-insulator transition temperature with pressure at ambient pressure, dT_{MI}/dP at P = 0, is also interesting. The values of dT_{MI}/dP extracted from the third-order polynomial fits are listed in Table I. Clearly, the smaller the bandwidth, the larger dT_{MI}/dP, indicating that the local structure of a smaller bandwidth sample is more distorted and can be compressed by pressure to a relatively large degree.
In summary, by applying external pressure to manganites from different chemical doping systems and with different doping concentrations, and hence different e_g electron bandwidths, it is found that the pressure effect on the metal-insulator transition in manganites is not equivalent to that of chemical doping. Only at low pressures is the pressure effect on the metal-insulator transition analogous to chemical doping with elements of large atomic size. As pressure increases, the increasing trend of T_{MI} is reversed at a critical pressure, above which the transition temperature decreases with pressure and, finally, the material may become insulating. The critical pressure is found to exist in all the samples studied and is possibly universal for the metal-insulator transition in the manganites. The bandwidth (chemical doping) determines how stable the material is under pressure: the larger bandwidth manganites are more stable under pressure and, therefore, have smaller dT_{MI}/dP near ambient pressure and smaller T_{MI} variation under pressure. Because of the importance of the local atomic structure of the MnO_6 octahedra to the electronic and magnetic properties of the manganites, this work may also contribute to understanding the properties of manganite thin films, which are important in technological applications.
Extrapolation of Functions of Many Variables by Means of Metric Analysis
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of m-dimensional space, and the value of the function must be restored at points outside the domain D. The paper proposes a fundamentally new extrapolation method for functions of several variables, built on the interpolation scheme of metric analysis. The scheme consists of two stages. In the first stage, using metric analysis, the function is interpolated at the points of the domain D belonging to the segment of the straight line connecting the center of the domain D with the point M at which the value of the function must be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along this straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method under consideration.
Introduction
One of the main problems of data processing in many areas is the extrapolation of functions of several variables. In the present paper, the method of extrapolating a function of many variables developed by us uses the interpolation scheme of metric analysis. Below is a brief description of metric-analysis interpolation of the values of functions of several variables and its application [3].
Extrapolation scheme
At the first and second stages of the extrapolation scheme presented in this article, the interpolation of metric analysis is used. The extrapolation scheme uses the interpolation method for the functional dependence Y = F( X) (1), where the function F( X) is unknown and subject to recovery, either at one point X * or at a set of given points, on the basis of known values of the function. According to the interpolation method based on metric analysis, the interpolation values are found as solutions of a problem of minimizing the measure of metric uncertainty with respect to the point, where W is the matrix of metric uncertainty, and the interpolation value is determined by a linear combination of the known function values.
The matrix of metric uncertainty is defined through the metric weights w k , k = 1, . . ., m (see [3]).
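The displayed formulas were lost in this extraction, so the following is only one common metric-analysis formulation (an assumption, not necessarily the authors' exact equations): the matrix W is built from weighted inner products of the displacements X i − X * , and the interpolated value is the linear combination minimizing the metric uncertainty subject to the weights summing to one,

\[
\min_{z\in\mathbb{R}^{n}} \; z^{\mathsf T} W z
\quad\text{s.t.}\quad \mathbf{1}^{\mathsf T} z = 1,
\qquad
z^{*} = \frac{W^{-1}\mathbf{1}}{\mathbf{1}^{\mathsf T} W^{-1}\mathbf{1}},
\qquad
\widetilde{F}(X^{*}) = (z^{*})^{\mathsf T} Y .
\]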
Next, a metric analysis scheme is used to predict (extrapolate) the function of a single variable using an autoregressive model of metric analysis.
Let us consider a function y = f (x) of one variable x with known values y i , i = 1, . . ., n. The problem of finding the extrapolated value Y n+1 is reduced to the problem of interpolation of functions of several variables by means of a nonlinear autoregressive model [3,4]: the extrapolation of the function y = f (x) is reduced to the interpolation of the function of l variables Y = F(y 1 , y 2 , . . ., y l ) with values at n − l points. The extrapolated value Y ext = Y n+1 is defined as the interpolation value of the function Y = F(y 1 , y 2 , . . ., y l ) at the point X * . Here W −1 is the inverse of the (n − l) × (n − l) matrix of metric uncertainty, and Y = (Y l+1 , . . ., Y n ) T is the (n − l)-dimensional vector of values of the extrapolated function. The number l determines the dimension of the space of vectors; its value in the test was found as the solution of an extremal problem [1,2] in which the discrepancy between Y re , the vector of realized values, and Y ext , the vector of extrapolated values, is minimized over l.
The extrapolation scheme for the function of several variables (1) at a given point X * = (X * 1 , . . ., X * m ) T consists of two stages. At the first stage, a point X 0 = (X 01 , . . ., X 0m ) T is selected inside the cluster of realized values of the function Y = F( X). Then the points X 0 and X * are connected by a straight-line segment (10), which is divided into L equal segments with nodes. At the nodes belonging to both the cluster and the rectilinear segment (10), the values of the function (1) are interpolated using the scheme (4)-(5) on the set of known values of the function Y i , i = 1, . . ., n, at the points X i = (X i1 , . . ., X im ) T belonging to the cluster. At the second stage, the values Y = (Y 1 , . . ., Y l ) T interpolated at the first stage at the points (12) are successively extrapolated to the remaining nodes S k = (S k1 , . . ., S km ) T , k = l + 1, . . ., L + 1. By applying the autoregressive scheme (6)-(9) at these nodes, one obtains the extrapolated value at the point X * .
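Since the paper's exact equations are not preserved here, the Python sketch below only illustrates the general two-stage idea under the assumed formulation above: the metric uncertainty matrix is taken as W ij = (X i − X * )·(X j − X * ), the interpolated value minimizes z^T W z with weights summing to one, and the stage-two autoregression is itself realized through the same interpolation. The function names, the construction of W, the regularization, and the parameters p and l are all illustrative assumptions, not the authors' formulas:

```python
import numpy as np

def metric_interpolate(X, Y, x_star, eps=1e-10):
    """Interpolate Y = F(X) at x_star by minimizing z^T W z subject to
    sum(z) = 1 (assumed metric-analysis formulation).
    X: (n, m) known points, Y: (n,) known values, x_star: (m,)."""
    D = np.asarray(X) - np.asarray(x_star)   # displacements from target point
    W = D @ D.T                              # assumed metric uncertainty matrix
    W = W + eps * np.eye(len(Y))             # regularize in case W is singular
    ones = np.ones(len(Y))
    lam = np.linalg.solve(W, ones)           # lambda ~ W^{-1} 1
    lam /= lam @ ones                        # normalize: weights sum to 1
    return lam @ np.asarray(Y)               # value as linear combination

def extrapolate_along_segment(X, Y, x0, x_star, L=10, p=6, l=3):
    """Two-stage scheme (sketch): interpolate at the first p segment nodes,
    assumed to lie inside the data cluster, then predict the remaining
    nodes with an order-l autoregression whose regression function is
    itself evaluated by metric interpolation."""
    t = np.linspace(0.0, 1.0, L + 1)[:, None]
    nodes = np.asarray(x0) + t * (np.asarray(x_star) - np.asarray(x0))
    vals = [metric_interpolate(X, Y, s) for s in nodes[:p]]   # stage 1
    for _ in range(p, L + 1):                                 # stage 2
        # training pairs: sliding windows of length l -> next value
        H = np.array([vals[j:j + l] for j in range(len(vals) - l)])
        z = np.array(vals[l:])
        vals.append(metric_interpolate(H, z, np.array(vals[-l:])))
    return vals[-1]                          # extrapolated value at x_star
```

In practice, the window length l would be chosen by the extremal problem mentioned above, minimizing the discrepancy between realized and extrapolated values over l.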
Conclusion
In this paper a fundamentally new method for the extrapolation of functions of several variables is proposed. The numerical results obtained for functions of several variables, taken from different areas, show that the presented scheme can provide reliable accuracy of the extrapolation results.
The publication was prepared under the support of the "PFUR University Program 5-100".
A scoping review and critical evaluation of the methodological quality of clinical practice guidelines on nutrition in the preconception
Introduction Clinical practice guidelines (CPGs) contain recommendations for specific clinical circumstances, including maternal malnutrition. This study aimed to identify the CPGs that provide recommendations for preventing, diagnosing, and treating women’s malnutrition. Additionally, we sought to assess the methodological quality using the Appraisal of Guidelines for Research and Evaluation (AGREE II) instrument. Methods An online search for CPGs was performed, looking for those that contained lifestyle and nutritional recommendations to prevent, diagnose and treat malnutrition in women during the preconception period using PubMed and different websites. The reviewers utilized the AGREE II instrument to appraise the quality of the CPGs. We defined high-quality guidelines with a final score of > 70%. Results The titles and abstracts from 30 guidelines were screened for inclusion, of which 20 guidelines were fully reviewed for quality assessment. The overall quality assessment of CPGs was 73%, and only 55% reached a high-quality classification. The domains in the guidelines classified as high-quality had the highest scores in “Scope and Purpose” and “Clarity of Presentation” with a median of 98.5 and 93%, respectively. Discussion Further assessment is needed to improve the quality of the guidelines, which is an opportunity to strengthen them, especially in the domains with the lowest scores.
Introduction
Maternal malnutrition is associated with irreversible negative health outcomes for the mother-child binomial in the medium and long term (1). Women's health and nutrition status before pregnancy is crucial in determining gestational weight gain, pregnancy health, and birth outcomes (2). Nevertheless, preconception nutritional status has been overlooked despite its importance; with respect to poor nutrition, the preconception period is the least studied stage of women's lives (3).
Globally, more than one billion women experience at least one form of malnutrition. The prevalence of underweight in women of reproductive age in 2014 was 9.7%, and substantial burdens persist across Asia and Africa, reaching 24% in South Asia (4). In Southeast and South Asia, maternal short stature (< 150 cm) affects 40-70% of women. Latin America and the Caribbean, the Pacific Islands, and the Middle East bear a significant burden of overweight and obesity, with even higher prevalence observed in regions like South Asia (5). In addition, one-third of women of reproductive age in lower-middle-income countries are anemic, and vitamin D deficiency is re-emerging as a significant global health issue (6, 7). Recent studies have linked the above-mentioned conditions with several clinical conditions in pregnancy (e.g., preeclampsia, gestational diabetes, higher incidence of cesarean section, preterm birth, etc.) (8).
Clinical practice guidelines (CPGs) provide recommendations that are designed to aid healthcare providers, physicians, and patients in making informed decisions regarding appropriate healthcare for specific clinical circumstances, such as supplementation with folate, iron, and folic acid, and weight management of women with obesity in pregnancy (9), as well as recommendations for nutritional assessment, healthy diet, dietary modifications, nutritional supplementation, or any nutritional or lifestyle recommendations given in primary care and other healthcare areas. However, CPGs vary among countries or regions, and some of them do not meet basic quality standards (10, 11). Furthermore, there is often a lack of regular updates to guidelines, which means that they may not always remain up to date and may fail to incorporate the most current evidence (8).
The Appraisal of Guidelines for Research and Evaluation Instrument (AGREE II) was developed to address the issue of quality variability in CPGs. Its main objectives are to establish a framework for assessing guideline quality, offer a methodological approach for guideline development, and provide guidance on what information should be included and how it should be reported. The AGREE II instrument can be applied to any health- or disease-related guidelines, including those for preconception, pregnancy, the postpartum period, and other stages of women's lives (12).
High-quality CPGs help reduce the problems related to poor nutrition in the preconception period. This study aimed to identify the CPGs that include recommendations for preventing, diagnosing, and treating women's malnutrition, and to evaluate the methodological quality of the included guidelines using the AGREE II instrument.
Data sources and search strategy
We thoroughly assessed CPGs that include lifestyle and nutritional recommendations to prevent, diagnose, and treat malnutrition in the preconception period. Our study incorporated CPGs, standard references, and position statements that provided recommendations on various aspects of nutritional assessment (including anthropometric measurements, biochemical data, clinical history, and lifestyle factors), healthy diet, dietary modifications, nutritional supplementation, and other nutritional or lifestyle recommendations.
The review process consisted of five stages. We utilized the framework initially proposed by Arksey and O'Malley (13), which was further refined by Levac et al. (14) and the Joanna Briggs Institute (15). We added one last step to assess the quality of the CPGs using the AGREE II instrument (12).
We performed two types of searches for our study. The first was a systematic search in a single bibliographic database (PubMed) using the algorithm outlined in Table 1 and filters for guidelines and practice guidelines. The second was a manual search on guideline-related websites of national and international agencies and societies focused on child health and nutrition. We used key terms from the PubMed algorithm, individually and combined, in English and Spanish, for this manual search.
Study selection
Inclusion criteria
The included documents met the following eligibility criteria: (i) they were international and national CPGs, standard references, or position statements; (ii) they were written in English or Spanish; (iii) they were published between January 2008 and February 2021, considering the publication of The Lancet's Maternal and Child Undernutrition Series.
Exclusion criteria
The exclusion criteria encompassed opinions or editorials, articles published as communication tools, and clinical practice guidelines (CPGs) focused solely on lifestyle and nutrition recommendations
Quality assessment
The evaluation process involved the participation of authors, including dietitians and physicians. Two of the authors (CMM, MAM) independently reviewed the titles and abstracts of each study to determine their eligibility for inclusion. In the event of disagreements, another author (SBM) evaluated the guideline to provide a final decision. We obtained full-text copies of the potentially eligible documents; each of them was independently assessed by two authors to determine whether it met the inclusion criteria. In case of disagreements, a third author was assigned to determine the final inclusion of the study.
The AGREE II instrument assesses a CPG's development in terms of its quality, rigor, and transparency. It comprises six domains (Table 2) consisting of 23 key items in total. Each item within the instrument is assessed using a seven-point Likert rating scale, ranging from one (Strongly Disagree) to seven (Strongly Agree), as defined in the AGREE II User's Manual (10). The overall score of each of the six domains was calculated by adding all its corresponding items and scaling the total as a proportion of the maximum possible score for that domain (max score = 100). An overall assessment score of > 70% indicated high quality in the guidelines (10). The quality of each CPG was independently evaluated by two authors (SES, LTC, AT, FAA, MAM) using the online AGREE platform "My AGREE PLUS."
Data analysis
The mean and median scores for each domain of the AGREE II instrument were computed to determine the most critical domains across the different guidelines. The overall quality of each guideline was assessed by applying a threshold of 70% to the final score of each domain. Data collection and extraction were performed using Microsoft Excel 2021, version 16.57. This study did not require ethical approval or consent.
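As a small illustration of the domain scoring, the sketch below computes a scaled domain score and applies the 70% quality threshold. The ratings and item counts are invented examples, and the min-max scaling shown is the calculation described in the AGREE II User's Manual (obtained minus minimum possible, as a proportion of the range), which the online platform applies automatically:

```python
import numpy as np

def scaled_domain_score(ratings):
    """Min-max scaled domain score per the AGREE II User's Manual:
    (obtained - min possible) / (max possible - min possible) * 100.
    ratings: (appraisers x items) array of 1-7 Likert scores."""
    ratings = np.asarray(ratings)
    n_appraisers, n_items = ratings.shape
    obtained = ratings.sum()
    min_possible = 1 * n_items * n_appraisers
    max_possible = 7 * n_items * n_appraisers
    return 100 * (obtained - min_possible) / (max_possible - min_possible)

# Two hypothetical appraisers rating the three items of "Scope and Purpose":
scope = [[7, 6, 7],
         [6, 7, 6]]
score = scaled_domain_score(scope)
print(f"Scaled domain score: {score:.0f}%")          # ~92%
print("High quality" if score > 70 else "Low quality")
```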
Results
A summary of the results, yielded by the keyword combinations in PubMed and on other websites, is shown in Figure 1. We started the eligibility process after collecting all the results and omitting duplicated articles. The titles and abstracts of 30 guidelines were screened for inclusion, of which 20 guidelines were fully reviewed for quality assessment.
Of the 20 CPGs, six were related to prenatal care for pregnancy and six to weight control, overweight, and obesity in women of reproductive age and during pregnancy; five guidelines focused on supplementation of iron, folic acid, calcium, or vitamin K; and the rest provided recommendations for healthy eating and lifestyle and for preconception management in women with diabetes.
Supplementary material 1 shows the general characteristics of the included guidelines, such as the reference clinical guideline, supporting organization, year, region, number of references, and target audience. The main supporting organizations are the World Health Organization (WHO) (17-21), NICE (22-24), the Royal College of Obstetricians and Gynecologists (25, 26), and other societies, colleges, and departments of health.
The mean number of references was 89.3 (min: 13, max: 239); however, three guides by NICE did not specify their references (22-24). The guidelines were designed for different target audiences, the main ones being healthcare providers. Some guides directed their recommendations toward policymakers, expert advisers, government officials, scientists, the food industry, and organizations of nutrition actions for public health. Table 3 presents the scores for each domain and the final quality evaluation of all CPGs. The overall quality assessment was 73% (range = 39-100), and the median was 83% (range = 17-100); 75% (n = 15) reached a high-quality classification. Regarding the domains, three of them had a score of > 70%. The domain with the highest score was "Clarity of presentation," with a mean of 88.5% (range = 50-100), followed by "Scope and purpose," with a mean of 87% (range = 39-100), while the lowest was "Applicability," with a mean of 69.9% (range = 4-100). High-quality guidelines had a higher evaluation in "Scope and Purpose" and "Clarity of Presentation," with means of 97.3% (range = 39-100) and 94.9% (range = 50-100), respectively; meanwhile, the domain with the lowest score was "Applicability," with a mean of 81.5% (range = 60-100). In the guidelines classified as low quality, the domains with the lowest scores were "Applicability," with a mean of 35% (range = 4-100), and "Rigour of development," with 36.8% (range = 21-100).
Two clinical guidelines developed by NICE, "Antenatal care for uncomplicated pregnancies" in 2019 (22) and "Weight management before, during, and after pregnancy" in 2010 (24), had the highest score (more than 90% in all the evaluated domains); while the clinical guidelines by Bomba-Opoń D. et al. (36), the Royal Australian and New Zealand College of Obstetricians and Gynecologists (34), the American College of Obstetricians and Gynecologists and the American Society for Reproductive Medicine (32), McAuliffe FM et al. (27) and Australian Government Department of Health (33) had an overall low quality with 39, 50, 56, 67 and 69%, respectively.Therefore, they are not recommended according to the AGREE II assessment tool.The average quality scores of each domain of the AGREE II instrument by all guidelines, high-quality guidelines, and low-quality guidelines are shown in Figure 2.
Scope and purpose domain
For the "Scope and Purpose" domain, 75% (n = 15) of the guidelines received a score > 80%.The lowest scores (below ≤ 50%) were achieved by "Folate supplementation during the preconception period, pregnancy and puerperium" (2017) (36) with 39%, "Pre-pregnancy counseling" by The Royal Australian and New Zealand College of Obstetricians and Gynecologists (2021) (34) and The American College of Obstetricians and Gynecologists and the American Society for Reproductive Medicine (2019) (32) with 50 56%, respectively.
Rigour of development
For the 20 sets of guidelines, the mean AGREE II score for the domain "Rigour of development" was 76.1% (range = 21-100). The highest score for this domain was observed in two CPGs (10%): "Antenatal care for uncomplicated pregnancies" (22) and "Guideline No. 391-Pregnancy and Maternal Obesity Part 1: Pre-conception and Prenatal Care" (29), both from 2019. Of the guidelines, 70% received a score higher than 70%, and 15% (n = 3) scored below 50% (27, 32, 34). "Prevention of noncommunicable diseases by interventions in the preconception period: A FIGO position paper for action by healthcare practitioners" (2020) (27) had the lowest score in this domain.
Clarity of presentation
Compared with the others, this domain obtained the highest score, with a mean of 88.5% (range = 50-99) and a median of 93% (range = 50-100). The scores for this domain were high for all the guidelines; 85% (n = 17) of them scored > 70%.
Editorial independence
On the "Editorial independence" domain, the guidelines obtained a mean AGREE II score of 81.7% (range = 8-100).Fourteen (70%) received a score higher than 80%.Bomba-Opoń D et al. (36) 's guideline was the only one that scored equal to 8%.
Discussion
Most of the CPGs we found included recommendations for managing obesity and prescribing supplements. Nevertheless, few guidelines have been developed to make recommendations about iron and folic acid supplementation, even though anemia is one of the most common forms of malnutrition in this group of women (6, 7). In addition, no elaborated guides for optimizing weight were identified, despite the important role that nutritional status during preconception plays in determining health outcomes in pregnant women (2).
Our main findings revealed that only 55% of the CPGs were evaluated as high quality, and the domain scores differed between high- and low-quality CPGs. High-quality CPGs had a higher evaluation in "Scope and Purpose" (median = 98.5%, range = 39-100) and "Clarity of Presentation" (median = 93%, range = 50-100). Low-quality CPGs had higher scores in "Clarity of presentation" (median = 93%, range = 50-100) and "Editorial Independence" (median = 92%, range = 8-100). In the guidelines classified as high quality and low quality, the domain with the lowest score was "Applicability," with medians of 48% (range = 60-100) and 75% (range = 4-100), respectively. Our results agree with other quality assessments of CPGs using the AGREE II instrument (37).
According to the AGREE II instrument, several quality domains need to be improved and prioritized; in this context, domains 5 and 2, "Applicability" and "Stakeholder involvement," obtained the lowest means (69.9 and 74.2%, respectively) in most of the guidelines. The "Applicability" domain has been reported to be related to the implementation of the guidelines by health professionals in daily clinical practice (12). This may be key to understanding the gap between knowledge and implementation of CPGs, in addition to the potential implications for clinical practice and the nutritional status of women. In our context, the development of robust, comprehensive, and high-quality guidelines for a healthy lifestyle in the preconception period is necessary (38).
This study has several limitations. First, our systematic search was conducted exclusively in one database (PubMed), which may have limited the retrieval of guidelines from developing countries. Second, the search was restricted to CPGs published in Spanish or English. It is important to acknowledge these limitations when interpreting the results, because geographical generalizability may be limited considering the under-representation of low- and middle-income regions such as Asia, Africa, Latin America, and the Caribbean.
Only a few methodologies have been designed to assess the quality of CPGs. AGREE II provides elements that allow for developing and implementing initiatives to improve healthcare quality. We recommend that guideline developers, clinicians, researchers, and policymakers consider and utilize the AGREE II tool, as it is a comprehensive and user-friendly instrument that can be adapted to specific populations, injuries, or diseases (39).
There is a gap in the evidence on the different forms of malnutrition in the preconception period, and sometimes the guidelines have yet to be adapted to new contexts, like the pandemic caused by Coronavirus SARS-CoV-2 in 2020 (8). To our knowledge, this is the first study that evaluates the quality of CPGs for the preconception period, and it highlights the importance of including different health professionals related to preconception care, such as dietitians, in this evaluation process.
Conclusion
The AGREE II tool provides a framework to develop guidelines and an instrument to review their quality. Further assessment is needed to improve the quality of the guidelines, which is an opportunity to strengthen them, especially in the domains where the scores were the lowest. We recommend the use of the AGREE II instrument by all health professionals, since it can be applied easily and in detail. This instrument also allows an analytical evaluation before implementing the given guidelines, which would support decision-making within the health system of a country or region. We need increased rigor in formulating guidelines to prevent, diagnose, and treat malnutrition in all its forms during preconception, a critical period of life.
Figure 1. PRISMA flow diagram of literature sources and review process.
Figure 2. Average quality score by each domain of AGREE II for all included guidelines.
Table 1. Search algorithm.
Table 2. The Appraisal of Guidelines for Research and Evaluation Instrument II domains and content.
Table 3. Appraisal of Guidelines for Research and Evaluation (AGREE) II version results for clinical practice guidelines.
A Comparative Study of Plato’s and Jane Austen’s Concept of Love in Pride and Prejudice
Jane Austen’s Pride and Prejudice demonstrates the encounter of the two ruling faculties of human beings: reason and passion. The characters of this novel who are mostly young people are involved in the matters of heart and mind, seeking love and affection from their beloved ones while simultaneously burdened by the codes of manners and mannerisms of their society. Although many studies have been conducted on the subject of marriage and love on Austen’s novels, the nature of this love has not been given its proper attention. A comparative study of Plato’s concept of love and that envisaged in Jane Austen’s novel clarifies a lot of things among which we can refer to their difference in the extent of realism as the former depicts love in its ideal form and the latter in its practical sense. Serving as a means to deepen the readers’ understanding, this essay introduces a new perspective to Austen studies by examining Platonic concepts of love in Pride and Prejudice in the light of the information gleaned from Plato’s two famous works that directly deal with the concept of love: Phaedrus and Symposium. The study shows that despite being Platonic in her approach to love, Austen differs from Plato in that she tries to confine love to decorum under the veil of social relationships which bespeaks of the fact that Austen’s time in early Victorian period gives priority to the practice of love in a real context over intellectual concern for what it might mean or might not.
INTRODUCTION
Literature and philosophy have always had common subjects to deal with. One of these subjects, which is of particular interest to both philosophers and literary figures due to its dazzling complexity and elegant simplicity, is love. Plato honors love as the oldest of the gods, declaring that "First Chaos came, and then broad-bosomed Earth, the everlasting seat of all that is, and love" (Symposium). Human love came into existence with the creation of Adam. Without love, a human being lacks the passion for life. Consequently, he looks for it everywhere and at all times. In search of the lost love, and in an attempt to know and define it, he investigates both the world and himself. The result is the discovery of varied feelings and perceptions that have been a storehouse for hundreds and thousands of literary and philosophical works. Among the many influential philosophers and literary figures who have been highly concerned with defining and determining love, Plato owns the first rank both in terms of precedence and influence. His Symposium and Phaedrus are the two works that specifically address the question of love. The number of people who are influenced by Plato's ideas, either directly or indirectly, is countless. In all regions, at all periods and in all cultures, there are plenty of thinkers and artists who are indebted to Plato's ingenious way of examining concepts and ideas that are universal, love being one of them. Austen, for her part, has been described as a realist whose "lack of idealism not only enabled her to deliver a real world but to restore to it a zest and bloom that rationalism had all but bleached away". Eventually, in her works, she has a glimpse at ideals and tries to trace their manifestation in the real world by ascribing them to the characters' conduct. Her idealism is accentuated in her heroines' and heroes' romantic relationships in a highly conventional society where they have to keep a balance between their emotional needs and social claims.
What this study intends to analyze is the nature of the love existing between lovers in Austen's novels. It seems that love, in spite of its essential role in bringing the lovers together and preserving their union, has received less attention compared to other subjects. Considering romantic relationships, almost all of the work done on Jane Austen's novels concentrates more or less on their social cause and context. A precise and careful delving into her novels exposes a certain philosophy of love, congenial to that of Plato, concealed under the veil of social relationships. Thus, the present study aims to remove the veils and reveal the philosophy of love that she pursues in Pride and Prejudice by answering the following questions: How much is Austen concerned with love? What are her ideals of love? Does she observe a certain philosophy in defining love in her fiction? How far does she succeed in portraying the reality of love in her time? Are the happy romantic relationships in the novel real, or are they something of her imagination that she has artfully been able to exalt to the level of ideality? The current study attempts to provide proper answers by making a comparison between Austen's concept of love and that of Plato. It is hoped that this paper will contribute to Austen studies by examining Jane Austen's novels in light of Plato, which can bring about many interesting results, because Plato juxtaposes reason and passion in his definition of love and Austen emphasizes the sovereignty of reason in romantic relationships in a society where social conventions are highly valued. As a less observed subject matter, such a comparative study between philosophy and literature in general, and between Plato and Austen specifically, can be quite enlightening and might answer some important questions about Austen's art.
Before discussing the nature of love in Austen's novels, it seems necessary to become familiar with different conceptions and definitions of love. For instance, St. Augustine defines what true love is and how we should love sincerely. He emphasizes that the one "who loves aright believes rightly, who doesn't love believes in vain, even if what he believes in is true" (78). He concludes that "that which is not loved for itself is not loved at all" (37), as love is greater than faith and hope. To him, love is the means to achieve happiness, and happiness is a matter of having what people want without the fear of losing it. Plato sets forth his ideas about love in Lysis, the Symposium, and Phaedrus. The very claim Plato makes in the Symposium, "I know not any greater blessing to a young man who is beginning life than a virtuous lover, or to the lover than a beloved youth", depicts the importance of the matter. Love, Plato declares, is a quest for the everlasting possession of the good, which is the beautiful, and since "love is of the everlasting possession of the good, all men will necessarily desire immortality together with good" (Symposium).
Pride and Prejudice, which is considered by many as Austen's most popular novel, is deeply concerned with love. Different aspects of love are depicted through different types of characters. Michael Giffin in Jane Austen and Religion, while emphasizing that the "primary theme in an Austen novel is social being and social becoming" (7), explains that this social being has been built on a Greek, or more specifically a Platonic, myth. Giffin believes that "Austen's dichotomy of reason and feelings... has its origin in the platonic model of person and in the ancient Greek myths of rationality and irrationality" (8). Giffin argues for the need for reason as a mediator to keep the feelings from going astray. Christopher Brooke has considered love without knowledge of oneself and of one's beloved. He concludes that love without understanding is doomed to fail. Stuart M. Tave in his "What Are Men to Rocks and Mountains?" declares that being in love does not guarantee a happy union of the lovers. He believes that in Pride and Prejudice "we are presented with two sets of young lovers who have problems which must be worked out, and here too are those who try to direct their lives for them" (7). Anne Crippen Ruderman considers the novel a study of lovers dealing with love and the obstacles in their way in society. She believes that "Jane Austen's stories of courtship and marriage are particularly revealing because they are not only an account of human passion and feelings but also of the intellect and reason" (1). Michael Kramp in his Disciplining Love: Austen and the Modern Man treats "the issues of sexuality, sexual desire, and love within Austen's texts not as natural instincts that must be either satisfied or repressed, but as matters of social conduct and cultural consciousness that are crafted, maintained, and adjusted" (2). He is eager to show that love and sexuality for Austen's men are not only a personal issue but also a "part of their larger civic duty" (2), something that determines the extent and the mode of their participation in social relationships. Allen regards the novel as "an anatomy of a particular species of desire" that at the same time lacks overtly romantic qualities. According to Allen, although Pride and Prejudice is a novel of romantic attachments, the lovers are not willing to admit their love for each other. As Allen puts it, "the novel contains little direct discussion of sexual passion, and Austen attempts to discount the potential irrationalities of romantic love" (425). Bernard J. Paris claims that although the Darcy-Elizabeth union is "less romantically gratified, it establishes a new society... to assure Elizabeth of a substantial and lasting happiness" (100). He considers the Darcy-Elizabeth match a prosperous one "because it is based upon a real understanding of themselves and each other" (100). They are happy together because they are interested in improving their pride and inspiring their self-esteem. Patricia Menon, in Austen, Eliot, Charlotte Brontë and the Mentor-Lover, investigates the relationship of passion and judgment and the role of "the figure of the mentor-lover" (1) and "the nature of the attributes of the mentor-power, judgment and moral authority" (1) in Austen's novel. Menon assumes that Austen is interested in the role of the mentor-lover; however, she is not obsessed with it.
Menon claims that Austen believes that lovers need to learn from each other, but this does not happen unless they keep their passion and their judgment balanced. Richard Simpson holds the opinion that Austen has the potential to consider love as Platonic. He argues that "Austen seems to be saturated with the Platonic idea that the giving and receiving of knowledge, the active formation of another's character, is the truest and strongest foundation of love" (244). He believes that, in Austen's opinion, the hero, as depicted in her novels, is the heroine's adviser, who "is often a man sufficiently her elder to have been her guide and mentor in many of the most difficult crises of her youth" (244). Robert P. Irvine asserts that "It is gratitude that forms the foundation of Elizabeth Bennet's love for Fitzwilliam Darcy" (65). Seeing the evidence of Darcy's social power creates a sense of appreciation she has never felt for anybody else. Irvine attests that "Elizabeth's desire for Darcy does not happen despite the difference in their social situation: it is produced by that difference" (65). In spite of the social discomfort created by their different social status, Darcy's power allows him to do good. The result is Elizabeth's happy consent to marrying him.
Some critics believe that Jane Austen's novels, in spite of their devising love stories, lack romance. A slight change in our perspective helps us come to a new reading of her fiction. What is aimed at here is, therefore, a delving into her works to reevaluate that claim. By matching Austen's works with Plato's concept of love as manifested in Phaedrus and the Symposium, it is hoped to show Austen's philosophical and/or Platonic concern with love, and to depict the quality of love Austen pursues. The argument sheds light on the notion that, despite Austen's approach to love being Platonic, she pays more attention to decorum under the veil of social relationships and confines love in this very way.
LOVE AND AUSTEN'S FRAME OF SOCIAL RULES
By fostering social conduct and cultural consciousness, Victorian society was the forerunner in establishing rules and standards of individual and social dealings and transactions. The conduct books, which were popular in the era, "operated to create and regulate conceptions of desirable masculinity in the same way that female conduct literature sought to create ideals of desirable femininity" (Ailwood 44). Austen's society ingrained social conduct and morality with sexual repression. Open articulation and practice of passion was not acceptable. People did not talk about sexuality, since it might put young people in danger of getting passionate and losing their rationality. "Explicit novels, sensuous pictures, and exciting dances were repressed because they might awaken sexual desire in young women and young men who were not yet mature enough to take on its responsibilities" (Mitchell 269). It was in such a society that Austen's heroes and heroines lived and loved.
Pride and Prejudice displays a kind of love that is compatible with the conventions of a polite society. Austen was undoubtedly aware of the moral etiquette of courtship in her time: Among the respectable middle and upper classes, all courtship was essentially conducted in public... Private conversations were brief, and usually in the open air. There was no dating-young people from respectable families did not go places together except in the company of other people. In the nineteenth century, making love meant "flirting." A lover was a suitor or admirer. This was all perfectly respectable; no sexual activity was involved (Mitchell 159). The novel "seems concerned to restrict the scope of desire. The novel contains little direct discussion of sexual passion, and Austen attempts to discount the potential irrationalities of romantic love" (Allen 426). There are no wildly-in-love heroes and heroines who are incapable of taming their passion when restraint is required; otherwise, they would be condemned and censured by society.
In spite of the anti-romantic atmosphere of Victorian society, love has always been the central theme of Austen's novels. Although, in Pride and Prejudice, she carefully observes the strict rules of courtship of her society, Austen "does not fail to portray passion. In addition, she makes a case for moderation... She argues even for deep romantic fulfillment that can come from a sense of restraint" (Ruderman 2). The lovers, in spite of their strong feelings, reserve the expression of their emotions as long as possible in order to stick to propriety. For instance, in order to adhere to convention, Elizabeth and Jane repress their feelings for their lovers even after finding them strong and real. Charlotte Lucas, noticing such concealment as a risk of losing their lovers, warns Elizabeth: It may perhaps be pleasant... to be able to impose on the public in such a case; but it is sometimes a disadvantage to be so very guarded. If a woman conceals her affection with the same skill from the object of it, she may lose the opportunity of fixing him... There are very few of us who have heart enough to be really in love without encouragement. In nine cases out of ten, a woman had better show more affection than she feels. Bingley likes your sister undoubtedly; but he may never do more than like her, if she does not help him on (Pride and Prejudice 246). But neither Jane nor Elizabeth gives heed to such statements. They consider it to be men's function to realize whether a woman is in love with them, and throughout the novel they act accordingly. Elizabeth regards it as improper of a woman to show her affection. She believes that "if a woman is partial to a man, and does not endeavor to conceal it, he must find it out" (Austen 246). Austen's characters restrain themselves from acting and behaving passionately in order to prevent themselves from violating propriety; otherwise, they would be disgraced like Lydia and Wickham, who endanger their reputation and good name by their elopement.
DISCUSSION
In Pride and Prejudice, the characters, who are connected in a net of social and romantic relationships, provide the opportunity for investigating love from a Platonic viewpoint. Pride and Prejudice is regarded as the most classic love story of all Austen's novels, where lovers are conscious of their love and romantic appeal. Lovers who adhere to the bounds of social respectability experience an emotional and rational challenge in their courtship, while those who are loyal to their instincts and act upon their passion have to encounter general disagreement. A comparison between Plato's concept of love and the kind of love experienced by some characters in Pride and Prejudice, through examining their conduct and delving into their hearts and minds, draws aside the curtains of social rules and illuminates the quality of love practiced by the lovers.
Affectionate Love versus Passionate Love
As a representative of nineteenth-century polite society, Austen is concerned with courtship. This makes her "concentrated on how man and woman may best live in harmony with each other" (Tanner 66) and with society. She believes that true love and affection can harmonize men's and women's relationships, and she asks the essential question and gives her crucial solution for being happy in marriage through Jane's and Elizabeth's sisterly chat: "And do you really love him quite well enough? Oh, Lizzy! do anything rather than marry without affection" (Pride and Prejudice 463). Here, Austen interchanges the words "affection" and "love" to distinguish them from passion, which she calls "the expression of violently in love" (Austen 321), and to indicate that true love is a growing and lasting feeling, "not work of a day" (Austen 465), not a fleeting or blinding emotion that afflicts the mind and leads the lovers to misconduct and indecency, but rather one that enumerates with energy the lover's good qualities (Pride and Prejudice 465). She explains that "the expression of 'violently in love' is … so doubtful, so indefinite... It is as often applied to feelings which arise from an half hour's acquaintance, as to a real, strong attachment" (Austen 321). Austen speaks of love and affection for couples' well-being while knowing that passion is within people.
Comparing her conception of love with Plato's reveals that he, too, considers true love the harmonizer of dispositions and calls it "an agreement of disagreements" (Symposium). Plato affirms that love of the body is not everlasting, since the body itself is not stable, and when youth and beauty are gone, the love fades away too, whereas the love of the "noble disposition" is everlasting (Symposium). Passion that results in "a hasty attachment is … dishonourable" (Plato, Symposium). As observed in Lydia's case, passion violates the social bounds of decency, whereas affection, as in Elizabeth's case, trims men and women's relationships of excess, acting as a moderator of passion and harmonizer of dispositions.
Elizabeth, Austen's spokeswoman in the novel, rejects Darcy's first proposal because, stimulated by pure passion, it does not accord with the accepted rules of propriety and politeness. In his first proposal, Darcy, with an air of superiority, addresses Elizabeth, claiming that "In vain have I struggled. It will not do. My feelings will not be repressed. You must allow me to tell you how ardently I admire and love you" (Austen 350). He cannot be accepted until his ardent love changes to affection, until his passion is tempered by reason. Darcy's next proposal is very different in tone and temperament. With a gentle tone and better disposition, he repeats his offer, claiming that "You are too generous to trifle with me. If your feelings are still what they were last April, tell me so at once. My affections and wishes are unchanged, but one word from you will silence me on this subject forever" (Austen 458). Now Elizabeth has no doubt that "his affection was not the work of a day, but had stood the test of many months' suspense" (Austen 465). She accepts a man whose love has stood the test of time. She is confident that he truly loves her. Now that he has moderated his passion with a sense of responsibility and orderliness, she cannot doubt his love to be a growing, everlasting affection that corresponds with her disposition.
Unrestrained Passion, the Violator of Social Decency and Mutual Happiness
Plato claims that passion, despite being the stimulator of love, needs to be controlled and moderated by reason, otherwise it will exceed the limits. He explains that "There are two guiding and ruling principles which lead us. When opinion by the help of reason leads us to the best, the conquering principle is called temperance; but when desire, which is devoid of reason, rules in us and drags us to pleasure, that power of misrule is called excess" (Phaedrus). As it is derived from Phaedrus, reason distinguishes good from bad; therefore, it is capable of moderating passion. Reason and passion fabricate the same story in Pride and Prejudice.
Austen is suspicious of passion. Not only does she consider it insufficient for true love, but she also regards it as a violator of social rules, unless it is tamed and moderated by reason. According to Ruderman, Austen indicates that "the most serious kind of love is that felt by a character with virtue and intelligence for a worthy object" (3). Austen manifests passion's inadequacy by depicting Lydia and Wickham's, as well as Mr. and Mrs. Bennet's, romantic relationships as abortive. Their engagements are not founded on true love, or on the cooperation of passion and reason; therefore, they are feeble and enervating. Tanner believes that Lydia's devotion to Wickham "is seen a thoughtless and foolish and selfish, rather than a grand passion; while Mr. Bennet's premature captivation by Mrs. Bennet's youth and beauty is imprudence" (66). Allen also condemns Lydia's elopement, claiming that it "is distressing because it suggests that desire can lead an individual to violate cultural rules, to leave willingly the bounds of society and respectability" (438). Lydia's elopement with Wickham and Mr. Bennet's infatuation with Mrs. Bennet imply that unrestrained "desire has the potential to violate the logical foundation of her society" (Allen 439). Austen is suspicious of passion for its tendency and potential to violate moral and social rules. Thus, in order to prevent such a violation, passion should be superintended by reason.
In the novel, passion, which is manifested in the lovers' captivation by and infatuation with physical beauty, when acting independently not only violates the social rules but also eclipses the lovers' happiness and felicity forever. Passion blinds the lovers to the truth and prevents them from knowing their beloveds as they ought to. Lack of knowledge, according to Plato's Symposium, hinders the lovers from reaching the realm of true love, where everlasting beauty and goodness dwell, where passion and reason collaborate and culminate in perpetual satisfaction. Austen demonstrates the inadequacy of physical beauty in securing mutual happiness by delineating Mr. and Mrs. Bennet's failed marriage. She considers Mr. Bennet's interest in his wife's comeliness, and his neglect of her mental and moral defects, an eclipse of their happiness. Describing their imprudent match, she writes that Mr. Bennet, "captivated by youth and beauty, and that appearance of good humour, which youth and beauty generally give, had married a woman whose weak understanding and illiberal mind, had very early in their marriage put an end to all real affection for her. Respect, esteem, and confidence, had vanished forever; and all his views of domestic happiness were overthrown" (Austen 378). Austen emphasizes her suspicion of bodily attractiveness further through Lydia's attachment to Wickham, depicting their relation as a silly act born of infatuation and selfishness. Lydia and Wickham share no compatibility, understanding, or even common taste. Lydia thinks that she adores Wickham heartily, whereas she is only infatuated with his fine appearance and pleasing manners (Paul 103). Wickham, on the other hand, wants to get rid of some gambling debts and seeks someone's company, and Lydia is "an easy prey" (Austen 403). The consequence is that "His affection for her soon sunk into indifference; hers lasted little longer" (Austen 471). Wickham follows his instincts and seeks fleeting pleasure. He does not pursue a "noble disposition" (Plato, Symposium) that culminates in perpetual goodness and happiness. Lovers like Lydia and Wickham fall into Plato's category of "vulgar lovers." He asserts that "Evil is the vulgar lover who loves the body rather than the soul, inasmuch as he is not stable, because he loves a thing which is in itself unstable" (Plato, Symposium). They enjoy physical pleasure for a short while but do not experience true love. Lydia's devotion to Wickham and Mr. Bennet's infatuation with his wife imply that attachments constructed on unreasonable foundations not only do not guarantee mutual happiness but destroy it.
Mere Rationality versus Ideal Love
As an example of a marriage based on rationality and reasonable foundations, Austen gives Charlotte's engagement to Mr. Collins, which is the flip side of the Lydia-Wickham attachment. The eldest and most sensible daughter of the Lucas family, Charlotte is not ignorant of Mr. Collins' absurdity and imperiousness; but the force of necessity and the economic burden make her accept his proposal. "Without thinking highly either of men or of matrimony, marriage had always been her object" (Austen 310). Mr. Collins, in spite of being nonsensical, offers her a shelter, and Charlotte does not hesitate to accept it. In fact, she "accepted him solely from the pure and disinterested desire of an establishment" (Austen 310). Charlotte, as a girl who has almost passed the suitable age for marriage, acts reasonably according to the demands of her society, in which marriage "was the only honourable provision for well-educated young woman of small fortune, and however uncertain of giving happiness, must be their pleasant preservative from want" (Austen 310). She confesses to Elizabeth that she is not romantic; she never was. She asks only a comfortable home, and considering Mr. Collins's character, connections, and situation in life, she is convinced that her chance of happiness with him is as fair as most people can boast on entering the marriage state (Austen 312). She chooses to yield to the calls of society and does not permit passion to interfere with her decision by romanticizing and softening her reasonableness. Charlotte acts according to a reasonable, moderate way of life which would have been necessary for a woman of her situation at the time.
However logical Charlotte's reasons for marriage are, Elizabeth's "astonishment was consequently so great as to overcome at first the bounds of decorum, and she could not help crying out, 'Engaged to Mr. Collins! my dear Charlotte impossible!'" (Austen 312). This great astonishment, cried out not by any other character but Elizabeth, Austen's spokeswoman, unveils how unsound and unconvincing Charlotte's marriage to Mr. Collins is to the author. Austen expresses her opinion of Charlotte's marriage specifically, and of marrying merely for the sake of establishment and worldly comfort generally, by exposing Elizabeth's mind to the readers, relating that Elizabeth "had always felt that Charlotte's opinion of matrimony was not exactly like her own, but she could not have supposed it possible that when called into action, she would have sacrificed every better feelings to worldly advantage" (Austen 312). Accordingly, Austen implies her objection to Charlotte's marriage, indicating that mere rationality and lack of affection, which she calls "better feelings", is as destructive of happiness in matrimony as mere passion. By accepting Collins' company, Charlotte secures her fortune but sacrifices the mutual happiness she could have experienced with a sensible and loving man.
Collins and Charlotte, despite their different personalities, have one thing in common: both wish to have what they lack. Charlotte, having no fortune, looks for a home to secure her from poverty and spinsterhood, and Collins, having a suitable income, looks for a wife to accomplish his duty as a clergyman. But the point is that Charlotte has no choice; she is obliged to choose him, whereas Collins can choose. Not being concerned with individuals, Collins will pick the first opportunity that comes to him. Any woman would satisfy him as long as she can serve him as a wife. He sets forth his reason for directing his offer to Longbourn, explaining to Elizabeth that "as I am, to inherit this estate after the death of your honoured father, (who, however, may live many years longer,) I could not satisfy myself without resolving to choose a wife from among his daughters, that the loss to them might be as little as possible" (Austen 300). His consecutive proposals during his short stay in Longbourn do not break the bounds of social decency, but they do violate love. He does not marry for love. He marries to fulfill his patroness' wishes and carry out his duty as a churchman. He aims to marry; to whom does not much matter to him.
Mr. Collins does not know how it feels to be in love. He neither loves nor chooses reasonably. He quickly directs his offer to another alternative once he fails with the first. He does not care about or look for affection or an understanding of his partner. He only needs motivations for marriage, and he supposes that he has a bunch of good ones, as he explains to Elizabeth: "My reasons for marrying are, first, that I think it a right thing for every clergyman in easy circumstances (like myself) to set the example of matrimony in his parish. Secondly, that I am convinced it will add very greatly in my happiness; and thirdly-which perhaps I ought to have mentioned earlier, that it is the particular advice and recommendation of the very noble lady whom I have the honour of calling patroness" (Austen 300). These are his motives for matrimony. He primarily directs his address to Bennet's daughters. He neither retreats nor becomes disappointed after learning that his offer will not do with Jane, whom he chooses because her "lovely face confirmed his views, and established all his strictest notions of what was due to seniority" (Austen 277). He switches his offer to Jane's younger sister. Austen describes his rushing from one case to another with a sarcastic tone, saying that he "had only to change from Jane to Elizabeth-and it was soon done-done while Mrs. Bennet was stirring the fire. Elizabeth, equally next to Jane in birth and beauty, succeeded her of course" (Austen 277). He proposes to Charlotte shortly after being rejected by Elizabeth.
As Austen demonstrates through the characterization of Charlotte and Lydia, neither passion nor reason alone can bring about perpetual happiness. However, she gives the credit to reason and never trusts passion when it comes to choosing between them. Charlotte sacrifices her happiness by taking reason's side; however, she neither tarnishes her reputation nor violates social conventions. Contrary to Charlotte, Lydia destroys her happiness and nearly disgraces her family by taking passion's side and violating social rules. Austen tries to prove that "Lydia's elopement is distressing because it suggests that desire can lead an individual to violate cultural rules, to leave willingly the bounds of society and respectability" (Allen 438). Violating cultural rules is a fault Austen cannot excuse. Despite the fact that Charlotte's conformity to reason and adherence to convention destroys her happiness, Austen takes sides with her against passion. She prefers repressing passion when expressing it would violate propriety and disregard the social rules.
Darcy-Elizabeth's Love: The Harmony of Reason and Passion
Through the Charlotte-Collins and Lydia-Wickham relationships, Austen implies that going to extremes and adhering totally to passion or to reason results not in good but in loss of one kind or another. In depicting the Elizabeth-Darcy attachment, she tries to prove that the collaboration of passion and reason is the key to happiness. She is quite Platonic in delineating the romantic relationship of this couple. Plato believes that sensual desire, with the help of mutual understanding and common sense, guides lovers to true knowledge and everlasting happiness (Symposium). But happiness is not easy to gain.
From the very first meeting, Elizabeth and Darcy build up a conflict, Darcy through his pride and Elizabeth through her prejudice. They need to enter a series of adventures and confront different incidents to reach self-realization, and consequently come to mutual understanding and appreciate the value of their affection. They have to surmount different obstacles to achieve a deep comprehension of each other's disposition and sentiments: Elizabeth has to clear away a fog of illusion; she has to get on to the truth about what had happened between Wickham and Darcy. She has to visit Pemberley and read Mr. Darcy's confessional letter to find out his true personality. Mr. Darcy has to learn a deeper lesson. He has to learn to respect his future wife and everything about her; to see her family as she sees them; to acknowledge that in some aspects of mind and character she is his superior-in most ways they are equal (Brooke 36).
To secure their happiness, Elizabeth and Darcy have to remove social barriers, grapple with inner conflicts, reach self-realization, and gain insight into each other's disposition. It is a long and sometimes mortifying journey, but the result is favorable.
The overall effect of society on its members is the first obstacle to be overcome. Elizabeth becomes prejudiced against Darcy, and society fortifies that prejudice. Darcy's reserved manner brings about a general dislike of him; the neighborhood takes an instant dislike to him, regarding him as proud and snobbish. Lina Widlund, in "In Search of a Man," focuses on the community's effect on Elizabeth's mind: Hearing her friends and family discuss how dreadful Mr. Darcy is makes her opinion of him even stronger. Her resentment of him is really of the same nature as that of her mother. Elizabeth feels that her pride has been harmed by the pride of Mr. Darcy. Even when she gets closer to Darcy as a person she cannot let go of her prejudice, because almost every one of her acquaintance despises him. Elizabeth's contempt might be due partly to her difficulty in understanding him. (4) Darcy's pride is also the product of a class-conscious society. As Miss Lucas observes, due to his social status and wealth, he has the right to be proud. She claims that "his pride … does not offend me so much as pride often does, because there is an excuse for it. One cannot wonder that so very fine a young man, with family, fortune, everything in his favor, should think highly of himself. If I may so express it, he has a right to be proud" (Austen 245). Zimmerman believes that "both qualities, pride and prejudice, result in a severe limitation of human vision and are essentially selfish" (66). Therefore, to gain a better insight into each other's personality and a fair understanding of one another, Darcy and Elizabeth should give up their pride and prejudice. But they will not manage it as long as their judgment is bound to that of their society.
Elizabeth and Darcy need to go through a long process to understand themselves and consequently one another.
Wilson, in Pride and Prejudice by Jane Austen, claims that Elizabeth and Darcy face no "external obstacle" in their courtship but rather an inner one. They possess intricate personalities that plunge them into misunderstanding, although they correspond intellectually to each other (55). The first impression, or the "deceptiveness of appearance" as Wilson calls it, is one of those inner obstacles that must be dealt with. Darcy insults Elizabeth at their first meeting at Netherfield Park: Bingley tries to convince Darcy to dance with Elizabeth, but Darcy refuses rudely, saying "she is tolerable; but not handsome enough to tempt me; and I am in no humour at present to give consequence to young ladies who are slighted by other men" (Austen 240). His remark is so rude and insulting that Elizabeth cannot help becoming prejudiced against him. She would have forgiven his pride if he had not injured hers. The pride Darcy displays at Netherfield and the prejudice Elizabeth conceives from it are the main and initial causes of the misunderstanding they have to overcome. With the introduction of Wickham, a lieutenant charming in appearance but malicious in truth, the relationship between Darcy and Elizabeth becomes more complicated, and things get worse.
While Elizabeth is adding to her dislike of Darcy, he is gradually growing interested in her. When he meets her at Rosings, he cannot help opening his heart, admitting that he has struggled in vain, that his feelings will not be repressed, and that she must allow him to tell her how ardently he admires and loves her (Austen 350). Darcy's tactless proposal to Elizabeth uncovers their feelings and opinions of each other. Elizabeth traces a sense of self-regard and superiority in the way Darcy proposes. She accuses him of arrogance, conceit, and not behaving in a gentlemanly manner. Although it ends in a quarrel and a refusal, Darcy's proposal gives them occasion to open their hearts and unveil their feelings.
The revelation of their apparently incompatible opinions and feelings creates a sense of disapproval but, on a deeper level, leads them to self-realization and to knowing each other. The recollection of Elizabeth's comments on what he said, how he behaved, and how he expressed himself torments Darcy for many months. Plato regards this torment as "the source of the greatest benefits" and a sign of love. Love benefits the lover by awakening the sense of honor in him (Symposium). The sense of honor acts as the mentor that leads the lover to virtue and prevents him from doing dishonorable acts. Plato explains that "a lover who is detected in doing any dishonourable act, or submitting through cowardice when any dishonour is done to him by another, will be more pained at being detected by his beloved than at being seen by anyone else;" the same feeling is true of the beloved (Symposium). Stuart M. Tave asserts that Elizabeth's rejection of Darcy has a humiliating effect on him. When Darcy's anger calms down and he becomes reasonable enough, he comes to perceive the justice of Elizabeth's accusations against him and realizes his pride and selfishness (29). Therefore, Darcy writes a letter to explain himself and clear away the fog of misconception surrounding Elizabeth.
Darcy's letter has the same humiliating effect on Elizabeth as her rejection of his hand had on him. His letter is marked by dignity, self-independence, insight, self-importance, intelligence, and sound feelings (Brooke 75). The impact of the letter on Elizabeth is strong. Kalil believes that in her reaction to the letter, Jane Austen sets Elizabeth out on a journey of self-realization and discovery. She realizes that until now she has been "blind, partial, prejudiced, absurd" (Austen 361). She gains a moral insight about herself, and her character evolves as she becomes able to know and analyze herself. She begins reading the letter while she still bears prejudice against Darcy. But after reading the letter several times, Darcy's statements gradually strike her as true and instill in her a better sense of judgment. Darcy and Elizabeth start the self-realization process with contrasting characters and attributes.
Step by step, they begin to moderate their flaws and reach mutual understanding, affection, and respect. Elizabeth helps Darcy to give up his snobbishness and to be a real gentleman. Darcy, on the other hand, acts generously and resolutely to win her affection. They owe their happiness to their benefactor, love.
Like Plato, Austen views the lovers as mentors who lead their beloveds to everlasting happiness. But unlike Plato, Austen holds love within a frame of social relationship and confines it within the bounds of social convention. However, she does not prevent her protagonists from acting upon their own judgment and shaping their own character and conduct. In Austen's opinion, it is of fundamental importance not only that the lovers behave properly themselves, but also that they guide their beloveds to be proper and well behaved, and help them live in harmony with each other and with society.
The role of mentorship holds for the beloved too. Menon declares that Austen, "by making moral responsibility for oneself and others her primary concern, and by making no distinction between men and women in their duty to make principled decisions, demonstrates her belief that, in the sphere that matters most to her, women must not surrender their autonomy" (2-3). The witty Elizabeth is brought up in a family where the parents' relationship is eclipsed by misunderstanding and disparity of character, which has a strong effect on the offspring. But she builds up her own character, makes her own judgments, and acts independently. She neither yields to her mother's insistence on accepting Mr. Collins for her family's sake, nor accepts Darcy's proposal, which he makes with no doubt of being accepted for his wealth, until he proves himself worthy of her love; and undoubtedly this could not have happened without Elizabeth's guidance. Elizabeth, too, would not be worthy of so impressive a person had he not guided her on a journey of self-realization that leads her to a better understanding of them both.
Austen knows perfectly well the impact of society on individuals. She "is acutely aware of the family's role in shaping conduct, principle, and ability to love. Austen also recognizes the strength and attraction of family ties" (Menon 2-3). She is not ignorant of the demands of one's community and the claims of society on people. But those who are incapable of judgment and unable to decide for themselves are undoubtedly incapable of guiding their beloveds; therefore, they do not deserve love.
In the Darcy-Elizabeth relationship, Austen emphasizes sense and sensibility, but she does not disregard physical attraction and passion. Passion and physical attraction, Menon observes, "may induce blindness"; yet "she also affirms that it is not necessarily in conflict with judgement" (2). However, one should not expect Austen, the advocate of virtuous love, to depict her hero and heroine's physical appearance explicitly. She is careful not to exceed the limits of convention and the social rules of decency. Darcy is as much attracted by Elizabeth's wit and playfulness as by "the beautiful expression of her dark eyes... Though he had detected with a critical eye more than one failure of perfect symmetry in her form, he was forced to acknowledge her figure to be light and pleasing" (Austen 247). As Menon argues, "His repeated smiles... are one way Austen effectively captures the softening effect of Elizabeth's combination of wit and physical appeal on the self-sufficiency, even the hostility, of the unwilling lover" (32). In their relationship, physical attraction, as Austen displays it, is not at odds with their rationality. Menon believes that "it is Elizabeth's personality that arouses his sexual interest and redefines his response to her physical appearance. His often-repeated attraction to her physical charms is inextricable from his fascination with her playfulness, wit and intelligence" (32). Although Menon climbs Plato's ladder the other way round, she does not distort the essence of his theory: physical love is not in conflict with rationality; rather, it is in accordance with it.
Bingley-Jane Love: The Union of True Affection
Jane and Mr. Bingley, another happy couple, have a different story. Unlike Elizabeth and Mr. Darcy, they are lucky enough to skip some of the rungs of Plato's ladder. A natural agreement of taste and disposition brings them together; "Elizabeth really believed all his [Mr. Bingley's] expectations of felicity, to be rationally founded, because they had for basis the excellent understanding, and super-excellent disposition of Jane, and a general similarity of feeling and taste between her and himself" (Austen 446). They do not confront the kind of misunderstanding and misimpression that Elizabeth and Darcy have to surmount. Paul believes that "Their relationship is based upon harmony arising out of similarity of nature" (91). Mr. Bennet acutely sums up their characters in this way: "Your tempers are by no means unlike. You are each of you so complying, that nothing will ever be resolved on; so easy, that every servant will cheat you; and so generous, that you will always exceed your income" (Austen 446). Both Jane and Bingley are good-natured, easygoing, modest, and disinterested. Jane never sees a fault in anyone. She likes people in general and does not deem them evil or immoral. Bingley is also lighthearted and affable, easily pleased and capable of pleasing easily. Nature has harmonized them very well.
Though Jane and Bingley sincerely love each other, they need to learn to be firm in their love and to trust their feelings. Their separation, planned by Mr. Darcy and Bingley's sisters, bitter as it is, gives them the chance to realize the quality of their love. At the beginning of their courtship, Jane did not demonstrate her feelings. Her modesty and humility prevented her from giving encouragement to Bingley, notwithstanding "how great is the encouragement which all the world gives to the lover" (Plato, Symposium). Charlotte, representing Plato's idea that "open loves" are better than "secret ones" (Symposium), warns Elizabeth that "In nine cases out of ten, a woman had better show more affection than she feels. Bingley likes your sister undoubtedly; but he may never do more than like her, if she does not help him on" (Austen 246). Bingley is not faultless. He is passive and acted upon. He relies more on Mr. Darcy's counsel than on his own judgment. Darcy encourages him to leave Netherfield for London once he concludes that Bingley's attachment is endangered by Jane's low social status and connections. Mr. Bingley does not object and leaves Netherfield. Jane and Bingley are equally blameworthy for their dereliction in love. But fortunately, the separation ends in their favour. When destiny brings them together again, neither Jane nor Bingley doubts their feelings. Their separation, owing to their sincere love, mutual understanding, and respect, reinforces their feelings and appreciation for each other rather than estranging them.
CONCLUSION
At first glance, Pride and Prejudice concentrates more on the social context of courtship than on love itself. However, in spite of Austen's modesty in exhibiting love, social decency is not her only concern. Once the social veils are removed, the truth of love is revealed. Austen emphasizes the cooperation of reason and passion in love. As depicted in the Darcy-Elizabeth love, Austen accentuates the necessity of reason's mentorship over passion to help the lovers see past physical beauty and reach the realm of everlasting beauty and goodness. Initiated by sensual desire, reason illuminates the lovers' way to intellectual and spiritual beauty. Austen insists on the intervention of reason in moderating and restraining passion because she believes that passion has the tendency to violate social rules of decency. Violating the rules of decency and propriety is incompatible with the standards of Austen's polite and civilized society. Therefore, she urges her heroes and heroines to keep a balance between emotional needs and rationality to meet the social claims in their romantic relationships. Otherwise, the immoderation of passion or rationality, as in Lydia-Wickham and Collins-Charlotte, would deprive the lovers of mutual everlasting happiness, violate the rules of social decency, and bring about general disagreement. It is the conduct of lovers in their romantic relationships in a rational and highly civilized society that matters. Considering individuals' relationships, what concerns Austen most is morality and virtue. Austen accents "morality based on reason rather than revelation" (Lane 70).
Comparing Plato's concept of love with Austen's, one finds many similarities between their attitudes. For instance, both consider physical beauty an obstacle to seeing the true nature of the person to be loved. Both consider virtue an essential feature that fosters true love between the lovers. Nevertheless, as the argument above shows, although Austen's approach to love is Platonic, she pays more attention to decorum under the veil of social relationship and confines love in this very way. This indicates that she acted according to the norms of her society at the time. Living in the early nineteenth century, she prioritized the practice of love in a real social context, which mattered more at the time, over placing it in a context where intellectual matters would be of greater significance. That is why she delineates her characters' behavior in a society bounded by norms of decency and rational inclinations. They have to grapple with an inner challenge in order to satisfy their emotional needs on the one hand and keep the bounds of social decency intact on the other.
Evaluation of Lung Tumor Target Volume in a Large Sample: Target and Clinical Factors Influencing the Volume Derived From Four-Dimensional CT and Cone Beam CT
Background and Purpose This study aimed to systematically evaluate the influence of target-related and clinical factors on volume differences and the similarity of targets derived from four-dimensional computed tomography (4DCT) and cone beam computed tomography (CBCT) images in lung stereotactic body radiation therapy (SBRT). Materials and Methods 4DCT and CBCT image data of 210 tumors from 195 patients were analyzed. The internal gross target volume (IGTV) derived from the maximum intensity projection (MIP) of 4DCT (IGTV-MIP) and the IGTV from CBCT (IGTV-CBCT) were compared with the reference IGTV from 10 phases of 4DCT (IGTV-10). The target size, tumor motion, and the similarity between IGTVs were measured. The influence of target-related and clinical factors on the adequacy of IGTVs derived from 4DCT MIP and CBCT images was evaluated. Results The mean tumor motion amplitude in the 3D direction was 6.5 ± 5 mm. The mean size ratio of IGTV-CBCT and IGTV-MIP compared to IGTV-10 in all patients was 0.71 ± 0.21 and 0.8 ± 0.14, respectively. Female sex, greater BSA, and larger target size were protective factors, while the Karnofsky Performance Status, body mass index, and motion were risk factors for the similarity between IGTV-MIP and IGTV-10. Older age and larger target size were protective factors, while adhesion to the heart, coexistence with cardiopathy, and tumor motion were risk factors for the similarity between IGTV-CBCT and IGTV-10. Conclusion Clinical factors should be considered when using MIP images for defining ITV, and when using CBCT images for verifying treatment targets.
INTRODUCTION
With the advent of the era of systemic targeted and immunotherapy, stereotactic body radiation therapy (SBRT) has not only resulted in favorable antitumor effects for primary early non-small cell lung cancer but also shown significant efficacy in oligometastatic lung tumors (1)(2)(3). However, lung SBRT with a combination of targeted or immunotherapy may increase the clinically meaningful risk of pneumonitis (4,5), which may result in treatment failure. Therefore, more attention should be paid to avoiding the normal tissue being unnecessarily irradiated, when increasing the local control in lung SBRT.
Accurate characterization of the target on different simulation images (e.g., conventional three-dimensional CT [3DCT] and four-dimensional CT [4DCT]) and accurate delineation of a reasonable internal target volume (ITV) are preconditions for the success of lung SBRT. A 4DCT scan is considered a reliable tool for simulating respiration-induced intrapulmonary motion (6)(7)(8)(9). The individual ITV derived from 4DCT has been widely used in lung SBRT. The volume encompassing the gross tumor volumes (GTVs) delineated on all phases (typically 10 phases) of the 4DCT is accepted as the standard ITV (6)(7)(8)(9). However, delineating the GTVs on all phases is time-consuming (10,11). Maximum intensity projection (MIP) displays the highest density value encountered in each pixel throughout the respiratory cycle of 4DCT (10)(11)(12)(13)(14). The MIP is therefore often used to generate the ITV instead of the 10 phases of the 4DCT. However, the use of MIP in clinical practice has caused considerable controversy. Several studies have shown that the MIP might be a reliable tool for target definition (11,13), while other research has concluded that the MIP underestimates the size of the ITV and should not be used in isolation (10,14,15). However, these results were obtained from phantom studies and small patient collectives. There is a lack of comprehensive estimates of the ITV derived from the 4DCT MIP images in studies with large sample sizes.
On-board free-breathing cone beam computed tomography (CBCT) is a useful tool for the target localization of lung tumors (16,17). The use of CBCT provides a guarantee of precise irradiation during treatment with lung SBRT. Free-breathing CBCT can simulate lung tumor motion to some extent and can be used to delineate the online ITV (18)(19)(20). Although 4DCBCT has been regarded as a better choice for determination of the ITV during treatment (21,22), it has not been widely used in the clinical setting and provides poor-quality CBCT image sets (23,24). Previous studies have focused on the differences in size between ITVs derived from 4DCT and CBCT. However, the impact of the target-related and clinicopathologic features on these differences has not been demonstrated completely and systematically (25). CBCT shows an inferior soft tissue contrast compared with CT due to different imaging methods. Moreover, the CBCT target volume might be more easily influenced by the clinicopathologic characteristics of the patient, such as pathological pattern and Karnofsky Performance Status (KPS) score. Currently, 3DCT is used for conventional fractionation radiation therapy. A thorough understanding of the variation in size between the GTV on 3DCT and the ITV on 4DCT contributes to determining a reliable ITV.
In this study, we assessed the differences in volume and the similarities of the targets derived from 4DCT MIP, CBCT, and 3DCT compared to the ITV derived from 10 phases of 4DCT. The aim was to systematically evaluate the influence of the target-related and clinicopathologic features on these differences in a large sample of patients. Furthermore, we tried to establish a predictive model in relation to the similarity of the targets derived from 4DCT and CBCT based on the significant target-related and clinicopathologic features. To the best of our knowledge, these results have not been evaluated or reported in previous studies. The availability of such information may contribute to a reasonable application of the ITVs derived from 4DCT MIP and CBCT images in clinical practice.
Patient Selection and Characteristics
This study was a retrospective analysis that was approved by the Shandong Cancer Hospital and Institute ethics board, and the need for informed consent from patients was waived. In total, 195 of 438 patients who underwent lung SBRT between May 2015 and December 2019 at the Shandong Cancer Hospital and Institute were enrolled. Among the 195 patients, 11 had multiple tumors; this study included a total of 210 tumors. One hundred sixty-two targets were primary lung cancers (146 tumors) and metastases of lung cancer (16 tumors), and 48 were metastases of other solid cancers. All the patients were selected on the basis of the following criteria: 1) peripheral lung tumors or metastases; 2) 4DCT and CBCT images of adequate quality; and 3) GTV that was identifiable on CT images. Patients were excluded if they met the following criteria: 1) 4DCT or CBCT images were missing; 2) the tumors were extensive and diffuse; or 3) the tumor boundary could not be easily distinguished from the surrounding pneumonia.
CT Simulation and Image Acquisition
All patients were immobilized using vacuum bags or the Body Pro-Lok ONE ™ system (CIVCO, Coralville, IA) in the supine position with their arms raised above their head. For each patient, a conventional 3DCT scan of the thoracic region was performed, followed by a 4DCT scan during free breathing on a Brilliance Big Bore CT simulator (Philips Medical Systems, Highland Heights, OH). The 3DCT and 4DCT acquisition protocols have been reported in our previous study (26,27). The 4DCT images were sorted into 10 bins according to the phase of the breathing signal, with 0% corresponding to end-inhalation and 50% corresponding to end-exhalation. MIPs of the 4DCT data sets were then generated and contained the maximum Hounsfield unit (HU) in each geometric voxel across all time-resolved datasets. The CT images were reconstructed using a thickness of 3 mm or 2 mm (tumors within 1 cm in diameter) and then transferred to the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA). Three-dimensional conformal radiotherapy (3D-CRT) or intensity-modulated radiation therapy (IMRT) treatment planning was performed based on conventional 3DCT or the average intensity projections (AIP) for lung SBRT.
Online Image Acquisition
On the linear accelerator, the patients were aligned according to skin tattoos using an in-room laser system. The CBCT images were acquired with the gantry-mounted on-board imager (Varian Medical Systems, Palo Alto, CA). The first CBCT image was acquired immediately after setup. The scan time was approximately 60 s, and approximately 650 2D kV images were captured during the full 360° rotation. CBCT images were reconstructed using a thickness of 2.5 mm. The CBCT scan was rigidly registered to the planning CT. An automatic registration of the bony anatomy was performed using a user-defined region of interest including the spinal cord. The registration was evaluated by the radiation therapists and manually corrected if necessary. Then, the registered CBCT images were automatically transferred to the Eclipse treatment planning system (Varian Medical Systems).
Target Volume Contouring
GTV-3D was contoured based on the 3DCT images, and GTVs were contoured on each of the 10 4DCT phases. The internal GTV encompassing the 10 phases (IGTV-10) was generated using the 4D tool. IGTV-MIP and IGTV-CBCT were contoured based on the MIP of 4DCT and the CBCT images, respectively. All contours were performed by an experienced radiation oncologist using the same contouring protocol, as follows: 1) GTVs were delineated using a standard lung gray-scale window setting in the Aria Eclipse environment (Varian Medical Systems) (25); 2) the use of the standard mediastinum window was allowed for information purposes to avoid the inclusion of adjacent vessels and mediastinal or chest wall structures; and 3) blurring in the periphery of the tumor, representing the "partial volume effect" and the "partial projection effect for moving objects," was included in the GTVs (28). Another radiation oncologist reviewed all contours and rectified them if necessary. The GTVs contoured on the basis of 3DCT, CBCT, end-exhalation, MIP, and the 10 phases of 4DCT of the 132 tumors are shown in Figure 1.
Tumor Motion
The coordinates in the left-right (LR), anterior-posterior (AP), and cranial-caudal (CC) directions of the center of mass (COM) of the GTVs in the 10 phases of 4DCT were measured. The peak-to-peak displacement of the COM in the three directions was calculated from these coordinates, representing the tumor motion. The 3D motion vector (vector) of the COM was calculated as follows: $\mathrm{vector} = \sqrt{\Delta_{LR}^{2} + \Delta_{AP}^{2} + \Delta_{CC}^{2}}$, where $\Delta_{LR}$, $\Delta_{AP}$, and $\Delta_{CC}$ denote the peak-to-peak displacements in the three directions.
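As a concrete illustration of this calculation, the sketch below derives the per-axis peak-to-peak displacements and the 3D motion vector from per-phase COM coordinates; the array layout and values are invented for the example, not taken from the study.

```python
import numpy as np

def motion_vector(com_coords):
    """Peak-to-peak COM displacement per axis and the 3D motion vector.

    com_coords: (10, 3) array of center-of-mass coordinates
    (LR, AP, CC) for the 10 respiratory phases, in mm.
    """
    # Peak-to-peak displacement along LR, AP, and CC
    peak_to_peak = com_coords.max(axis=0) - com_coords.min(axis=0)
    # 3D motion vector: Euclidean norm of the three displacements
    vector = float(np.linalg.norm(peak_to_peak))
    return peak_to_peak, vector

# Hypothetical COM positions (mm) across 10 phases of 4DCT
coms = np.array([
    [0.5, 1.2, 3.0], [0.6, 1.0, 4.5], [0.4, 0.9, 5.8], [0.5, 1.1, 6.2],
    [0.6, 1.3, 5.5], [0.5, 1.2, 4.1], [0.4, 1.0, 3.2], [0.5, 1.1, 2.8],
    [0.6, 1.2, 2.9], [0.5, 1.1, 3.1],
])
p2p, vec = motion_vector(coms)
print("LR/AP/CC peak-to-peak (mm):", p2p, "3D vector (mm):", round(vec, 1))
```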
Target-Related and Clinical Factors
Target-related factors included the size, location (lobes, abutment relation, and zones), and 3D motion of the target.
The size of the GTV derived from the end-exhalation phase of 4DCT was used to represent the size of the target. The abutment relationship classified tumors as solitary pulmonary tumors or as tumors adjacent to the chest wall, the mediastinum, or the diaphragm. Zoning referred to the interior, intermediate, and lateral third zones of the ipsilateral lung. Clinical factors included patient sex, age, body mass index (BMI), body surface area (BSA), KPS, smoking history, pathology, and presence or absence of coexisting pulmonary disease, heart disease, hypertensive disease, or diabetes.
Dice Similarity Coefficient (DSC)
The DSC of volumes A and B was defined as the ratio of the volume of their intersection to their average volume, with a value of 1 indicating identical volumes and 0 indicating no overlap of the two volumes. It is calculated using the following formula (27,29): $DSC(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}$. The inter-quartile range (IQR) was used to assign the DSCs of IGTV-MIP and IGTV-10 and the DSCs of IGTV-CBCT and IGTV-10 to a qualified group or an unqualified group. The upper quartile was chosen as the critical value; the two values were 0.9 for the DSCs of IGTV-MIP and IGTV-10, and 0.75 for the DSCs of IGTV-CBCT and IGTV-10. A DSC equal to or greater than the critical value was considered qualified.
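A minimal sketch of this computation on binary voxel masks, consistent with the definition above; the toy volumes are hypothetical and stand in for contoured targets.

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """DSC of two binary volumes: 2*|A intersect B| / (|A| + |B|).
    Returns 1.0 for identical volumes and 0.0 for no overlap."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

# Two toy 4x4x4 volumes whose "targets" half-overlap
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True
print(dice_similarity(a, b))  # 0.5
```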
Statistical Analysis
Multiple logistic regression models were used to explore the risk factors for the DSC of IGTV-MIP and IGTV-10 and for the DSC of IGTV-CBCT and IGTV-10. Backward stepwise regression based on the Akaike information criterion was used to select important variables. Once a model was established, we used it to predict risk, and the prediction performance was presented using a receiver operating characteristic (ROC) curve and the area under the curve (AUC). Individuals' characteristics were described and grouped by the DSC of IGTV-MIP and IGTV-10 and the DSC of IGTV-CBCT and IGTV-10. Variables were described using means [standard deviations (SD)], medians [IQR], and numbers (%), as appropriate. Differences in these variables were assessed by a two-sample t test, Wilcoxon rank-sum test, and chi-square or Fisher exact test, as appropriate. All analyses were performed using R, version 4.0.4 (R Foundation for Statistical Computing, Vienna, Austria). Hypothesis tests were two-sided, and we considered p < 0.05 to be statistically significant.
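The analyses were run in R; purely as an illustration of the same workflow (multiple logistic regression on a binary DSC outcome, then ROC/AUC), a Python sketch on synthetic data follows. The predictor names and simulated effects are placeholders, and the AIC-based backward selection step is omitted for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: one row per tumor, with a binary outcome
# "qualified" (DSC >= cutoff) and a few candidate predictors.
rng = np.random.default_rng(0)
n = 210
df = pd.DataFrame({
    "target_size": rng.lognormal(1.5, 0.8, n),   # cm^3
    "motion_3d": rng.gamma(2.0, 3.0, n),         # mm
    "bmi": rng.normal(24.0, 3.0, n),
    "female": rng.integers(0, 2, n),
})
# Simulated outcome loosely echoing the reported directions of effect
logit = -1.0 + 0.04 * df["target_size"] - 0.15 * df["motion_3d"] \
        + 0.5 * df["female"]
df["qualified"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = df[["target_size", "motion_3d", "bmi", "female"]]
y = df["qualified"]
model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, probs), 3))
```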
RESULTS
The distribution of the DSC of IGTV-MIP and IGTV-10 grouped by the position of the cancer is shown in Figure 2A. The distribution was skewed, and tumors in the right lower lobe had a worse DSC. The distribution of the DSC of IGTV-CBCT and IGTV-10 grouped by the position of the cancer is shown in Figure 2B. A skewed distribution was also found, and worse DSCs were observed in the right middle lobe and right lower lobe. Table 2 presents the standardized odds ratios (ORs) and 95% confidence intervals (95% CIs) between the study variables and the DSC of IGTV-MIP and IGTV-10, and Table 3 shows standardized ORs and 95% CIs between the selected variables and the DSC of IGTV-CBCT and IGTV-10. The ROC curve and AUC of the prediction model for the DSC of IGTV-MIP and IGTV-10 are shown in Figure 3A; the AUC was 0.756, indicating a good prediction effect. The ROC curve and AUC of the prediction model for the DSC of IGTV-CBCT and IGTV-10 are shown in Figure 3B; the AUC was 0.834, representing a good prediction effect.
DISCUSSION
4DCT and 3DCT images acquired during simulation are usually used to generate treatment target volumes, while the CBCT images acquired before treatment are used to verify these volumes. A thorough understanding of the potential relationship between the volumes derived from 4DCT, 3DCT, and CBCT images may contribute to increased accuracy of SBRT. Previous reports have clarified the impact of target characteristics on the volumes derived from 4DCT, 3DCT, and CBCT images only in phantom studies and small patient collectives. We systematically investigated the influence of target-related and clinical characteristics on these volumes in a large study population.
When comparing the difference in size between IGTV-MIP and IGTV-10, we found that the IGTV-MIP size was, on average, 20% smaller than the IGTV-10 size. This finding is consistent with previous studies (10,14,15). Borm et al. (15) showed that 4DCT MIP-based ITVs were 20.2% smaller on average than 10-phase 4DCT ITVs. Muirhead et al. (10) showed that ITV-MIP volumes were, on average, 19% smaller than ITV-10-phase volumes. However, some reports are not consistent with our results (7,11). Ge et al. (7) reported that the ITV derived from MIP showed an underestimation of approximately 10% compared with that of 10-phase 4DCT; they found a mean volumetric difference between PTV-MIP and 4D PTV-10 of 7% ± 5%. These inconsistencies suggest that some potential influencing factors might have led to the differences in size between IGTV-MIP and IGTV-10. Previous studies have reported that the tumor size, motion amplitude, and abutment relationship might influence this difference (7,10,14,15). Our study showed that tumor size had a positive correlation with the size ratio of IGTV-MIP to IGTV-10 (r = 0.327, p < 0.001), while tumor motion had a negative correlation with the ratio (r = -0.207, p = 0.003). Additionally, we found that tumor location and smoking history had an influence on this difference (p = 0.013 and 0.043). Other characteristics (for example, abutment relationship, pulmonary surgery, cardiopulmonary disease, primary or metastatic carcinoma, and so on) had no significant influence on the difference in size. Further analysis found a mean DSC of IGTV-MIP and IGTV-10 of 0.84 ± 0.09. We set the cutoff value at the third quartile of the DSC (0.9) to evaluate the similarity between IGTV-MIP and IGTV-10 and analyzed the impact of target-related and clinical factors on this threshold. Our finding that larger targets with small tumor motion in the 3D direction had a better DSC than smaller targets or those with larger tumor motion supports previously published studies (8,15). However, the abutment relationship was not significantly correlated with the DSC in our study, even though tumors adjoining the diaphragm tended to have a poor DSC.
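The size ratios and correlations reported above are simple derived quantities; a hedged sketch of their computation with SciPy follows, using invented placeholder values rather than the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder per-tumor volumes (cm^3) and 3D motion (mm)
igtv_mip = np.array([4.1, 9.8, 2.2, 15.0, 6.3, 1.8, 22.5, 7.7])
igtv_10 = np.array([5.0, 11.5, 3.1, 17.2, 8.0, 2.9, 24.8, 9.6])
motion_3d = np.array([3.2, 5.5, 12.0, 2.1, 8.4, 14.3, 1.5, 6.0])

size_ratio = igtv_mip / igtv_10
r_size, p_size = pearsonr(igtv_10, size_ratio)        # size vs. ratio
r_motion, p_motion = pearsonr(motion_3d, size_ratio)  # motion vs. ratio
print(f"mean ratio={size_ratio.mean():.2f}, "
      f"r_size={r_size:.2f} (p={p_size:.2g}), "
      f"r_motion={r_motion:.2f} (p={p_motion:.2g})")
```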
Additionally, female sex, BSA, BMI, and KPS were significantly associated with the DSC of IGTV-MIP and IGTV-10. Sex was an important influencing factor (std. estimate = 1.358), and female patients had a better DSC than male patients. We believe that this is because female patients tended to have smaller tumor motion and lower BMI than male patients. It is possible that a larger BMI (std. estimate = -0.539) reduced the DSC of IGTV-MIP and IGTV-10 because BMI had a negative impact on the sharpness of MIP images. The ROC curve and AUC of the prediction model for the DSC of IGTV-MIP and IGTV-10 showed an AUC of 0.756, indicating a good prediction effect. The significance of clinical characteristics should be highlighted when using IGTV-MIP in SBRT.
In this study, on average, IGTV-CBCT size was 29% smaller than IGTV-10 size and 9% smaller than IGTV-MIP size, which was in accordance with results reported by other authors. Vergalasova et al. (30) reported that the IGTV derived from free-breathing CBCT showed a volume underestimation of 40.1% for smaller tumors and 24.2% for larger tumors compared to the 4DCBCT-based IGTV. Liu et al. (25) found the medium IGTV-CBCT was, on average, approximately 11.8% smaller than the IGTV based on end-inhalation and endexhalation phases. Wang et al. observed that the IGTV from CBCT was 3.1-9.3% smaller than that derived from 4DCT MIP. However, Wang et al. (19) reported that the difference in size between IGTVs derived from CBCT and 4DCT 10-phases was within 8%, which was a far smaller difference than that noted in our result (29%). Some studies (30,31) have shown that irregular breathing patterns might lead to this misinterpretation.
Additionally, Wang et al. (19) hypothesized that the characteristics of the target and the patient might have an impact on the CBCT target volume. They concluded that the location of the tumor was a major source of discrepancy between ITV-CBCT and ITV-10. We believe that the relatively small number of patients (n=71) included in their study might have impacted their results. For this reason we included a larger number of tumors (n=210) and we investigated the clinical features to more accurately assess the influence of the target and patient characteristics on the DSC of IGTV-CBCT and IGTV-10.
The mean DSC of IGTV-CBCT and IGTV-10 was 0.64 ± 0.17. The cut-off value was again defined as the third quartile of the DSC of IGTV-CBCT and IGTV-10 (0.75). Multivariate analysis showed that the tumor abutment relationship was an important factor impacting the DSC; in particular, the DSC was worse for tumors adjoining the mediastinum (heart). Additionally, we found that coexisting cardiopulmonary disease and larger tumor motion might reduce the DSC of IGTV-CBCT and IGTV-10, while larger tumor size, older age, and greater BSA might increase it. The AUC of the prediction model for the DSC of IGTV-CBCT and IGTV-10 was 0.834, representing a good prediction effect. Although a PTV margin is used in clinical practice, this finding indicates that an extra margin might be required to account for the discrepancy between IGTV-CBCT and IGTV-10 arising from target-related and patient characteristics.
We also evaluated the difference in size between GTV-3D and IGTV-10 and between GTV-EE and IGTV-10 among a greater number of patients because previous studies were usually based only on a few cases. The size of the GTV-3D was 47% smaller than the IGTV-10 size, and the GTV-EE size was 50% smaller.
These results were consistent with those of previous studies (26,32). Some tumor-related and patient features may have an impact on these differences.
It should be noted that all the contouring was performed by one oncologist to avoid interobserver variability. Although some systematic intra-observer variability may be inevitable, using an oncologist experienced in contouring and strict contouring criteria helped reduce this variability. Additionally, we used the first CBCT image to remove the impact of tumor shrinkage. However, the first CBCT may not represent interfraction variability: the amplitude and baseline of respiratory motion might change throughout the treatment (21). There would thus be an inherent variation between the IGTV-10 derived from 4DCT and the IGTV-CBCT derived from the treatment CBCT, which may lead to incorrect conclusions.
CONCLUSION
In a large sample, we identified the discrepancy between IGTV-MIP and IGTV-10, and between IGTV-CBCT and IGTV-10. Target-related factors (such as tumor motion and size) showed significant influences on this discrepancy. Moreover, several clinical factors significantly influenced the discrepancy between the IGTVs derived from 4DCT, 4DCT MIP, and CBCT. The prediction models for the DSC of the IGTVs derived from 4DCT and CBCT showed good predictive value. Clinical factors should be considered when using MIP images for defining the ITV and when using CBCT images for verifying the treatment targets.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Shandong Cancer Hospital and Institute ethics board. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
FL and JL contributed to the study design, target delineation, patient enrollment, and writing of the manuscript. TTZ and YQ participated in the data statistics and analysis and in writing the manuscript. XS and ZC contributed to patient enrollment. TZ participated in the study design and in the data statistics and analysis. All authors read and approved the final manuscript.
Genomic and transcriptomic resources for candidate gene discovery in the Ranunculids
Premise Multiple transitions from insect to wind pollination are associated with polyploidy and unisexual flowers in Thalictrum (Ranunculaceae), yet the underlying genetics remains unknown. We generated a draft genome of Thalictrum thalictroides, a representative of a clade with ancestral floral traits (diploid, hermaphrodite, and insect pollinated) and a model for functional studies. Floral transcriptomes of T. thalictroides and of wind‐pollinated, andromonoecious T. hernandezii are presented as a resource to facilitate candidate gene discovery in flowers with different sexual and pollination systems. Methods A draft genome of T. thalictroides and two floral transcriptomes of T. thalictroides and T. hernandezii were obtained from HiSeq 2000 Illumina sequencing and de novo assembly. Results The T. thalictroides de novo draft genome assembly consisted of 44,860 contigs (N50 = 12,761 bp, 243 Mbp total length) and contained 84.5% conserved embryophyte single‐copy genes. Floral transcriptomes contained representatives of most eukaryotic core genes, and most of their genes formed orthogroups. Discussion To validate the utility of these resources, potential candidate genes were identified for the different floral morphologies using stepwise data set comparisons. Single‐copy gene analysis and simple sequence repeat markers were also generated as a resource for population‐level and phylogenetic studies.
Read pre-processing and genome assembly-Raw read quality was visually inspected with FastQC (Andrews, 2015) and then treated with Trimmomatic version 0.38 (Bolger et al., 2014) to remove adapter sequences and low-quality bases, keeping only paired-end reads with at least 80 bp for de novo assembly. Assemblies were initially conducted in CLC Genomics Workbench version 7.5.1 (QIAGEN, Hilden, Germany) and SOAPdenovo version 2.04 (Luo et al., 2012a); assemblies were then scanned using BlobTools (Laetsch and Blaxter, 2017) to identify contigs originating from contaminants and finalized in MaSuRCA version 3.2.9 (Zimin et al., 2013). Genome sequences of potential contaminants were downloaded from the National Center for Biotechnology Information (NCBI) and used to map the clean reads with BBDuk (BBMap version 35.85; https://sourceforge.net/projects/bbmap/), keeping only unmapped reads for genome assembly. K-mer-based statistics were computed for the clean reads with Jellyfish version 2.2.10 (Marçais and Kingsford, 2011), GenomeScope version 1.0 (Vurture et al., 2017), and Smudgeplot version 0.2.3 (Ranallo-Benavidez et al., 2020). One assembly was generated for each of the two T. thalictroides accessions, and a third was generated by combining the data from the two accessions. Each assembly was polished using Pilon version 1.23 (Bruce et al., 2014), with two rounds of error correction. Assembly statistics were computed with Quast version 5.0.2 (Gurevich et al., 2013), and sequence repeats were identified with RepeatModeler version 1.0.11 (http://www.repeatmasker.org/RepeatModeler/) and RepeatMasker version 4.0.9.p2 (http://www.repeatmasker.org/RepeatMasker/). Simple sequence repeat (SSR) markers and loci with di- to hexanucleotide repeats were identified in the genome and transcriptomes using the MicroSAtellite Identification tool (MISA; Thiel et al., 2003). The accuracy of the assemblies was assessed by mapping the contaminant-free clean reads back to the assembly using Bowtie2 version 2.4.1 (Langmead et al., 2012) and computing the fraction of reads that map in the correct orientation (forward-reverse) and within the length range of the insert size used to build the library; the insert size distribution was then estimated from the mapped paired-end reads using the CollectInsertSizeMetrics function in Picard version 2.23.8 (http://broadinstitute.github.io/picard/).
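SSR detection of the kind MISA performs amounts to scanning for short motifs repeated above a length threshold; a minimal Python approximation for di- to hexanucleotide repeats is sketched below. The repeat-count thresholds are illustrative (MISA's are configurable), not necessarily the settings used by the authors.

```python
import re

# Minimum repeat counts per motif length; illustrative thresholds only
MIN_REPEATS = {2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq):
    """Yield (start, motif, n_repeats) for di- to hexanucleotide SSRs."""
    seq = seq.upper()
    for motif_len, min_n in MIN_REPEATS.items():
        # A motif of the given length repeated at least min_n times
        pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (motif_len, min_n - 1))
        for m in pattern.finditer(seq):
            motif = m.group(2)
            # Skip motifs made of a single base (homopolymers)
            if len(set(motif)) > 1:
                yield m.start(), motif, len(m.group(1)) // motif_len

print(list(find_ssrs("GGG" + "AT" * 8 + "CCC" + "CAG" * 6)))
```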
Thalictrum thalictroides and T. hernandezii floral transcriptomes
Plant materials and RNA extractions-Fresh open flowers were collected from an individual of T. thalictroides (Fig. 1A) that was also used for genome sequencing (TtWT478, hermaphroditic flowers) and from T. hernandezii (Fig. 1B, Th_HWT441 hermaphroditic and Th_SWT441 staminate flowers) (Fig. 1, Appendix 1). Flowers of T. thalictroides and T. hernandezii were flash-frozen in liquid nitrogen and total RNA was extracted with TRIzol (Invitrogen, Carlsbad, California, USA), following the manufacturer's instructions. RNA quality (RIN ≥ 6.5) and concentration were determined in an Agilent 2100 Bioanalyzer and with agarose gel electrophoresis. Read pre-processing, sequence assembly, and annotation-Contaminant reads were removed using BBDuk (BBMap version 35.85) by mapping against the same contaminants detected during genome assembly (see above); de novo assembly was then conducted in Trinity version 2.8.5 (Grabherr et al., 2011) and assessed for completeness with BUSCO version 3.0 (as for the genome assembly). Two de novo transcriptome assemblies were generated, one for T. thalictroides and another for T. hernandezii (combining libraries for the two flower types). Only contigs larger than 200 bp were used in further analyses. Read mapping metrics were computed against the assemblies using Bowtie2 version 2.4.1 (Langmead et al., 2012) with the parameters --maxins 1000 --very-sensitive and Salmon version 0.14.0 (Patro et al., 2017) with the parameters quant --validateMappings --seqBias, as a measure of transcriptome accuracy. Assembled transcripts were compared against NCBI's non-redundant protein database using Diamond version 0.9.23 and assessed in MEGAN-LR version 6.14.2 (Huson et al., 2018). Polypeptides encoded by the assembled transcripts were identified with TransDecoder version 5.5.0, including BLASTP hits against the SwissProt database and profile hidden Markov model hits against the Pfam database (El-Gebali et al., 2019). Functional annotation, including identification of TAPs and clustering of shared (orthologous) genes, or "orthogroups, " was carried out as described above.
RNA library preparation-Library
Identification of candidate genes-First, we performed a three-way comparison to identify orthogroups among the T. thalictroides genome and the T. thalictroides and T. hernandezii de novo transcriptomes. We considered that orthogroups present in the T. thalictroides genome and transcriptome, but not found in the T. hernandezii transcriptome, are more likely to contain high-confidence genes involved in the development of floral traits that characterize insect-pollinated flowers (Fig. 1A). Conversely, orthogroups expressed exclusively in T. hernandezii (i.e., not found in the T. thalictroides transcriptome) that map to the T. thalictroides draft genome were considered more likely to contain high-confidence genes involved in the development of floral traits that characterize wind-pollinated flowers (Fig. 1B). Second, we analyzed T. hernandezii-specific orthogroups to identify transcripts associated with the different floral sexes, i.e., staminate (male) vs. hermaphrodite.
To that end, we computed the expression level of the de novo-assembled T. hernandezii transcripts in the two T. hernandezii data sets (Ther_S and Ther_H) using Salmon version 0.14.0 with options --seqBias --validateMappings --recoverOrphans --libType A and considered a transcript as expressed when it had at least a single mapped read. For the data intersections of interest, we performed an enrichment analysis of the families of TAPs using Fisher's exact test, correcting P values for the false discovery rate using the Benjamini-Hochberg method (Benjamini and Hochberg, 1995), and a gene ontology term enrichment analysis with topGO (Alexa and Rahnenfuhrer, 2010), using the weight0 method and correcting P values for the false discovery rate with the Benjamini-Hochberg method.
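A compact sketch of this enrichment step (Fisher's exact test per TAP family, followed by Benjamini-Hochberg correction) using SciPy and statsmodels; the contingency counts below are placeholders.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Placeholder 2x2 counts per TAP family:
# rows = in family / not in family, cols = in intersection / not
tables = {
    "MADS-box":    [[12, 58], [420, 9510]],
    "MYB-related": [[30, 140], [402, 9428]],
    "bHLH":        [[8, 150], [424, 9418]],
}

pvals = []
for counts in tables.values():
    _, p = fisher_exact(counts, alternative="greater")  # test for enrichment
    pvals.append(p)

# Benjamini-Hochberg false discovery rate correction
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p, q, sig in zip(tables, pvals, p_adj, reject):
    print(f"{name}: p={p:.3g}, FDR-adjusted p={q:.3g}, enriched={sig}")
```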
A draft nuclear genome for Thalictrum thalictroides
Genome sequencing and de novo assembly-Paired-end sequencing of the six libraries from two live accessions of T. thalictroides resulted in 49,105,897 sequenced fragments, or 8.8 Gbp, after quality-trimming and contaminant removal (Appendix 2). The genome sequences of contaminants identified by BlobTools version 1.0.1 (Laetsch and Blaxter, 2017; Appendix S1), as well as mitochondria and plastids (Appendix S2), were used to map the clean reads with BBDuk, keeping only unmapped reads for further processing. Short-read data from TtWT478 had contamination from an aphid (GCF_000142985.2) and its bacterial endosymbiont (GCF_000009605.1; Appendix S1), which was removed. The T. thalictroides draft genome assemblies were generated with MaSuRCA version 3.2.9; their metrics are shown in Table 1 (Appendices S3, S4). The best assembly, as measured by mapped reads (Appendix S4), assembly contiguity metrics (Table 1), and gene content (BUSCOs; Fig. 2), was generated by combining both accessions. This "consensus" assembly consisted of 44,860 contigs, with N50 = 12,761 bp (Table 1, Appendix S2) and 83.8% complete conserved embryophyte single-copy genes (88.5% when considering complete and fragmented BUSCOs; Fig. 2). The BUSCO estimate increased to 84.5% after gene prediction (90.9% when considering complete and fragmented BUSCOs). We confirmed with Smudgeplot (Ranallo-Benavidez et al., 2020) that the joint data set behaves like a diploid genome (Appendix S5), and the low level of duplicated BUSCOs in the consensus supports this (46/1440, or 3.2%; Fig. 2). We were able to map, on average, 5.75% more high-quality reads to the consensus assembly than to either individual assembly across the different next-generation sequencing libraries (Appendix S4). Mean contig coverage for the consensus assembly (from the two combined accessions) was 55× (median = 31.9×). Our genome size estimate from the consensus was 243.1 Mbp, comparable to the 286.4 Mbp estimate from k-mer frequency statistics in GenomeScope version 1.0; the heterozygosity estimate was 1.23% (Appendix S6).
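The N50 reported here is the standard contiguity summary; for reference, a minimal sketch of how it is computed from contig lengths:

```python
def n50(contig_lengths):
    """Smallest contig length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0

print(n50([10, 8, 6, 5, 4, 3, 2]))  # total = 38; 10 + 8 + 6 = 24 >= 19 -> 6
```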
More than one third of the genome (37.6%) could be assigned to different classes of repeat elements, with the most abundant class being the autonomous long terminal repeat retroelements, particularly from the Copia (7.7%) and Gypsy (6.8%) superfamilies (Appendix S7). SSRs were also a common occurrence in the genome; 65,651 were identified and the most common SSR was dinucleotide (Table 2; Arias et al., 2020a).
We predicted 33,624 protein-coding genes and 1936 non-coding RNA loci (Appendix S8). Orthogroup analysis resulted in 67.4% of predicted protein-coding genes forming clusters with at least one of the four reference genomes included in the analysis (Appendix S9); most were shared by all five species, providing support for our gene predictions. TransRate analysis also showed that a large number of predicted proteins in T. thalictroides had best bi-directional BLAST hits against the reference genomes of A. thaliana, S. lycopersicum, A. coerulea, and P. somniferum (Appendix S10), and 88.3% of predicted protein-coding genes had hits against InterPro member databases (Arias et al., 2020b). Functional descriptions were added for 22,603 protein-coding genes. We identified 1569 TAPs; 1258 of these were transcription factors (TFs) that can be grouped into 68 TF families, and the remaining 311 belonged to 29 families of other transcriptional regulators (oTRs) (Fig. 3, Appendix S11).
Thalictrum thalictroides and T. hernandezii floral transcriptome assembly
De novo transcriptome assemblies of T. thalictroides (Tt; GHXU00000000) consisted of 54,104 contigs (N50 = 1817 bp), while T. hernandezii (Th; GHXT00000000) had 124,707 contigs (N50 = 1703 bp), with 80.1% and 82.9% identified complete BUSCOs, respectively (Table 3). For T. thalictroides, the total (Arias et al., 2020c). There were approximately twice as many conditional reciprocal best BLAST hits in T. hernandezii compared to T. thalictroides with either of the reference transcriptomes, which is consistent with the former being a tetraploid (Table 4). A search within the Thalictrum floral transcriptomes detected 30,457 SSR markers, with approximately half of the analyzed contigs containing SSRs. The most common SSR was trinucleotide, followed by dinucleotide (Table 2; Arias et al., 2020a).
High representation of transcription factors in floral transcriptomes
The T. thalictroides and T. hernandezii floral transcriptomes contained 3541 transcripts that could be assigned to 94 TAP families; of these, MYB-related, AP2-EREBP, bHLH, and bZIP were among the top families. MADS-box genes (which include the floral organ identity genes) were found in all floral transcriptomes at approximately 2% relative abundance, and several TAP families were represented at significantly different levels in the three transcriptomes (Fig. 3).
Functional annotation of floral transcriptomes
To optimize orthogroup detection, the Thalictrum transcriptomes were compared against multiple reference genomes: (1) the T. thalictroides draft genome (this work); (2) A. thaliana (Brassicaceae); and (3) the two phylogenetically most closely related genomes available, A. coerulea (Ranunculaceae) and P. somniferum (Papaveraceae, Ranunculales). As a result of these comparisons, we identified 14 T. thalictroides- and 75 T. hernandezii-specific orthogroups (Appendix S12). Stepwise comparisons were conducted to (a) validate transcripts against the T. thalictroides draft genome and (b) conduct inter- and intraspecies qualitative comparisons between wind- vs. insect-pollinated and male vs. hermaphrodite floral morphologies (Fig. 1). First, we performed an inter-species transcriptome comparison, using the draft genome for validation (Fig. 4A). The three-way intersection in the Venn diagram (5204 orthogroups) represents orthologs expressed in both species that can also be mapped to the T. thalictroides draft genome. A "core" of 9556 orthogroups common to both species is represented by three of the intersecting areas (5204 + 1298 + 3054). Intersection area "a" (3477 orthogroups) comprised 6451 transcripts uniquely expressed in T. thalictroides that also mapped to the reference genome (and were therefore considered of high confidence). Intersection area "b" (3054 orthogroups) comprised 11,251 transcripts found exclusively in T. hernandezii and similarly validated by the reference genome. Two sets of species-specific orthogroups not found in the genome (274 and 473 orthogroups each) could represent lineage-specific expansions and/or losses, or artifacts arising from incomplete sequencing. Finally, orthogroups found in both transcriptomes but not in the genome (1298) point to limitations due to fragmentation, as many can be found in other reference genomes (Appendix S12). Second, we performed an intraspecific comparison within the T. hernandezii-specific orthogroups validated by our draft genome (Fig. 4A, area "b"; 3054 orthogroups). Transcripts from the male (Ther_S) and hermaphrodite (Ther_H) floral transcriptomes were compared, yielding 447 male-specific and 765 hermaphrodite-specific transcripts (Fig. 4B, areas "c" and "d", respectively).
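These stepwise comparisons are, in essence, set operations over orthogroup and transcript memberships; a compact sketch with hypothetical identifiers:

```python
# Hypothetical orthogroup IDs per data set (placeholders)
genome_tt = {"OG1", "OG2", "OG3", "OG4", "OG6"}   # T. thalictroides genome
trans_tt = {"OG1", "OG2", "OG5", "OG6"}           # T. thalictroides flowers
trans_th = {"OG1", "OG3", "OG5", "OG7"}           # T. hernandezii flowers

# Three-way intersection: expressed in both species, mapped to the genome
three_way = genome_tt & trans_tt & trans_th

# Area "a": unique to the insect-pollinated species, genome-validated
area_a = (trans_tt - trans_th) & genome_tt
# Area "b": unique to the wind-pollinated species, genome-validated
area_b = (trans_th - trans_tt) & genome_tt
# Shared by both transcriptomes but absent from the draft genome
# (candidate assembly fragmentation)
missing = (trans_tt & trans_th) - genome_tt

# Within area "b": male- vs. hermaphrodite-specific expressed transcripts
expressed_male = {"T1", "T2"}   # Ther_S, placeholder transcript IDs
expressed_herm = {"T2", "T3"}   # Ther_H
area_c = expressed_male - expressed_herm
area_d = expressed_herm - expressed_male

print(three_way, area_a, area_b, missing, area_c, area_d)
```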
DISCUSSION
The genome

Our genome assembly was smaller than previous genome size estimates for this species (Soza et al., 2013), which we attribute to the relatively low sequencing depth and to the presence of complex or long repetitive regions (longer than the short reads) that were not recovered in our sequencing and/or assembly. Both of these limitations can be overcome in the near future with additional deep sequencing from third-generation sequencing technologies. Among the most abundant repetitive elements found, long terminal repeat transposable elements have previously been found to underlie homeotic flower mutants of T. thalictroides (Galimba et al., 2012). Our heterozygosity estimate of 1.23% is higher than that of A. coerulea (0.2-0.35%; Filiault et al., 2018) but still within the range of obligate outcrossers (Leffler et al., 2012). The number of TF families detected in the T. thalictroides draft genome is comparable to that found in the genomes of A. thaliana, S. lycopersicum, A. coerulea, and P. somniferum. The ratio of TFs to other transcriptional regulators was similar among the three transcriptomes. The use of multiple reference genomes captured most orthologs in our transcriptomes, aiding in the validation of our gene predictions. Because T. hernandezii is a tetraploid (Soza et al., 2013), de novo transcriptome assembly was expected to include up to four expressed alleles per gene, thus explaining the larger number of assembled transcripts and of reciprocal best hits per reference for this species (Tables 3, 4), as well as the larger fraction of duplicated BUSCOs.
Applications: Data-mining examples for candidate genes in flower development
To test for applications of our results, we identified potential candidate genes for the morphological differences between flower types in the two species (Fig. 1). Our goal was to provide a preliminary, qualitative working list of candidate genes for future investigations of the genetic basis of distinct sexual systems (hermaphroditic vs. unisexual) and pollination modes (insect vs. wind). To that end, we first searched for known candidate genes within our comparisons (Fig. 4). Venn diagram areas "a" and "b" (Fig. 4A) represent functional orthogroups for flowers with distinct morphologies due to differing pollination modes: insect-pollinated T. thalictroides vs. wind-pollinated T. hernandezii. Venn diagram areas "c" and "d" (Fig. 4B) represent examples of transcripts expressed in flowers with distinct sexual systems: staminate flowers, with sepals and stamens, and hermaphrodite flowers, with added carpels. It is possible that a small number of these orthogroups are expressed at low levels in both species, but that due to the lack of replicates they would appear as differentially expressed. Thalictrum thalictroides has petaloid sepals that are comparatively larger and white, upright stamens with smaller anthers, and carpels with short styles and stigmas. Thalictrum hernandezii has smaller flowers with smaller, green sepals, pendant stamens with larger anthers on longer filaments, and carpels with longer styles and stigmas (Fig. 1). Based on these phenotypic differences, we predict that the transcriptome comparisons could yield genes involved in processes such as cell elongation or cell division (longer stamen filaments and styles), increased flexibility (in pendant filaments), epidermal cell elongation (extended stigmatic papillae in wind-pollinated flowers; Di Stilio et al., 2009), or increased pollinator grip in petaloid organs, among others. A subset of the genes emerging from our comparisons fit these criteria, thus serving as validation for the usefulness of our data sets as a resource (see below).
First, we searched for previously characterized candidate genes in the T. thalictroides draft genome and the three transcriptomes. A homolog of MIXTA-like2 (ThtMYBML2, FJ487606.1) with a role in papillate cells and stigmatic papillae was identified in both species, as a full transcript in the genome assembly (GenBank: KAF5204412.1) and in the T. hernandezii transcriptome (90% protein identity, TSA:GHXT01017115), and in two fragments in the T. thalictroides transcriptome assembly (TSA:GHXU01051721 and GHXU01036124). A second candidate for differences in morphology between species is the Thalictrum STYLE2.1 ortholog, which is involved in style length in tomato (Chen et al., 2007). A Solanum query (UniProt B6CG44) was used to retrieve sequences from the T. thalictroides genome (65% protein identity, GenBank: KAF5189420.1) and the transcriptomes (65% identity, TSA:GHXU01012872; 67% identity, TSA:GHXT01076288). The presence of these candidate genes at the three-way intersection of all data sets (Fig. 4A) suggests that regulatory changes in expression levels, rather than on/off switches, likely underlie the phenotype differences.
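Candidate retrieval of this kind is typically a protein BLAST of a known query against the new assemblies. A hedged sketch of that step is below; the database name and file paths are placeholders (not the study's actual files), and the E-value threshold is our assumption:

```python
import subprocess

# Assumes NCBI BLAST+ is installed and a protein database was built with makeblastdb
def search_candidates(query_fasta, database, evalue=1e-5):
    """Run blastp and return tab-separated hits (query, subject, % identity)."""
    result = subprocess.run(
        ["blastp", "-query", query_fasta, "-db", database,
         "-evalue", str(evalue), "-outfmt", "6 qseqid sseqid pident"],
        capture_output=True, text=True, check=True,
    )
    return [line.split("\t") for line in result.stdout.splitlines()]

# e.g., the tomato STYLE2.1 query against a hypothetical T. thalictroides protein set
hits = search_candidates("B6CG44.fasta", "tt_proteins")
print(hits[:5])
```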
Thalictrum hernandezii-specific orthogroups relevant to the wind-pollination syndrome (Fig. 4A, area "b") included orthologs of Arabidopsis FLAKY POLLEN 1 (FKP1), affecting pollen coat qualities (Ishiguro et al., 2010) relevant to pollen adaptations to wind pollination (less "sticky"); PECTIN METHYLESTERASE 34 (PME34), which is highly expressed in stamen filaments and relevant to long, flexible filaments and styles (Gou et al., 2012), and PME48, a promoter of pollen tube elongation and thus relevant for successful fertilization through long styles (Leroux et al., 2015); and STIGMA/STYLE CELL-CYCLE INHIBITOR 1 (SCI1; DePaoli et al., 2014), relevant to the extended styles and stigmas. Most members of the OVULE ABORTION (OVA) family (Berg et al., 2005), relevant to sex determination, were also specific to this andromonoecious species.
Conclusions
This study provides genomic and transcriptomic resources for Thalictrum, a representative of an early-diverging lineage of eudicots with distinct floral morphologies representing diversity in sexual and pollination systems. Genomic resources for T. thalictroides and transcriptomes for T. thalictroides and T. hernandezii (Ranunculaceae) generated here increased the known set of protein-coding genes for this genus to 33,624 (predicted from genome sequence, BioProject: PRJNA439007), from approximately 132 nuclear genes, 10,461 expressed sequence tags, and 130 population sets available in NCBI databases to date. The value of these resources has been exemplified in the identification of transposable elements, molecular markers, and putative candidate genes. Future potential uses of these resources include the identification of other genes of interest and their regulatory regions (draft genome), as well as primer design to contribute to ongoing phylogenetic and population-level studies in Thalictrum (e.g., Humphrey and Ossip-Drahos, 2018; Timerman and Barrett, 2018) and in other Ranunculids.

APPENDIX S11. Number and type of transcription-associated proteins (TAPs) and proportion of complete BUSCOs in the genomes of Thalictrum thalictroides (TTHA), Aquilegia coerulea (ACOE), Papaver somniferum (PSOM), Solanum lycopersicum (SLYC), and Arabidopsis thaliana (ATHA).
APPENDIX 1. Voucher and source information for Thalictrum species in this study.
The minimum modulus of Gaussian trigonometric polynomials
We prove that the minimum of the modulus of a random trigonometric polynomial with Gaussian coefficients, properly normalized, has a limiting exponential distribution.
Introduction
Let $n \ge 1$ and consider the random trigonometric polynomial given as
$$P(x) = P_n(x) := \frac{1}{\sqrt{2n+1}} \sum_{j=-n}^{n} \zeta_j e^{ijx},$$
where $i := \sqrt{-1}$ and $\{\zeta_j\}$ are standard independent complex Gaussian coefficients; that is, the density of the random variable $\zeta_j$ with respect to the Lebesgue measure in the complex plane is $\frac{1}{\pi} e^{-|z|^2}$. We note that with this choice of coefficients, the polynomial $P = P_n$ is a mean-zero (complex-valued) stationary Gaussian process on $\mathbb{T} = \mathbb{R}/2\pi\mathbb{Z}$ with covariance kernel given by
$$(1.1)\qquad r_n(x) := \mathbb{E}\left[P(0)\overline{P(x)}\right] = \frac{1}{2n+1} \sum_{j=-n}^{n} e^{-ijx} = \frac{\sin\left(\left(n+\frac{1}{2}\right)x\right)}{(2n+1)\sin(x/2)}.$$
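As an illustration (ours, not part of the paper), the scale of the minimum can be checked numerically. The sketch below samples $P_n$ on a fine grid under the normalization written above and records $n \cdot \min |P_n|$; the helper name and grid size are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_min_modulus(n, grid_size=8192):
    """Sample n * min_{x in T} |P_n(x)| for one realization of P_n."""
    j = np.arange(-n, n + 1)
    # Standard complex Gaussians: density (1/pi) * exp(-|z|^2), so E|zeta|^2 = 1
    zeta = (rng.standard_normal(2 * n + 1) + 1j * rng.standard_normal(2 * n + 1)) / np.sqrt(2)
    x = np.linspace(0, 2 * np.pi, grid_size, endpoint=False)
    # P_n(x) = (2n+1)^{-1/2} * sum_j zeta_j e^{ijx}, evaluated on the grid
    values = np.exp(1j * np.outer(x, j)) @ zeta / np.sqrt(2 * n + 1)
    return n * np.min(np.abs(values))

samples = [sample_min_modulus(n=100) for _ in range(200)]
# If the limit law is Exp(2*pi/3), the mean should approach 3/(2*pi) ~ 0.477
print("empirical mean of n * m_n:", np.mean(samples))
```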
In this paper we study the random variable
$$(1.2)\qquad m_n := \min_{x \in \mathbb{T}} |P_n(x)|.$$
Consider also the random polynomial $F(z) = F_n(z) := \sum_{j=0}^{n} \eta_j z^j$, where $\{\eta_j\}$ is an i.i.d. sequence of complex random variables. It is well known that if $\mathbb{E}\log(1+|\eta_0|) < \infty$ then the zeros of $F$ concentrate uniformly around the unit circle as the degree $n$ tends to infinity [5, 13] (see [6] for a more modern perspective). For finer results and additional references, see [12] for the Gaussian coefficients case (i.e., when $\eta_j = \zeta_j$) or [7] for the general case.
In view of these results, it is natural to expect that the random variable $m_n(F) := \min_{|z|=1} |F(z)|$ tends to zero as $n \to \infty$, and to study the order of magnitude at which this random variable decays. A particular case of this problem, when the coefficients $\eta_j$ are Rademacher random variables (that is, $\eta_j$ takes the values $\{\pm 1\}$ with equal probability), was posed already by Littlewood in [11]. In [9], Konyagin proved that in the Rademacher case, for all $\varepsilon > 0$,
$$\mathbb{P}\left(m_n(F) \le n^{-1/2-\varepsilon}\right) \to 0 \qquad \text{as } n \to \infty.$$
In a later paper, Konyagin and Schlag [8] proved that for either the Rademacher or the Gaussian case, there exists some absolute constant $C > 0$ such that
$$\limsup_{n\to\infty} \mathbb{P}\left(m_n(F) < \frac{\varepsilon}{\sqrt{n}}\right) \le C\varepsilon$$
for all $\varepsilon > 0$. Note that for the case of complex Gaussian coefficients, $m_n(F)$ is exactly $m_n$ (up to a normalization by $1/\sqrt{n}$) as defined in (1.2), so Theorem 1 resolves this question for the Gaussian case. The same method of proof will also work (after some minor modifications) in the case of real Gaussian coefficients; see Section 4 for more details.
1.2. Structure of the proof. The proof of Theorem 1 is based on the observation that locally (within intervals of length much smaller than 1/n), the polynomial $P_n$ is well approximated by its linear interpolation. This observation is a consequence of a priori bounds on the second derivative; see Lemma 2.4. In particular, by the "high-school" exercise in Section 1.3, the value and the location of local minima of $P_n$ can be well predicted by linear interpolation from an appropriate net of points. Crucially, this observation also implies that points which are candidates for being global minima are well separated; see Lemma 2.11.
Introduce a net of points $x_\alpha \in \mathbb{T}$, and set $X_\alpha$ to be a signed version of $n|P_n(x^*_\alpha)|$, where $x^*_\alpha$ is the location of the minimum of $P_n(\cdot)$ predicted by linear interpolation from $(P_n(x_\alpha), P'_n(x_\alpha))$. Introduce a "good" event $A_\alpha$ that is typical for global minima; see (2.2) for the precise definition. The global minimum of $n|P_n(\cdot)|$ is then well approximated by the point closest to 0 of the point process $M_n := \sum_\alpha \delta_{X_\alpha} \mathbb{1}_{A_\alpha}$. Theorem 1 is then a consequence of the fact that $M_n$ converges to a Poisson point process of intensity $\pi/3$ (the intensity is computed in Corollary 2.8). The Poisson convergence, in turn, is based on a characterization of Poisson processes due to Liggett [10], and uses a technique introduced by Biskup and Louidor in [2]: one exploits the fact that $P_n$ is a Gaussian process and that minima are well separated to deduce an invariance property of $M_\infty$ with respect to additive i.i.d. perturbations of the points $X_\alpha$. The details of this argument appear in Section 3.
1.3. A high-school exercise. Suppose we are given two (non-zero) planar vectors $A = (a_1, a_2)$ and $B = (b_1, b_2)$. We want to find the distance between the origin and the straight line $\{A + tB \mid t \in \mathbb{R}\}$. Set $F(t) := A + tB$ and let $t_{\min}$ be defined via the relation $|F(t_{\min})| = \min_{t\in\mathbb{R}} |F(t)|$. We denote by $\gamma$ the angle between $A$ and $B$. It is evident (see Figure 1) that
$$|F(t)|^2 = |A|^2 + 2t\langle A, B\rangle + t^2 |B|^2,$$
and so $t_{\min} = -\langle A, B\rangle/|B|^2$. Now, simple algebra yields that
$$|F(t_{\min})| = \frac{|\langle A, B^{\perp}\rangle|}{|B|},$$
where $B^{\perp}$ is an anti-clockwise rotation of the vector $B$ by $90^{\circ}$ (see again Figure 1). In complex notation, by considering $A = a_1 + ia_2$ and $B = b_1 + ib_2$, we have
$$t_{\min} = -\frac{\operatorname{Re}\left(A\overline{B}\right)}{|B|^2}, \qquad |F(t_{\min})| = \frac{\left|\operatorname{Im}\left(A\overline{B}\right)\right|}{|B|}.$$
We see that $\cos(\gamma)|A| = |A - F(t_{\min})|$ and that $F(t_{\min})$ is the projection of the vector $A$ onto the straight line perpendicular to $B$.
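A quick numerical sanity check of these formulas (our own illustration, with arbitrary vectors):

```python
import numpy as np

A = np.array([3.0, 1.0])
B = np.array([1.0, 2.0])

# Minimizer of |A + tB| and the resulting minimal distance
t_min = -A.dot(B) / B.dot(B)
closest = A + t_min * B

# Distance via the perpendicular-component formula |<A, B_perp>| / |B|
B_perp = np.array([-B[1], B[0]])  # anti-clockwise rotation by 90 degrees
dist_formula = abs(A.dot(B_perp)) / np.linalg.norm(B)

# Same computation in complex notation: |Im(A * conj(B))| / |B|
a, b = complex(*A), complex(*B)
dist_complex = abs((a * b.conjugate()).imag) / abs(b)

print(np.linalg.norm(closest), dist_formula, dist_complex)  # all three agree
```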
Notation. We write $f \ll g$ or $f = O(g)$ if there exists a constant $C > 0$ that does not depend on $n$ such that $f \le Cg$. We will also write $f = o(g)$ if $f/g \to 0$ as $n \to \infty$. We denote by $dm(\cdot)$ the Lebesgue measure on $\mathbb{C}$, and by $C_c(\mathbb{R})$ the space of continuous, compactly supported functions on $\mathbb{R}$. We write $N_{\mathbb{R}}(a, b)$ for the Gaussian law with mean $a$ and variance $b$. For random variables $X$ and $Y$, we write $X \overset{\text{law}}{=} Y$ if they are identically distributed. For a sequence of random variables $X_n$, we write $X_n \xrightarrow{d} X$ if $X_n$ converges in distribution to $X$ as $n \to \infty$. Finally, for $N \in \mathbb{N}$ even we write $[N] := \{-N/2, -N/2+1, \ldots, N/2-1\}$.
Point process of near-minima values
Fix some $\varepsilon > 0$ small (that will not depend on $n$; $\varepsilon = 1/100$ is good enough) and set $N := 2\lfloor n^{2-\varepsilon}/2 \rfloor$ so that $N$ is even. We consider $N$ equidistributed points on the unit circle given by
$$x_\alpha := \frac{2\pi\alpha}{N}, \qquad \alpha \in [N].$$
Denote the interval of length $2\pi/N$ centered at the point $x_\alpha$ by $I_\alpha$, namely,
$$I_\alpha := \left[x_\alpha - \frac{\pi}{N},\ x_\alpha + \frac{\pi}{N}\right).$$
The linear approximation for the polynomial at the point $x_\alpha$ is
$$(2.1)\qquad F_\alpha(x) := P(x_\alpha) + (x - x_\alpha)P'(x_\alpha).$$
Following the high-school exercise from Section 1.3, we set
$$Y_\alpha := x_\alpha - \frac{\operatorname{Re}\left(P(x_\alpha)\overline{P'(x_\alpha)}\right)}{|P'(x_\alpha)|^2}, \qquad Z_\alpha := n\,\frac{\operatorname{Im}\left(P(x_\alpha)\overline{P'(x_\alpha)}\right)}{|P'(x_\alpha)|}.$$
And so, $Z_\alpha$ is the minimal modulus (kept with a sign and scaled by $n$) of the linear approximation $F_\alpha$, and $Y_\alpha$ is the unique point such that $|F_\alpha(Y_\alpha)| = |Z_\alpha|/n$. The event that the interval $I_\alpha$ produces a candidate for the minimal value is given by
$$(2.2)\qquad A_\alpha := A'_\alpha \cap A''_\alpha, \qquad A'_\alpha := \{Y_\alpha \in I_\alpha\}, \qquad A''_\alpha := \left\{|P(x_\alpha)| \le n^{-1/2},\ n^{1-\varepsilon/2} \le |P'(x_\alpha)| \le C_0 n\sqrt{\log n}\right\}.$$
$C_0$ in the definition above is a large absolute constant which we specify in Lemma 2.6; $C_0 = 10$ is good enough. The event $A'_\alpha$ tells us that the interval $I_\alpha$ gives a candidate for the minimum, and the event $A''_\alpha$ describes the typical values of $(P(x_\alpha), P'(x_\alpha))$ for which the interval $I_\alpha$ gives a candidate. We can now define the point process on $\mathbb{R}$ of near-minima values as
$$(2.3)\qquad M_n := \sum_{\alpha \in [N]} \delta_{X_\alpha} \mathbb{1}_{A_\alpha},$$
where $X_\alpha := Z_\alpha$ on the event $A_\alpha$ (and $X_\alpha := \infty$ otherwise). Here and throughout, we consider $M_n$ as an element of the space of locally finite, integer-valued positive measures on $\mathbb{R}$, equipped with the local weak* topology generated by bounded, compactly supported functions. Thus, we never consider the points at infinity that are contributed by the events $A_\alpha^c$. Morally, the linear approximation captures the global minimum of the polynomial since the second derivative is small. In what follows we make this idea precise. For $\beta > 0$, define the event
$$G_\beta := \left\{\|P''\|_\infty \le n^{2+\beta}\right\}.$$
Let $\widetilde{x}$ be the point such that $|g(\widetilde{x})| = \|g\|_\infty$, and recall that $\|g'\|_\infty \le (2n+1)\|g\|_\infty$ by Bernstein's inequality. Provided that $|x - \widetilde{x}| \le 1/4n$, we have a comparable lower bound on $|g(x)|$. Combining this with (2.5) and Fubini, we get a bound on the exponential moment; we can then use the Markov inequality with $\theta = 2\sqrt{2}$ to conclude. We now turn to compute the probability that the interval $I_\alpha$ contributed a point to $M_n$.
Lemma 2.6. For every fixed interval $(a, b) \subset \mathbb{R}$,
$$\mathbb{P}\left(A_\alpha \cap \{Z_\alpha \in (a, b)\}\right) = \frac{\pi}{3} \cdot \frac{b-a}{N}\left(1 + o(1)\right).$$

Proof. The proof is a simple Gaussian computation. By stationarity, we may assume that $x_\alpha = 0$. Set $\sigma_n := \sqrt{n(n+1)/3}$, so that $(P(0), P'(0)/\sigma_n)$ are independent standard complex Gaussian random variables. Indeed, we note that
$$\mathbb{E}\left[P(0)\overline{P'(0)}\right] = \frac{1}{2n+1}\sum_{j=-n}^{n}(-ij) = 0, \qquad \mathbb{E}\left[|P'(0)|^2\right] = \frac{1}{2n+1}\sum_{j=-n}^{n} j^2 = \frac{n(n+1)}{3} = \sigma_n^2.$$
Moreover, a straightforward computation, where in the third equality we use the rotational symmetry of the Gaussian distribution, gives the probability of the event in question. To conclude the lemma, we show that the contribution of atypical values of $(P(0), P'(0))$ is negligible for all fixed $C_0 > 0$ and for $n$ large enough. We thus get an upper bound in which the first probability is $O(n^{-2})$ from the same Gaussian computation as done above, and the second probability is $o(n^{-2})$ for a large absolute constant $C_0$, because $P'(0)$ is a complex Gaussian variable of variance bounded by $n^2$; one sees that $C_0 = 10$ will do. Altogether, the lemma follows.

Corollary 2.8. For every fixed interval $(a, b) \subset \mathbb{R}$, $\mathbb{E}\, M_n((a, b)) \to \frac{\pi}{3}(b-a)$ as $n \to \infty$.

Corollary 2.9. The sequence of point processes $\{M_n\}$ is tight. That is, for any interval $I \subset \mathbb{R}$ and for all $n \ge n_0$, $\mathbb{E}\, M_n(I) \ll |I| + 1$.

Both Corollary 2.8 and Corollary 2.9 are immediate consequences of Lemma 2.6. We turn to prove that the extremal process $M_n$ captures the minimum modulus of our polynomial $P_n$.

Lemma 2.10. For every fixed $\tau > 0$,
$$\left|\mathbb{P}\left(m_n \ge \tau/n\right) - \mathbb{P}\left(M_n((-\tau, \tau)) = 0\right)\right| \ll \frac{1}{n^{1-3\varepsilon}}.$$

Proof. Clearly, the global minimum is attained in one of the intervals $I_\alpha$. Recall the definition of the linear approximation (2.1). For each $\alpha$ we use Taylor expansion and see that on the event $G_\varepsilon$, the polynomial is uniformly close to its linear approximation on $I_\alpha$. Hence, for large enough $n$, we have the corresponding upper bound. Following the same computation we did in the proof of Lemma 2.6, we see that the error terms are negligible, and a similar equality holds for the other term in the sum. Altogether, we see that
$$\left|\mathbb{P}\left(m_n \ge \tau/n\right) - \mathbb{P}\left(M_n((-\tau, \tau)) = 0\right)\right| \ll \frac{1}{n^{1-3\varepsilon}},$$
and we are done.
By Lemma 2.10, the limit distribution of $m_n$ (Theorem 1) will follow if we can show that $M_n$ converges in distribution (as a point process) to a Poisson point process with the desired intensity. This is established in what follows. We first prove that the points in the extremal process come from well separated intervals.
To prove Lemma 2.11 we will need two claims. We first prove the lemma assuming both claims, and then turn to prove each claim separately. Denote by Proof of Lemma 2.11. Applying the union bound and stationarity we see that where the last equality is due to Claims 2.13 and 2.14.
Proof of Claim 2.13. Fix some $\beta < \varepsilon/2$, and consider first the term $\alpha = 1$ in the sum $S_I$. Observe the pointwise estimate valid for all $x \in I_1$, which yields that on the event $G_\beta$, for all $x \in I_1$ we have the corresponding bound. Following the same computation as done in the proof of Lemma 2.6, we combine (2.15) and (2.16) to conclude that, on the event in question, for all $x \in I_1$ the event $A_1$ does not hold. Altogether, we apply Lemma 2.4 and obtain the required bound on the first term. The treatment of the rest of the sum $S_I$ is similar, only that we do not have to impose the extra separation within the interval. We have a pointwise bound for all $x \in I_\alpha$. Furthermore, on the event $A_0$, we have the corresponding lower bound for all $x \in I_\alpha$. Recalling that $\beta < \varepsilon/2$, we combine (2.17) and (2.18) to see that on the event $G_\beta$ (here we use that $n^{-1-\varepsilon} \gg |x_{\alpha-1}| \ge 2\pi/N$) the event $A_\alpha$ does not hold. We thus obtain the claimed bound on $S_I$.
Using Taylor's approximation and some algebra it is evident that for n −1−2ε ≤ |x α | ≤ ε/n we have Using (2.20), we see that which together with (2.19) implies that Furthermore, for |x α | ε/n, the density of V α is uniformly bounded from above by a constant C = C ε . Combining this observation with (2.21), we can bound the sum S II (recall (2.12)) as
Liggett's invariance and proof of Theorem 1
In this section, we prove the convergence in distribution of $M_n$ to a Poisson process of constant intensity, following a characterization of the latter due to Liggett. In this, we follow a method developed by Biskup and Louidor in their study of the two-dimensional Gaussian free field [2]. For more background, see the lecture notes [1].
The first step is to rewrite $P_n$ as a sum of two independent polynomials. Denote by $Q = Q_n$ an independent copy of the random polynomial $P = P_n$, and consider the perturbed random polynomial $\widetilde{P}$ (of degree $n$) defined in (3.1). Let $\widetilde{M}_n$ be the extremal process (2.3) that corresponds to $\widetilde{P}$, and let $M_n$ be the extremal process that corresponds to the polynomial $P = P_n$. The goal of this section is to study the relation between these two point processes. Let $\widetilde{X}_\alpha$ and $\widetilde{Y}_\alpha$ be the variables analogous to $X_\alpha$ and $Y_\alpha$ which correspond to the polynomial $\widetilde{P}$ instead of $P$; see (2.2). By Corollary 2.9 (tightness of the sequence $\{M_n\}$), we can find a subsequence $\{n_k\} \subset \mathbb{N}$ so that both $M_{n_k}$ and $\widetilde{M}_{n_k}$ converge in distribution as $k \to \infty$. We denote the law of the limiting point process $M_\infty$ by $\eta$; then by (3.2) it is evident that $\widetilde{M}_\infty$ also has law $\eta$.
The following lemma is the key element in extracting the Poisson limit, which is a consequence of $\eta$ having an invariance property. As usual, for $f \in C_c(\mathbb{R})$ we denote the linear statistics of a point process $W$ by
$$\langle W, f\rangle := \int f \, dW = \sum_{x \in \operatorname{supp} W} f(x).$$

Lemma 3.4. Let $\eta$ be given as above and let $f \in C_c(\mathbb{R})$ be a non-negative function. Then
$$(3.5)\qquad \mathbb{E}\left[e^{-\langle \eta, f\rangle}\right] = \mathbb{E}\left[e^{-\langle \eta, f_G\rangle}\right], \qquad f_G(x) := -\log \mathbb{E}_G\left[e^{-f(x+G)}\right],$$
and $G \sim N_{\mathbb{R}}(0, 1/2)$. Here $\mathbb{E}_G$ denotes the expectation with respect to the Gaussian variable $G$.
Before proving Lemma 3.4, we will need two simple results. Lemma 3.6 gives a quantitative approximation of "almost-independent" normal variables by truly independent normal variables. Lemma 3.7 tells us that with high probability, the perturbation (3.1) did not introduce any new points into the extremal process, nor did it delete the points that were present before the perturbation.
Proof. We turn to bound $\mathbb{P}\left(|X_\alpha| \le K,\ |\widetilde{X}_\alpha| > 2K\right)$ for each $\alpha$, and assume by stationarity that $x_\alpha = 0$. To bound the probabilities $\mathbb{P}(E_i)$, $i = 1, 2$, we exploit the relation between the polynomial and its small perturbation (3.1). By Taylor expanding the square root we obtain the estimates (3.8); using the fact that $|A| \le n^{-1/2}$ and $|B| \in [n^{1-\varepsilon/2}, C_0 n\sqrt{\log n}]$ on the event $A_\alpha$, we see that, on the event $A_\alpha \cap \{|Q(0)| \le n^{\varepsilon},\ |Q'(0)| \le n^{1+\varepsilon}\}$, we have $|\widetilde{B}| = |B|(1 + o(1))$, and thus the bound (3.9) follows. To bound $\mathbb{P}(E_2)$, we use (3.8) once more and see that on the event $E_2$ the perturbation must be atypically large, which implies the bound (3.10). Combining the bounds (3.9) and (3.10) together with the union bound completes the proof.

Proof of Lemma 3.4. Fix $f \in C_c(\mathbb{R})$, and assume that the support of $f$ is strictly contained in $(-K, K)$ for some $K > 0$ (large enough). Let $\mathcal{F}_P$ denote the $\sigma$-algebra generated by the coefficients $\{\zeta_j\}_{j=-n}^{n}$ of the polynomial $P$. We have the conditional representation (3.11), where here $N_k = 2\lfloor n_k^{2-\varepsilon}/2 \rfloor$. From the estimates (3.8), we see that outside of an event of $o(1)$ probability, the perturbed points are close to the original ones for all $\alpha \in [N]$, and the error term is uniform in $\alpha$. Denote by $G_\alpha := \operatorname{Re}\left(Q(x_\alpha)P'(x_\alpha)\right)/|P'(x_\alpha)|$. Then, conditioned on $\mathcal{F}_P$, the random variables $\{G_\alpha\}$ are jointly normal, and each $G_\alpha$ has law $N_{\mathbb{R}}(0, 1/2)$. Using Lemmas 2.11 and 3.7, we obtain that outside of an event of probability $o(1)$, the variables $G_\alpha$, $G_{\alpha'}$ are nearly independent for all $\alpha, \alpha' \in \{\alpha \in [N] : |X_\alpha| \le 2K\}$, where $r_n$ is again as in (1.1). Putting everything together, we use Lemma 3.6 and the uniform continuity of $f$ to conclude, where the $o(1)$ term may be random (measurable with respect to $\mathcal{F}_P$, but still of order $o(1)$ with probability approaching 1 as $n \to \infty$). Plugging into (3.11) and using (3.3), we get the desired identity. Finally, we extract the Poisson limit from relation (3.5), and with that complete the proof of Theorem 1.
Proposition 3.13. Suppose that η is a point process on R such that (3.5) holds for all non-negative f ∈ C b (R). Then η is a Poisson point process whose intensity measure µ is a constant multiple of the Lebesgue measure.
Proof. From (3.5) we know that the law of $\eta$ is an invariant measure for the transformation that consists of adding to each point in the support of $\eta$ an independent mean-zero Gaussian variable of variance 1/2. Therefore, by [10, Theorem 4.11], the law of $\eta$ is a mixture of Poisson processes whose intensities $\mu$ satisfy the relation
$$(3.14)\qquad \mu \star N_{\mathbb{R}}(0, 1/2) = \mu,$$
where $\star$ denotes the convolution of two measures. By a result of Deny [4, Theorem 3'] (based on Choquet-Deny [3]), we know that any solution of (3.14) is of the form
$$\mu(dx) = \left(\int e_\rho(x)\, d\nu(\rho)\right) dx,$$
where $\nu$ is a measure supported on those exponential functions $e_\rho(x) := e^{-\rho x}$ which satisfy $e_\rho \star N_{\mathbb{R}}(0, 1/2) = e_\rho$. A straightforward computation shows that
$$e_\rho \star N_{\mathbb{R}}(0, 1/2) = e^{\rho^2/4}\, e_\rho,$$
which in turn implies that $\rho = 0$. Thus, we conclude that the measure $\nu$ is a constant multiple of a delta point mass at $\rho = 0$. That is, $\mu$ is some multiple of the Lebesgue measure on $\mathbb{R}$. Since convex combinations of Poisson processes with constant intensity yield a Poisson process of some (constant) intensity, we conclude that $\eta$ is a Poisson process with a constant intensity, which proves the proposition.
Proof of Theorem 1. By Proposition 3.13, we know that $\{M_n\}$ converges along a subsequence to a Poisson process with intensity that is a multiple of the Lebesgue measure. By Corollary 2.8, we know that the limit of the intensity of $M_n$ is $\pi/3$ times the Lebesgue measure. Since the limiting process does not depend on the subsequence, we use the tightness once more and conclude that $M_n$ converges to a Poisson point process with this given intensity. It remains to apply Lemma 2.10 and see that
$$\lim_{n\to\infty} \mathbb{P}\left(m_n \ge \frac{\tau}{n}\right) = \lim_{n\to\infty} \mathbb{P}\left(M_n((-\tau, \tau)) = 0\right) = \exp\left(-\frac{2\pi}{3}\tau\right).$$
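For illustration only (not part of the paper): continuing the simulation sketch from Section 1, one can compare the empirical survival function of $n \cdot m_n$ with the limiting exponential tail $\exp(-(2\pi/3)\tau)$:

```python
import numpy as np

# `samples` as produced by sample_min_modulus in the earlier sketch
def empirical_vs_limit(samples, taus=(0.1, 0.3, 0.5, 1.0)):
    samples = np.asarray(samples)
    for tau in taus:
        empirical = np.mean(samples >= tau)        # P(n * m_n >= tau), estimated
        limit = np.exp(-(2 * np.pi / 3) * tau)     # limiting exponential tail
        print(f"tau={tau:4.2f}  empirical={empirical:.3f}  limit={limit:.3f}")
```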
Real Gaussian coefficients
In this section we briefly comment on the analogue of Theorem 1 in the case of real Gaussian coefficients. Let $\{X_j\}$ be an i.i.d. sequence of $N_{\mathbb{R}}(0, 1)$ random variables and consider the random trigonometric polynomial given by
$$T(x) = T_n(x) := \frac{1}{\sqrt{2n+1}} \sum_{j=-n}^{n} X_j e^{ijx}.$$
As before, we denote by $m_n(T) = \min_{x\in\mathbb{T}} |T_n(x)|$. The proof of Theorem 2 is almost identical to that of Theorem 1, only the computations are more cumbersome. The reason for this complication is that the polynomial $T_n$ is no longer stationary (as opposed to $P_n$). Still, $T_n$ is a complex-valued Gaussian process on $\mathbb{T}$, so computations are possible.
The correlations of the real and imaginary parts of $T = T_n$ are given by
$$(4.2)\qquad \mathbb{E}\left[\operatorname{Re}T(x)\operatorname{Re}T(y)\right] = \tfrac{1}{2}\left(r_n(x-y) + r_n(x+y)\right), \qquad \mathbb{E}\left[\operatorname{Im}T(x)\operatorname{Im}T(y)\right] = \tfrac{1}{2}\left(r_n(x-y) - r_n(x+y)\right),$$
where $r_n$ is given by (1.1) and $x, y \in \mathbb{T}$. Recall the definition of the event that the interval $I_\alpha$ produced a candidate for a minimal value (2.2). It follows from (4.2) that as long as $|x_\alpha| \in [n^{-1+\varepsilon}, \pi - n^{-1+\varepsilon}]$, the term $r_n(x+y)$ is negligible. That is, the random variable $T(x_\alpha)$ scales as a standard complex Gaussian, and computations similar to those in Lemma 2.6 can be carried out with no problems. Still, we need to show that with probability tending to 1 the minimum of $T$ does not occur inside $\{|x| \le n^{-1+\varepsilon}\} \cup \{|x - \pi| \le n^{-1+\varepsilon}\}$; this we do in Lemma 4.3 below.
In proving that the points of the extremal process are obtained from well separated intervals (i.e., the result analogous to Lemma 2.11), we remark that proving Claim 2.13 for the real-coefficients case is straightforward. To prove Claim 2.14 for the real case, one can use (4.2) while noticing that
$$r_n(x+y) \ll r_n(2x) + |x-y|\,\left|r'_n(2x)\right|,$$
provided that $|x| \ll n^{-1+\varepsilon}$ and that $n^{-1-2\varepsilon} \le |x-y| \ll n^{-\varepsilon}$. The proof of Liggett's invariance (Section 3) also translates to the case of real coefficients with no problems. It remains to prove that with high probability the intervals $\{|x| \le n^{-1+\varepsilon}\} \cup \{|x - \pi| \le n^{-1+\varepsilon}\}$ will not contribute a point to the extremal process that corresponds to $T$. Since $T(x)$ and $T(x + \pi)$ have the same distribution, it suffices to consider the interval centered around 0. Proof. By repeating the exact same argument as in Lemma 2.4, we can show that for any $\varepsilon > 0$,
$$\mathbb{P}\left(\|T'\|_\infty \ge n^{1+\varepsilon}\right) \ll \exp\left(-n^{\varepsilon}\right).$$
The role of reading comprehension in mathematical modelling: improving the construction of a real-world model and interest in Germany and Taiwan
To solve mathematical modelling problems, students must translate real-world situations, which are typically presented in text form, into mathematical models. To complete the translation process, the problem-solver must first understand the real-world situation. Therefore, reading comprehension can be considered an essential part of solving modelling problems, and fostering reading comprehension might lead to better modelling competence. Further, ease of comprehension and involvement have been found to increase interest in the learning material, and thus, improving reading comprehension might also increase interest in modelling. The aims of this study were to (a) determine whether providing students with reading comprehension prompts would improve the modelling sub-competencies needed to construct a model of the real-world situation and their interest in modelling and (b) analyze the hypothesized effects in two different educational environments (Germany and Taiwan). We conducted an experimental study of 495 ninth graders (201 German and 294 Taiwanese students). The results unexpectedly revealed that providing reading comprehension prompts did not affect the construction of a real-world model. Further, providing reading comprehension prompts improved students’ situational interest. The effects of providing reading comprehension prompts on the construction of a real-world model were similar in Germany and Taiwan. Students’ interest in modelling improved more in Germany. An in-depth quantitative analysis of students’ responses to reading prompts, their solutions, and their interest in the experimental group confirmed the positive relation between reading comprehension and modelling and indicated that the reading comprehension prompts were not sufficient for improving reading comprehension. Implications for future research are discussed.
Introduction
Mathematical modelling competence is an important part of mathematical literacy. However, research on modelling has demonstrated that students encounter various difficulties when solving modelling problems (Blum, 2015). Even at the beginning of the solution process, learners often struggle to understand the real-world situation and to structure and simplify the given information (Blum, 2015; Kintsch & Greeno, 1985; Krawitz et al., 2017; Wijaya et al., 2014). In order to overcome these barriers, they need modelling sub-competencies to construct a structured and simplified mental representation of the real-world situation, here called the real-world model (Kaiser & Brand, 2015). Consequently, teaching methods for modelling problems have often included elements that are aimed at improving the construction of a real-world model (Greefrath et al., 2018; Kaiser & Brand, 2015; Schukajlow et al., 2012). Reading comprehension plays a decisive role in the construction of a real-world model (Leiss et al., 2010) because the modelling problems encountered in the classroom are often presented in text form. Also, the process of solving modelling problems in everyday life often includes gathering and interpreting information presented in text form (e.g., newspapers, timetables, books). Hence, reading comprehension is often required to understand the real-world situation, and consequently, interventions that address students' reading comprehension seem to be promising for fostering the ability to construct a real-world model and thereby improving one's overall modelling competence. However, there has not been much research that has focused on the effects of reading interventions on modelling competence and modelling sub-competencies. In particular, there has been a lack of experimental interventional studies in the field.
Further, students' motivation plays a decisive role in the learning process in mathematics (Middleton & Spanias, 1999;Schukajlow et al., 2017). One important motivational variable is students' interest in the learning material. Interest has been found to enhance students' learning, to predict academic decisions such as students' course choices in high school, and to have a positive effect on mathematics achievement (Heinze et al., 2005;Hidi & Harackiewicz, 2000). Hence, it is important to investigate students' interest in modelling and to consider interventions for improving interest. One promising approach for triggering situational interest that we examined in our study involves facilitating the reading comprehension of the given texts or problems (Schraw et al., 1995;Wade et al., 1999).
The present article analyzes the effects of a reading intervention on students' modelling sub-competencies to construct a real-world model and on interest in two different educational environments. Prior research identified various perspectives on modelling, which differ in the aims they pursue with modelling and which can be related to different cultural backgrounds (Kaiser & Sriraman, 2006). Also, the value assigned to modelling has been found to differ from country to country. We selected Germany and Taiwan because the educational environments in Germany and Taiwan are very different from each other. We targeted different educational environments in this study in order to determine whether the findings held for the different educational contexts that the students had been exposed to.
Following these considerations, the aims of the present study were (a) to test the effects of a reading intervention on the construction of a real-world model and on interest in modelling and (b) to examine whether the effects of the reading intervention on the construction of a realworld model and interest in modelling were similar in the two educational environments.
Further, we conducted an in-depth analysis of students who participated in the reading intervention. Thereby, we focused on students' reading comprehension ability and examined its relations to the construction of a real-world model and students' interest in modelling.
2 Modelling competence, interest in modelling, and reading comprehension in Germany and Taiwan
Modelling competence
The core of mathematical modelling is the translation of a real-world problem into a mathematical model with the aim of solving the problem. The process of modelling is typically depicted as a cyclic process that moves from the real world to the mathematical world and back to the real world, passing through different phases that are required to solve the problem (see, e.g., Blum & Leiss, 2007; Galbraith & Stillman, 2006; Verschaffel et al., 2000). Demonstrating the willingness and ability to solve real-world problems through mathematical modelling is referred to as mathematical modelling competence (Kaiser, 2007). More specifically, we refer to an analytic understanding of modelling competence that is based on different sub-competencies (a description of different modelling strands can be found in Kaiser & Brand, 2015). Modelling sub-competencies include, among metacognitive and social competencies, competencies that are related to the different phases of the modelling cycle (Kaiser, 2007; Maaß, 2006; Niss et al., 2007), namely: the competencies to (1) understand the real-world situation and construct an initial mental representation of the real-world situation (called the situation model); (2) structure and simplify the situation model, the resulting mental representation of which is referred to as the real-world model; (3) mathematize the real-world model, resulting in a mathematical model; (4) apply mathematical procedures to find a mathematical result; (5) interpret the mathematical result at the end of the solution process; and (6) validate the result with regard to the real-world situation.
The modelling sub-competencies needed to construct a real-world model
In the present article, we focus on the modelling sub-competencies needed to construct a real-world model. These sub-competencies are further explained and illustrated using the example of the Parachuting modelling problem presented in Fig. 1.
First, students have to construct a situation model. Therefore, they have to understand the information, presented here in the form of text that is accompanied by a table and a picture.

Fig. 1 The parachuting modelling problem (adapted from Schukajlow & Krug, 2014b, p. 500)

Second, the learner has to transform his or her situation model into a real-world model (Fig. 2). This means the learner has to simplify the situation model by making an assumption about the wind speed (e.g., assume that there is a strong wind blowing). He or she has to structure the information by separating important from unimportant information (e.g., identifying the important information that the horizontal shift per each thousand meters of descent in strong wind conditions is 340 m during free fall and 3060 m while gliding) and construct relationships between the pieces of important information (e.g., connecting the information that the parachutist free falls about 3000 m to the information that the horizontal shift per each thousand meters is 340 m).
Research on modelling has shown that solving modelling problems is demanding, and students often struggle at the very beginning of the modelling process when trying to construct a real-world model (Blum, 2015; Kintsch & Greeno, 1985; Krawitz et al., 2017; Leiss et al., 2010; Wijaya et al., 2014). In the study by Wijaya et al. (2014), more than one third of students' errors in solving modelling problems were related to the construction of a real-world model. These difficulties emphasize the need for research to examine interventions that can improve students' modelling sub-competencies to construct a real-world model while solving modelling problems.
Students' interest in modelling
Interest is considered a person-object relationship that refers to both the psychological state of attention and affect toward a particular topic (situational interest) and an enduring predisposition to reengage with the topic over time (individual interest) (Hidi & Renninger, 2006). Interest is a domain- or content-specific motivational variable that combines affective and cognitive qualities (Harackiewicz et al., 2016; Schiefele et al., 1992). Theories of the development of interest propose that students pass through several phases as their interest develops: from unstable and triggered situational interest to stable and well-developed individual interest (Hidi & Renninger, 2006). If a person repeatedly experiences situational interest with respect to a particular topic, he or she may also develop individual interest in the topic over time. Hence, the environment can contribute to the development of individual interest by stimulating situational interest and building on prior individual interest.

Fig. 2 Illustration of a real-world model for the parachuting problem under the assumption that a strong wind is blowing

For mathematical modelling, this means that if learners repeatedly experience situational interest when solving modelling problems, they are likely to develop individual interest in modelling. For modelling problems, different aspects can be sources of interest, namely, students may be interested in the process of modelling, the content, or the intramathematical problem. As affect and modelling competence are related to each other (Chamberlin, 2019; Schukajlow & Krug, 2014a), enhancing interest in modelling is also beneficial for students' modelling competence. The important role that affect plays in modelling has been acknowledged in modelling research, and predictors such as authenticity, meaningfulness, and contexts have been discussed (Di Martino, 2019; Goldin, 2019). Several studies have addressed the question of how interest in solving mathematical problems can be enhanced. Building connections to reality is one approach that can be used to increase students' interest in mathematics because problem contexts can be a source of students' interest in working on the problems. However, the study conducted by Rellensmann and Schukajlow (2017) showed that students do not perceive problems connected to reality per se as more interesting than intramathematical problems. In this line of research, studies have investigated whether personalizing the problems increases students' interest in working with the problems (Bates & Wiest, 2004; Høgheim & Reber, 2015). The results have shown that tailoring the context to students' personal interest has benefits for students' situational interest in working with the problems. However, personalized problems are often constructed with the help of digital tools and therefore are not easy to implement in classrooms. Our approach focuses on text-based interest (i.e., situational interest that comes from reading a text). As the problem is described in a textual format, we considered factors that are claimed to trigger text-based interest, such as meaningfulness, ease of comprehension, involvement, text cohesion, novelty, and emotiveness (Mitchell, 1993; Palmer, 2009; Schraw et al., 1995). Empirical results from factor analysis and correlational analysis have supported the importance of these sources of students' situational interest (Mitchell, 1993; Palmer, 2009; Schraw et al., 1995).
For the present study, we consider involvement and ease of comprehension to be particularly important. Involvement refers to the extent to which students feel they are active participants. Ease of comprehension refers to how easy it is to understand a text. We discuss both sources in the context of reading comprehension and modelling in the next section.
Reading comprehension and its impact on constructing a real-world model and on interest in modelling
Reading comprehension is defined as the active process of building an adequate mental representation of a text (Durkin, 1993; Kintsch, 1986). Texts in mathematics often include discontinuous elements such as tables, figures, or formulae. If the text is accompanied by pictures, an integrated mental representation is built on the basis of the text and pictures (Schnotz & Bannert, 2003). Reading comprehension can be claimed to be one of the sub-competencies needed to understand the real-world situation because the situation is often presented in a textual format in the classroom or in everyday contexts involving textual information, such as newspaper articles, product information, reports, and many others. Hence, reading comprehension can also be considered a sub-competency that is necessary for constructing a real-world model because structuring and simplifying the given information directly depend on an adequate understanding. The importance of reading comprehension for modelling has been acknowledged in research on modelling (Leiss et al., 2010; Leiss et al., 2019), and theoretical descriptions of the modelling process have been built on research on text comprehension (Kintsch & Greeno, 1985). Empirical findings have supported the positive relation between reading comprehension and modelling competence (Krawitz et al., 2017; Leiss et al., 2010; Leiss et al., 2019; Vilenius-Tuohimaa et al., 2008). Hence, interventions that address students' reading comprehension seem to offer a promising approach for fostering modelling sub-competencies to construct a real-world model and thereby improving overall modelling competence. However, hardly any intervention studies have tried to enhance modelling competence by fostering reading comprehension, and the few existing ones have not been successful (Hagena et al., 2017; Krawitz et al., 2017). Thus, further investigations are necessary to identify the conditions under which reading interventions are beneficial for modelling. One approach for enhancing reading comprehension is to present questions that address important pieces of information and their relations given in the text, referred to here as reading comprehension prompts. The impact of questions on reading comprehension is widely acknowledged in reading research, and answering questions is considered an important strategy for boosting reading comprehension. In particular, reading research has shown that reading comprehension prompts can guide readers' attention to important aspects of the text (Ge & Land, 2003) and increase their engagement with the text because the contents of the text are more actively processed when the reader has to answer questions about the contents (National Reading Panel, 2000). However, questions are not beneficial per se. The impact strongly depends on factors such as the type of question (Cerdán et al., 2009) and readers' reading proficiency (van den Broek et al., 2001). Working on high-level questions was found to be more beneficial for reading comprehension than working on low-level questions (Cerdán et al., 2009). The results from the study by van den Broek et al. (2001) suggest that more proficient readers benefit more from questions, whereas less proficient readers can suffer from having to answer questions because of an increase in cognitive demand from having to think about them.
However, these findings were based on scientific or narrative texts, and one question that remains unanswered is whether reading comprehension prompts are also beneficial for the text included in modelling problems. Research on modelling has provided initial indications that reading comprehension prompts might foster the construction of a real-world model and thereby enhance modelling competence. In the study conducted by Schukajlow et al. (2015) and similarly also in the study by Hankeln and Greefrath (2020), students received a scaffolding instrument called a solution plan to guide their modelling processes. The solution plan consisted of prompts referring to the different phases of the modelling cycle, including prompts to trigger reading comprehension ("Read the text precisely! Imagine the situation clearly!"). The results showed that using the solution plan was beneficial for students' modelling competence, but the specific role of reading comprehension prompts could not be derived from the data as it was not clear which prompts were responsible for the positive effect on modelling competence.
Further, reading comprehension prompts might increase students' interest in modelling because they address two important sources of situational interest: involvement and ease of comprehension. First, reading comprehension prompts might trigger involvement because, by working with the prompts, students become more actively involved in the reading process. Second, reading comprehension prompts might affect ease of comprehension because they are intended to facilitate reading comprehension, and reading research has indicated that if texts become easier to understand, they are perceived as more interesting (Schraw et al., 1995; Wade et al., 1999). Modelling problems often place high demands on students' reading comprehension, and thus, their ease of comprehension may be compromised. This is a potential reason for the unexpected findings that students perceive modelling problems as similarly interesting to (Schukajlow et al., 2012), or even less interesting than (Rellensmann & Schukajlow, 2017), problems with no connection to reality. Consequently, we expected that reading comprehension prompts would increase students' interest in modelling.
Educational environments in Germany and Taiwan
One of the aims of the present study was to investigate the role of the educational environment the students had been exposed to. Comparing students from educational environments that are very different, such as Germany and Taiwan, can provide indications for the validity of the theoretically assumed relationships between reading comprehension, modelling competence, and interest in modelling. This section discusses differences in the educational environments of the two countries that we analyzed in the present study.
Students from East Asian countries, among them Taiwanese students, have been found to perform extremely well in international comparative studies of mathematics achievement such as TIMSS and PISA. However, there are some indications that modelling and applications play only minor roles in Taiwanese compared with German mathematics education. In Germany, modelling competence is embedded in the curriculum as one of six compulsory competencies (KMK, 2004), whereas it is not explicitly mentioned in the Taiwanese curriculum (Ministry of Education in Taiwan, 2003). Further, Taiwanese textbooks seem to focus on intramathematical tasks, as Taiwanese textbooks were found to contain the lowest proportion of real-world problems among geometry problems when compared with textbooks from Singapore, Finland, and the USA. This result is reflected in students' reports, as students from Taiwan reported that they encountered real-world problems in their math classes less often than German students did (OECD, 2014). The few comparative studies that have analyzed students' achievement in modelling have pointed out that Western students are more experienced in solving modelling problems. Chang et al. (2020) showed that German students had higher modelling competence than Taiwanese students when the students from the two countries were on the same level of intramathematical competence. This difference was particularly remarkable for students with a low level of intramathematical competence.
Further, German and Taiwanese students' interest in modelling might also differ. Because they have less experience with modelling problems, Taiwanese students may find it more interesting to work on modelling problems than German students because novelty is an important source of situational interest (Palmer, 2009). However, more experience with modelling problems could also lead to greater interest in working on the problems because students might perceive the problems as more meaningful, which is also known as a source of situational interest (Mitchell, 1993). Little is known about differences in students' interest in modelling problems in different countries. As interest in modelling is related to interest in mathematics, the first indications of students' interest in modelling in Germany and Taiwan can be derived from the results of PISA 2012, where students' interest in mathematics was assessed. The results indicate that German students have a higher interest in mathematics compared with Taiwanese students (OECD, 2013), which is surprising given the much higher mathematical performance of Taiwanese learners. As cognitive and affective theories such as theories of modelling competence and theories of interest do not depend on educational environments, we expected that reading comprehension prompts would have similar effects on the construction of a real-world model and interest in modelling in Germany and Taiwan.
Hypotheses and path-analytical model
On the basis of theoretical considerations and the prior empirical findings described above, we developed the following hypotheses:

Hypothesis 1 (reading comprehension prompts). The presentation of reading comprehension prompts will positively affect the modelling sub-competencies needed to construct a real-world model and students' interest in solving modelling problems:

a) Presenting reading comprehension prompts will lead to higher scores on the sub-competencies needed to construct a real-world model.
b) Presenting reading comprehension prompts will lead to a higher interest in solving modelling problems.
Hypothesis 2 (educational environment). The effects of presenting reading comprehension prompts on the construction of a real-world model and on interest will be similar in both educational environments:

a) The effect of presenting reading comprehension prompts on the construction of a real-world model will be similar in the two educational environments (Germany and Taiwan).
b) The effect of presenting reading comprehension prompts on interest in modelling will be similar in the two educational environments (Germany and Taiwan).
The hypothesized path model (Fig. 3) links reading comprehension prompts, which were operationalized by presenting questions (reading comprehension prompts group vs. control group), with the outcome measures (construction of a real-world model and interest in modelling) while controlling for intramathematical competence. Educational environment (Germany vs. Taiwan) was included as a moderator of the effects of reading comprehension prompts on the outcome variables. The paths in the model represent either the direct effects of one construct on the other construct (e.g., construction of a real-world model) or the moderating effects of a construct (educational environment) on the direct effects. The two paths from the control variable (intramathematical competence) to the outcomes illustrate that the investigated effects were controlled for intramathematical performance.
Sample and procedure
The present sample involved 495 ninth graders, including 201 German students from nine classes from high-track schools (German gymnasium; 50% female, mean age = 14.96 years) and 294 Taiwanese students from 12 classes in which all performance levels were taught (52% female, mean age = 14.89 years). Prior studies (e.g., Chang et al., 2020; OECD, 2019) have demonstrated that Taiwanese students have much higher intramathematical competence than German students. In order to balance these differences and improve the comparability of the groups regarding this important background variable, we collected data from German high-track schools and regular Taiwanese schools. We further compared the intramathematical competence of the two groups to check whether the sampling strategy led to the intended result (see Section 4).
In each of the 21 classes, students were randomly assigned to an experimental condition (reading comprehension prompts group; RPG) or a control condition (control group; CG). Students in both conditions worked on a paper-and-pencil modelling test. In the RPG, reading comprehension prompts were used to trigger reading comprehension. Accordingly, students in the RPG received reading comprehension prompts that referred to the textual descriptions of the real-world situations (called situational descriptions). Students first read the situational description, then worked on two corresponding reading comprehension prompts, and subsequently worked on two modelling problems. Two sample pages from the RPG test booklet from the context "Parachuting" are presented in the Appendix (Fig. 8). This procedure took 60 min. After completing all tasks, students worked on the intramathematical problems for 20 min. Students in the CG followed the same procedure, but they did not receive any reading comprehension prompts.
Reading comprehension prompts were operationalized as questions that referred to the information presented in the situational descriptions. The modelling problems were presented on a separate page after the reading comprehension prompts in order to reduce the risk that students would work on the modelling problems before answering the reading comprehension prompts. Responding to the reading comprehension prompts was aimed at helping students focus on important objects and on important relations between the elements given in the situational description. Consequently, one of the two questions for each situational description targeted important information, and the other question targeted relations between the given pieces of information. For example, for the parachuting situation, the situational description was the textual description of the parachutist's jump, including how he or she was carried off target by the wind (text and table presented in Fig. 1). The first reading comprehension prompt was "What is the horizontal shift per each thousand meters of descent while gliding when a parachutist is carried by a light wind?" (correct answer: "540 m"). This question referred to information provided in the table. The second reading comprehension prompt was "What is the horizontal shift per each thousand meters of descent when a parachutist is carried by a strong wind at about an altitude of 2,500 meters?" (correct answer: "340 m"). This question addressed the relations between and the interpretation of the given pieces of information. To respond, learners have to use the information given in the text to interpret the altitude of 2,500 m as the free fall phase and then use the table to read off the horizontal shift for strong wind conditions during free fall.
The reading comprehension prompts were tested in a pilot study (Krawitz et al., 2017) and subsequently revised with a focus on the benefits that answering questions is theoretically expected to have for reading comprehension. These benefits include addressing important pieces of information and their relations given in the text and thereby increasing students' engagement with the text (see Section 2.4). In the reading comprehension prompts, we decided to also address information that is not needed to solve the modelling problems because the aim of the prompts was to enhance the understanding of the situation and not to provide clues about which data should be used to solve the modelling problems.
The modelling sub-competencies needed to construct a real-world model and intramathematical competence
The modelling test included eight modelling problems that referred to the four situational descriptions (two modelling problems for each situational description). All situational descriptions used in the study were similar in length. Six modelling problems were adapted from previous studies (Blum, 2011; Schukajlow & Krug, 2014b), and two modelling problems were developed in this study. The modelling problems could be solved with methods such as applying the Pythagorean Theorem or drawing a scaled diagram. The decision to limit the mathematical content area of the modelling problems was made to improve the fit between the modelling test and the intramathematical test. One modelling problem is presented in Fig. 1. Another modelling problem that referred to the same situational description was: "For his last jump, a parachutist glided about 1,600 meters after he had opened his parachute. Using the above clue, make reasonable assumptions about what kind of wind conditions most likely prevailed during this jump. Find a solution and clearly provide reasons for your answer." In order to measure the modelling sub-competencies needed to construct a real-world model, students' solutions to the eight modelling problems were analyzed for whether the solution was based on a correct real-world model of the situation (scored 1) or not (scored 0). The use of problems that require students to work on all phases of the modelling process prevented us from asking questions that may have confused the students because they are not used to describing their construction of the real-world model. For example, for the modelling problem that went with the parachuting situation presented in Fig. 1 ("What possible distance might the parachutist move during the entire jump, including free fall and gliding?"), students had to make an assumption about the wind conditions and link this assumption to the information presented in the text and table (see Fig. 2). Figure 4 presents a student's solution that was scored as a correct real-world model because the student selected the important information needed to solve the problem, made an assumption about the wind condition, and correctly assigned the data to the respective objects. The accuracy of the real-world model was estimated based on students' written solutions (see Fig. 4).
In the solution presented in Fig. 5, the student assumed that a light wind was blowing but interpreted the side deviation as the distance traveled. Hence, the data were incorrectly assigned to the objects. Such a solution was scored as an incorrect real-world model.

Fig. 4 Example of a student's solution that was scored as a correct real-world model

Fig. 5 Example of a solution that was scored as an incorrect real-world model

The scale reliability (Cronbach's alpha) for measuring the construction of a real-world model was 0.616. Two coders were involved in scoring the German test booklets, and six coders coded the Taiwanese part of the sample. At least 20% of the test booklets in each country were used to calculate intercoder reliability. Two coders scored the solutions for each item. The intercoder agreement between two coders (Cohen's κ) was 0.694 or higher, indicating a substantial level of agreement. The coding was carried out by university students who completed training and received a coding manual. When differences occurred, the coders discussed their judgments and reached a consensus on one code.
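For illustration, the following minimal Python sketch reproduces this type of intercoder-reliability check; the two score vectors are hypothetical stand-ins for two coders' binary ratings (1 = correct real-world model, 0 = not) of the same booklets.

```python
# Minimal sketch of the intercoder-reliability check described above.
# The score vectors are hypothetical; in the study, at least 20% of the
# booklets per country were double-coded.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # coder A's scores for one item
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # coder B's scores for the same booklets

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.3f}")  # values above ~0.61 are commonly read as substantial
```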
Students' intramathematical competence was assessed as students' ability to solve intramathematical problems on the topic of the Pythagorean Theorem. On the intramathematical test, students were asked, for example, to calculate the length of the diagonal of a rectangle with a length of 3 cm and a width of 4 cm or to judge whether a given figure (a non-right triangle) represents the Pythagorean Theorem. The scale consisted of 10 items, and its reliability (Cronbach's alpha) was 0.815.
Interest in solving modelling problems
We used task-specific questionnaires in the present study in order to take into account the state-like nature of situational interest and the task-sensitivity of the construct (Knogler et al., 2015). We adapted the task-specific scale used in prior studies (Rellensmann & Schukajlow, 2017; Schukajlow et al., 2012). For each of the four situational descriptions, after working on two modelling problems, students were asked whether they were interested in working on these problems. Using a 5-point Likert scale (1 = not at all true, 5 = completely true), students' responses indicated the extent to which they agreed with the following statement: "It was interesting to work on the problems '[Name of the situational description, e.g., Parachuting]'." The scale consisted of 4 items, and its reliability (Cronbach's alpha) was 0.835.
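For illustration, the scale reliabilities reported throughout this section can be computed directly from the item responses. The following Python sketch computes Cronbach's alpha for a hypothetical response matrix of six students on the four interest items; the data are invented for illustration only.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons x n_items) response matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 students x 4 interest items)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbachs_alpha(responses):.3f}")
```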
Translation of the material
The material that was adapted from prior studies was translated into English. New material was developed directly in English. The English material was then translated into German and Chinese. The second author of the paper, who understands all three languages, checked the compatibility of the German and Chinese translations.
Data analysis
Means, standard deviations, and Pearson correlation coefficients, which are presented in Table 1, were calculated using SPSS. All estimation and data fitting procedures for testing the hypothesized path-analytic model (Fig. 3) were carried out with Mplus (Muthén & Muthén, 1998). The variance-covariance matrix was analyzed by using maximum-likelihood estimation with robust standard errors. The reported p values for the effects of the reading intervention were one-tailed because our expectations were directional, but they were two-tailed for the effects of educational environment. To examine clustering effects produced by the nonindependence of students nested in classes (n = 21), we calculated the intraclass correlation coefficient (ICC) for intramathematical competence. The ICC (0.31) indicated that the intramathematical competence of students from the same classes was more similar than that of students from different classes. Thus, we used the "TYPE = COMPLEX" Mplus analytic option to account for the clustering effects (Stapleton, 2006). The treatment variables were dummy coded (RPG = 1 and CG = 0). The model included 13 free parameters and 495 participants. The ratio of participants to parameters was about 38 (495/13) and hence above the critical value of 5 for obtaining solid results (Kline, 2005). The model was fully saturated so that the fit indices were noninformative (i.e., CFI = 1; SRMR = 0).
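The analyses themselves were run in Mplus; as a rough, non-authoritative Python analogue, the sketch below shows how the ICC and a cluster-robust regression (similar in spirit to the TYPE = COMPLEX option) could be computed. All variable names and the simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the study's 495 students nested in 21 classes
rng = np.random.default_rng(1)
n, n_classes = 495, 21
df = pd.DataFrame({
    "class_id": rng.integers(0, n_classes, n),
    "rpg": rng.integers(0, 2, n),  # dummy-coded treatment (RPG = 1, CG = 0)
})
class_effect = rng.normal(0, 0.15, n_classes)  # shared class-level variation
df["intra_math"] = 0.5 + class_effect[df["class_id"]] + rng.normal(0, 0.2, n)
df["real_world_model"] = 0.2 + 0.3 * df["intra_math"] + rng.normal(0, 0.2, n)

# Rough one-way ICC estimate for intramathematical competence
between = df.groupby("class_id")["intra_math"].mean().var(ddof=1)
within = df.groupby("class_id")["intra_math"].var(ddof=1).mean()
print(f"ICC ~ {between / (between + within):.2f}")

# Cluster-robust standard errors to account for students nested in classes
fit = smf.ols("real_world_model ~ rpg + intra_math", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["class_id"]})
print(fit.summary())
```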
Overall results
First, we conducted a preliminary analysis of the sample and tested it for differences between German and Taiwanese students and between the conditions (RPG and CG) in order to obtain some indication of the comparability of the groups. For intramathematical competence, the results indicated that there were no differences between the German and Taiwanese students (Germany: M = 0.520, SD = 0.251; Taiwan: M = 0.512, SD = 0.300), t(473.266) = 0.305, p = 0.760, nor were there differences between the students from the different experimental conditions (RPG: M = 0.518, SD = 0.284; CG: M = 0.512, SD = 0.279), t(493) = −0.269, p = 0.788. These results supported the success of the randomized assignment of students to the reading comprehension prompt and control conditions in our sample. Further, this preliminary analysis indicated that German and Taiwanese students were comparable concerning an important cognitive prerequisite: students' intramathematical competence.
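The non-integer degrees of freedom reported for the country comparison indicate a Welch test (unequal variances), whereas the condition comparison used a pooled-variance test. A minimal Python sketch with simulated, hypothetical score vectors:

```python
import numpy as np
from scipy import stats

# Hypothetical proportion-correct scores mimicking the reported group statistics
rng = np.random.default_rng(0)
germany = rng.normal(0.52, 0.25, 251)
taiwan = rng.normal(0.51, 0.30, 244)

# Welch's t-test (unequal variances) yields non-integer df, as reported above
t, p = stats.ttest_ind(germany, taiwan, equal_var=False)
print(f"Welch: t = {t:.3f}, p = {p:.3f}")

# Student's t-test with pooled variance, as in the RPG-vs-CG comparison
t2, p2 = stats.ttest_ind(germany, taiwan, equal_var=True)
print(f"Pooled: t = {t2:.3f}, p = {p2:.3f}")
```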
The estimates of the path model that we created to test our hypothesis were based on the correlation matrix presented in Table 1. Figure 6 presents a graphical representation of the estimates. The means, standard deviations, and correlations of the study variables are presented separately for each experimental condition (RPG and CG) and educational environment (Germany and Taiwan) in Table 3 in the Appendix.
Effects of reading comprehension prompts on the construction of a real-world model and on interest in modelling
Regarding our reading intervention, we expected positive effects of presenting reading comprehension prompts on the construction of a real-world model (Hypothesis 1a) and on students' interest in solving modelling problems (Hypothesis 1b). The analysis partially supported our hypothesis. Presenting reading comprehension prompts did not affect the construction of a real-world model (β = − 0.064, p = 0.220, one-tailed), but it positively affected students' interest in solving modelling problems (β = 0.344, p < 0.01, one-tailed).
Educational environment as a moderator of the effects of reading comprehension prompts
We further expected that the hypothesized positive effects of presenting reading comprehension prompts on the construction of a real-world model (Hypothesis 2a) and on interest in modelling (Hypothesis 2b) would be similar for the two educational environments of the students. There were no country-specific differences in the effect of presenting reading comprehension prompts on the construction of a real-world model (β = − 0.001, p = 0.996). Further, contrary to our expectations, the path analysis revealed that country moderated the effect of reading comprehension prompts on students' interest in modelling (β = − 0.295, p < 0.05), indicating that presenting reading comprehension prompts is more beneficial for interest in solving modelling problems for German students than for Taiwanese students. An analysis of the effects of reading comprehension prompts on interest in the respective educational environment revealed significant positive effects in German but not in Taiwanese students (Germany: β = 0.352, p < 0.01; Taiwan: β = 0.051, p = 0.309).
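To make the moderation test concrete, a rough regression analogue of this part of the path model is sketched below. It is not the authors' Mplus specification; the variable names and simulated data are hypothetical, and the interaction term plays the role of the moderation effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: rpg (1 = prompts, 0 = control), country (1 = Germany, 0 = Taiwan)
rng = np.random.default_rng(2)
n = 495
df = pd.DataFrame({
    "rpg": rng.integers(0, 2, n),
    "country": rng.integers(0, 2, n),
    "class_id": rng.integers(0, 21, n),
})
# In this toy data, prompts raise interest only in the "Germany" stratum
df["interest"] = 3 + 0.4 * df["rpg"] * df["country"] + rng.normal(0, 1, n)

# The rpg:country coefficient corresponds to the moderation effect
fit = smf.ols("interest ~ rpg * country", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["class_id"]})
print(fit.params["rpg:country"], fit.pvalues["rpg:country"])
```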
In-depth analysis of the reading comprehension prompts condition
We conducted an in-depth analysis to investigate the role of reading comprehension in determining the modelling sub-competencies necessary to construct a real-world model and interest in modelling in the group of students who participated in the reading intervention. The aims were, first, to validate the positive relation between reading comprehension and modelling competence that is assumed in modelling theories and, second, to obtain an indication of why providing reading comprehension prompts only partly supported our hypotheses on the positive effects of reading comprehension prompts on the construction of a real-world model and interest in modelling. The research questions for the in-depth analysis were:
1. Is reading comprehension positively related to the construction of a real-world model and interest in modelling?
2. Are the effects of reading comprehension on the construction of a real-world model and interest in modelling similar in different educational environments?
Method of the in-depth analysis
To conduct our analysis, we assessed students' reading comprehension in addition to the other measures. Reading comprehension was measured by scoring the answers to the reading comprehension prompts in the experimental condition (N = 245). Students received a score of 1 if they responded correctly to a reading comprehension prompt and a score of 0 if they responded incorrectly or did not respond at all (see the examples in Section 3.1). This scale ranged from 0 to 8 because we provided eight reading comprehension prompts in our study. The scale reliability (Cronbach's alpha) was 0.759. Interrater agreement was calculated on a subset of at least 20% of the participants with sufficient agreement (Cohen's κ ≥ 0.745). The path model included 13 free parameters and 245 participants. Hence, the ratio of participants to parameters was above the critical value of 5. We followed the same statistical approach that we used for the analysis of our primary hypotheses in the prior section.
Results of the in-depth analysis
The model parameter estimates were based on the correlation matrix presented in Table 2. Means, standard deviations, and correlations of all variables are also presented in this table. Figure 7 presents a graphical representation of the estimates.
We found a positive impact of reading comprehension on the construction of a real-world model (β = 0.425, p < 0.01, one-tailed). However, reading comprehension did not affect students' interest in solving modelling problems (β = 0.060, p = 0.376, one-tailed). Further, contrary to our expectations, educational environment was found to moderate the effect of reading comprehension on the construction of a real-world model (β = − 0.635, p < 0.05). The analysis of the effects of reading comprehension on the construction of a real-world model and interest in modelling in the respective educational environment revealed that reading comprehension had a positive impact on the construction of a real-world model for German students (β = 0.235, p < 0.01, one-tailed) but not for Taiwanese students (β = − 0.002, p = 0.488, one-tailed). The effect of reading comprehension on interest in modelling did not differ between German and Taiwanese students.
Discussion
In this study, we hypothesized and tested the effects of reading comprehension in two different educational environments on the modelling sub-competencies needed to construct a real-world model and students' interest in solving modelling problems while controlling for intramathematical competence. Reading comprehension was experimentally manipulated by providing students with reading comprehension prompts, operationalized as questions that addressed the real-world situations in the situational descriptions of the modelling problems. An in-depth analysis of the students who received reading comprehension prompts was conducted to investigate the impact of students' reading comprehension on the construction of a real-world model and interest in modelling in different educational environments and while controlling for students' intramathematical competence.
Effects of reading comprehension on the construction of a real-world model and interest in modelling
Contrary to our expectations, the construction of a real-world model was similar for students who were provided with reading comprehension prompts when compared with their peers who solved problems without reading comprehension prompts. The positive impact of questions on reading comprehension found in the domain of reading research (McKeown et al., 2009; Rickards, 1976) did not hold for students' mathematical modelling competence. An in-depth analysis of the relation between reading comprehension and modelling showed that students who correctly answered the reading comprehension questions were better at constructing a real-world model, supporting the importance of reading comprehension for modelling proposed in prior research. Apparently, providing reading comprehension prompts is not enough by itself. Rather, the quality of a student's engagement with the reading comprehension prompts, indicated by accurate answers, seems to enhance the modelling sub-competencies needed to construct a real-world model and thereby also enhances overall modelling competence. But why did the reading comprehension prompts fail to improve students' abilities to construct a real-world model in our study? A possible explanation is that students might have answered the reading comprehension prompts superficially without putting effort into reprocessing the text, and thereby, the benefits of presenting reading comprehension prompts could not take effect (Bråten et al., 2014; Pressley et al., 1989). Consequently, one implication from our study is that reading comprehension instruction should support students' processing of the description of the real-world situation in the text (Pearson et al., 1992). We suggest that future studies should expand the presentation of reading comprehension prompts by teaching students how to use them in longer and more comprehensive interventions. Another reason could be that the reading comprehension prompts guided learners' attention to specific information and thereby did not enhance their understanding of the whole situation. It might be more beneficial to use more general reading comprehension prompts, such as "What is the text about? Write a short summary in your own words," or specifically for the parachuting situation, "Explain what horizontal shift means here and describe the factors that influence horizontal shift." Further, the cognitive cost of answering the prompts may have inhibited the positive effect of the reading comprehension prompts (van den Broek et al., 2001). These inhibiting effects might be particularly strong for readers with low proficiency levels who were not able to answer the questions. In addition, time constraints might have affected the results because students in the RPG had the same amount of test time as students in the CG.

The positive relation between reading comprehension and the construction of a real-world model found in our study adds to previous findings and indicates the importance of reading comprehension for modelling activities. The correlation of 0.339 found in our study is consistent with findings from previous studies (0.198 in a study by Krawitz et al., 2017; 0.282 by Plath & Leiss, 2018; and 0.486 by Leiss et al., 2010). Differences in the magnitudes of correlations between different studies can be explained by the specifics of the analysis. In the present study, we focused on the construction of a real-world model and not on modelling competence as a whole.
Our study expanded the prior results by indicating the relevance of reading comprehension for the construction of a real-world model. However, future studies are necessary to investigate whether reading comprehension affects modelling or vice versa.
In line with our expectations, we found a positive effect of presenting reading comprehension prompts on students' interest in modelling. Even if presenting reading comprehension prompts did not directly improve modelling competence, it had a positive impact on students' perceptions of modelling. The in-depth analysis provided initial hints about the importance of different sources of interest in modelling. We found no effect of the accuracy of reading comprehension on students' interest in modelling. This finding indicates that ease of comprehension, which was found to be a source of situational interest in prior studies (Mitchell, 1993; Schraw et al., 1995), did not enhance interest in modelling in our study. Consequently, other sources of situational interest, such as students' level of involvement triggered by the reading comprehension prompts, were potentially responsible for the positive effect on interest in modelling.
Educational environment as a moderator of the effects of reading comprehension
We collected data in Germany and Taiwan in order to validate the effects of a reading intervention on modelling and interest in modelling in two educational environments that are very different from each other. We expected that reading comprehension prompts would enhance modelling competence and interest in modelling for students in both educational environments.
There were no differences regarding the effect of presenting reading comprehension prompts on the construction of a real-world model. However, the in-depth analysis showed that reading comprehension is a significant predictor of the construction of a real-world model for German but not for Taiwanese students. One explanation for this result is that the Taiwanese students may have tended to fail to construct a real-world model even when they managed to understand the situation. In order to successfully construct a real-world model, students also need to structure and simplify the information, which includes making assumptions. Students seem to lack meta-knowledge about modelling, particularly the knowledge that solving modelling problems often requires learners to make assumptions (Krawitz et al., 2018); this knowledge was found to be a particular strength of students educated in Germany compared with students educated in other countries, presumably because German students have more experience working with modelling problems (Chang et al., 2020; Hankeln, 2020).
The effect of presenting reading comprehension prompts on students' interest in modelling differed between the two educational environments such that the German students benefitted more from the reading comprehension prompts. A potential explanation is that the different levels of interest in mathematics among German and Taiwanese students (OECD, 2013) caused this effect. Further studies should focus on the conditions under which fostering reading comprehension is beneficial for interest in modelling.
Limitations
In the present study, reading comprehension was assessed by rating students' answers to the reading comprehension prompts. This allowed us to measure reading comprehension in a domain-specific way. For modelling, it is important that reading comprehension is measured in a mathematics-specific way (e.g., reading numerical information presented in tables) (Leiss et al., 2010), and hence, construct validity could be increased compared with the use of a general reading comprehension test. However, because of this assessment, we did not collect any information about reading comprehension in the control condition. Thus, we could not determine whether the reading comprehension prompts led to better reading comprehension in the experimental condition compared with the control condition.
Another limitation is that the scores for the construction of a real-world model were found to be low, which might have resulted in floor effects. However, solving modelling problems is known to be a demanding activity, and students' low scores on the construction of a real-world model reflect the use of demanding modelling problems to measure modelling competence. The construction of a real-world model was assessed by coding solutions to modelling problems. Results may have been different if the problems had focused on only this sub-competency; however, asking students only to construct a real-world model might have required artificial tasks. Further, the reading comprehension prompts and the modelling problems were presented on separate pages, but we do not know whether the students in the RPG condition followed the given order and answered the reading comprehension prompts before working on the modelling problems. Future studies should include a treatment check.
Another limitation concerns the use of a questionnaire for measuring students' interest in modelling. Students were asked to rate how interesting it was to work on the respective task. We do not know which aspects of the tasks they referred to when they made their judgments. It is possible that different cultural and societal factors influenced the reports of the German and Taiwanese students, and thus, the comparison of these measures should be treated with caution. This measure also does not provide information about which aspects of the presented modelling problems the students referred to. German students might have referred to their interest in the real-world context, which was triggered by engagement with the reading comprehension prompts, whereas Taiwanese students might have referred to the task format of the modelling problems themselves, which they may have found interesting because of its novelty. Further studies, particularly ones including qualitative approaches, are necessary to make more elaborate statements.
Conclusion
Our study shows that students' interest in modelling but not their modelling competence can be improved by presenting reading comprehension prompts. However, the findings differ for learners in Germany and Taiwan, indicating the relevance of educational environments for research in modelling. Consequently, reading comprehension is an essential but not sufficient condition for modelling. Students' experience with modelling seems to play a decisive role. Hence, we suggest that, in addition to students' reading comprehension, students' meta-knowledge about modelling, particularly the knowledge that modelling problems often require assumptions, should be addressed in modelling research and practice.
The work-related stress experienced by registered nurses at municipal aged care facilities during the COVID-19 pandemic: a qualitative interview study
Background Stress can originate from many different unsatisfying work situations. Registered nurses working in municipal care experience work-related stress in different ways. Aim The purpose of this study was to describe the work-related stress experienced by registered nurses caring for older people at municipal aged care facilities. Methods Qualitative semi-structured interviews according to Polit and Beck were carried out in clinical work at six different municipal aged care facilities in Sweden. Twelve registered nurses participated in the study. Results The results are outlined in one main theme, Feelings of inadequacy and dissatisfaction contribute to work-related stress, and three categories: Difficulty coping with work tasks, Insufficient support, and Work-related stress affects private lives. The areas identified were lack of time, staff shortages, a high number of patients, and a lack of communication and teamwork in the working group, showing that feelings of inadequacy and dissatisfaction can contribute to work-related stress and can result from problems in the organizational and social work environment. Conclusion This study showed the everyday experiences of registered nurses' stress at work. The reasons that registered nurses experience a heavy workload were found to be similar in several municipal care facilities. Future interventions should consider the areas of stress found in this study to reduce the risk of further increasing the work-related stress experienced by registered nurses working in municipal aged care.
Background
The World Health Organization, WHO (2017) has classified stress as the "health epidemic of the 21st century" and stated that work-related stress is the reaction that people may have when presented with work demands and pressures beyond their experience and capabilities. Personal stress arises in a wide range of work situations when employees feel they have insufficient support from managers and colleagues and inadequate control over work processes [1]. Stressors of emotional or physical tension are described by Yao et al. [2] as any event or thought that triggers people to feel frustrated, angry, or nervous.
Previous research points out that registered nurses working in municipal care experience different work-related stressors, impacting their physical and mental well-being [3,4]. Understaffing and the lack of good relationships at work between colleagues, for example due to differing views and levels of experience, can negatively affect mental health [3,5]. Registered nurses' health in older people's care affects the quality of patient care and safety. Working under severe stress is one of the reasons for absence from work [6]. Additionally, Fang et al. [7] describe how registered nurses who feel a significant commitment to their work can become too committed, leading to increased work-related stress. Work-related stress is the most important contributor to poor work satisfaction [7]. According to Hassan et al. [8] and van Steijn et al. [9], work-related stress contributes to work-related diseases such as anxiety and depression.
The setup of the organization and lack of support from management, both in emergency care [6] and in the care of older people in a local municipality, are complex and may cause stress [10]. Work-related stress is an increasing problem associated with dysfunctional workplaces [11]. Anshasi et al. [12] described a significant need for competence development in older people's care among nursing staff in municipal care facilities. Organizations must improve the work situation to increase their employees' health-related quality of life without stress. Programs to improve quality of life should focus on promoting continuous education for employees, positive relationships between colleagues, social support, and stress-reduction courses [13].
Work-related stress among registered nurses raises significant concerns about patient safety, nurses' attitudes toward their patients in nursing homes and aged care facilities, and the quality of aged care [14,15]. Internal stressors can contribute to the work sometimes being done carelessly, leading to an increased risk of making serious mistakes [16]. People perceive and handle stressful situations in different ways because humans are different and experience things differently [16]. Bittinger et al. [17] describe work-related stress as dangerous and harmful to registered nurses' work situation. Counteracting this may be the key to reducing the adverse effects of working in a stressful situation, which risks reducing quality of life.
Furthermore, studies have shown that a long period with a heavy workload can cause many other negative impacts, such as critical incidents or high staff turnover among registered nurses [14,15]. According to Antonovsky [18], the experience of feeling connected, the Sense of Coherence, determines the degree of mental health the individual can experience. In their demand-control-support model, Karasek and Theorell (1990) describe the relationship between health and working-life stress. This model shows which conditions in the work environment can decrease stress. The main concepts are demand, control, and social support. The authors emphasize the importance of a social network in reducing work-related stress and how it can help to reduce the risk of illness in the workplace [19].
Staff shortages and understaffing in nursing are problematic and can negatively affect the quality of care. Nurses should have satisfactory working conditions, and it is therefore essential to identify the sources of stress affecting registered nurses [20]. People are suffering from work-related stress conditions, and there is a need to be more proactive in reducing work-related stress [1].
Work-related stress for a registered nurse has proven to be a danger to patient safety and quality of care, as well as to the nurse's own physical and mental well-being, and it can lead to nurses becoming overworked and experiencing a decreased quality of life. To understand work-related stress conditions, it is essential to study nurses' experiences of work-related stress. To our knowledge, few studies have focused on stress among nurses in municipal care.
Aim
The purpose of this study was to describe the registered nurses' experiences of work-related stress in the care of older people at municipal aged care facilities.
Design
This study was an empirical qualitative study based on semi-structured interviews [21]. The data was analyzed with qualitative content analysis according to the description by Graneheim and Lundman [22]. Qualitative content analysis can help reveal conflicting opinions or unresolved issues regarding the meaning and use of concepts, procedures, and interpretations. Using this method provides an important overview of the main concepts related to qualitative content analysis [22].
Sample selection
The selection of participants was made using convenience sampling because the data was collected at six different municipal aged care facilities in Sweden [21]. The inclusion criterion for participation in the study was that registered nurses should have worked for at least one year in an aged care facility. No exclusion criteria were applied. All participants were registered nurses who had between 1 and 40 years of nursing experience, which included experience of working in aged care at a municipal care facility. Twelve registered nurses participated in the study. Seven of the participants were women and five were men. They were all aged between 25 and 70. Seven had advanced nurse practitioner qualifications and five had a registered nurse qualification. The selected sample is representative of this group in terms of gender, age distribution, and experience. The aim was to obtain good heterogeneity and to maximize the variety of the participants in responding to the research question; this was easily achieved due to the diversity of the participants' gender, age, and experience. Before the interviews started, written information was provided, and a written consent form was signed by the manager of the respective facility. After consideration, interested registered nurses contacted the authors and accepted participation by signing and returning a letter of consent.
Data collection
The interviews were carried out at the participants' workplaces. All the participants in the study were informed that they could discontinue their participation at any time without stating a reason. According to Polit & Beck [21], a semi-structured interview is an interview based on several open questions about the phenomenon being investigated. Eight open questions were chosen to be relevant to the study, with one or more follow-up questions on the same question to gain a deeper insight into the area of interest. The questions were about stress situations at work and covered the participants' daily work in general, but also more detailed discussions of when and why stressful situations occurred and how they handled them. The interview guide included the following questions: 'Describe your working experiences as a nurse in municipal care for older people', 'Describe what work-related stress means for you' and 'Does work-related stress affect your quality of life? Describe how'. All interviews were conducted in Sweden, in March 2021. Informed consent was obtained, both orally and in writing, before the start of each interview. One of the authors (CA) was responsible for booking interview times with the participants and carried out all the individual interviews. The data collection was done with the help of an interview guide according to [21]. All the interviews took place in private rooms, and a 'do not disturb' sign was placed outside the door to minimize the risk of disturbances during the interviews. The individual interviews lasted about 40–45 minutes and were recorded using a digital voice recorder and transcribed verbatim.
Data analysis
The qualitative content analysis method used was according to Graneheim and Lundman [22]. The interviews were listened to several times by two authors (CA, AJ) independently and transcribed into text. The transcribed text content was divided into areas based on the research questions. Parts of the content related to the aim were extracted and set as meaning units, which were then condensed and coded (see an overview in Table 1). The codes were compared to identify and describe variations and similarities in the textual content in order to answer the aim of the study, as described by Graneheim and Lundman [22]. To ensure the credibility of the codes, they were checked against the condensations and meaning units. The codes were then sorted into categories based on similar content, and these categories were analyzed to answer the aim of this study. As described by Polit and Beck [21], a transcript must be accurate and fully reflect the content of the interview. The data were analyzed through a qualitative content analysis at both the manifest and latent levels, according to Graneheim and Lundman [22].
This study was approved by the ethics committee at Dalarna University: HDa dnr 7.1.1 2021 /159.
Results
The results describe the registered nurses' experiences of work-related stress. The analysis resulted in one central theme, "Feelings of inadequacy and dissatisfaction contribute to work-related stress", three categories, and ten subcategories that responded to the study's purpose. See an overview in Table 2.

Table 1 Examples of meaning units, condensed meaning units and codes

Meaning unit: "Then you can get physical problems such as gastritis or palpitations or that you sweat or something and you feel stressed."
Condensed meaning unit: Stress can cause various physical problems such as gastritis, palpitations, or sweating.
Code: Stress can cause physical problems.

Meaning unit: "There is also a stress and pressure, and you have patient safety and must prioritize as the managers say prioritize, prioritize and it is difficult and what should we prioritize when we have our system that the municipality has established."
Condensed meaning unit: Stress and pressure, patient safety, managers say prioritize, but the municipality has established a system.
Code: Stress due to prioritization during low staffing periods.
Feelings of inadequacy and dissatisfaction contribute to work-related stress
The results indicated that the registered nurses experienced different types of work-related stress. The main theme "Feelings of inadequacy and dissatisfaction contribute to work-related stress" was divided into three categories: Difficulty coping with work tasks, Insufficient support, and Work-related stress affects private lives.
Difficulty coping with work tasks
According to the registered nurses, some of the difficulties connected to coping with their work were: Lack of time, Understaffing, Lack of control, Difficulties in prioritizing work, and Heavy workload.
Lack of time
The participants described that they did not have enough time and resources to keep up with everything that should be done. Meanwhile, the number of patients was not decreasing, and there were always people in need of care. According to the participants, stress meant not having time to do the things they needed to do, and sometimes things needed to be done quickly, depending on the situation. This was described as a new kind of stress that may not have been there initially but was caused by a consistent lack of time.
"I may have to be fast depending on what happens to the patient and then I might feel further stress that was not there earlier" (Interviewee n.6).
Understaffing
The participants experienced that there were too few staff in relation to the number of work duties and patients, which contributed to stress at work. They described being understaffed, and it appeared that they were worried about not being able to do their work satisfactorily. Understaffing made them feel inadequate because they had more to do than they could handle. They described how understaffing put tremendous pressure on them, leading to a greater amount of stress.
"We are very… very understaffed and at the same time we have new nurses who should get the right introduction, they should have the right support" (Interviewee n.3).
Lack of control
The participants described a lack of control as one of the causes of increased stress. They felt they had no control over their work and no autonomy in the workplace, which could likely lead to work-related stress. There were times when the participants felt that they lost control; for them, not having control meant doing things more slowly because tasks needed more time, or not having a plan B in the event of unexpected incidents.
"Everything that you feel that you do not have control over can create stress" (Interviewee n.2).
Difficulties in prioritizing work
Participants said they felt they constantly had to prioritize in the workplace as the workload increased, which could eventually lead to decreased patient safety. Every time they had to prioritize, they experienced it as a stressful situation because they felt that they had not done enough for the patients and their relatives.
The participants described how learning to prioritize goes beyond knowing how to use time properly. Making the wrong choice when prioritizing and planning can be problematic and stressful. The participants said person-centered care should be prioritized to a greater extent to reduce stress in the workplace.
"Learning to prioritize can reduce stress. To do one thing at a time" (Interviewee n.10).
Heavy workload
The participants described how they had a large amount of work during a workday, which led to increased stress. The results showed that participants who experienced a high workload and stress level at their workplace could become increasingly dissatisfied with their work and either changed their place of work or left the profession, which increased the workload for those who remained.
Insufficient support
The category Insufficient support describes how the participants experienced stress and what work-related stress meant to them. Their responses were divided into three subcategories: Insufficient team collaboration, Insufficient knowledge and training, and Insufficient management support.
Insufficient team collaboration
The registered nurses felt that the physicians' involvement in the care team was insufficient, which could contribute to experiences of work-related stress. It also emerged that the participants experienced difficulties contacting a physician and getting a response; they needed to call the physician several times to get advice about a particular patient. This situation was stressful because the participants described cases when it was urgent to get in touch with a physician and they could not.
"Then it can be stressful if you need a consultation from a doctor. You call them several times and no feedback. It can also be stressful" (Interviewee n.12).
The participants described how support and cooperation between colleagues needed to be improved. They outlined the importance of competence and the cooperation between the various healthcare providers to work towards the same goal.
Insufficient knowledge and training
The participants revealed that working with patients made them doubt their professional nursing skills. Some of the participants mentioned that they wanted further education in their profession in order to be more proactive and feel confident in their work. The work overload and the responsibility for other nursing staff with insufficient knowledge made them feel stressed, and the participants made clear that this was a cause of work-related stress.
"This uncertainty about their skills. I have no further education to lean on. I would like that. This too this is not going to be good. Even if you try" (Interviewee n.11).
Insufficient management support
From the interviews, it emerged that the participants experienced a lack of support from the management, which contributed to an increased feeling of stress among them. According to the registered nurses, some of the managers did not have a degree in nursing, and they did not understand the nurse's perspective or know what the role and duties of a registered nurse were. The participants described how important it was to have a manager who was on site, empathetic, and willing to find compromises with the registered nurses. According to the participants, this could help to reduce work-related stress. The participants said that supportive management could lead to the registered nurses perceiving less stress.
Work-related stress affects private lives
In this category, the participants described how they experienced stress and how it affected their private lives daily. In addition, they also talked about how their private lives were affected in relation to the ongoing Covid-19 pandemic. The two subcategories were: Impact of work-related stress on the nurses' private lives and Impact of the pandemic on nurses' private lives.
Impact of work-related stress on the nurses' private lives
The participants reported feeling tired and sometimes irritated, and they described how this could also affect their quality of life. The participants described how working as a registered nurse was often stressful. They perceived that stress at work caused negative consequences for both physical and psychosocial well-being. Fatigue was experienced as a significant problem for the participants. It was noticeable during working hours, but it also affected them in their spare time. The participants described how they felt fatigued and drained of energy, and they often did not remember things, which frightened them. They felt there was a lot of stress at work. For some of them, it could be challenging to maintain a professional approach at work; they tried to hide their feelings and do their best for the patient or the patient's relatives, but it was difficult. Meanwhile, they described how this mix of feelings also affected the quality of their own lives.
"I get tired, tired in my body, tired in my head, tired in general… I get gloomy" (Interviewee n.2).
Impact of the pandemic on nurses' private lives
During the period when the interviews were conducted, a situation arose that had not existed before and which left the participants feeling more stress. The situation was Covid-19, and it has changed how everyone works. During the pandemic, there was added stress. The participants had significant concerns about the risk of being exposed to the disease at work or even spreading it to loved ones, and they were also uncertain about the future of their workplace. It was a dilemma that created work-related stress among the registered nurses in municipal care. For the participants, this new stressful environment and the constant stream of new guidelines every week during the outbreak of Covid-19 led to high work-related stress. As a result, the workload became heavier, and the amount of stress increased. During the interviews, they also revealed that the many and constant changes to rules and recommendations made the healthcare providers feel even more stressed about not having control over their work.
"It was a day when everything really exploded at once, I got a call related to the Covid19 epidemic and there was one person who was infected and then within a quarter of an hour there was another who was infected, and all hell broke loose at once and everything had to be done now" (Interviewee n.4).
Discussion
Experiences of work-related stress among registered nurses working in municipal aged care facilities can be related to problems in the organizational and social work environment. The registered nurses described that stress occurred when they had difficulties coping with their work tasks and could not perform their duties. These findings are in line with White et al. [16], who pointed out that a lack of time contributed to dissatisfaction at work, which also significantly increases the risk of threats to patient safety and quality of care. The registered nurses in this study described that understaffing forced them to take on more duties than they could handle. The risk of decreased quality of care because of understaffing in nursing was also brought up by Semachew et al. [20] and entails inevitable negative consequences, especially for the quality of care and work satisfaction among registered nurses.
The registered nurses in this study described the difficulties of prioritizing what needed to be done the most at work because they had difficulties coping with so many work tasks. They described feeling a lack of control because of the lack of time and the heavy workload caused by a lack of support from the management, which led to increased stress. The job demand-control-support model by Karasek and Theorell [19] identifies the conditions at work that predispose employees to work-related stress. The nurses in this study stated that it was essential to have an empathetic manager present who could understand the registered nurses. They also reported how the quantity of their work led to increased stress. According to Karasek and Theorell [19], however, it is not always excessive workload that causes harmful stress or risk of illness; a lack of knowledge and poor communication can have the same effect. The registered nurses in these interviews described a lack of communication and support from their managers in the nursing facilities and a lack of company training programs. They also expressed a need for support from the nursing management. These concerns also feature in Karasek and Theorell's job demand-control-support model. According to Karasek and Theorell [19], support mechanisms are fundamental when trying to manage job-related stress, and social support means support from colleagues, management, family, and friends. The main idea of the model is that demand for work, control over work processes, and social support within the workplace all relate to the individual's well-being. It is necessary to combat conditions that can lead to physical and psychological illness. The authors emphasize the importance of social networks in reducing work-related stress and how they can help to reduce the risk of illness in the workplace.
The registered nurses in our study experienced that communication in the team suffered when they experienced work-related stress and that this harmed patient care. The job demand-control-support model by Karasek and Theorell [19] holds that, in a culture that prevents work-related stress, it is essential to have social support to fulfill the work tasks. Dagget et al. [23] also described that insufficient support from management could make it difficult for workers to do their job correctly and eventually increase work-related stress among healthcare providers. The registered nurses in our study who experienced a heavy workload and stress were considering quitting their jobs. These findings have similarities with previous studies by Carlesi et al. [14] and Chiang et al. [15], who showed that registered nurses left their jobs due to work-related stress and work-related fatigue and because of the inferior quality of patient care. According to Karasek and Theorell [19], it is essential to have control over the work situation to feel good at work. The findings suggest that if nurses felt they had more support and understanding from management, their work-related stress might be reduced. However, this seems challenging due to limited support from both management and colleagues.
The findings in this study showed that not having control over the work and communication problems were stressful. In concordance with the study by Josefsson (2012), the organization must function in a way that creates conditions for registered nurses to perform risk-free care for older people. Routines must be clear and unequivocal [6]. The stressful work situation described by the participants in this study contributes to sleeping difficulties. The tiredness could become a source of irritation and could also negatively affect the nurses' quality of life and risk patient safety. Fatigue and sleeping disorders were described as problematic. Previous research reports work-related problems from stress, such as neck disorders, sleep disorders, headache, and fatigue, and how they affect the quality of life of registered nurses [6]. Nowrouzi et al. [13] argue that to improve employees' health-related quality of life without stress, intervention programs must incorporate each ward's context. These programs should promote social support, sleep quality, exercise, and managing smoking habits. At the time of the interviews, the registered nurses experienced more work-related stress than usual due to the Covid-19 pandemic. According to the participants, Covid-19 has changed how they work in some situations and has led to more stress. The primary concerns were the risk of being exposed to the disease at work and managing various incidents at the workplace. The pandemic led to frustration and feelings of not doing the work satisfactorily. Antonovsky [18] argues that what is essential for an individual's mental health is dealing with reactions to stress. These stress responses can be positive, keeping people alert to danger, motivated, or adaptable to new situations. Stress is not an illness in itself, but when experienced frequently, it can increase the risk of mental health conditions such as depression, anxiety, and various addictions. Antonovsky [18] shows that by thinking from a salutogenic perspective, the individual can focus on the positive and learn to deal with stress differently.
Strengths and limitations
As described by Polit and Beck [21], the trustworthiness of studies with a qualitative design can be discussed in terms of their dependability, confirmability, credibility, and transferability. The study was based on a qualitative approach and was performed using individual interviews with participants who were selected through convenience sampling. This may be a weakness because the participants work in organizations that may have similar working conditions, which may limit the variation in the content of the interviews. However, the participants varied widely in age, sex, education, and the time they had worked as registered nurses, and they worked at six municipal aged care facilities in a major city. Therefore, there should be adequate variation reflected in the data, and the findings could be transferable to other similar care facilities. A strength of the study was that the chosen method matched the authors' interest in investigating subjective experiences of work-related stress in the care of older people in municipal aged care facilities. The choice of method is supported by Polit and Beck [21], who describe the qualitative method as suitable for studying people's experiences, behaviors, and feelings; therefore, this method was chosen for this study. The reason a semi-structured interview was chosen instead of, for example, a structured interview was to gain a deeper insight into the phenomenon and ensure that all topics were covered. The study's credibility refers to the extent to which the research is trustworthy in its data collection and analysis. To demonstrate this, each step in the method was described, and examples of how the analysis was performed were illustrated in the method section. Before the interviews started, all the interviewees received information regarding the purpose of the study, the approach, how confidentiality would be managed, and that they could discontinue their participation at any time if they wished. This information was given both in writing and orally [24]. A strength of the study was that it included seven women and five men, which is representative for this type of research study. The registered nurses who participated had a good age distribution and different lengths of work experience within municipal care and inpatient care, which could also be seen as a strength of the study. Regarding the dependability of the study, the focus has been on the participants' experiences of work-related stress. The data collection was done with the help of an interview guide, and the interviews were recorded by the authors with a digital voice recorder. The interviews were conducted in Swedish and then translated into English. Since English is not the authors' native language, this could be a limitation of the study. Answers that were unclear or could be interpreted in several ways were listened to a few times to gain a clearer understanding of the statements.
According to Polit & Beck [21], dependability refers to whether the study can be replicated with similar informants under similar conditions and yield similar results. In qualitative research, it is also important to discuss confirmability during the study. The authors have been faithful and transparent: all the data obtained during the interviews are original, and no data has been modified to change its meaning. In qualitative studies, it is difficult for a researcher to completely achieve confirmability; nevertheless, striving for confirmability should be the ambition of all authors in qualitative research [22].
This manuscript was written during a pandemic outbreak. During the interviews, the registered nurses described the Covid-19 pandemic as a situation that added more stress to their daily work, which changed the way of working.
Conclusion
In this study, the experiences of work-related stress among a number of registered nurses were described and factors contributing to stress were identified. Better collaboration in the care team and having an understanding manager are of great importance for reducing work-related stress. When the registered nurses who experienced stress received support to help them deal with their situation, better conditions were created to provide a meaningful everyday life for the individual. With more of the world's population suffering from work-related stress conditions, we must all be more proactive in reducing work-related stress.
The study can provide a basis for further research about the work-related stress experienced by registered nurses in the care of older people at municipal aged care facilities. Future interventions should focus on introducing new and effective strategies for managing stress in the workplace and, if possible, on a larger scale.
WHO: The World Health Organization.
The Impact of Abnormal Lipid Metabolism on the Occurrence Risk of Idiopathic Pulmonary Arterial Hypertension
The aim was to determine whether lipid molecules can be used as potential biomarkers for idiopathic pulmonary arterial hypertension (IPAH), providing important reference value for early diagnosis and treatment. Liquid chromatography–mass spectrometry-based lipidomic assays allow for the simultaneous detection of a large number of lipids. In this study, lipid profiling was performed on plasma samples from 69 IPAH patients and 30 healthy controls to compare the levels of lipid molecules in the two groups, and Cox regression analysis was used to identify meaningful metrics, along with receiver operating characteristic (ROC) curves to assess the ability of the lipid molecules to predict the risk of disease in patients. Among the 14 lipid subclasses tested, 12 lipid levels were significantly higher in IPAH patients than in healthy controls. Free fatty acids (FFA) and monoacylglycerol (MAG) were significantly different between IPAH patients and healthy controls. Logistic regression analysis showed that FFA (OR: 1.239, 95%CI: 1.101, 1.394, p < 0.0001) and MAG (OR: 3.711, 95%CI: 2.214, 6.221, p < 0.001) were independent predictors of IPAH development. Among the lipid subclasses, FFA and MAG have potential as biomarkers for predicting the pathogenesis of IPAH, which may improve the early diagnosis of IPAH.
Introduction
According to the latest "2022 ESC/ERS Guidelines for the diagnosis and treatment of pulmonary hypertension" [1], PAH is categorized into five types, of which IPAH is one of the most common, belonging to the first category of PAH. The pathogenesis of PAH involves genetics, inflammation, immunity, metabolism, and other factors, which contributes to the complexity and diversity of PAH; the corresponding diagnostic and treatment options also differ [2,3]. Idiopathic pulmonary arterial hypertension (IPAH) is a progressive disease that affects the precapillary pulmonary vasculature, but the specific risk factors that contribute to IPAH remain unknown [4]. IPAH causes elevated cardiac afterload, which can eventuate in right heart failure and death. Despite recent advances in therapies that target the pulmonary vasculature, IPAH remains a life-threatening disease, with newly diagnosed patients having a 3-year survival rate of approximately 60% [5,6]. Moreover, the initial clinical manifestations of IPAH are non-specific and difficult to diagnose, so patients often miss the optimal window for treatment. While current targeted drugs for IPAH have notably enhanced survival and quality of life for some patients, a considerable portion of patients do not benefit from these drugs, and some experience a poor prognosis with no significant improvement in quality of life [4]. Therefore, more accurate and specific biomarkers are needed to improve early screening and diagnosis rates, which will be clinically important for improving the prognosis of IPAH patients.
Lipid molecules are important biomolecules and are among the most abundant substances in plasma; in cell membranes, lipids account for about 50% of the weight [7]. Large numbers of lipids are present in the endoplasmic reticulum, Golgi apparatus, mitochondria and lysosomes [8]. The diverse structures of lipids underlie their significant biological functions, which include crucial roles in regulating various life processes such as cell growth and differentiation, apoptosis, energy conversion between cells and tissues, material transport, information recognition, and signal transmission [9,10]. As a result, changes in lipid metabolism and lipid action significantly impact the physiological functions of cells, which contributes to the development of pathological disorders [11]. Altered lipid content is frequently linked to metabolic disease, cardiovascular disease, tumor formation, and neurological disease. Recent research reveals that alterations in lipid content correlate with abnormalities in the levels, activities, and gene expression patterns of multiple enzymes, which contribute to the progression of various diseases [9,12]. However, few reports investigate the relationship between lipid metabolism and IPAH. Earlier studies noted a significant reduction in plasma HDL-C levels among IPAH patients [13,14], with lower levels of HDL-C correlating with higher mortality rates in these patients [15]. These reports, combined with information from other studies on lipid metabolism in cardiovascular diseases [15-17], led us to speculate that other abnormal lipids may also contribute to the progression of IPAH, and that specific lipids, or types of lipids, may be useful in detecting the onset of IPAH.
The objective of this study was to determine the distribution and levels of lipid molecules in the plasma of individuals with IPAH and of healthy individuals using liquid chromatography-mass spectrometry. Additionally, this study highlights the potential of lipid molecules as biomarkers to predict IPAH.
Population Characteristics
We screened healthy people matched for age and sex from the physical examination center as controls. Table 1 shows no significant differences (p ≥ 0.05) in age, sex or body mass index between the patients and the control group. The proportion of IPAH patients with WHO PAH functional classification (WHO-FC) grade III-IV was 59.4%, and the 6 min walking distance was (393.9 ± 104.8) m. The hemodynamics, biochemical indexes and targeted drug therapy of the IPAH patients are listed in Table 1.
Differences in Lipid Content between IPAH Patients and Healthy Controls
To screen out differences between the two groups, an orthogonal partial least squares discriminant analysis (OPLS-DA) model was used for supervised multidimensional statistical analysis of the original data matrix. The further apart the sample distribution points, the greater the difference between samples. As shown in Figure 1A, the sample distribution points of the IPAH group and the healthy control group were clearly distinguishable, indicating that there was indeed a difference in lipid content between IPAH patients and healthy individuals. A response permutation test (RPT) was used to evaluate the model and ensure that there was no overfitting. As illustrated in Figure 1B, this study distinguished the differential lipid levels between the IPAH group and the healthy control group using this computational approach.
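For readers who want to reproduce this kind of multivariate screening, the following Python sketch (illustrative only, not the authors' code) mimics the pipeline on synthetic data: a PLS-DA model fitted with scikit-learn's PLSRegression against a 0/1 group label, followed by a response permutation test. The matrix X and the labels y are random placeholders standing in for the 99 × 588 lipid intensity table.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Placeholder data: 99 samples (69 IPAH = 1, 30 controls = 0), 588 lipid features.
X = rng.normal(size=(99, 588))
y = np.array([1] * 69 + [0] * 30)

# PLS-DA: partial least squares regression against the 0/1 class label.
pls = PLSRegression(n_components=2)
pls.fit(X, y)
scores = pls.transform(X)      # latent scores, e.g. for a 2-D separation plot
r2_observed = pls.score(X, y)  # R^2 of the fit on the true labels

# Response permutation test (RPT): refit on shuffled labels. The observed R^2
# should clearly exceed the permutation distribution if the model is not
# merely overfitting noise.
r2_perm = [
    PLSRegression(n_components=2).fit(X, yp).score(X, yp)
    for yp in (rng.permutation(y) for _ in range(200))
]
p_value = (np.sum(np.array(r2_perm) >= r2_observed) + 1) / (len(r2_perm) + 1)
print(f"observed R^2 = {r2_observed:.3f}, permutation p = {p_value:.3f}")
```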
The 14 Lipid Subclasses in IPAH Patients and Healthy Controls
A total of 588 lipid species were detected by the liquid chromatography-mass spectrometer (LC-MS) and classified into 14 subgroups. The triacylglycerol (TAG) group contained the largest number of lipid molecules, while very few lipid molecules of the sphingosine (SS) and sterol (St) groups were detected (Figure 1C).
Levels of the 14 Lipid Subclasses in IPAH Patients and Healthy Controls
By comparing the levels of the 14 lipid subclasses between IPAH patients and the healthy control group, we found that the levels of FFA, MAG, diacylglycerol (DAG), TAG, phosphatidic acid (PA), phosphatidyl ethanolamine (PE), SS and St were significantly higher in IPAH patients than in healthy controls (p < 0.001). The levels of LPS, PG, PI and PS also showed an increasing trend in patients (p < 0.05). LPA and SM were similar in both groups. Figure 2A-N shows the distribution of each lipid subclass between IPAH patients and healthy controls.
Levels of Lipid Subclasses and FFA between Males and Females
Studies show that the incidence of IPAH is strongly related to sex [18-20]. This study therefore also explored whether there were differences in the levels of these lipid molecules between males and females (Table 2). Surprisingly, there were no significant differences in lipid levels between males and females within either the healthy control group or the IPAH patient group. However, this study found statistically significant differences for eight lipid subclasses between healthy control males and male IPAH patients, and for ten lipid subclasses between healthy control females and female IPAH patients. The differential lipids were not completely consistent between the two sex subgroups. For example, TAG and St showed significant differences only among males, with levels higher in male IPAH patients than in healthy control males. LPS, PG, PI and PS were significantly different only among females, with the level of LPS in female IPAH patients higher than in healthy control females. FFA, MAG, DAG, PA, PE and SS were significantly different in both the male and the female comparisons, and in each case the levels in IPAH patients were higher than in healthy controls. No significant statistical difference was found for LPA and SM in either sex subgroup. These results suggest that sex differences exist in the levels of some lipid molecules.
Logistic Regression Analysis of IPAH Occurrence of Different Types of Lipids
We used the incidence of IPAH as the dependent variable in the logistic regression analysis and included the lipid level data of healthy controls and IPAH patients, to determine the influence of each lipid subclass alone on the risk of IPAH. Indicators that were statistically significant in the univariate logistic regression analysis were then included in the multivariate logistic regression analysis, from which the indicators with independent predictive value were selected. Figure 3 displays the results of the univariate regression analysis: FFA, MAG, DAG, PE, SS, LPS, PA, PG, PI, PS and St all had significant predictive value for the risk of developing IPAH, with higher levels of these lipid subclasses associated with higher risk. The highest odds ratio (OR) was found for MAG: an increase in MAG levels was associated with a higher risk of developing IPAH (OR: 3.711, 95%CI: 2.214-6.221, p < 0.0001).
However, human metabolic mechanisms are complex and changeable, and there may be interactions between the various lipids. We therefore performed multivariate logistic regression, adjusted for age, sex and BMI; the final results showed that FFA (OR: 1.208, 95%CI: 1.509-1.378, p < 0.01) and MAG (OR: 3.494, 95%CI: 2.023-6.034, p < 0.0001) were independent predictors of IPAH risk.
ROC Analysis of FFA, MAG and Their Combined Detection to Predict IPAH
To further assess the predictive effect of FFA and MAG on the risk of IPAH, we performed ROC analysis and evaluated the predictive effect using the AUC. As shown in Figure 4, the AUC of FFA was 0.789, the AUC of MAG was 0.862, and the AUC of the combined FFA and MAG model was 0.851. Next, according to the highest Youden index, the cut-off values with the highest sensitivity and specificity were selected to define "high" and "low" levels of FFA, MAG, and the combined FFA and MAG score. To further validate the predictive value of these two lipids for the incidence of IPAH, we grouped the lipid level data of the 69 IPAH patients and 30 controls by these cut-off values and calculated the proportion of IPAH patients in each group.
The results of the chi-square test were statistically significant in each group (p < 0.001). The percentage of IPAH patients in the "high FFA level" group (85.94%) was higher than that in the "low FFA level" group (Figure 5A). Similarly, the percentage of IPAH patients in the "high MAG level" group was 93.33%, higher than that in the "low MAG level" group (Figure 5B). When the two lipids were evaluated together, the group with high levels of both FFA and MAG consisted entirely (100%) of IPAH patients, while the group in which both FFA and MAG levels were low had the lowest proportion of IPAH patients, 10.53% (Figure 5C). Finally, the FFA and MAG level data were combined, and the 99 samples were regrouped according to the predicted probability obtained by logistic regression (Figure 5D). The proportion of IPAH patients in the low-probability group fell to 24.24%, while that in the high-probability group was 92.42%, capturing 88.40% of the patients; this indicates that the combination of FFA and MAG predicts the risk of IPAH with greater sensitivity and accuracy.
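The cut-off procedure described above (Youden index on the ROC curve, plus a combined predicted probability from logistic regression) can be sketched in a few lines. The following Python fragment is a hedged illustration with synthetic FFA and MAG values; only the procedure, not the numbers, mirrors the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Synthetic lipid levels: 69 IPAH patients (1) and 30 controls (0).
y = np.array([1] * 69 + [0] * 30)
ffa = np.where(y == 1, rng.normal(12, 3, 99), rng.normal(9, 3, 99))
mag = np.where(y == 1, rng.normal(2.0, 0.5, 99), rng.normal(1.2, 0.5, 99))

def youden_cutoff(y_true, score):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, score)
    return thresholds[np.argmax(tpr - fpr)]

# Single-marker AUCs and optimal cut-offs.
for name, score in [("FFA", ffa), ("MAG", mag)]:
    print(name, "AUC =", round(roc_auc_score(y, score), 3),
          "cut-off =", round(float(youden_cutoff(y, score)), 3))

# Combined model: logistic-regression probability from both lipids, then the
# same ROC/Youden treatment applied to that probability.
X = np.column_stack([ffa, mag])
prob = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("FFA+MAG AUC =", round(roc_auc_score(y, prob), 3),
      "probability cut-off =", round(float(youden_cutoff(y, prob)), 3))
```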
Discussion
We included 69 IPAH patients and 30 healthy control subjects and performed a relative quantitative analysis of lipid content in these 99 samples using LC-MS technology on a lipidomics platform. The lipids detected in this experiment comprised fourteen subclasses (FFA, MAG, DAG, TAG, LPA, LPS, PA, PG, PE, PI, PS, SM, SS, St), totalling 588 lipid molecules. The original data were processed with a series of one- and multi-dimensional statistical analyses, and lipid subclass and lipid molecule levels in IPAH patients after disease onset were compared with those of healthy controls. TAG, LPS, PA, PG, PE, PI, PS, SS and St lipid levels were significantly higher than those of healthy controls, indicating that these types of lipid molecules are of potential significance in the occurrence and development of IPAH.
The mean age of the IPAH patients was (36.4 ± 10.0) years and female patients accounted for 79.7%, suggesting that IPAH is more prevalent in young and middle-aged female patients. These results are consistent with the demographic characteristics reported in the largest retrospective IPAH study in China (161 cases) [21]. The 187 cases of IPAH reported in the Scottish Registry study (1981-1989, 32 clinical centers) had a male-to-female ratio of 1:1.7 [22]. The different sex distribution of IPAH may be attributed to abnormal metabolism of sex hormones or to differences in the immune systems of males and females.
The incidence of IPAH is higher in females than in males, but whether this is also reflected in sex differences in plasma lipid levels was uncertain. This study found no significant differences in the levels of lipid subclasses between male and female healthy controls or between male and female IPAH patients. However, TAG and St showed significant differences only among males, with levels in male IPAH patients higher than in healthy males, whereas LPS, PG, PI and PS differed significantly only between healthy females and female IPAH patients. Additionally, FFA, MAG, DAG, PA, PE and SS differed significantly in both the male and the female comparisons. Studies provide evidence that there are sex differences in the distribution of lipids in populations and in pathologies [23-25]. Xuewen Wang et al. suggested that the network of hormone action might be an important regulator of lipid metabolism [24]. Martina Ambrož et al. suggested that there are significant differences in the lipid profiles of type 2 diabetes (T2D) patients between males and females across their lifespan [26]. However, the lipid profiles of male and female IPAH patients compared with healthy controls need further study.
The outcome of the logistic regression suggested that FFA, MAG, DAG, PE, SS, LPS, PA, PG, PI, PS and St could predict the incidence of IPAH, and the multivariate logistic regression analysis revealed that FFA and MAG were independent predictors of IPAH. This finding is consistent with previous research suggesting that metabolic remodeling is present in IPAH [27]. This study found that MAG was an independent risk factor for IPAH, consistent with previous research indicating that MAG plays a significant role in cardiovascular disease and may serve as a potential therapeutic target [28]; its role in IPAH, however, was unknown, and our results suggest that MAG might be of significance in the development of IPAH. Consistent with previous studies [29,30], we also showed that FFA is increased in IPAH. Previous reports also found that FFA was associated with pulmonary hypertension [31]. High concentrations of circulating free fatty acids can lead to significant intracellular lipid accumulation, which can in turn trigger the production of reactive oxygen species and metabolic dysregulation. These processes culminate in cell death, inflammation, and tissue damage, which may contribute to the development and progression of various diseases, including cardiovascular disease [32,33]. Recent studies showed that circulating free fatty acids are increased nearly twofold in patients with IPAH compared to healthy subjects, irrespective of other cardiovascular risk factors, suggesting that elevated levels of free fatty acids may play a significant role in the development and progression of IPAH [29,33]. Consistent with this, metabolic profiling of plasma from patients with IPAH showed that insulin resistance (IR) strongly correlates with altered lipid metabolic profiles, further supporting the notion that dysregulated lipid metabolism may contribute to the development and progression of IPAH [30,33]. The authors of that study found that, similar to atherosclerotic lesions in coronary artery disease (CAD), plexiform lesions in IPAH contain proinflammatory lipids, including oxidized low-density lipoprotein; these lipids may contribute to the recruitment of inflammatory cells and the disruption of vascular cell function, which may promote the development and progression of plexiform lesions in IPAH [33]. Our results showed that increased FFA is a risk factor for developing IPAH. Currently, there are no reports regarding the effect of FFA on the incidence of IPAH, and this requires further study.
The results of the ROC analysis showed that, individually, FFA and MAG had very good sensitivity, and that combining FFA and MAG gave high accuracy in predicting the incidence of IPAH. Particularly noteworthy is that, in the samples of this study, all subjects with increased levels of both FFA and MAG were IPAH patients, indicating that combined analysis of the two lipids can accurately predict the risk of IPAH. The ROC curves suggested that FFA is significant for predicting the prevalence of IPAH. We suggest that FFA and MAG might be important lipid subclasses in the development of IPAH, but their mechanism of action in IPAH requires further investigation.
This study has some limitations. The sample size was small, with a limited number of patients and healthy controls, which could adversely affect the generalizability of the findings. Potential confounding factors, such as medication use, comorbid conditions, and lifestyle factors, may also have influenced the results. Further studies with larger sample sizes and more comprehensive analyses are needed to confirm and expand upon these findings. This study measured lipid levels at hospital admission, so changes in lipid levels during drug treatment were not assessed. Additionally, this study tested changes in lipid content at the subclass level; further research is needed to identify the specific lipid species within these subclasses that change in IPAH. Most important is to determine the underlying mechanisms whereby lipids play a role in the pathogenesis of IPAH.
Study Design and Subjects
This study included a total of 69 IPAH patients admitted to Shanghai Pulmonary Hospital from May 2013 to April 2019; 14 were male and 55 female. The inclusion criteria, determined by right heart catheterization as per the 2022 ESC/ERS guidelines, specified a mean pulmonary artery pressure (mPAP) > 20 mmHg and pulmonary vascular resistance (PVR) > 3 Wood units (WU). Exclusion criteria were pulmonary hypertension of known etiology, congenital left-to-right intracardiac shunts, portal hypertension, human immunodeficiency virus (HIV) infection, previous hormone therapy (such as thyroid hormones, anabolic steroids, or corticosteroids), and hormone production suppressed by medication. Thirty age- and sex-matched healthy controls (6 males and 24 females) were also selected. Their inclusion criteria were: healthy, no previous history of lung diseases or related illnesses, no family history of related lung diseases, and no drug or alcohol dependence.
This study adhered to the principles outlined in the Declaration of Helsinki and received approval from the Ethics Committee of the Shanghai Pulmonary Hospital (number: K20-195Y).All participants provided informed consent before their inclusion in this study.
Clinical Data Collection
The diagnostic criteria for IPAH were in accordance with the "2022 ESC/ERS Guidelines for the diagnosis and treatment of pulmonary hypertension". Patient data consisted of demographic information, 6 min walking distance (6MWD), World Health Organization functional class (WHO FC), and N-terminal fragment of pro-brain natriuretic peptide (NT-proBNP). Hemodynamic parameters included mean pulmonary arterial pressure (mPAP), mean pulmonary artery wedge pressure (mPAWP), mean right atrial pressure (mRAP), pulmonary vascular resistance (PVR), cardiac output (CO), and cardiac index (CI). Other laboratory parameters and treatment history were also recorded.
Blood Sample Collection
Blood samples were collected from all subjects in the morning after overnight fasting using ethylenediaminetetraacetic acid anticoagulation tubes. After standing for 30 min at 24 °C, the blood was centrifuged at 3500 rpm for 5 min at 4 °C, and the plasma layer was isolated. Lipids were extracted from plasma by dichloromethane extraction.
LC-MS Analysis
The liquid chromatography-mass spectrometry analysis was carried out using an EXION LC high-performance liquid chromatograph in tandem with a triple quadrupole mass spectrometry system (HPLC-TripleQuad™ 6500) as the instrument platform (AB SCIEX, USA). The chromatography-mass spectrometry acquisition conditions were as follows: positive and negative ion detection mode; BEH amide HILIC column (100 mm × 2.1 mm i.d., 1.7 µm; Waters); mobile phase A, H2O/ACN (5:95, v/v, 10 mM ammonium acetate); mobile phase B, H2O/ACN (50:50, v/v, 10 mM ammonium acetate). The flow rate was 0.50 mL/min, the injection volume 5 µL, and the column temperature 40 °C. Mass spectrometry signals were acquired in both positive ion (ESI+) and negative ion (ESI−) modes, and the data acquisition mode was MRM scanning.
Statistical Analysis
Normally distributed measures are presented as the mean ± standard deviation, and non-normally distributed measures as the median (interquartile range). Count data are presented as the sample size (percentage). Differences between two groups were assessed using the independent-samples t-test for normally distributed measures and the rank-sum test for non-normally distributed measures. Differences between multiple groups were analyzed using one-way ANOVA. The χ² test was used to assess between-group differences in count data. Predicted risk of morbidity was tested by univariate and multivariate logistic regression analysis. The ability of plasma lipid assessment to predict increased risk of morbidity was evaluated using the receiver operator characteristic (ROC) curve, including the area under the curve (AUC), sensitivity, specificity, and optimal cut-off value. A p value < 0.05 was considered statistically significant. Data were analyzed using SPSS 24 statistical software.
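As an illustration of the test-selection logic above, the following Python sketch (hypothetical data, SciPy instead of SPSS) chooses between the independent-samples t-test and the rank-sum test based on a normality check, and applies a chi-square test to a 2 × 2 count table. The control split in the table is our assumption; only the 55/64 high-FFA fraction matches the 85.94% reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ipah = rng.normal(10, 2, 69)      # hypothetical lipid levels, IPAH patients
controls = rng.normal(9, 2, 30)   # hypothetical lipid levels, healthy controls

# Two-group comparison: t-test if both groups look normal, rank-sum otherwise.
normal = (stats.shapiro(ipah).pvalue > 0.05 and
          stats.shapiro(controls).pvalue > 0.05)
test = stats.ttest_ind if normal else stats.mannwhitneyu
print("normal:", normal, "p =", test(ipah, controls).pvalue)

# Chi-square test on count data: IPAH vs. control above/below a lipid cut-off.
# 55/64 in the high group reproduces the reported 85.94%; the rest is assumed.
table = np.array([[55, 9],    # high level: IPAH, controls
                  [14, 21]])  # low level:  IPAH, controls
chi2, p, dof, expected = stats.chi2_contingency(table)
print("chi-square p =", p)
```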
Conclusions
In conclusion, the FFA and MAG lipid subclasses have potential as biomarkers for predicting IPAH. The high levels of FFA and MAG in the plasma of IPAH patients suggest a potential role for these lipids in the pathogenesis of IPAH. However, further studies are needed to determine whether FFA and MAG dysregulation is a cause or a consequence of IPAH. Additionally, prospective studies are needed to determine whether targeting these lipid subclasses can be an effective therapeutic approach for the prevention and treatment of IPAH. The abnormal serum lipid distribution in patients with IPAH is an important element worthy of study, and its complex pathogenic mechanism remains unelucidated. In this study, we initially explored the potential relationship between lipid metabolites and IPAH. Our study suggests that IPAH patients exhibit a different distribution pattern of serum lipids, which may serve as a potential biomarker to aid clinical diagnosis.
Figure 4. ROC analysis of FFA, MAG and their combined detection to predict IPAH. FFA: free fatty acid; MAG: monoacylglycerol; FFA and MAG: combined prediction probability based on logistic regression; AUC: area under the curve.
Figure 5. Sample grouping based on FFA and MAG lipid levels and on the FFA and MAG joint prediction probability. (A) Sample grouping based on FFA lipid levels. (B) Sample grouping based on MAG lipid levels. (C) Sample grouping based on combined FFA and MAG lipid levels. (D) Sample grouping based on the FFA and MAG joint prediction probability. Chi-square test.
Funding: This research was funded by the Program of Natural Science Foundation of Shanghai, grant numbers 21ZR1453800 and 201409004100; the Fundamental Research Funds for the Central Universities, grant number 22120220562; the Program of Shanghai Pulmonary Hospital, grant number FKLY20005; and the National Natural Science Foundation of China, grant number 82370057.
Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki and approved by the ethics committee of Shanghai Pulmonary Hospital (K20-195Y).
Table 1. Baseline characteristics of patients.
Table 2. Levels of lipid subclasses between males and females.
Molecular detection of Babesia capreoli and Babesia venatorum in wild Swedish roe deer, Capreolus capreolus
Background The epidemiology of the zoonotic tick-transmitted parasite Babesia spp. and its occurrence in wild reservoir hosts in Sweden is unclear. In European deer, several parasite species, including Babesia capreoli and the zoonotic B. venatorum and B. divergens, have been reported previously. The European roe deer, Capreolus capreolus, is an important and common part of the indigenous fauna in Europe, as well as an important host for Ixodes ricinus ticks, the vector of several Babesia spp. in Europe. Here, we aimed to investigate the occurrence of Babesia spp. in roe deer in Sweden. Findings Roe deer (n = 77) were caught and sampled for blood. Babesia spp. were detected with a PCR assay targeting the 18S rRNA gene. The prevalence of Babesia spp. was 52%, and two species were detected: B. capreoli and B. venatorum, in 44% and 7.8% of the individuals, respectively. Infection occurred both in summer and winter. Conclusions We showed that roe deer in Sweden, close to the northern inland edge of their distributional range, are infected with Babesia spp. The occurrence of B. venatorum in roe deer implies that it is established in Sweden, and the zoonotic implications of this finding should be given greater attention in future.
Background
The tick-transmitted intraerythrocytic parasite Babesia is maintained in zoonotic cycles between vertebrate hosts and tick vectors [1], and most zoonotic species are maintained in wildlife reservoirs. Various Babesia species have been detected in a wide range of mammal species [1]. However, the occurrence in natural mammal hosts is still incompletely known for several zoonotic species [1]. The most prevalent zoonotic species, Babesia microti, is mainly reported from the USA and is maintained in various rodent reservoir hosts. In Europe, most human cases are attributed to the species B. divergens, which is mainly associated with cattle. Moreover, B. venatorum is also known to infect humans in Europe [2,3]; this species mainly utilizes roe deer as reservoir hosts [4]. Babesia spp. are primarily of veterinary importance and cause severe economic losses in cattle and other domestic animals worldwide [5-8]. However, since several species are also known to infect humans, babesiosis is considered an emerging zoonosis in parts of the world [1,9-11].
In European deer, several Babesia spp. have been reported, including B. capreoli, B. venatorum and B. divergens [4,12,13]. There is some uncertainty as to what extent B. divergens is found in deer. Several samples have previously been sequenced and published in public databases as B. divergens or "B. divergens-like". Recent re-sequencing of such samples has, however, convincingly identified them as the closely related B. capreoli [13]. Actual B. divergens has, however, been found in red deer from Ireland [12]. Babesia capreoli is highly similar to B. divergens: the two species differ at only three nucleotide positions in the 18S rRNA gene (99.83% nucleotide similarity) [13]. The two species are considered indistinguishable based on morphological characteristics, so sequencing is necessary to identify them [12,13].
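Because the species assignment hinges on three known positions of the 18S rRNA gene, it can be automated once query sequences are aligned to a common reference. The sketch below is illustrative only: the diagnostic positions (631, 663, 1637) come from the text, but the reference bases are placeholders rather than the actual bases from the GenBank records.

```python
# Diagnostic 18S rRNA positions (1-based, on a reference alignment) that
# separate B. capreoli from B. divergens; the bases below are placeholders.
DIAGNOSTIC_POSITIONS = {
    631:  {"B. capreoli": "A", "B. divergens": "G"},
    663:  {"B. capreoli": "T", "B. divergens": "C"},
    1637: {"B. capreoli": "G", "B. divergens": "A"},
}

def classify_18s(aligned_seq: str) -> str:
    """Assign an aligned 18S sequence by majority vote over the
    diagnostic positions; return 'ambiguous' on a tie."""
    votes = {"B. capreoli": 0, "B. divergens": 0}
    for pos, bases in DIAGNOSTIC_POSITIONS.items():
        base = aligned_seq[pos - 1].upper()
        for species, ref in bases.items():
            if base == ref:
                votes[species] += 1
    if votes["B. capreoli"] == votes["B. divergens"]:
        return "ambiguous"
    return max(votes, key=votes.get)

# Toy usage: a sequence carrying the B. capreoli base at each position.
seq = ["N"] * 1700
for pos, bases in DIAGNOSTIC_POSITIONS.items():
    seq[pos - 1] = bases["B. capreoli"]
print(classify_18s("".join(seq)))  # -> B. capreoli
```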
The roe deer (Capreolus capreolus) is the most common deer species in Sweden. It occurs at moderate to high population densities in the southern third of the country, while population density gradually declines along a northern and western gradient until the species is completely absent in the north-western part of the mountain range [14]. Babesia capreoli was previously reported from Swedish roe deer in the 1970s based on microscopic findings in blood samples [15]. However, these findings have not been confirmed with molecular methods.
In the present study, we investigated the prevalence of Babesia and the diversity of species in roe deer at two sites in south-central Sweden using molecular tools.
Sampling areas
Blood samples were taken from trapped roe deer at two study sites 150 km apart in southern Sweden. Bogesund (59°24′N, 18°12′E) is located at the inner reaches of the Stockholm Archipelago, surrounded by water and covered by highly productive mixed coniferous and deciduous forest and farmland, with high deer densities [16]. Grimsö Wildlife Research Area (59°60′N, 15°16′E) has a roe deer population of much lower density, and colder and longer winters due to its inland location. The area consists primarily of coniferous forest interspersed with bogs, mires and fens [17].
Roe deer capture
A total of 48 adult and juvenile roe deer (> 7 months old) were captured in box traps from January to March 2014, and blood samples were taken from the jugular vein. Captured deer were marked with ear tags bearing unique ID numbers and colours to keep track of individuals. In addition to the adult and juvenile animals, a total of 38 neonate roe deer fawns (1-40 days old) were sampled from May 15th to July 3rd, 2013. Blood was collected from the fawns' tarsal vein.
Ethical approval
The marking and handling of roe deer in this study were approved by the Ethical Committee on Animal Experiments, Uppsala, Sweden (Approval Dnr: C302/2012).
Total nucleic acid extraction and PCR
Total nucleic acid (DNA as well as RNA) was extracted with the PAXgene Blood RNA kit (PreAnalytiX, Qiagen/BD) following the manufacturer's recommendations (without adding DNase). Subsequently, cDNA was synthesized and the total DNA concentration was diluted to 10 ng/µl. PCR detection of Babesia spp. was carried out with the primers BJ1 (5′-GTC TTG TAA TTG GAA TGA TGG-3′) and BN2 (5′-TAG TTT ATG GTT AGG ACT ACG-3′) [18], with the cycling conditions described in Casati et al. [18]. These primers amplify 411-452 bp of the 18S rRNA gene. PCR was performed in a GeneAmp PCR System 9700 (Applied Biosystems). Sanger sequencing of the purified amplicons was performed, and the obtained sequences were subjected to nucleotide BLAST searches against the NCBI database (http://www.ncbi.nlm.nih.gov).
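As a quick sanity check of this primer pair, an in-silico PCR can predict the amplicon from a template sequence. The following sketch is a simplification (exact matching only, whereas real primer annealing tolerates mismatches) and uses a toy template rather than a real 18S sequence.

```python
# Primer sequences from the text (5'->3').
BJ1 = "GTCTTGTAATTGGAATGATGG"
BN2 = "TAGTTTATGGTTAGGACTACG"

COMPLEMENT = str.maketrans("ACGTN", "TGCAN")

def revcomp(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def amplicon(template: str, fwd: str = BJ1, rev: str = BN2):
    """Return the predicted PCR product (primers included), or None."""
    start = template.find(fwd)
    end = template.find(revcomp(rev))  # reverse primer binds the other strand
    if start == -1 or end == -1 or end <= start:
        return None
    return template[start:end + len(rev)]

# Toy template: primer sites separated by 400 bases, giving a product inside
# the 411-452 bp range reported for this assay (21 + 400 + 21 = 442 bp).
template = "AAA" + BJ1 + "N" * 400 + revcomp(BN2) + "TTT"
product = amplicon(template)
print(len(product) if product else "no product")
```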
Results and discussion
We show with molecular methods that two Babesia spp. occur in wild roe deer in Sweden: B. capreoli and B. venatorum. This is, to the best of our knowledge, the first molecular detection of Babesia spp. in any wildlife species in Sweden. In total we obtained 86 blood samples from 77 individual roe deer. Nine individuals were re-captured on separate occasions, most of them within a month of the first capture. Of the recaptured individuals, two went from uninfected to infected, one lost its infection, and two went from being infected with one Babesia sp. to being infected with the other, demonstrating the dynamic nature of Babesia infection in wild animals. Calculations of prevalence are based on the first capture of each individual. In total, 52 % of the individuals (40 out of 77) were infected with Babesia spp. The prevalence of B. capreoli was 44 % (34/77) and the prevalence of B. venatorum was 7.8 % (6/77). Babesia capreoli is thus the dominant Babesia species in Swedish roe deer in the investigated areas, with a remarkably high prevalence that is nevertheless consistent with findings in central Europe, where high infection rates in roe deer have also been reported [19]. Detailed information about the number of samples from animals caught in the different areas and the number of infections is presented in Table 1. The obtained sequences were all 100 % identical to the published B. capreoli sequence FJ944827 and clearly differed from the B. divergens sequence U16370. Babesia capreoli and B. divergens differ from each other by only three nucleotides in the 18S rRNA gene, at positions 631, 663 and 1637 [13]. The first two positions are included in the DNA fragment amplified by the primers used in this study [18].
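Since the reported prevalences are simple binomial proportions, confidence intervals (not given in the text) can be attached for context. The sketch below computes Wilson score intervals for the three estimates; these intervals are our illustration, not results from the paper.

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Prevalences reported above (based on the first capture of each individual).
for label, k in [("Babesia spp.", 40), ("B. capreoli", 34), ("B. venatorum", 6)]:
    lo, hi = wilson_ci(k, 77)
    print(f"{label}: {k}/77 = {k / 77:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```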
The B. venatorum sequences from the Swedish roe deer were identical to sequence KF724377, found in a human infection in China [20]. For B. capreoli, no reports of infections in humans have been published; this species is therefore not likely to be a threat to species other than its natural hosts [13]. Infection has been reported from several different deer species ([13] and references therein). In contrast, B. venatorum apparently has a broader host range and is also capable of infecting humans; it is known to infect chamois (Rupicapra rupicapra) and ibex (Capra ibex) in the Alpine region [21], and has also been found in a captive reindeer (Rangifer sp.) in the Netherlands [22]. Several human cases of B. venatorum have been reported from Europe and more recently from China [2,3,23,24], and the zoonotic potential of this species requires further investigation to correctly estimate the risks to humans and perhaps domestic animals. Babesia venatorum has been reported from questing ticks in Norway [20], and a recent study on Babesia spp. in Ixodes ricinus in Sweden reported that 1 % of the investigated ticks were infected with B. venatorum [25]. Interestingly, no ticks in that study were infected with B. capreoli, contrasting with the high prevalence found in roe deer in the present study. To better understand the importance of Babesia spp. as infectious agents in Sweden, their occurrence in several wild and domestic mammal species, as well as in humans, needs to be investigated.
Image of the braid groups inside the finite Iwahori-Hecke algebras
We determine the image of the braid groups inside the Iwahori-Hecke algebras of type A, when defined over a finite field, in the semisimple case, and for suitably large (but controllable) order of the defining (quantum) parameter.
Introduction
The point of this paper is to enhance our understanding of the connection between braid groups and Hecke algebras of type A. This interplay has been at the core of the definition of the Jones and subsequently HOMFLYPT polynomials of knots and links, and is the source of the most classical linear representations of the braid groups. Because of that, it has also been used for the purpose of inverse Galois theory, in that case with coefficients in a finite field. Our aim here is to understand better the image of the braid group inside the (group of invertible elements of the) Hecke algebra, and especially to describe the finite group which is the image of the braid group inside the Hecke algebra over a finite field. We first review briefly what is known.
The closed image of the braid group inside the Hecke algebra over the complex numbers was essentially determined in the first decade of the century. In this setting, it had been proved earlier by Jones and Wenzl that the Hecke algebra representations provide unitary representations of the braid group for suitable parameters. Using this, the closed image in these unitary cases was determined in [FLW]. Simultaneously and independently, the third author, in his 2001 doctoral thesis (see [M0]), introduced a Lie algebra subsequently identified (see [M2]) with the Lie algebra of the algebraic closure of the image of $B_n$ in the generic (but not necessarily unitary) case. When the representation is known to be unitary, the algebraic closure determines the topological closure. On the other hand, the approach of [FLW] provides more precise information on specific values of the parameters, specifically when the parameter is a root of 1. Finally, other proofs and sources of justification for the unitary structures, sometimes in a broader context, have been provided in [M1] and [M3], part IV.
Back to the finite field situation, the classical "strong approximation" results suggest that, "most of the time", we should get as images groups of $\mathbb{F}_q$-points of the algebraic groups defined above. This assertion is very vague, because the algebraic groups are not a priori defined over $\mathbb{Z}$ and because there is a parameter involved in the definition of the Hecke algebra that prevents the direct use of these classical results. Also, there is the question of unitarity, which needs some work to be translated into the finite field case. Nevertheless, the first and third authors proved in [BM] that we can get the expected result for the quotient of the Hecke algebra known as the Temperley-Lieb algebra, under only a few conditions, the most restrictive of these being that the corresponding Hecke algebra is semisimple. By classical results from representation theory this last condition can be made precise in terms of the order of the parameter inside $\mathbb{F}_q^\times$ and in terms of the number $n$ of strands. In this paper we extend this to the full Hecke algebra, under the same conditions. For technical reasons we found it more handy to deal with the commutator subgroup $B_n'$ of $B_n$ instead of $B_n$ itself. Since $B_n^{\mathrm{ab}} \simeq \mathbb{Z}$ this does not diminish the strength of the results, and at the same time it makes many proofs and statements more readable.
We now state the main result. We let $E_n$ denote the set of partitions of $n$ which are not hooks. We choose some total ordering $<$ on $E_n$. Let $b(\lambda) = \max\{i \,;\, \lambda_i \geq i\}$ denote the length of the diagonal of the Young diagram associated to $\lambda \vdash n$, and set $\nu(\lambda) = 1$ if $(n - b(\lambda))/2$ is even, $\nu(\lambda) = -1$ otherwise. Without loss of generality we assume that $\mathbb{F}_q = \mathbb{F}_p(\alpha)$, and we denote by $GL(\lambda)$ the group of linear automorphisms of the $\mathbb{F}_q$-vector space associated to the representation of $H_n(\alpha)$ indexed by $\lambda$. Because of the existence of the explicit matrix models recalled below, we know that these representations are indeed defined over $\mathbb{F}_q = \mathbb{F}_p(\alpha)$. In §3 we attach to each $\lambda \in E_n$ a classical subgroup $G(\lambda)$ of $GL(\lambda)$ which contains the image of $B_n'$. Letting $N$ denote the dimension of the representation attached to $\lambda$, we have the following, where we use the classical notations of e.g. [W]. In particular $\Omega^+_N(q)$ is the commutator subgroup of the orthogonal group for a form of '+' type, meaning that it has maximal Witt index $N/2$.
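The combinatorial quantities entering this statement are easy to compute. The Python sketch below (ours, not from the paper; in particular, the integer division in nu is our reading of $(n - b(\lambda))/2$ when that quantity is not an integer) detects hooks and computes $b(\lambda)$, $\nu(\lambda)$ and the dimension $N$ of the representation attached to $\lambda$ via the hook length formula.

```python
from math import factorial

def is_hook(la: tuple) -> bool:
    """A hook is a partition of the form [n - r, 1, ..., 1]."""
    return all(part == 1 for part in la[1:])

def b(la: tuple) -> int:
    """Length of the diagonal of the Young diagram: max{i ; la_i >= i}."""
    return max(i + 1 for i, part in enumerate(la) if part >= i + 1)

def nu(la: tuple) -> int:
    """nu(la) = 1 if (n - b(la))/2 is even and -1 otherwise, with integer
    division used when n - b(la) is odd (our assumption)."""
    n = sum(la)
    return 1 if ((n - b(la)) // 2) % 2 == 0 else -1

def dim(la: tuple) -> int:
    """Number of standard tableaux of shape la (hook length formula)."""
    n = sum(la)
    col = [sum(1 for part in la if part > j) for j in range(la[0])]
    hooks = 1
    for i, part in enumerate(la):
        for j in range(part):
            hooks *= (part - j) + (col[j] - i) - 1
    return factorial(n) // hooks

la = (3, 2, 2)  # a partition of 7 which is not a hook
print(is_hook(la), b(la), nu(la), dim(la))  # False 2 1 21
```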
We recall that the Hecke algebra $H_n(\alpha)$, for $\alpha \in \mathbb{F}_q^\times$, can be defined as the quotient of the group algebra $\mathbb{F}_q B_n$ of the braid group $B_n$ by the relations $(\sigma_i + 1)(\sigma_i - \alpha) = 0$, where the $\sigma_i$ are the usual Artin generators of $B_n$. The algebra $H_n(\alpha)$ is semisimple when the order of $\alpha$ is greater than $n$, and this provides an isomorphism $H_n(\alpha)^\times \simeq \prod_{\lambda \vdash n} GL(\lambda)$.
Then, our main theorem states the following.
The additional condition, that the order of $\alpha$ is not 2, 3, 4, 5, 6 or 10, was expected, for the image of $B_3$ in these cases may in general factorize through the quotients of $B_3$ by the relations $\sigma_i^r = 1$ for $r \in \{2, 3, 4, 5\}$, which are imprimitive reflection groups of rank 2 (see [C]). We now explain the plan of the proof. The price for using the commutator subgroup instead of the full braid group is that we need a few additional technicalities, which we gather in §2. The first step of the actual proof is then to get a description of the algebraic groups involved here in very explicit terms. For this we use Hoefsmit's combinatorial matrix models in order to define the expected orthogonal and symplectic forms as well as the expected diagonal embeddings (see §3). Using the vanishing of the Brauer group of finite fields, we show how to convert the unitarity property into a well-defined algebraic group over a smaller field (§4). Then we proceed by an induction argument (§5) in order to prove that the image of $B_n'$ is what we expect it to be. By [BM] we know it for $n \leq 5$. We first show that, assuming the result for some $n \geq 5$, we can determine the image of $B_{n+1}'$ inside every single irreducible representation of the Hecke algebra. For this, our main tool is a theorem of Guralnick and Saxl on subgroups of finite classical groups acting irreducibly on the underlying vector space (notice that this theorem depends on the classification of finite simple groups). Then, as in [BM], we glue the pieces together in order to get the result for $n + 1$ using Goursat's lemma. Finally, we indicate how the proof needs to be modified in case the order of the parameter implies that a unitary structure is involved.
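The numerical hypotheses of the theorem, namely that the order of $\alpha$ exceeds $n$ (semisimplicity) and avoids $\{2, 3, 4, 5, 6, 10\}$, are straightforward to test. The Python fragment below is illustrative only and, for simplicity, assumes $q$ prime, so that $\mathbb{F}_q$ can be modeled as the integers modulo $q$.

```python
def multiplicative_order(alpha: int, q: int) -> int:
    """Order of alpha in the multiplicative group of F_q, with q prime
    so that F_q is modeled as Z/qZ."""
    assert alpha % q != 0
    x, k = alpha % q, 1
    while x != 1:
        x = (x * alpha) % q
        k += 1
    return k

def hypotheses_hold(alpha: int, q: int, n: int) -> bool:
    """order(alpha) > n (so H_n(alpha) is semisimple) and
    order(alpha) not in the excluded set {2, 3, 4, 5, 6, 10}."""
    d = multiplicative_order(alpha, q)
    return d > n and d not in {2, 3, 4, 5, 6, 10}

print(hypotheses_hold(alpha=3, q=101, n=6))  # order of 3 mod 101 is 100 -> True
```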
Generalizations of this work can be expected in two directions. One of them is to look at what happens for the generalized braid groups associated to other (real or complex) reflection groups. The "generic image", that is the Zariski closure over a field of characteristic 0 and for generic values of the parameters, has been computed in [M3]. Moreover, the unitarity property has been proved for all Coxeter groups and most of the complex reflection groups, and is conjectured to hold in general (see [M3], part III, §6). However, the interplay between the unitarity property and the algebraic structure, when looked at carefully, presents some additional difficulties for complex reflection groups; see [M3], part IV, §5 and remark 5.9 there. When the reflection group is not rational, there is moreover a specialization issue, because the base ring $\mathbb{Z}[q, q^{-1}]$ needs to be replaced by a ring of Laurent polynomials over a larger ring of algebraic integers. Finally, for exceptional complex reflection groups, even the basic structure theorems for the Hecke algebra are still conjectural (see [M4] for an overview and recent results). Even in the Coxeter case, quite a few of the tools we used here cannot be applied directly in the more general context. Moreover, the Hecke algebras may involve several parameters, and also because of that the unitarity property may be more tricky to handle. As an example of what may happen, let us mention that the image of the generalized braid group of Coxeter type $H_4$ should be quite interesting, because the representations of the reflection group can be defined only over $\mathbb{Q}(\sqrt{5})$, and because a $\mathrm{Spin}_8$ group appears in the description of the generic image.
A second natural direction is to try to understand what happens in the non-semisimple case, that is, when the order of $\alpha$ is less than or equal to $n$. As far as we know, this is still completely unexplored territory, even over the complex numbers when $\alpha$ is a root of 1.
Preliminaries on braid groups
We let $B_n'$ denote the commutator subgroup of the braid group $B_n$ on $n$ strands, and always identify $B_{n-1}$ with the subgroup of $B_n$ fixing the last strand.
Lemma 2.1. If $n \geq 4$ then $B_n'$ is the normal closure of $B_{n-1}'$.
Proof. Recall that the abelianization morphism $\ell : B_n \to \mathbb{Z}$ is given by $s_i \mapsto 1$. From the Reidemeister-Schreier method, or even elementary group theory, we know that $B_n'$ is generated by the elements $s_1^k s_j s_1^{-k-1}$ for $j \geq 1$, $k \in \mathbb{Z}$. When $j > 2$ we have $s_1^k s_j s_1^{-k-1} = s_j s_1^{-1}$, which proves that $B_n'$ is generated by $B_{n-1}'$ and $s_n s_1^{-1}$. Now the braid relation $s_{n-1} s_n s_{n-1} = s_n s_{n-1} s_n$ implies $s_n = s_{n-1} s_n s_{n-1} (s_{n-1} s_n)^{-1}$, hence $s_n s_1^{-1} = (s_{n-1} s_n)\, s_{n-1} s_1^{-1}\, (s_{n-1} s_n)^{-1} = (s_{n-1} s_1^{-1} s_n s_1^{-1})\, s_{n-1} s_1^{-1}\, (s_{n-1} s_1^{-1} s_n s_1^{-1})^{-1}$ belongs to the normal closure of $B_{n-1}'$, and this proves the claim.
In order to use known representation-theoretic results for the braid group, we shall need to lift isomorphisms between the restrictions of these representations to $B_n'$. This will be done by applying the following general lemma.
We shall also use the following result.
Proof. Without loss of generality we can assume that $K$ is algebraically closed. Let $S_i = \varphi(s_i)$. If one of the $S_i$ is 1, the same holds for the others, since they are all conjugate to one another; hence $\varphi = 1$. Also note that if two consecutive $S_i$ commute, then the braid relation implies $S_i = S_{i+1}$; this implies that all the $S_i$ are equal, and therefore that $\varphi(B_n)$ is abelian, because every pair $(s_i, s_{i+1})$ is easily seen to be conjugate to any other pair $(s_j, s_{j+1})$ by an element of $B_n$.
Let $i$ denote a primitive 4-th root of 1, and let $E$ be the image of $\left(\begin{smallmatrix} i & 0 \\ 0 & -i \end{smallmatrix}\right) \in SL_2(K)$ inside $PSL_2(K)$.
We let $T$ denote the image of the diagonal matrices of determinant 1 inside $PSL_2(K)$, and $T'$ the image of the antidiagonal matrices. We first assume that $S_1$ is semisimple; then all the $S_i$ are semisimple. Up to conjugation, we can assume that $S_1 \in T$. Then the centralizer of $S_1$ is $T$, unless $S_1 = E$, in which case it is $T \cup T'$. If the centralizer is $T$, then $S_3, S_4 \in T$, so $S_3 S_4 = S_4 S_3$, which implies that $\varphi(B_n)$ is abelian. If not, we have $S_1 = E$, hence $S_1^2 = 1$ and therefore $S_i^2 = 1$ for all $i$. It follows that $\varphi$ factorizes through a morphism $S_n \to PSL_2(K)$. If $\varphi(B_n)$ is not abelian, the morphism $S_n \to PSL_2(K)$ is injective. But for $n \geq 5$ this contradicts Dickson's theorem (see e.g. [S], ch. 3, theorem 6.17). This proves the statement under the assumption that $S_1$ is semisimple.
If not, $S_1$ is unipotent and we can assume that $S_1$ is upper triangular. Then its centralizer consists of the images inside $PGL_2(K)$ of the upper-triangular matrices. It follows that $S_3$ and $S_4$ commute, and we conclude as before.
The statement we are mostly interested in is the following one.
Proof. A presentation of $B_n'$ was obtained by Gorin and Lin in [GL], Theorem 2.1, and we use it here. The group $B_n'$ is generated by elements $p_0$ ($= s_2 s_1^{-1}$), $p_1$, $b$ and $q_3, \ldots, q_{n-1}$, subject to relations which we refer to by the numbering (1)-(7) of [GL]. By abuse of notation, we identify these generators with their images under $\varphi$, and we show that they all become trivial. First note that, if one of the $q_i$ is 1, then all the others are equal to 1 by relation (7), and then $b = 1$ by (3), $p_0 = p_1$ by (5) and $p_0 = 1$ by (6). Conversely, $p_0 = 1 \Leftrightarrow p_1 = 1$ by (5), and in this case $b = q_3$ by (1) and (2). Now note that we have a morphism $B_{n-2}' \to B_n'$ defined by $s_i \mapsto q_{i+2}$. By the above proposition we get that the $q_i$, $i \geq 3$, commute with one another, and therefore are all equal to some element $q$; the relations then force $q = 1$, a contradiction which proves the claim.
The main factorisation
We recall that $H_n(\alpha)$ is semisimple as soon as the order of $\alpha \in \mathbb{F}_q^\times$ is greater than $n$. Moreover, in this case its simple modules are absolutely simple (see e.g. [Mat], cor. 3.44), and they are in 1-1 correspondence with the partitions of $n$. We now recall from [GP] explicit matrix models for these irreducible representations.
A combinatorial Gelfand model of $H_n(\alpha)$ is given by an $\mathbb{F}_q$-vector space $V$ whose basis consists of all the standard tableaux of size $n$. For each partition $\lambda \vdash n$, we denote by $V_\lambda$ the linear span of the standard tableaux of shape $\lambda$.
The action of the $r$-th generator on a standard tableau $\mathcal{T}$ is given by the following rules: (i) if $r$ and $r+1$ lie in the same row of $\mathcal{T}$, then $s_r.\mathcal{T} = \alpha\mathcal{T}$; (ii) if $r$ and $r+1$ lie in the same column of $\mathcal{T}$, then $s_r.\mathcal{T} = -\mathcal{T}$; (iii) otherwise, the action is given by an explicit formula in terms of the content of the box containing $m$, which lies in line $i$ and column $j$, and of the tableau $\mathcal{T}_{r \leftrightarrow r+1}$ obtained from $\mathcal{T}$ by interchanging $r$ and $r+1$.
Notice that $(\mathcal{T}_{r \leftrightarrow r+1})' = (\mathcal{T}')_{r \leftrightarrow r+1}$, where $\mathcal{T}'$ denotes the transpose of $\mathcal{T}$. Moreover, if we let $i$ denote the row and $j$ the column where $r$ lies, and similarly $u, v$ for $r+1$, one checks easily the corresponding relation between the matrix coefficients for $\mathcal{T}$ and $\mathcal{T}'$. We define a bilinear form $(\,|\,)$ on $V$ by a formula involving a weight $w(\mathcal{T})$, where $r_k(\mathcal{T})$ denotes the row of $\mathcal{T}$ in which $k$ lies.
The form $(\,|\,)$ is nondegenerate. Its restriction to $V_\lambda$ is symmetric if $\nu(\lambda) = 1$, and skew-symmetric otherwise. When it is symmetric, it has maximal Witt index.
If $r$ and $r+1$ lie in the same row or the same column of $\mathcal{T}_1$, the LHS and RHS are both 0 unless $\mathcal{T}_2 = \mathcal{T}_1'$, and in that case the verification of the formula is immediate. If not, the LHS and RHS are again both 0, except in two cases that we consider separately. In the first one, we have $\mathcal{T}_1 = \mathcal{T}$, $\mathcal{T}_2 = \mathcal{T}'$; in that case $(s_r.\mathcal{T}_1 | s_r.\mathcal{T}_2) = (s_r.\mathcal{T} | s_r.\mathcal{T}')$ and the identity follows from the action rules above. In the other case we have $\mathcal{T}_1 = \mathcal{T}$, $\mathcal{T}_2 = \mathcal{T}'_{r \leftrightarrow r+1}$, so that $(s_r.\mathcal{T}_1 | s_r.\mathcal{T}_2) = (s_r.\mathcal{T} | s_r.\mathcal{T}'_{r \leftrightarrow r+1})$,
hence the equations hold in both cases because of the elementary properties of $w$, namely $w(\mathcal{T}_{r \leftrightarrow r+1}) = -w(\mathcal{T})$. We now prove (iii). The non-degeneracy of $(\,|\,)$ follows from the decomposition of $V_\lambda$ as an orthogonal direct sum of planes spanned by the pairs $\mathcal{T}, \mathcal{T}'$, on which $(\,|\,)$ is clearly nondegenerate. We now consider the possible symmetry of the restriction of $(\,|\,)$ to some $V_\lambda$ with $\lambda = \lambda'$. We proved in [M2], Lemme 6, that $w(\mathcal{T})w(\mathcal{T}')$ only depends on the shape $\lambda$ of $\mathcal{T}$ and is equal to $\nu(\lambda)$; from this we get the conclusion. Finally, the computation of the Witt index in the symmetric case is an immediate consequence of the direct sum decomposition into hyperbolic planes already mentioned.
Proof. We check that the actions of the LHS and RHS coincide on every standard tableau T of shape λ. When s_r.T is proportional to T, this directly follows from the formula w(T)w(T′) = ν(λ). Otherwise, we restrict the action of s_r to the linear span of T, T_{r↔r+1} and consider its matrix w.r.t. the basis (T, T_{r↔r+1}); comparing it with the corresponding matrix on the RHS proves the formula.
As a consequence, we get Lemma 3.4. If the order of α is > n, and n ≥ 2, then the following are true.
If the restrictions to B_n′ of R_λ and of the dual representation of R_µ are isomorphic, then λ = µ′.
Proof. We prove (i) by induction on n, the cases n ≤ 5 being a consequence of [BM]. Let U be a B_n′-stable subspace of V_λ ⊗_{F_q} k, for some extension k of F_q. By the branching rule and the induction assumption, the action of B_{n−1}′ on V_λ is semisimple, and the decomposition of V_λ as a direct sum of simple modules for H_{n−1} is also a decomposition as a sum of simple modules for B_{n−1}′. From this it follows that every simple B_{n−1}′-submodule of U is also H_{n−1}-stable, hence U, being semisimple, is also H_{n−1}-stable. Since s_{n−1}s_1^{−1} ∈ B_n′ and s_1 ∈ H_{n−1}, the subspace U is then stable under s_{n−1} as well, hence under all of H_n; it follows that U is 0 or V_λ ⊗ k, and this concludes the proof of (i). We now prove (ii). By lemma 2.2, and because the abelianization of B_n is given by the length morphism ℓ, there exists u ∈ F_q^× such that R_µ(b) = u^{ℓ(b)}R_λ(b) for all b ∈ B_n. This implies that the spectrum of R_µ(s_1), which is {−1, α}, is also equal to {−u, uα}. Since we assumed α² ≠ 1 this is possible only if u = 1, hence R_µ = R_λ, which is excluded because these two representations of the Hecke algebra are non-isomorphic by assumption.
The proof of (iii) is similar, once we notice that the restriction to B_n′ of the dual representation of R_µ is isomorphic to the restriction of R_{µ′}, by the above results.
We now let λ = λ_r = [n − r, 1^r] and we want to compare R_{λ_r} with Λ^r R_{λ_1}. Any standard tableau of shape λ_r can be indexed by a set of indices I = {i_1, …, i_r} ⊂ {2, …, n}, assuming i_1 < ⋯ < i_r, where each i_k is the content of the unique box of the diagram in line k + 1. We let v_I denote the corresponding standard tableau, and we let v_i = v_{{i}}. Note that, when {k, k + 1} ⊂ I, then ct(v_I : k)/ct(v_I : k + 1) only depends on the number of boxes lying between k and k + 1 inside the hook-shaped tableau v_I, and therefore only on k. From this we get by explicit computation the action of the generators on v_I when k ∈ I but k + 1 ∉ I. On the other hand, to such an I = {i_1 < ⋯ < i_r} we can associate the element v_{i_1} ∧ ⋯ ∧ v_{i_r} of Λ^r V_{λ_1}, meaning that, if we identify these two vector spaces via v_I ↔ v_{i_1} ∧ ⋯ ∧ v_{i_r}, we have (Λ^r R_{λ_1})(s_r) = α^{r−1}R_{λ_r}(s_r). Therefore, we get (Λ^r R_{λ_1})(g) = α^{(r−1)ℓ(g)}R_{λ_r}(g) for all g ∈ B_n. Now assume that F_q = F_p(α) but F_p(α + α^{−1}) ≠ F_q. In that case there exists an involutive field automorphism ε : x ↦ x̄ of F_q defined by ᾱ = α^{−1}. We define a hermitian form ⟨ , ⟩ on V in terms of the quantities c(k) and r(k), where c(k), respectively r(k), denotes the column, respectively the row, of T where k lies.
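The indexing of hook-shaped tableaux by subsets can be spot-checked numerically (a Python sketch; the helper names are ours): the number of standard tableaux of shape [n − r, 1^r] equals C(n − 1, r), which is also the dimension of Λ^r V_{λ_1}, as required for the comparison above.

from math import comb, factorial

def num_syt_hook(n, r):
    """Standard tableaux of shape [n-r, 1, ..., 1] via the hook length formula."""
    la = (n - r,) + (1,) * r
    hooks = 1
    for i, row in enumerate(la):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for rr in la[i + 1:] if rr > j)
            hooks *= arm + leg + 1
    return factorial(n) // hooks

for n in range(3, 9):
    for r in range(0, n - 1):
        # dim V_{lambda_r} = C(n-1, r) = dim Lambda^r of the (n-1)-dimensional
        # module V_{lambda_1}, consistent with comparing R_{lambda_r} with
        # Lambda^r R_{lambda_1}.
        assert num_syt_hook(n, r) == comb(n - 1, r)
print("hook shapes: dimensions match binomial coefficients")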
Proposition 3.6. The action of B_n on V is unitary with respect to the above hermitian form. The restriction of this hermitian form to every subspace V_λ is nondegenerate.
Proof. We need to check that ⟨s_r.T_1, s_r.T_2⟩ = ⟨T_1, T_2⟩ for all standard tableaux T_1, T_2. If r and r + 1 lie in the same row or the same column of T_1 or T_2, then the equality simply follows from αᾱ = 1 and (−1)² = 1. If not, then we can assume that T_2 is either T = T_1 or T_{r↔r+1}, and thus we only need to check that the action of s_r on the plane spanned by T, T_{r↔r+1} is unitary with respect to the induced hermitian form. We express s_r in the basis (T, T_{r↔r+1}). In order to check the unitarity, up to a harmless exchange of T and T_{r↔r+1}, we can assume that r_T(r) < r_T(r + 1). Then we get an explicit 2 × 2 matrix, where (i, j) and (u, v) are the coordinates of r and r + 1 inside T, respectively. It remains to check that, if D denotes the matrix of the induced hermitian form and S_r is the 2 × 2 matrix representing the action of s_r, then DS_r = ᵗS̄_r^{−1}D, and this is straightforward. The nondegeneracy statement is obvious.
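The unitarity criterion used at the end of the proof is the standard one. Here is a complex-number analogue (a Python sketch, with complex conjugation standing in for the involution ε; the matrices and the form are sample choices of ours): S preserves the hermitian form D exactly when conj(S)^T D S = D, equivalently DS = (conj(S)^T)^{-1} D.

import numpy as np

D = np.diag([1.0, 2.0])          # a nondegenerate hermitian form (sample choice)
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Conjugating a unitary matrix by D^(1/2) produces an isometry of the form D.
Dh = np.diag([1.0, np.sqrt(2.0)])
S = np.linalg.inv(Dh) @ U @ Dh

lhs = S.conj().T @ D @ S
assert np.allclose(lhs, D)                          # <S v, S w> = <v, w>
assert np.allclose(D @ S, np.linalg.inv(S.conj().T) @ D)
print("S is an isometry of the hermitian form D")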
In these circumstances, we have the following. Lemma 3.7. Assume that the order of α is > n, that F_p(α + α^{−1}) ≠ F_q, and that λ, µ ⊢ n with dim V_λ > 1. If n ≥ 3, then the restrictions to B_n′ of R_λ and R_µ* are isomorphic iff λ = µ.
Proof. Because of the unitary structure we get that the restrictions to B_n′ of R_λ and of its conjugate-dual R_λ* are isomorphic. Under the assumptions of the lemma this means that the restrictions of R_λ and R_µ are isomorphic, and this implies λ = µ by lemma 3.4.
Representation-theoretic technicalities
We also need to consider the set of elements that preserve both a unitary and an orthogonal/symplectic form. If ϕ denotes a nondegenerate bilinear form over F_q^N, we let OSP_N(ϕ) denote the group of isometries of this form; if ψ is a hermitian form, we let U_N(ψ) denote its group of isometries. We will use the following property, which is probably folklore.
Proposition 4.1. Let q = u², ϕ a nondegenerate bilinear form over F_q^N, ψ a nondegenerate hermitian form over F_q^N. If G ⊂ OSP_N(ϕ) ∩ U_N(ψ) is absolutely irreducible, then there exist x ∈ GL_N(q) and a nondegenerate bilinear form ϕ′ over F_u^N such that ˣG ⊂ OSP(ϕ′). Moreover, ϕ′ is (skew-)symmetric if and only if ϕ is so.
Proof. We let R : G → GL_N(q) denote the natural inclusion, and we consider it as a linear representation of G. We set Γ = Gal(F_q/F_u) = {Id, ε} and use both notations ε(x) = x̄. We have R* ≃ R and R̄* ≃ R, hence R̄ ≃ R. As a consequence there exists P ∈ GL_N(q) such that R̄(g) = PR(g)P^{−1} for all g ∈ G. It follows that P̄P commutes with every R(g). By Schur's lemma and the absolute irreducibility of G we get P̄P ∈ (F_q^×)^Γ = F_u^×. Since the norm map F_q^× → F_u^× is surjective, we have P̄P = λλ̄ for some λ ∈ F_q^× and thus, replacing if needed P with Pλ^{−1}, we may assume P̄P = Id. Then Id ↦ Id, ε ↦ P defines an element in Z¹(Γ, GL_N(q)). By Hilbert's theorem 90 it follows that there exists S ∈ GL_N(q) such that P = S̄S^{−1}. Then, setting R′(g) = S^{−1}R(g)S, we have R′(g) ∈ GL_N(u). Moreover, R′(g) preserves the bilinear form deduced from ϕ: in matrix form, if W denotes the matrix of ϕ in the canonical basis of F_q^N, we have ᵗR(g)WR(g) = W for all g ∈ G, hence R′(g) preserves the bilinear form ϕ_S given by the matrix W_S = ᵗSWS ∈ GL_N(q). Since R′(g) ∈ GL_N(u) it also preserves all the W_λ = λW_S + λ̄W̄_S for λ ∈ F_q^×. Since W_S ≠ 0, there exists λ ∈ F_q^× such that W_λ ≠ 0, for otherwise we would have λ/λ̄ = µ/µ̄ for all λ, µ ∈ F_q^×, and this would imply u = q. Then W_λ for such a λ defines a bilinear form ϕ′ over F_u^N, and we have R′(g) ∈ OSP(ϕ′) for all g ∈ G, hence ˣG ⊂ OSP(ϕ′) for x = S^{−1}. The last part of the statement is a consequence of our construction of ϕ′.
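The Hilbert 90 step can be illustrated in its multiplicative form in the smallest case F_9/F_3 (a Python sketch; here the cocycle is scalar rather than matrix-valued): with x̄ = x³, every element of norm 1, i.e. with P·P̄ = 1, has the form S̄/S = S² for some nonzero S.

p = 3
# F_9 = F_3[i] with i^2 = -1; elements are pairs (a, b) <-> a + b*i.
def mul(x, y):
    a, b = x; c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def frob(x):  # x -> x^3, the nontrivial automorphism of F_9 over F_3
    y = mul(x, x)
    return mul(y, x)

units = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]
one = (1, 0)
norm_one = {x for x in units if mul(x, frob(x)) == one}
image = {mul(frob(s), inv) for s in units
         for inv in units if mul(s, inv) == one}   # all s_bar * s^(-1)
assert norm_one == image
print(sorted(norm_one))  # 4 elements, matching |ker(norm)| = (9-1)/(3-1) = 4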
Proof of the main theorem
We let E_n denote the set of partitions of n which are not hooks. From section 3 we know that the morphism B_n′ → H_n(α)^× ≃ ∏_{λ⊢n} GL(λ) factorizes through a morphism Φ_n, where OSP′(λ) = G(λ) denotes the commutator subgroup of the group of isometries of the bilinear form defined in section 3. In particular, when λ = λ′, p ≠ 2 and the action of the braid group on V_λ preserves an orthogonal form, OSP′(λ) denotes the group classically denoted Ω⁺_N(q) (see [W]), where N = dim V_λ. We assume that F_p(α + α^{−1}) = F_q = F_p(α) and, as in [BM], that the order of α ∈ F_q^× is not 2, 3, 4, 5, 6, or 10. Theorem 1.1 in that case states that Φ_n is surjective when the order of α is in addition greater than n. For n ≤ 5 this is a consequence of [BM]. We then proceed by induction on n, assuming that Φ_{n−1} is surjective and n ≥ 6. We first prove that each of the composites R_λ of Φ_n with the projection on the quasi-simple factor attached to λ is surjective. For this, let λ ∈ E_n. If λ has at most two rows or at most two columns this is a consequence of [BM], so we can assume that λ contains [3, 2, 1], hence dim V_λ ≥ 16. Moreover, for n = 6 the only case to be taken care of is λ = [3, 2, 1]. Finally note that, since n ≥ 6, our assumptions imply that α has order at least 7, hence q ≥ 8.
We use the notation µ ⊂ λ to indicate the inclusion of the corresponding Young diagrams, namely that µ_i ≤ λ_i for all i. By the induction assumption, we know the following.
• If λ ≠ λ′, there exists µ ⊂ λ of size n − 1 such that µ′ ⊄ λ and such that µ ⊃ [3, 2] or µ ⊃ [2, 2, 1] (this is because λ is equal to the union of the µ's of size n − 1 contained in it such that µ ⊃ [3, 2] or µ ⊃ [2, 2, 1]). In particular µ ≠ µ′. Since µ is not a hook, by the induction assumption it follows that the image of B_{n−1}′ contains a direct factor SL(µ), and in particular some SL_2(q) acting naturally on some 2-dimensional subspace and some SL_3(q) acting naturally on some 3-dimensional subspace.
• If λ = λ′ and there exists [3, 2] ⊂ µ ⊂ λ of size n − 1 with µ ≠ µ′, then µ′ ⊂ λ. By the induction assumption the image of B_{n−1}′ contains a subgroup acting on a subspace of dimension 2 dim V_µ as {x ⊕ ᵗx^{−1} | x ∈ SL_{dim V_µ}(q)}. Since dim V_µ ≥ 3 it contains in particular a subgroup acting on a subspace of dimension 4 as {x ⊕ ᵗx^{−1} | x ∈ SL_2(q)}, and a subgroup acting on some 6-dimensional subspace as {x ⊕ ᵗx^{−1} | x ∈ SL_3(q)}.
• If λ = λ′ and there does not exist [3, 2] ⊂ µ ⊂ λ of size n − 1 with µ ≠ µ′, then it is easily checked that λ is a square diagram, hence that the restriction of λ to S_{n−1} is irreducible, and that the corresponding diagram µ satisfies µ = µ′, µ ⊃ [3, 2, 1].
Since the restriction to S_{n−1} is irreducible, one can check that OSP(µ) = OSP(λ); hence, since G ⊂ OSP′(λ), we get G = OSP′(λ), and this case does not need to be considered further. We notice that {x ⊕ ᵗx^{−1} | x ∈ SL_2(q)} contains elements y with dim(y − 1)V = 2 (for instance the image of a transvection); hence R_λ(B_n′) contains in all cases an element x such that [x, V_λ] = (x − 1)V_λ has dimension 2, this being obvious when it contains a natural SL_2(q). We then use the following result of [GS], for V = V_λ and G = R_λ(B_n′).
Theorem 5.1 ([GS], Theorem 7.A). Let V be a finite-dimensional vector space of dimension d > 8 over an algebraically closed field of characteristic p > 0. Let G be a finite irreducible subgroup of GL(V) which is primitive and tensor-indecomposable on V. Define ν_G(V) to be the minimal dimension of [βg, V] = (βg − 1)V for g ∈ G and β a scalar with βg ≠ 1. Then either ν_G(V) > max(2, √d/2), or one of the following holds: (i) G is classical in a natural representation; (ii) G is alternating or symmetric of degree c and V is the deleted permutation module, of dimension c − 1 or c − 2; (iii) F*(G) = U_5(2) with p = 2, d = 10.
Note that (iii) does not occur because d ≥ 16. If G contains a natural SL 2 (q), then G is tensor-indecomposable by the following lemma.
Proof. Let r denote the order of g. Since the base field has characteristic p and g is semisimple, r is coprime to p. Assume by contradiction that G is tensor-decomposable. Then g could be written g_1 ⊗ g_2, and g^r = 1 implies that g_1^r = t and g_2^r = t^{−1} for some scalar t. Since r is prime to p, X^r − t^{±1} has no multiple root, and thus g_1 and g_2 are semisimple.
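The mechanism of this step can be illustrated numerically over C (a Python sketch with sample matrices of ours): if g = g_1 ⊗ g_2 satisfies g^r = 1, then g_1^r and g_2^r are mutually inverse scalar matrices.

import numpy as np

z9 = np.exp(2j * np.pi / 9)
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])          # a rotation of order 3
g1, g2 = z9 * R, (1 / z9) * R
g = np.kron(g1, g2)
r = 3
assert np.allclose(np.linalg.matrix_power(g, r), np.eye(4))
# The tensor factors are only determined up to a scalar: here
# g1^3 = z9^3 * Id and g2^3 = z9^(-3) * Id, i.e. t * Id and t^(-1) * Id.
t = z9 ** 3
assert np.allclose(np.linalg.matrix_power(g1, r), t * np.eye(2))
assert np.allclose(np.linalg.matrix_power(g2, r), (1 / t) * np.eye(2))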
If G does not contain a natural SL_2(q), then it contains a twisted-diagonal embedding of SL_2(q) and therefore an element which is conjugate to diag(ζ, ζ, ζ^{−1}, ζ^{−1}, 1, …, 1) with ζ of order q − 1. It is therefore tensor-indecomposable by the following lemma.
Proof. We let g denote the element of the statement, and assume by contradiction that g = g_1 ⊗ g_2 with g_1 of size a, g_2 of size b, ab = d and a, b ≥ 3. Since d ≥ 16 we can assume a ≤ b, so that b ≥ √d ≥ 4. As in the proof of the previous lemma, the order condition implies that g_1 and g_2 are semisimple. Let λ_1, λ_2, … and µ_1, µ_2, … denote the eigenvalues of g_1 and g_2, respectively. Up to reordering we can assume λ_1µ_1 = ζ.
We now want to rule out case (ii) of theorem 5.1. For this, we first consider the case where G contains a natural SL_2(q). In particular, it contains an element g of order q − 1 such that dim[g, V] = 2. In case G ⊂ S_m and V is the deleted permutation module of S_m of dimension N = m − 1 or N = m − 2, we notice that, the order of g being coprime to p, it acts as a semisimple endomorphism on the permutation module Ṽ of S_m; since the composition factors of Ṽ are V together with one or two copies of the trivial module, we get that dim[g, Ṽ] = dim[g, V]. But the condition dim[g, V] ≤ 2 implies that g ∈ S_m has order at most 3, a contradiction since q ≥ 8. The other case is when G contains a twisted-diagonal embedding of SL_2(q). In this case it contains an element g conjugate to diag(ζ, ζ, ζ^{−1}, ζ^{−1}, 1, …, 1) of order q − 1 ≥ 7. We similarly get that, since dim[g, V] ≤ 4, the order of g can be at most 6, a contradiction.
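The counting fact invoked here (a permutation g with rank(g − 1) ≤ 2 on the permutation module has order at most 3) follows from rank(g − 1) = N − (number of cycles of g), computed here in characteristic 0 where g is automatically semisimple; a brute-force Python check over S_6:

from itertools import permutations

def perm_rank_minus_one(sig):
    """rank(P_sigma - Id) = n - number of cycles of sigma (over Q)."""
    n = len(sig)
    seen, cycles = set(), 0
    for i in range(n):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = sig[j]
    return n - cycles

def order(sig):
    n = len(sig)
    k, cur = 1, sig
    ident = tuple(range(n))
    while cur != ident:
        cur = tuple(sig[i] for i in cur)
        k += 1
    return k

for sig in permutations(range(6)):
    if perm_rank_minus_one(sig) <= 2:
        assert order(sig) <= 3
print("rank(g - 1) <= 2 forces order(g) <= 3 in S_6")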
Next we want to show that the action of G on V is primitive. We start by ruling out the monomial case. If G ⊂ F_q^× ≀ S_N, then we use the fact that SL_2(q) has a p-Sylow subgroup of order q, all of whose elements h satisfy dim[h, F_q²] ≤ 1; therefore G contains an elementary abelian p-subgroup of order q, whose elements h satisfy dim[h, V] ≤ 2. By Sylow theory these p-subgroups are conjugate inside F_q^× ≀ S_N ⊂ GL(V) to a p-subgroup of S_N, since the order of (F_q^×)^N is coprime to p. This means that S_N contains an elementary abelian p-subgroup H of order q such that dim[h, V] ≤ 2 for all h ∈ H.
We then use the following lemma.
Lemma 5.4. Let G be an elementary abelian p-subgroup of S_N of order p^r. Then G contains an element which is a product of at least r disjoint p-cycles.
Proof. By the permutation action we can identify S_N, and thus G, with a subgroup of GL_N(ℂ).
Since G is commutative, it is conjugate to a group of diagonal matrices, and can therefore be identified with a subgroup of µ_p^N, where µ_p denotes the group of p-th roots of 1 in ℂ. Let ζ ∈ µ_p be a primitive p-th root of 1. Every g ∈ G is a product of m disjoint p-cycles, with m equal to the multiplicity of ζ in the spectrum of g. We thus need to prove that there exists g ∈ G ⊂ µ_p^N having at least r components equal to ζ.
Identifying µ_p with F_p in such a way that ζ ↦ 1, we get a structure of F_p-vector space on µ_p^N, and the lemma follows from the following one.
Lemma 5.5. Let K be a field and V an r-dimensional subspace of K^N. Then there exists v ∈ V having at least r entries equal to 1.
Proof. Let e*_1, …, e*_N denote the dual canonical basis of K^N, and let J ⊂ {1, …, N} be of maximal cardinality such that there exists v ∈ V with e*_i(v) = 1 for all i ∈ J. If |J| < r, the intersection of V with the hyperplanes Ker(e*_i) for i ∈ J would contain a non-zero element w. Moreover, we have an element v ∈ V such that e*_i(v) = 1 for all i ∈ J, and then e*_i(v + βw) = 1 for all β ∈ K and i ∈ J. Since w ≠ 0 there exists i_0 ∉ J such that e*_{i_0}(w) ≠ 0. Therefore, we can find β such that e*_{i_0}(v + βw) = 1, and this contradicts the maximality of J.
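Lemma 5.5 is easy to confirm by exhaustion in small cases (a Python brute force over K = F_3, with N = 4 and r = 2; the parameters are ours):

from itertools import product, combinations

p, N, r = 3, 4, 2

def span(basis):
    """All F_p-linear combinations of the given basis vectors."""
    vecs = set()
    for coeffs in product(range(p), repeat=len(basis)):
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % p
                  for i in range(N))
        vecs.add(v)
    return vecs

def rank(vectors):
    """Rank over F_p by Gaussian elimination."""
    rows = [list(v) for v in vectors]
    rk, col = 0, 0
    while rk < len(rows) and col < N:
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        inv = pow(rows[rk][col], p - 2, p)
        rows[rk] = [(x * inv) % p for x in rows[rk]]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

all_vecs = list(product(range(p), repeat=N))
checked = 0
for basis in combinations(all_vecs, r):
    if rank(basis) < r:
        continue
    V = span(basis)
    assert any(sum(1 for x in v if x == 1) >= r for v in V)
    checked += 1
print(f"lemma 5.5 verified for {checked} choices of basis in F_{p}^{N}")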
By lemma 5.4 the group H contains a product h of at least r disjoint p-cycles, where q = p^r. Since dim[h, V] ≥ (p − 1)r we get (p − 1)r ≤ 2, contradicting the assumption q > 4.
We now want to rule out the non-monomial imprimitive case. Assume by contradiction that G ⊂ H ≀ S_m for some m ≥ 2, and let U_1 ⊕ ⋯ ⊕ U_m be the direct sum decomposition of V corresponding to the wreath product, with base subgroup H_1 × ⋯ × H_m. Let t ∈ G be an element of order p with (t − 1)² = 0, originating from a transvection or, in the twisted-diagonal case, from a pair of transvections; notice that in both cases G contains such an element. The rank of t − 1 is at most 2. Assume also that t ∉ H_1 × ⋯ × H_m. Up to reordering we can assume t(U_1) = U_2 and, since t has order p, that t(U_i) = U_{i+1} for 1 ≤ i ≤ p − 1 and t(U_p) = U_1. Let v_1 ∈ U_1 \ {0}. By completing the family (v_1, tv_1, …, t^{p−1}v_1) we get a basis on which t acts by a matrix of the form M_p ⊕ X, where M_p is the circulant matrix of size p and X is some matrix of size N − p. We have (t − 1)² = 0, but (M_p − 1)² ≠ 0 whenever p ≥ 3. Assuming p ≥ 3, we thus get t ∈ H_1 × ⋯ × H_m. Notice that the induction assumption implies that R_λ(B_{n−1}′) is a direct product of quasi-simple groups containing elements of that type. Because these elements are not semisimple, they moreover do not belong to the centers of these groups. It follows that R_λ(B_{n−1}′) is normally generated by these elements, hence is included in H_1 × ⋯ × H_m, which is normal in H ≀ S_m. Since B_n′ is normally generated by B_{n−1}′ (see lemma 2.1), this proves that R_λ(B_n′) ⊂ H_1 × ⋯ × H_m, contradicting the irreducibility of R_λ. It then remains to examine separately the case p = 2. If dim U_1 ≥ 3, we can pick a linearly independent family v_1, v′_1, v″_1 ∈ U_1 and, by completing the family (v_1, tv_1, v′_1, tv′_1, v″_1, tv″_1), we get a basis on which t acts by a matrix of the form M_2 ⊕ M_2 ⊕ M_2 ⊕ X for some X, so that the rank of t − 1 is at least 3, a contradiction that proves dim U_1 ≤ 2. In case t is a transvection, the same argument proves dim U_1 = 1, and we are reduced to the monomial case that we already treated. If we cannot choose t to be a transvection, we have p = 2 and dim U_1 = 2. Under our assumptions we know q > 2. Let us consider two F_2-linearly independent elements a_1, a_2 ∈ F_q, and elements t_1, t_2 ∈ G originating from the unipotent elements with parameters a_1, a_2 of a common p-Sylow subgroup of SL_2(q), whose Jordan forms in some common basis consist of two blocks of size 2. Assume that t_1, t_2 ∉ H_1 × ⋯ × H_m. By the same argument as above applied to t = t_1, we can assume that U = U_1 ⊕ U_2 is t_1-stable, with t_1(U_1) = U_2 and therefore t_1(U_2) = t_1²(U_1) = U_1, and t_1(U_i) = U_i for i ≥ 3. Using the same argument for t_2 we can also assume that U′ = U_a ⊕ U_b is t_2-stable, with t_2 exchanging U_a and U_b for some a ≠ b. Since I = Im(t_i − 1) is independent of i, we have I ⊂ U ∩ U′. We prove that U_r ⊄ I for every r. When r ∉ {1, 2, a, b} this is clear, because I ⊂ U ∩ U′ meets such a U_r trivially. Moreover, U_r ⊂ I = Im(t_i − 1) ⊂ Ker(t_i − 1) for all i would imply that both t_i fix U_r pointwise, which fails for r ∈ {1, 2, a, b}, since each of the U_r for r ∈ {1, 2, a, b} is not stable under at least one of the two t_i's.
Then, since U ∩ U′ contains the 2-dimensional subspace I but no U_r, we have U = U′. It follows that t_1t_2(U_r) = U_r for all r, hence t = t_1t_2 ∈ H_1 × ⋯ × H_m. We can thus resume the previous argument: since R_λ(B_{n−1}′) is normally generated by such elements, and because B_n′ is normally generated by B_{n−1}′, we would get R_λ(B_n′) ⊂ H_1 × ⋯ × H_m, contradicting the irreducibility of R_λ.
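The matrix fact underlying the case distinction on p above, namely that (M_p − 1)² vanishes in characteristic p only for p = 2, can be checked directly (a Python sketch):

def circulant_minus_one_squared_is_zero(p):
    # M is the p x p cyclic permutation matrix; arithmetic is mod p, i.e. in F_p.
    M = [[1 if (i - j) % p == 1 else 0 for j in range(p)] for i in range(p)]
    A = [[(M[i][j] - (1 if i == j else 0)) % p for j in range(p)] for i in range(p)]
    A2 = [[sum(A[i][k] * A[k][j] for k in range(p)) % p for j in range(p)]
          for i in range(p)]
    return all(x == 0 for row in A2 for x in row)

assert circulant_minus_one_squared_is_zero(2)              # (M_2 - 1)^2 = 0
assert not any(circulant_minus_one_squared_is_zero(p) for p in (3, 5, 7))
print("(M_p - 1)^2 vanishes in characteristic p only for p = 2")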
This proves that G is primitive and tensor-indecomposable, and we have ruled out cases (ii) and (iii) of the theorem.
Theorem 5.1 implies that G is a classical group over a finite subfield F_{q′} of F_q. We first show that q′ = q. We use the following lemmas, where SU_m(q) denotes, in case q is a square, the unitary subgroup of SL_m(q).
Lemma 5.6. For all m ≥ 2, the field generated over F_p by {tr(g) ; g ∈ SL_m(q)} is F_q. For all m ≥ 3, the field generated over F_p by {tr(g) ; g ∈ SU_m(q)} is F_q.
Proof. We start with the case of SL_m(q) and argue by contradiction. Suppose that {tr(g) ; g ∈ SL_m(q)} generates a proper subfield F_{q′} with q′ ≤ √q. Since the action of SL_m(q) on its natural representation is absolutely irreducible, the group would be conjugate inside GL_m(q) to some subgroup of GL_m(q′) (see e.g. [I], theorem 9.14), and therefore to some subgroup of SL_m(q′) since SL_m(q) is perfect. But |SL_m(q)| > |SL_m(q′)| for m ≥ 2, a contradiction. In the SU_m(q) case, {tr(g) ; g ∈ SU_m(q)} would generate a proper subfield F_{q′} with q′ ≤ √q. Since the action of SU_m(q) on its natural representation is again absolutely irreducible, it would be conjugate inside GL_m(q) to some subgroup of GL_m(q′) by the same argument, and therefore to some subgroup of SL_m(q′) since SU_m(q) is perfect. But the order of SU_m(q) exceeds that of SL_m(q′) as soon as m ≥ 3, a contradiction.
Note that a similar statement does not hold for SU_2(q), for in that case every element of the group has a trace of the form ζ + ζ̄, which lies in F_{√q}.
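Lemma 5.6 can be spot-checked in the smallest case m = 2, q = 4 (a Python sketch; the encoding of F_4 is ours): the traces of SL_2(4) cover all of F_4, hence generate F_4 over F_2.

from itertools import product

# F_4 = F_2[w] with w^2 = w + 1; elements encoded as pairs (a, b) <-> a + b*w.
def add(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])

def mul(x, y):
    a, b = x; c, d = y
    # (a + bw)(c + dw) = ac + (ad + bc) w + bd w^2,  with w^2 = w + 1
    return ((a & c) ^ (b & d), (a & d) ^ (b & c) ^ (b & d))

F4 = [(a, b) for a in range(2) for b in range(2)]
one = (1, 0)

traces = set()
for a, b, c, d in product(F4, repeat=4):
    det = add(mul(a, d), mul(b, c))   # ad - bc = ad + bc in characteristic 2
    if det == one:
        traces.add(add(a, d))
print(sorted(traces))                  # all four elements of F_4
assert any(t[1] == 1 for t in traces)  # a trace outside the prime field F_2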
If λ = λ′ then we know G ⊂ OSP(λ), hence the only possibility left by Theorem 5.1 is that G = OSP′(λ). If λ ≠ λ′, then G cannot preserve any nontrivial bilinear form, since R_λ, viewed as a representation of B_n′, is not isomorphic to its dual by Lemma 3.4; and neither can it preserve a hermitian form, because it is also not isomorphic to its conjugate-dual. This last property holds because the restriction to B_3′ does not have this property when F_p(α + α^{−1}) = F_p(α), as is shown in [BM]. Theorem 5.1 thus implies G = SL(λ). We now recall Goursat's lemma, which describes the subgroups of a direct product and which we need in the sequel.
Lemma 5.7 (Goursat's lemma). Let G_1 and G_2 be two groups, H ≤ G_1 × G_2, and denote by π_i : H → G_i the natural projections. Write H_i = π_i(H) and H̄_i = π_i(ker(π_{i′})), where {i, i′} = {1, 2}. Then there is an isomorphism ϕ : H_1/H̄_1 → H_2/H̄_2 such that H = {(x, y) ∈ H_1 × H_2 | ϕ(xH̄_1) = yH̄_2}.
We can now prove that Φ_n is surjective. We choose a total ordering on the elements λ of E_n such that λ ≤ λ′, with the additional condition that the 2-row diagrams are smaller than the others. By numbering the partitions λ ∈ E_n such that λ ≤ λ′, we prove by induction that, for a given λ_0, the composite of Φ_n with the projection of its target onto the partial product of the factors attached to the λ ≤ λ_0 is surjective; we denote this partial product by G_{λ_0}. For λ_0 the minimal element, G_{λ_0} = SL_{n−1}(q). By the results of [BM] this composite is surjective whenever λ_0 is a 2-row diagram. We use Goursat's lemma with G_1 = G_{λ_0} and G_2 = G(λ_0 + 1), where λ_0 + 1 denotes the successor of λ_0 in our ordering and where we let, as in the introduction, G(µ) = SL(µ) if µ ≠ µ′, and G(µ) = OSP′(µ) otherwise. We let PG(µ) denote the image of G(µ) in the projective linear group. We know that H_1 = G_1 and H_2 = G_2, and we get an isomorphism ϕ : H_1/H̄_1 → H_2/H̄_2, which induces a surjective morphism φ̃ : H_1 ↠ H_2/H̄_2. Assume that H_1/H̄_1 ≃ H_2/H̄_2 is not abelian. Then H_2/H̄_2 admits PG(λ_0 + 1) as a quotient, and we get a surjective morphism φ̂ : H_1 ↠ PG(λ_0 + 1). Let now µ ≤ λ_0, and consider the restriction φ̂_µ of φ̂ to G(µ). Assume it is non-trivial. Since the center is mapped to 1, it factorizes through an isomorphism φ̂_µ : PG(µ) → PG(λ_0 + 1). But this implies that the image of B_n′ inside G(µ) × G(λ_0 + 1) is included inside H = {(x, y) | ȳ = φ̂_µ(x̄)}, where x̄, ȳ denote the canonical images of x, y.
Let then R̄_λ : B_n′ → PGL(λ) denote the projective representation deduced from R_λ. By the very description of H we have R̄_{λ_0+1}(b) = φ̃(R̄_µ(b)) for all b ∈ B_n′, where φ̃ denotes the morphism induced by φ̂_µ on the projective images. Note that H_1/(F_q^× ∩ H_1) ⊂ PGL(µ), and clearly Im φ̃ ⊃ PG(λ_0 + 1). From this one deduces that the restriction of φ̃ to PG(µ) is non-trivial, hence induces an isomorphism ψ : PG(µ) → PG(λ_0 + 1) between these simple groups. Since dim V_µ ≥ 16 no triality phenomenon can be involved and thus, up to a possible linear conjugation of the representations R_µ, R_{λ_0+1}, we get (see [W] §3.7.5 and §3.8) that ψ is either induced by a field automorphism Φ ∈ Aut(F_q) or, in case µ ≠ µ′, by the composition of such an automorphism with X ↦ ᵗX^{−1}. In the first case we let S = R_µ; in the second case we let S : g ↦ ᵗR_µ(g^{−1}).
In both cases, we have R̄_{λ_0+1}(b) = Φ(S̄(b)) = S̄^Φ(b) for all b ∈ B_n′, with S^Φ : g ↦ Φ(S(g)), meaning that the two representations of B_n′ afforded by R_{λ_0+1} and S^Φ are projectively equivalent; that is, there is z : B_n′ → F_q^× such that R_{λ_0+1}(b) = S^Φ(b)z(b) for all b ∈ B_n′. Since B_n′ is perfect for n ≥ 5 (see [GL]) we get z = 1; this proves that the restrictions of R_{λ_0+1} and S^Φ to B_n′ are isomorphic. In particular, their restrictions to B_3′ are isomorphic. The restrictions of R_{λ_0+1} and S to B_3′ are direct sums of the irreducible representations of the Hecke algebra for n = 3, restricted to the derived subgroup. There are three such irreducible representations, of dimensions 1, 2 and 1, corresponding to the partitions [3], [2, 1], [1, 1, 1]. Note that these restrictions have to contain a constituent of dimension 2, for otherwise the image of B_3′ would be trivial, hence s_1 and s_2 would have the same image (as s_1s_2^{−1} ∈ B_3′), which easily implies that the image of B_n′ is abelian, contradicting the irreducibility.
We thus get that the restrictions to B_n′ of R_{λ_0+1} and S are isomorphic. Note that S, viewed as a representation of B_n′, is by construction isomorphic to the restriction of R_µ or of R_{µ′}. By Lemma 3.4 we get that the only possibility is µ = λ_0 + 1, since our list of representatives contains λ_0 + 1, hence not its transpose if it is different. But µ ≤ λ_0 < λ_0 + 1, a contradiction which proves that each φ̂_µ is trivial, hence so is φ̂, and this contradicts its surjectivity. Therefore, H_1/H̄_1 ≃ H_2/H̄_2 is abelian. It follows that each H̄_i contains the commutator subgroup of H_i. Since both of the H_i are perfect, we get H̄_i = H_i, hence H = G_1 × G_2, and we get the conclusion by induction on λ_0.
First of all, the preliminary analysis of the partitions implies that we can assume that the image of B_{n−1}′ contains a copy of SU_2(q) acting either on a 2-dimensional subspace, or on a 4-dimensional subspace via the twisted action x ⊕ ᵗx̄^{−1}. Therefore, there is an x ∈ G, originating either from a toric element or from a unitary transvection of SU_2(q), such that dim(x − 1)V = 2. Moreover, G is tensor-indecomposable by Lemmas 5.2 and 5.3, provided we know that SU_2(q) contains a semisimple element of order > 2, and this holds because 1 + √q > 2. Case (ii) is ruled out in a similar way. If G contains a natural SU_2(q), and therefore some g with dim[g, V] ≤ 2 of order 1 + √q, we conclude as in the non-unitary case. If G contains instead a twisted-diagonal SU_2(q), we similarly get an element g of order 1 + √q with dim[g, V] ≤ 4, providing the same contradiction as in the non-unitary case as soon as 1 + √q ≥ 7, which is our assumption here. For ruling out the monomial case, we assume again G ⊂ F_q^× ≀ S_N, and we notice again that G contains some natural or twisted-diagonal SU_2(q), one of whose p-Sylow subgroups induces, as in the classical case, an elementary abelian p-subgroup H of S_N with dim[g, V] ≤ 2 for all g ∈ H, but this time of order √q ≥ 6. This again provides a contradiction by the same argument.
The argument for the non-monomial imprimitive case applies here verbatim when p ≥ 3; when p = 2 we can similarly pick two F_2-linearly independent elements t_1, t_2 ∈ G originating from some p-Sylow subgroup of SU_2(q), because we have √q > 2.
This proves again that G is primitive and tensor-indecomposable, and we rule out cases (ii) and (iii) of Theorem 5.1.
Applying theorem 5.1, we get again that G is a classical group over a subfield F_{q′} of F_q. A consequence of lemma 5.6 is that, whenever λ contains a partition µ of size n − 1 but not its transpose µ′, then G contains a natural SU_3(q) and thus q′ = q. Otherwise, we have λ = λ′, and therefore G is a subgroup of some OSP defined over F_{√q}. Moreover, it contains a twisted-diagonal SU_3(q), and therefore F_{q′} has to contain all the tr(g) + ε(tr(g)) for g ∈ SU_3(q), hence all the β + β̄ for β ∈ F_q, that is, F_{√q}. The remaining part of the argument is then completely similar to the first case (and actually easier).